Frontend Performance Optimization Techniques
Faster sites aren't just nice to have; they directly impact engagement, conversions, and infrastructure costs.

Frontend performance work often feels like a mix of detective work and small, deliberate improvements. Over the years, I’ve seen teams obsess over shaving 50ms off a TTFB while ignoring a 4MB JavaScript bundle, and the opposite too. If you are building for real users on real networks and devices, performance is not a checkbox; it’s a continuous practice that sits at the intersection of engineering, design, and product strategy.
In this article, I’ll walk through techniques that matter in today’s web, where mobile-first isn’t a slogan, it’s the default. We’ll look at where the wins come from, where they don’t, and how to decide what to tackle first. I’ll share concrete patterns and code that I’ve used in production, including project structure and configuration files that help teams keep the wins. We’ll also talk about tradeoffs, because not every technique is right for every project. By the end, you’ll have a grounded, practical set of tools you can apply to your own applications.
Context: Where frontend performance fits today
Modern frontend work spans a broad range. You might be building a content site for millions of readers, a dashboard for power users on desktop, or a mobile web app with a limited budget of data and CPU. The performance constraints differ, but the fundamentals stay the same: minimize the work the browser does, ship less code over the wire, and make interactions feel instant.
In production apps, performance is shaped by:
- The build tooling (Vite, webpack, or a framework’s internal bundler).
- The rendering strategy (SSR, SSG, CSR).
- The data fetching and caching layer (React Query, SWR, Apollo).
- The asset pipeline (images, fonts, CSS).
- The network (CDN, HTTP/2 or HTTP/3, TLS setup).
- The client device (low-end Android, mid-tier iOS, desktop).
When teams adopt new frameworks or libraries, they often inherit a performance profile that they don’t fully understand. That’s why it’s crucial to measure, not guess. A “fast” React app can become slow if you miss the hydration cost or over-fetch data. A static site can still feel sluggish if fonts block rendering. The techniques below are not theoretical; they are patterns I’ve used to make real sites faster.
Measuring before optimizing
Before you change anything, measure. If you don’t, you won’t know if you improved anything or just moved the slowness around. The browser is your best instrument.
Lighthouse and Core Web Vitals
Lighthouse is a great starting point, but it’s a lab tool. It’s useful for catching regressions in controlled conditions. For real-user data, look at Core Web Vitals:
- LCP (Largest Contentful Paint) measures loading performance.
- INP (Interaction to Next Paint) measures responsiveness.
- CLS (Cumulative Layout Shift) measures visual stability.
You can run Lighthouse in Chrome DevTools or use a CI job to track it over time. For real-user metrics, integrate your app with tools like Google’s web-vitals library or your analytics provider. You can also use RUM (Real User Monitoring) platforms that collect field data.
Performance tracing in DevTools
Use the Performance panel in Chrome DevTools to record a page load or interaction. Pay attention to:
- Long tasks that block the main thread.
- Script evaluation and layout cost.
- Forced reflows and expensive style recalculations.
If you’re new to this, try recording a simple interaction like clicking a button that opens a modal. Look for tasks that take more than 50ms; that’s often where you’ll find jank.
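You can also watch for long tasks in the field rather than only in DevTools recordings. Here's a minimal sketch: the 50ms threshold matches the Long Tasks API definition, the filtering logic is pulled into a plain function, and the `console.warn` sink is a stand-in for your own reporting.

```javascript
// Returns entries whose duration exceeds the long-task threshold (50ms).
function pickLongTasks(entries, threshold = 50) {
  return entries.filter((entry) => entry.duration > threshold);
}

// Browser wiring (only runs where the Long Tasks API exists):
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  const observer = new PerformanceObserver((list) => {
    for (const task of pickLongTasks(list.getEntries())) {
      // Replace console.warn with your RUM reporting
      console.warn(`Long task: ${Math.round(task.duration)}ms`, task);
    }
  });
  observer.observe({ type: 'longtask', buffered: true });
}
```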
Practical measuring setup with web-vitals
A lightweight way to start collecting RUM data is using the web-vitals library and sending beacons to your endpoint. Here’s a simple setup you can drop into your app:
// src/perf/web-vitals.js
import { onCLS, onINP, onLCP } from 'web-vitals';

function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,
    value: metric.value,
    rating: metric.rating, // 'good', 'needs-improvement', 'poor'
    id: metric.id,
    // Add page and user context
    url: location.href,
    ua: navigator.userAgent,
  });

  // Use navigator.sendBeacon if available, fall back to fetch
  if (navigator.sendBeacon) {
    navigator.sendBeacon('/api/metrics', body);
  } else {
    fetch('/api/metrics', { body, method: 'POST', keepalive: true });
  }
}

// Attach listeners
onLCP(sendToAnalytics);
onINP(sendToAnalytics);
onCLS(sendToAnalytics);
Import this early in your app entry point to ensure you capture metrics reliably. On the backend, store metrics in a time-series database (e.g., Prometheus) or even a simple log aggregator to visualize trends.
Reducing JavaScript bundle size
For most modern web apps, the biggest bottleneck is JavaScript size and execution cost. The browser must download, parse, compile, and execute JS, which blocks the main thread. Smaller bundles are faster on all devices and networks.
Code splitting and route-based chunks
If you’re using a bundler like Vite or webpack, split your code by route and by feature. Avoid importing heavy libraries on the critical path.
Example with dynamic imports in a React app using React Router:
// src/App.jsx
import { Suspense, lazy } from 'react';
import { BrowserRouter as Router, Routes, Route } from 'react-router-dom';
import Home from './pages/Home';
import Layout from './components/Layout';

// Lazy-load heavy routes
const Dashboard = lazy(() => import('./pages/Dashboard'));
const Reports = lazy(() => import('./pages/Reports'));

function App() {
  return (
    <Router>
      <Layout>
        <Suspense fallback={<div>Loading...</div>}>
          <Routes>
            <Route path="/" element={<Home />} />
            <Route path="/dashboard" element={<Dashboard />} />
            <Route path="/reports" element={<Reports />} />
          </Routes>
        </Suspense>
      </Layout>
    </Router>
  );
}

export default App;
This ensures that the Reports route, which might pull in charting libraries, only loads when a user navigates there.
Tree shaking and side effects
Make sure your bundler can tree-shake. In Vite (which uses Rollup), declare side effects in package.json to help the optimizer. Here’s a minimal Vite setup that favors production builds and tree shaking:
// vite.config.js
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
  plugins: [react()],
  build: {
    target: 'es2018',
    minify: 'terser', // needs the terser dev dependency; Vite's default is the faster esbuild
    sourcemap: true,
    rollupOptions: {
      // Split vendor chunks if needed
      output: {
        manualChunks(id) {
          if (id.includes('node_modules')) {
            return 'vendor';
          }
        },
      },
    },
  },
  // Treat these as static assets (svg and png already are by default)
  assetsInclude: ['**/*.svg', '**/*.png'],
});
In your package.json, declare side effects to improve tree shaking:
{
  "name": "my-app",
  "version": "1.0.0",
  "sideEffects": [
    "*.css",
    "*.scss"
  ],
  "type": "module"
}
Avoid heavy libraries for small tasks
I once replaced a large date library (moment.js) with date-fns and cut 300 KB from the initial bundle. For small utilities, consider native APIs first. For example, you can often use Intl.DateTimeFormat or URL instead of helper libraries.
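The two most common utility-library use cases, date formatting and URL manipulation, are covered by built-ins. The exact formats below are illustrative:

```javascript
// Date formatting without a date library:
const fmt = new Intl.DateTimeFormat('en-US', {
  year: 'numeric',
  month: 'short',
  day: 'numeric',
});
const label = fmt.format(new Date(2024, 0, 15)); // "Jan 15, 2024"

// Query-string handling without a helper library:
const url = new URL('https://example.com/search?q=perf');
url.searchParams.set('page', '2');
console.log(url.toString()); // https://example.com/search?q=perf&page=2
```

Both APIs ship with the browser, so they cost zero bundle bytes.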
Webpack bundle analyzer in CI
Add a bundle analyzer to visualize your bundle in CI. This helps catch regressions and new dependencies creeping in.
// webpack-analyzer.js (if using webpack)
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  plugins: [
    new BundleAnalyzerPlugin({
      analyzerMode: 'static',
      openAnalyzer: false,
      reportFilename: 'reports/bundle.html',
    }),
  ],
};
Run it in a CI job and upload the report as an artifact.
Image and font optimization
Images and fonts are often the heaviest assets. Optimizing them has an outsized impact on LCP.
Modern formats and responsive images
Prefer AVIF or WebP with fallbacks. Use srcset and sizes to serve the right image for the viewport.
<img
  src="/images/hero.webp"
  srcset="/images/hero-320.webp 320w, /images/hero-640.webp 640w, /images/hero-1024.webp 1024w"
  sizes="(max-width: 640px) 100vw, 640px"
  width="640"
  height="360"
  alt="Hero image for the product page"
  loading="eager"
/>
If you have a static site generator, pre-process images during build. For example, using sharp in a build script:
// scripts/build-images.js
const sharp = require('sharp');
const fs = require('fs');
const path = require('path');

const sizes = [320, 640, 1024];
const inputDir = path.join(__dirname, '../src/assets/images');
const outputDir = path.join(__dirname, '../public/images');

if (!fs.existsSync(outputDir)) fs.mkdirSync(outputDir, { recursive: true });

// Use a sequential loop so the script waits for every file to finish
// (forEach with an async callback fires and forgets).
async function run() {
  for (const file of fs.readdirSync(inputDir)) {
    if (!/\.(png|jpe?g)$/i.test(file)) continue;
    const name = path.basename(file, path.extname(file));
    for (const size of sizes) {
      await sharp(path.join(inputDir, file))
        .resize(size)
        .toFormat('webp', { quality: 80 })
        .toFile(path.join(outputDir, `${name}-${size}.webp`));
    }
  }
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
Preload critical images and use placeholders
For the hero image or LCP element, preload it to prioritize loading:
<link rel="preload" as="image" href="/images/hero-640.webp" imagesrcset="/images/hero-320.webp 320w, /images/hero-640.webp 640w" imagesizes="(max-width: 640px) 100vw, 640px" />
Consider a low-quality image placeholder (LQIP) or blurred placeholder to avoid layout shift. If you’re using Next.js, their built-in Image component handles much of this automatically. For custom setups, tools like sqip generate SVG placeholders.
Font loading strategy
Fonts often block rendering. Use font-display: swap in your @font-face and preload the critical font files.
/* styles/fonts.css */
@font-face {
  font-family: "Inter";
  src: url("/fonts/inter-var.woff2") format("woff2");
  font-weight: 100 900;
  font-display: swap;
}

/* Preload in HTML head */
/* <link rel="preload" href="/fonts/inter-var.woff2" as="font" type="font/woff2" crossorigin> */
Load only the weights you need, and consider system fonts for non-critical UI to reduce network requests.
Network and delivery
You can ship fewer bytes, but if the network is slow or the server is far from the user, you’ll still feel it. Delivery matters.
HTTP/2, HTTP/3, and CDN
HTTP/2 multiplexing helps with multiple requests, but avoid too many small files if you can combine them. HTTP/3 (QUIC) reduces latency, especially on flaky networks. Use a CDN to bring assets closer to users. If you’re on a cloud provider, enable their CDN and cache static assets aggressively with immutable cache headers.
Cache-Control: public, max-age=31536000, immutable
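In practice that usually means one policy for fingerprinted assets and another for HTML. Here's a sketch of that split; the filename pattern and the commented Express wiring are assumptions about a typical build output with content hashes in filenames.

```javascript
// Picks a Cache-Control value: fingerprinted filenames (content hash in the
// name) are safe to cache forever; HTML and other unhashed files are not.
function cacheControlFor(filePath) {
  // e.g. main.a1b2c3d4.js — an 8+ character hex hash before the extension
  const fingerprinted = /\.[0-9a-f]{8,}\.(js|css|woff2|png|webp|avif)$/i.test(filePath);
  return fingerprinted
    ? 'public, max-age=31536000, immutable'
    : 'no-cache';
}

// Wiring it into Express static serving (hypothetical app and dist path):
// app.use(express.static('dist', {
//   setHeaders: (res, filePath) => {
//     res.setHeader('Cache-Control', cacheControlFor(filePath));
//   },
// }));
```

The `no-cache` value still allows caching, but forces revalidation, which is what you want for HTML that points at new asset hashes after a deploy.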
Compression and Brotli
Enable Brotli or gzip compression for text assets. Most hosting platforms and CDNs handle this for you; for the highest Brotli quality, precompress assets at build time. If you run your own Node server, compression middleware covers the basics:
// server.js (Node + Express)
const express = require('express');
const compression = require('compression');

const app = express();

// Negotiates gzip (and, in recent versions of the middleware, Brotli)
// based on the client's Accept-Encoding header.
app.use(
  compression({
    threshold: 1024, // skip very small responses
    level: 6, // zlib level 0-9; 6 balances speed and ratio
  })
);

app.use(express.static('dist'));

app.listen(3000, () => {
  console.log('Server running on port 3000');
});
Resource hints
Use rel=preconnect for third-party origins and dns-prefetch where needed. Don’t overuse it; preconnect helps when you know you’ll need a resource soon.
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />
<link rel="preconnect" href="https://api.example.com" />
Caching strategies
Effective caching prevents repeat network requests and accelerates repeat visits.
Service workers and runtime caching
A service worker can cache static assets and API responses. Below is a minimal example using Workbox. This pattern is particularly valuable for offline-first apps or dashboards with heavy API usage.
// public/sw.js
importScripts('https://storage.googleapis.com/workbox-cdn/releases/6.5.4/workbox-sw.js');

workbox.core.setCacheNameDetails({ prefix: 'my-app' });

// Precache files generated by the build (Workbox injectManifest)
workbox.precaching.precacheAndRoute(self.__WB_MANIFEST);

// Runtime cache for API responses
workbox.routing.registerRoute(
  ({ url }) => url.pathname.startsWith('/api/'),
  new workbox.strategies.StaleWhileRevalidate({
    cacheName: 'api-cache',
    plugins: [
      new workbox.expiration.ExpirationPlugin({
        maxEntries: 50,
        maxAgeSeconds: 5 * 60, // 5 minutes
      }),
    ],
  })
);

// Cache images
workbox.routing.registerRoute(
  ({ request }) => request.destination === 'image',
  new workbox.strategies.CacheFirst({
    cacheName: 'images',
    plugins: [
      new workbox.expiration.ExpirationPlugin({
        maxEntries: 60,
        maxAgeSeconds: 30 * 24 * 60 * 60, // 30 days
      }),
    ],
  })
);
In your build config (workbox.config.js):
// workbox.config.js
module.exports = {
  globDirectory: 'dist/',
  globPatterns: ['**/*.{html,js,css,woff2}'],
  swSrc: 'public/sw.js',
  swDest: 'dist/sw.js',
};
Register the service worker in your app:
// src/register-sw.js
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker
      .register('/sw.js')
      .then((reg) => {
        console.log('Service worker registered', reg.scope);
      })
      .catch((err) => {
        console.warn('SW registration failed', err);
      });
  });
}
ETag and conditional requests
For dynamic API responses, use ETags and 304 Not Modified responses to avoid sending large payloads when nothing changed. Many HTTP frameworks support this out of the box.
Rendering strategies and hydration
Rendering strategy drives perceived performance. Choose the right one for the use case.
SSR vs SSG vs CSR
- SSR (Server-Side Rendering): Good for SEO and first paint, but adds server load and hydration cost.
- SSG (Static Site Generation): Best for content-heavy sites. Fast TTFB, but you need build-time rendering.
- CSR (Client-Side Rendering): Good for highly interactive apps, but heavy on initial JS.
Use SSR for dynamic pages that need SEO, SSG for blogs or docs, and CSR for dashboards. You can mix them: SSG for landing pages, SSR for dynamic content, and CSR for interactions.
Hydration cost and islands
Hydration can be expensive because the browser must attach event listeners to server-rendered HTML. Modern frameworks are exploring partial hydration or islands architecture. For React, you can reduce hydration cost by:
- Lazy-loading non-critical components.
- Using React.lazy and Suspense for code splitting.
- Deferring heavy interactions until after first render.
In a more radical approach, you can use Astro, which renders static HTML by default and hydrates only interactive components. This pattern has helped content sites cut JS by 80% and improve LCP.
Interaction performance and responsiveness
Perceived speed isn’t just about load; it’s about how the app feels when you click and type.
Debouncing and throttling
Use debouncing for expensive handlers like search or window resize:
// src/utils/debounce.js
export function debounce(fn, delay = 250) {
  let timer;
  // Use a function expression (not an arrow) so `this` is the caller's receiver
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delay);
  };
}

// Usage in a search input
import { debounce } from './utils/debounce';

const handleSearch = debounce((query) => {
  // fetch or filter
  console.log('Searching for:', query);
}, 300);

document.getElementById('search').addEventListener('input', (e) => {
  handleSearch(e.target.value);
});
Throttling is better for continuous events like scroll:
// src/utils/throttle.js
export function throttle(fn, limit = 150) {
  let inThrottle;
  return function (...args) {
    if (!inThrottle) {
      fn.apply(this, args);
      inThrottle = true;
      setTimeout(() => (inThrottle = false), limit);
    }
  };
}
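For instance, wiring it to a scroll handler so the work runs at most once per 150ms window (the helper is inlined here so the snippet stands alone):

```javascript
// Same throttle helper, inlined for a self-contained example.
function throttle(fn, limit = 150) {
  let inThrottle;
  return function (...args) {
    if (!inThrottle) {
      fn.apply(this, args);
      inThrottle = true;
      setTimeout(() => (inThrottle = false), limit);
    }
  };
}

const onScroll = throttle(() => {
  // Read scroll position once per window, not on every scroll event
  console.log('scrollY:', typeof window !== 'undefined' ? window.scrollY : 0);
}, 150);

// Browser wiring; passive tells the browser the handler won't call preventDefault
if (typeof window !== 'undefined') {
  window.addEventListener('scroll', onScroll, { passive: true });
}
```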
Use requestAnimationFrame for visual updates
Avoid layout thrashing. Batch DOM reads and writes, and use requestAnimationFrame for visual work:
// Instead of reading/writing in a loop, batch
function updatePosition() {
  const elements = document.querySelectorAll('.box');
  const positions = [];

  // Read phase
  for (const el of elements) {
    positions.push(el.offsetTop);
  }

  // Write phase
  requestAnimationFrame(() => {
    for (let i = 0; i < elements.length; i++) {
      elements[i].style.transform = `translateY(${positions[i] + 10}px)`;
    }
  });
}
Avoid forced reflows
Reading layout properties after writing can force a reflow:
// Bad: triggers a reflow on every iteration
for (const el of elements) {
  el.style.height = '100px';
  const h = el.offsetHeight; // forces layout
}

// Better: read first, then write
const heights = [];
for (const el of elements) {
  heights.push(el.offsetHeight);
}
requestAnimationFrame(() => {
  for (let i = 0; i < elements.length; i++) {
    elements[i].style.height = '100px';
  }
});
CSS and rendering performance
CSS can be a hidden source of slowness. Overuse of expensive properties like box-shadow, filter, and backdrop-filter can hurt FPS on mobile.
Minimize repaints and compositing layers
Use will-change sparingly to promote elements to their own layers, but don’t overdo it; too many layers consume memory:
/* Promote an animated element to a layer */
.animated-card {
  will-change: transform;
  transition: transform 0.2s ease;
}

/* Avoid animating layout properties like width/height; use transform and opacity */
Use CSS containment
For large lists or grids, use contain to limit the scope of layout and paint work:
.grid-item {
  contain: content; /* layout, paint, and style containment */
}
Accessibility and performance
Performance supports accessibility. Users on low-end devices and slow networks deserve a usable experience. Keyboard navigation and focus management need main-thread time; avoid long tasks that block them.
For example, avoid heavy computations in event handlers used for keyboard navigation. Use requestIdleCallback for non-critical work:
if ('requestIdleCallback' in window) {
  requestIdleCallback(() => {
    // non-critical tasks like analytics initialization
    import('./analytics');
  });
} else {
  // Fallback
  setTimeout(() => import('./analytics'), 100);
}
Honest evaluation: strengths and tradeoffs
Not every technique is a silver bullet. Here’s where they shine and where they don’t.
- Code splitting and route-based chunks
- Strengths: Immediate reduction in initial bundle size, faster first paint.
- Tradeoffs: Slight overhead of additional network requests, need to manage loading states.
- When to skip: Very small apps where splitting adds complexity without meaningful gains.
- Image optimization (AVIF/WebP, responsive srcset)
- Strengths: Big wins for LCP, especially on mobile.
- Tradeoffs: Build-time processing can slow CI; some older browsers need fallbacks.
- When to skip: If images are minimal or already optimized by a CDN, it’s fine to rely on them.
- Service workers and caching
- Strengths: Great for repeat visits and offline capability.
- Tradeoffs: Adds complexity, cache invalidation is tricky; can mask issues if you’re not careful.
- When to skip: Static content sites where a CDN is sufficient and you don’t need offline.
- SSR vs SSG
- Strengths: SSR boosts SEO and first render; SSG gives the fastest TTFB.
- Tradeoffs: SSR adds server cost and hydration overhead; SSG can be slow to build at scale.
- When to skip: CSR can be fine for authenticated dashboards where SEO isn’t a factor.
- Interaction patterns (debounce, throttle, rAF)
- Strengths: Makes apps feel snappy and consistent.
- Tradeoffs: Can add latency if overused; need to pick the right pattern for the event type.
- When to skip: Simple apps without heavy user interaction.
Personal experience: lessons from the field
In one project, a marketing site had great SEO but a terrible LCP. The root cause wasn’t images; it was a heavy JS bundle that blocked rendering of the hero. We moved to an SSG setup with Astro, lazy-hydrated only the interactive components (a search bar and a carousel), and used AVIF images with placeholders. LCP dropped from 3.2s to 1.6s on mobile and INP improved by ~100ms. This wasn’t just a tooling change; it required rethinking what had to be interactive at first paint.
Another time, a dashboard app felt sluggish only on low-end Android devices. The issue was an expensive Intl.DateTimeFormat call inside a map function rendering a large list. We cached the formatted strings and offloaded non-urgent calculations to a web worker. The worker approach isn’t always worth it, but for CPU-heavy tasks it can be transformative.
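The first part of that fix, sketched: construct the formatter once and memoize its output, instead of creating an Intl.DateTimeFormat inside a hot render loop. `formatDay` and the cache are illustrative names:

```javascript
// Constructing Intl.DateTimeFormat is expensive; do it once, not per row.
const dayFormatter = new Intl.DateTimeFormat('en-US', {
  month: 'short',
  day: 'numeric',
});
const cache = new Map();

// Memoizes formatted output so repeated timestamps cost a Map lookup.
function formatDay(timestamp) {
  if (!cache.has(timestamp)) {
    cache.set(timestamp, dayFormatter.format(new Date(timestamp)));
  }
  return cache.get(timestamp);
}

// Hot path: the list render reuses both the formatter and cached strings.
// rows.map((row) => ({ ...row, label: formatDay(row.timestamp) }));
```

In a real app you would bound the cache size, but even this naive version turns a per-row constructor call into a lookup.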
A common mistake I see is over-optimizing before measuring. Teams add code splitting everywhere, but their bundle is already small, and the extra network round trips actually slow things down. The pattern I’ve learned is:
- Measure in the field (RUM) and in the lab (Lighthouse).
- Prioritize high-traffic pages and user journeys.
- Prefer simpler changes (image formats, fonts) before architectural shifts (SSR).
One fact that helped me internalize this: JavaScript engines defer full parsing and compilation of code that never runs. Lazy loading isn’t just a runtime win; it saves CPU on the main thread at startup.
Getting started: workflow and mental models
If you’re starting from scratch or refactoring an existing project, focus on a repeatable workflow.
Recommended folder structure
Here’s a simple structure that supports performance work by isolating assets, performance utilities, and pages:
my-app/
├── src/
│ ├── assets/
│ │ ├── images/
│ │ └── fonts/
│ ├── components/
│ │ ├── Layout.jsx
│ │ └── Header.jsx
│ ├── pages/
│ │ ├── Home.jsx
│ │ └── Dashboard.jsx
│ ├── perf/
│ │ └── web-vitals.js
│ ├── utils/
│ │ ├── debounce.js
│ │ └── throttle.js
│ └── App.jsx
├── public/
│ ├── images/
│ └── sw.js
├── dist/ # Build output
├── scripts/ # Image processing, etc.
│ └── build-images.js
├── vite.config.js
├── workbox.config.js
└── package.json
Tooling setup
- Vite for fast dev server and optimized production builds.
- Lighthouse CI for regression detection.
- Workbox for service worker generation.
- Sharp for image transforms in CI.
Example of a simple CI job (conceptual, not tied to a specific platform):
# scripts/ci-audit.sh
#!/usr/bin/env bash
set -euo pipefail

# Build the app
npm run build

# Run Lighthouse CI; budgets (e.g. LCP under 2.5s) are asserted via your lighthouserc config
npx @lhci/cli autorun

# Generate bundle report
npm run analyze

# Upload reports as artifacts
Mental model for prioritization
- Start with the top user journeys (landing page, key conversion flow).
- Measure before and after.
- Optimize in this order:
- Images and fonts (high impact, low complexity).
- JavaScript bundle size (split, tree shake, avoid heavy libs).
- Network delivery (compression, CDN, caching).
- Rendering strategy (SSR/SSG if appropriate).
- Interaction performance (debounce, throttle, rAF, web workers).
Resist the urge to add premature complexity (workers, advanced caching) until you have data showing it’s necessary.
Free learning resources
- web.dev performance guides: Practical, step-by-step explanations of performance fundamentals and Core Web Vitals.
- Lighthouse documentation: Understand what Lighthouse measures and how to run it in CI.
- web-vitals library: Small, reliable library for collecting field metrics.
- Workbox documentation: Service worker patterns and caching strategies.
- MDN Web Docs on Performance: Authoritative reference on browser performance concepts.
These resources are grounded in real-world usage and maintained by the teams behind the tools.
Summary and takeaway
If your audience is mostly mobile users or you run high-traffic pages, frontend performance optimization should be a core part of your engineering practice. Start by measuring, then pick high-impact, low-complexity wins like image optimization, font loading, and code splitting. Move on to architectural changes like SSR or SSG when they align with your content and SEO needs. For highly interactive apps, focus on responsiveness and avoiding long tasks.
Who should prioritize these techniques:
- Teams building public-facing content sites, marketing pages, and e-commerce flows.
- Apps with global audiences on varied networks and devices.
- Projects where Core Web Vitals are business-critical (search ranking, conversions).
Who might skip or defer:
- Internal dashboards used on fast corporate networks where LCP/INP are less of a factor.
- Very small apps where the complexity of splitting and caching outweighs the benefits.
- Prototypes where speed of iteration is more important than production performance.
The takeaway is simple: measure first, optimize second, and always consider tradeoffs. A faster site is not just a technical win; it’s a better experience for your users. If you keep that as your guiding principle, you’ll make the right calls on what to improve and when to stop.




