Frontend Deployment Strategies
Modern teams ship faster and safer by choosing the right deployment approach for their frontends.

Frontend deployment used to be simple: FTP a folder of HTML, CSS, and JavaScript to a server. Today it is a nuanced decision space that blends build tooling, hosting platforms, performance expectations, and collaboration workflows. I have lost a Friday night release to a stale service worker and also seen a five-minute CDN cache purge turn a critical rollback into a non-event. These experiences shape the strategies below, which reflect practical patterns that teams use to ship reliably, not just theory.
You will find a pragmatic tour of common frontend deployment strategies, when each one makes sense, and how to set them up. We will cover static hosting for SPAs, server-rendered approaches for dynamic content, and edge rendering for hybrid needs. Along the way, I will include real configuration files and code you can adapt, along with tradeoffs and pitfalls I have learned the hard way.
Where Frontend Deployment Fits Today
Most web projects today are built with frameworks like React, Vue, Svelte, or Angular, and compiled to static assets or server-rendered bundles. Deployment choices often hinge on three factors: how dynamic your data is, your latency and SEO requirements, and your team’s operational comfort with infrastructure.
- Static site generation (SSG): Build time rendering. Great for content-heavy sites, blogs, docs, and marketing pages. Hosting on a CDN is cheap and fast.
- Server-side rendering (SSR): Render on request. Better for personalized content, A/B tests, or data that changes frequently.
- Incremental static regeneration (ISR): Hybrid approach popularized by Next.js. Static by default, revalidates in the background. Good for high-traffic pages with periodic updates.
- Edge rendering: Run server logic closer to users via edge functions. Useful for low-latency personalization and international audiences.
In practice, teams combine these. A marketing site might be SSG on a CDN while the app portion uses SSR or edge functions. The hosting landscape has matured too: Vercel, Netlify, Cloudflare Pages, and AWS Amplify all offer first-class frontend deployment experiences, while Kubernetes and Docker remain common for custom SSR hosting.
Core Strategies and When to Use Them
Static Hosting for Single-Page Applications (SPAs)
SPA deployment is essentially shipping a bundle of HTML, CSS, and JavaScript to a CDN. The catch is routing. If users navigate to /dashboard directly, the CDN must serve index.html, and the client-side router takes over.
A common pattern is a _redirects file for Netlify or _routes.json for Cloudflare Pages. Here is a minimal example that routes all requests to index.html for a client-side app:
# public/_redirects
# Netlify style: send everything to index.html for SPA routing
/* /index.html 200
Cloudflare Pages serves index.html for unmatched paths automatically when the project has no 404.html. If your project also uses Pages Functions, a _routes.json keeps static asset paths from invoking them:
{
  "version": 1,
  "include": ["/*"],
  "exclude": ["/assets/*", "/images/*", "/favicon.ico"]
}
When deploying to S3 + CloudFront, enable static website hosting with index.html as the error document and point CloudFront at the bucket's website endpoint as a custom origin. In Terraform:
resource "aws_s3_bucket" "frontend" {
  bucket = "my-app-frontend"
}

resource "aws_s3_bucket_website_configuration" "spa" {
  bucket = aws_s3_bucket.frontend.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "index.html"
  }
}

resource "aws_cloudfront_distribution" "app" {
  origin {
    # Use the website endpoint (not the REST endpoint) so the S3
    # error_document fallback works; website endpoints are HTTP-only,
    # which is why the custom origin uses http-only below.
    domain_name = aws_s3_bucket_website_configuration.spa.website_endpoint
    origin_id   = "s3-frontend"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "http-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  enabled             = true
  default_root_object = "index.html"

  default_cache_behavior {
    target_origin_id       = "s3-frontend"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD", "OPTIONS"]
    cached_methods         = ["GET", "HEAD"]

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    min_ttl     = 0
    default_ttl = 3600
    max_ttl     = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }

  # Error response for SPA routing
  custom_error_response {
    error_caching_min_ttl = 300
    error_code            = 404
    response_code         = 200
    response_page_path    = "/index.html"
  }
}
This setup is straightforward and cost-effective. It suits product landing pages, dashboards behind auth, and docs. It is less suitable for highly dynamic or personalized content without additional APIs or edge functions.
Server-Rendered Apps with Containers
If your app needs server-side rendering, Node.js containers are common. A basic Express + React SSR setup might look like this:
// server/server.js
// Note: this file contains JSX, so it must be compiled by your bundler
// before running; the Dockerfile below builds it into dist/.
import express from 'express';
import path from 'path';
import React from 'react';
import ReactDOMServer from 'react-dom/server';
import App from '../src/App';

const app = express();

// Serve static assets built by your bundler
app.use('/assets', express.static(path.join(process.cwd(), 'dist/client')));

app.get('*', (req, res) => {
  const appString = ReactDOMServer.renderToString(<App />);
  const html = `
    <!doctype html>
    <html>
      <head>
        <title>SSR App</title>
      </head>
      <body>
        <div id="root">${appString}</div>
        <script src="/assets/main.js" defer></script>
      </body>
    </html>
  `;
  res.status(200).send(html);
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`SSR server listening on port ${PORT}`);
});
Dockerize it for predictable deployments:
# Dockerfile
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM node:18-alpine AS runtime
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/server/server.js"]
Deploy to a container platform. If you use AWS ECS, you will typically push to ECR and run the container behind an Application Load Balancer. The minimal Terraform to register the task and service:
resource "aws_ecs_cluster" "frontend" {
  name = "frontend-cluster"
}

resource "aws_ecs_task_definition" "ssr" {
  family                   = "ssr-app"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = "256"
  memory                   = "512"
  execution_role_arn       = aws_iam_role.ecs_task_exec.arn

  container_definitions = jsonencode([
    {
      name         = "ssr-app"
      image        = "${aws_ecr_repository.frontend.repository_url}:latest"
      portMappings = [{ containerPort = 3000 }]
      environment = [
        { name = "NODE_ENV", value = "production" }
      ]
    }
  ])
}

resource "aws_ecs_service" "ssr" {
  name            = "ssr-service"
  cluster         = aws_ecs_cluster.frontend.id
  task_definition = aws_ecs_task_definition.ssr.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = data.aws_subnets.private.ids
    security_groups  = [aws_security_group.ecs.id]
    assign_public_ip = false
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.app.arn
    container_name   = "ssr-app"
    container_port   = 3000
  }
}
This pattern gives you full control and is ideal for teams comfortable with container ops. It is heavier than static hosting and requires monitoring, scaling policies, and logging.
Hybrid Rendering with Next.js and Edge
Next.js popularized hybrid rendering, where pages can be SSG, SSR, or ISR depending on the route. Vercel makes deployment frictionless, but the approach is portable to other platforms too.
ISR is configured per route rather than in next.config.js. With the App Router, which is stable in modern Next.js (no experimental flag needed, and dynamic routes like /blog/[slug] work without rewrites), the config file can stay minimal:
// next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  reactStrictMode: true,
};

module.exports = nextConfig;
A route that revalidates every 60 seconds:
// app/blog/[slug]/page.tsx
export const revalidate = 60;

export default async function BlogPost({ params }: { params: { slug: string } }) {
  const post = await fetch(`https://api.example.com/posts/${params.slug}`, {
    next: { revalidate: 60 },
  }).then(res => res.json());

  return (
    <article>
      <h1>{post.title}</h1>
      <div>{post.content}</div>
    </article>
  );
}
If you need edge runtime for lower latency:
// app/edge/page.tsx
export const runtime = 'edge';

export default function EdgePage() {
  return <div>Rendered at the edge with low latency.</div>;
}
Deploying to Cloudflare Pages with the Next.js adapter:
npm install @cloudflare/next-on-pages --save-dev
Add a build script:
{
  "scripts": {
    "build": "next build && npx @cloudflare/next-on-pages",
    "deploy": "npm run build && wrangler pages deploy .vercel/output/static"
  }
}
This hybrid approach reduces server load while keeping dynamic pages responsive. It suits content sites with interactive app sections and global audiences.
Edge Functions for Personalization and A/B Testing
Edge functions run close to users and can modify responses without hitting your origin. Cloudflare Workers are a common choice. A simple worker that injects content based on a cookie:
// workers/ab-test.js
export default {
  async fetch(request, env, ctx) {
    const cookie = request.headers.get('Cookie') || '';
    const variant = cookie.includes('ab=blue') ? 'blue' : 'green';

    // Fetch the base HTML from origin or cache
    const res = await fetch(request);

    // Only modify HTML responses
    if (res.headers.get('content-type')?.includes('text/html')) {
      const html = await res.text();
      const injected = html.replace(
        '<body>',
        `<body class="variant-${variant}">`
      );
      return new Response(injected, res);
    }
    return res;
  },
};
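String replacement works, but it buffers the whole body in memory. Workers also provide HTMLRewriter, which rewrites the response as a stream. Here is a sketch of the same class injection; the pickVariant helper name is my own, and the ab cookie follows the example above:

```javascript
// Pure helper: pick the A/B variant from the Cookie header.
// The "ab" cookie name is carried over from the example above.
function pickVariant(cookieHeader) {
  return (cookieHeader || '').includes('ab=blue') ? 'blue' : 'green';
}

// Worker entry point (in a real Worker file: `export default worker`).
// HTMLRewriter rewrites <body> as the response streams through,
// so large pages are never fully buffered.
const worker = {
  async fetch(request) {
    const variant = pickVariant(request.headers.get('Cookie'));
    const res = await fetch(request);
    if (!(res.headers.get('content-type') || '').includes('text/html')) {
      return res;
    }
    return new HTMLRewriter()
      .on('body', {
        element(el) {
          // Overwrites any existing class attribute on <body>
          el.setAttribute('class', `variant-${variant}`);
        },
      })
      .transform(res);
  },
};
```

The streaming version also preserves time-to-first-byte, since CloudFront-style buffering never happens at the edge.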
Deploy with Wrangler:
# workers/wrangler.toml
name = "ab-test-worker"
main = "ab-test.js"
compatibility_date = "2024-11-01"
[env.production]
route = { pattern = "example.com/*", zone_name = "example.com" }
Then run the deploy:
npx wrangler deploy --env production
Edge functions are excellent for low-latency tweaks, redirects, and localization. However, they can complicate debugging and testing. Keep edge logic small and focused.
Progressive Web App (PWA) Caching and Offline Strategies
Service workers are part of deployment because they change how assets are fetched. If you deploy a new app version but the old service worker stays cached, users may see stale UI.
Workbox makes this manageable. A basic setup:
// public/sw.js
importScripts('https://storage.googleapis.com/workbox-cdn/releases/6.5.4/workbox-sw.js');

workbox.routing.registerRoute(
  ({ request }) => request.destination === 'script' || request.destination === 'style',
  new workbox.strategies.StaleWhileRevalidate({
    cacheName: 'static-resources',
  })
);

workbox.routing.registerRoute(
  ({ request }) => request.destination === 'image',
  new workbox.strategies.CacheFirst({
    cacheName: 'images',
    plugins: [
      new workbox.expiration.ExpirationPlugin({
        maxEntries: 60,
        maxAgeSeconds: 30 * 24 * 60 * 60,
      }),
    ],
  })
);

// Activate the new worker only when the page asks for it, so users can
// opt in to the update instead of being switched mid-session.
self.addEventListener('message', (event) => {
  if (event.data && event.data.type === 'SKIP_WAITING') {
    self.skipWaiting();
  }
});

self.addEventListener('activate', (event) => {
  const CURRENT_CACHES = ['static-resources', 'images'];
  event.waitUntil(
    caches.keys().then((cacheNames) => {
      return Promise.all(
        cacheNames
          // Delete stale caches from previous versions, not the ones in use
          .filter((cache) => !CURRENT_CACHES.includes(cache))
          .map((cache) => caches.delete(cache))
      );
    })
  );
});
In your app build, generate a precache manifest:
// build.js (simplified example using Workbox CLI concept)
// In practice, many frameworks inject this for you
workbox.precaching.precacheAndRoute(self.__WB_MANIFEST);
When deploying, update the service worker version and include a prompt for users to refresh. I have found it helpful to show a subtle banner when an update is detected, letting users opt-in to reload. This avoids the dreaded “stuck on old version” problem.
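One way to wire that banner, assuming the service worker waits for an explicit message instead of activating immediately (showUpdateBanner is a hypothetical UI helper that renders the banner and invokes its callback when the user clicks Reload):

```javascript
// src/swUpdates.js
// Call onUpdate when a new service worker finishes installing while an
// old one still controls the page, i.e. an update is waiting.
function watchForUpdates(registration, onUpdate) {
  registration.addEventListener('updatefound', () => {
    const worker = registration.installing;
    worker.addEventListener('statechange', () => {
      if (worker.state === 'installed' && registration.active) {
        onUpdate(worker);
      }
    });
  });
}

// Browser-only wiring; guarded so the module also loads outside a browser.
if (typeof navigator !== 'undefined' && 'serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js').then((registration) => {
    watchForUpdates(registration, (worker) => {
      showUpdateBanner(() => {
        // Reload once the new worker takes control...
        navigator.serviceWorker.addEventListener('controllerchange', () => {
          window.location.reload();
        });
        // ...then ask the waiting worker to activate.
        worker.postMessage({ type: 'SKIP_WAITING' });
      });
    });
  });
}
```

Because the reload only happens after the user opts in, an in-progress form or checkout is never yanked out from under them.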
CDN Caching and Cache Invalidation
Getting CDN caching right saves money and improves performance. A typical pattern:
- Cache static assets aggressively with long TTLs and content hashing in filenames (e.g., main.abc123.js).
- Cache HTML lightly or not at all for authenticated apps. For SSG sites, cache HTML for short periods with revalidation.
- Purge selectively when necessary.
For CloudFront, invalidations can be targeted. Use a GitHub Actions workflow:
# .github/workflows/deploy.yml
name: Deploy Frontend

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '18'
      - run: npm ci
      - run: npm run build
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      # Hashed assets are immutable; HTML must not be, or users keep old entry points
      - name: Sync to S3
        run: |
          aws s3 sync dist/ s3://my-app-frontend --delete \
            --exclude "*.html" \
            --cache-control "public, max-age=31536000, immutable"
          aws s3 sync dist/ s3://my-app-frontend \
            --exclude "*" --include "*.html" \
            --cache-control "public, max-age=0, must-revalidate"
      # Only invalidate HTML files, keep assets cached
      - name: Invalidate CloudFront
        run: |
          aws cloudfront create-invalidation \
            --distribution-id ${{ secrets.CF_DIST_ID }} \
            --paths "/index.html" "/app.html" "/en/*"
Using immutable asset caching and selective invalidation keeps performance high while making updates predictable.
Monitoring and Observability Post-Deploy
Deploying is not done until you know it works. Add lightweight observability:
- Client-side error reporting: Sentry or similar.
- Real User Monitoring (RUM) for Web Vitals: CLS, LCP, FID/INP.
- Build and deploy logs in your CI platform.
Example using web-vitals to report to an endpoint:
// src/reportWebVitals.js
// web-vitals v3+ exposes onX callbacks; onFID was removed in v4 in favor of onINP
import { onCLS, onINP, onLCP, onFCP, onTTFB } from 'web-vitals';

function sendToAnalytics(metric) {
  const body = JSON.stringify(metric);
  navigator.sendBeacon('/analytics/vitals', body);
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
onFCP(sendToAnalytics);
onTTFB(sendToAnalytics);
If you are on a serverless platform like Vercel, you can also inspect function invocations and cold starts. For SSR containers, track CPU, memory, and request latency and set autoscaling thresholds.
Strengths, Weaknesses, and Tradeoffs
Static hosting (SPA or SSG)
- Strengths: Fast, cheap, globally distributed, simple rollback via CDN.
- Weaknesses: Limited dynamic content without extra APIs; client-side routing can be tricky.
- Best for: Docs, marketing sites, internal dashboards, and SPAs with robust APIs.
Server-rendered containers
- Strengths: Full control, easy to integrate with internal services, works with any stack.
- Weaknesses: Operational overhead, scaling costs, slower cold starts compared to edge.
- Best for: Apps with complex server logic, legacy integrations, or teams comfortable with container ops.
Hybrid rendering (Next.js, Nuxt, SvelteKit)
- Strengths: Balances performance and flexibility, great DX, built-in routing and data fetching patterns.
- Weaknesses: Lock-in to framework patterns, platform-specific features might tie you to one host.
- Best for: Product apps with mixed static and dynamic needs, teams prioritizing DX.
Edge functions
- Strengths: Low latency, near originless updates, good for experimentation.
- Weaknesses: Debugging complexity, smaller runtime constraints, limited state.
- Best for: Global personalization, A/B tests, lightweight APIs, redirects.
PWA with service workers
- Strengths: Offline capability, reduced network dependency, engagement.
- Weaknesses: Update propagation issues, cache invalidation pitfalls.
- Best for: Apps used in unreliable networks, mobile-first experiences.
Personal Experience and Common Pitfalls
I once shipped a PWA update that introduced a breaking change to a local data schema. Users on the old service worker started hitting runtime errors because the cached app tried to read data in the new format. We fixed it by adding versioned storage migrations and a two-phase rollout: first we deployed the new app with backward-compatible schema, waited a day, then forced a service worker update. Lesson: treat the service worker and local caches as part of your deployment surface.
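A minimal sketch of what those versioned migrations can look like; the v1/v2 record shapes and the app-state key are invented for illustration:

```javascript
// src/storageMigrations.js
// Forward-only migrations, keyed by the version they upgrade FROM.
// Hypothetical schema change: v1 stored a flat name, v2 nests it under profile.
const MIGRATIONS = {
  1: (data) => ({ version: 2, profile: { name: data.name } }),
};

function migrate(data) {
  let state = data || { version: 2, profile: {} };
  while (MIGRATIONS[state.version]) {
    state = MIGRATIONS[state.version](state);
  }
  return state;
}

// Run migrations before the app reads cached state, so an old cached
// schema never reaches new code. `storage` wraps localStorage so the
// logic is testable outside a browser.
function loadState(storage) {
  const raw = storage.getItem('app-state');
  const state = migrate(raw ? JSON.parse(raw) : null);
  storage.setItem('app-state', JSON.stringify(state));
  return state;
}
```

Each release that changes the schema adds one entry to MIGRATIONS, so a user who skipped several versions is still walked forward step by step.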
Another learning came from CDN caching on a marketing site. We hashed our JavaScript filenames but forgot to hash our CSS. After a design update, some users saw old styles because the CSS was cached for a year. The fix was to enable filename hashing in our bundler and update our build pipeline to ensure all assets had unique hashes.
For SSR containers, I underestimated the impact of cold starts during a product launch. Traffic spiked, and the container startup time added noticeable latency. We solved it by increasing minimum instances and introducing edge caching for static parts of the page. If your app is latency-sensitive, avoid heavy synchronous work at startup and consider platform features like keep-warm or provisioned concurrency.
Finally, I have seen teams over-engineer deployment by choosing Kubernetes for a simple SSG site. The maintenance cost quickly outweighed the benefits. When the content is mostly static, a CDN-first approach is usually the right call.
Getting Started: Workflow and Project Structure
A practical starter structure for a hybrid app might look like this:
my-app/
├── apps/
│   ├── web/                  # Next.js app
│   │   ├── app/
│   │   │   ├── layout.tsx
│   │   │   ├── page.tsx
│   │   │   └── blog/
│   │   │       └── [slug]/
│   │   │           └── page.tsx
│   │   ├── public/
│   │   ├── next.config.js
│   │   └── package.json
│   └── docs/                 # Static docs (Astro, VitePress)
│       ├── .vitepress/
│       ├── guide/
│       └── package.json
├── packages/
│   ├── ui/                   # Shared components
│   └── config/               # Shared ESLint/Prettier
├── infra/
│   ├── terraform/
│   │   ├── main.tf
│   │   └── variables.tf
│   └── docker/
│       └── Dockerfile
├── .github/
│   └── workflows/
│       └── deploy.yml
└── package.json              # Workspace root
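The workspace root ties the packages together. A minimal sketch, assuming npm workspaces (the script names are illustrative):

```json
{
  "name": "my-app",
  "private": true,
  "workspaces": ["apps/*", "packages/*"],
  "scripts": {
    "build": "npm run build --workspaces --if-present",
    "test": "npm run test --workspaces --if-present"
  }
}
```

With this in place, a single npm ci at the root installs every app and package, which keeps CI simple.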
Workflow mental model:
- Build once: produce static assets for SSG/SPA and a server bundle for SSR. Use a single build pipeline to ensure consistency.
- Test in CI: run unit tests, lint, and a small E2E suite (e.g., Playwright) against a preview environment.
- Preview deployments: create ephemeral URLs for pull requests. This catches routing and environment issues early.
- Deploy to staging: mirror production settings (CDN rules, cache headers, edge configs). Run smoke tests.
- Production rollout: use blue-green or canary for SSR containers; for static sites, deploy to CDN and monitor. Keep a rollback plan (e.g., previous build artifact in S3 or a previous Docker image tag).
- Observe: track errors and Web Vitals; set alerts for regressions.
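The smoke-test step above can be as small as a script that fails the CI job when key routes stop serving HTML. A sketch; the paths and the body check are assumptions, and PREVIEW_URL is whatever your deploy step exports:

```javascript
// smoke.js — run against the preview or staging URL after deploy.
// fetchFn is injectable so the check itself is easy to unit test.
async function smokeCheck(baseUrl, paths, fetchFn = fetch) {
  const failures = [];
  for (const path of paths) {
    const res = await fetchFn(baseUrl + path);
    const body = await res.text();
    if (res.status !== 200) {
      failures.push(`${path}: status ${res.status}`);
    } else if (!body.includes('<body')) {
      failures.push(`${path}: response does not look like HTML`);
    }
  }
  return failures;
}

// CI usage (hypothetical env var set by the deploy step):
// smokeCheck(process.env.PREVIEW_URL, ['/', '/blog/hello']).then((failures) => {
//   if (failures.length) {
//     console.error(failures.join('\n'));
//     process.exit(1);
//   }
// });
```

For richer checks (routing, auth flows, Web Vitals budgets), graduate to the Playwright suite mentioned above; this script is just the cheapest possible tripwire.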
For teams that want to stay purely static but need dynamic features, prefer API-first design and edge functions. This keeps the frontend deployment simple while allowing advanced behavior where needed.
Free Learning Resources
- Vercel Documentation: https://vercel.com/docs – clear guidance for Next.js deployments, preview environments, and serverless functions.
- Netlify Docs: https://docs.netlify.com – excellent for static hosting, redirects, and edge functions.
- Cloudflare Pages + Workers: https://developers.cloudflare.com/pages and https://developers.cloudflare.com/workers – practical examples for edge rendering and workers.
- AWS S3 + CloudFront Guide: https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html – official steps for static site hosting.
- Web Vitals: https://web.dev/vitals/ – essential reference for measuring user experience post-deploy.
- Playwright: https://playwright.dev – reliable E2E testing for CI pipelines.
- Workbox: https://developer.chrome.com/docs/workbox/ – service worker strategies and caching patterns.
Who Should Use These Strategies and Who Might Skip
- Choose static hosting (SPA/SSG) if your app can rely on APIs and you want the simplest, most cost-effective deployment. Docs, marketing sites, and internal tools fit here.
- Choose SSR containers if you need server-side logic, have legacy integrations, or want full control over infrastructure. Be ready to invest in observability and scaling.
- Choose hybrid rendering if you want strong DX with performance benefits. Next.js, Nuxt, and SvelteKit reduce friction, but be mindful of platform lock-in.
- Choose edge functions for personalization, localization, and lightweight logic close to users. Keep the surface small and test thoroughly.
- Consider PWA if your users are on mobile or unreliable networks. Plan for cache invalidation and user notifications for updates.
If you are a solo developer shipping a small project, static hosting or a platform like Vercel is often the fastest path. If you are on a platform team managing many apps, standardize on a hybrid or container approach that you can observe and secure uniformly.
Summary
Frontend deployment is a set of practical decisions that balance performance, complexity, and collaboration. Static hosting is fast and cheap. Server-rendered containers give you control. Hybrid rendering blends the best of both. Edge functions and PWAs extend your capabilities but require careful testing and invalidation strategies.
The best strategy is the one that fits your data dynamics, your audience distribution, and your team’s operational appetite. Start simple, add complexity only when you feel real pain, and invest in observability from day one. When in doubt, ship smaller changes more often and keep your rollback plan within arm’s reach.
References used to shape this guide include Vercel’s deployment docs, Cloudflare’s Pages and Workers documentation, Netlify’s redirect and edge function guides, AWS S3 and CloudFront hosting documentation, web.dev’s Web Vitals overview, Workbox’s service worker patterns, and Playwright’s E2E testing recommendations.




