Frontend Monitoring and Error Tracking
Because users will always find a way to break your app in ways you never imagined

When you ship frontend code, you’re shipping to an environment you do not fully control. Different browsers, device capabilities, browser extensions, flaky networks, and real user behavior constantly stress your assumptions. On a small project, you might get away with watching console logs and a few Sentry emails. In the real world, especially with multiple teams and frequent releases, you need a coherent strategy: a way to capture errors, track performance regressions, and understand user impact without drowning in noise.
This post explains how I approach frontend monitoring and error tracking in production applications. It’s not about chasing every minor exception; it’s about building a feedback loop that surfaces what matters. I’ll cover the practical context, the building blocks, what I use in real projects, pitfalls I’ve hit, and how to set up a maintainable approach. If you’ve ever stared at a stack trace pointing to a minified file and wondered how to find the real cause, this is for you.
Where frontend monitoring fits today
Most production apps use a mix of client-side frameworks and build tools like Vite, Next.js, or Remix, and they rely heavily on third-party scripts and APIs. That means errors can originate from application code, vendor bundles, or integration points you don’t directly control. Frontend monitoring sits between your build pipeline and your on-call process, turning raw events into actionable insights.
Typical teams using this are:
- Product engineers building features in React, Vue, Svelte, or Angular.
- Platform or SRE teams responsible for release stability and performance.
- QA and support teams triaging user reports and correlating them with incidents.
Compared to backend observability, frontend monitoring has unique constraints: low signal-to-noise ratios, privacy concerns, and limited control over the runtime. Backends can rely on server logs and controlled environments; frontends must be resilient and instrumented with user context. Alternatives include building in-house solutions, using open-source stacks like OpenTelemetry with a custom collector, or adopting hosted platforms like Sentry, LogRocket, or Datadog. In most mid-sized projects, a hosted error tracking service paired with RUM (Real User Monitoring) offers the best ROI. In-house is feasible if you have strict data residency requirements or a dedicated platform team.
The industry is coalescing around the W3C Trace Context standard for distributed tracing and OpenTelemetry for instrumentation, but browser support and practical integrations still favor vendor SDKs for error tracking. For performance, the Performance API and Core Web Vitals are standard references that most tooling is built upon. See MDN's guide on the Performance API and web.dev's Core Web Vitals documentation for foundational context:
- https://developer.mozilla.org/en-US/docs/Web/API/Performance_API
- https://web.dev/vitals/
Core concepts and practical building blocks
Error tracking vs. performance monitoring
Error tracking focuses on exceptions, promise rejections, and unhandled errors. Performance monitoring focuses on load times, interactivity, and runtime jank. Both are necessary, but their workflows differ. Errors need symbolication (mapping minified stacks to source code), while performance needs baselines and percentiles.
Client-side vs. server-side reporting
Client-side reporting captures errors happening in the browser. Server-side reporting might capture API failures and relate them to a session. A robust strategy links both: a frontend error should correlate to a backend trace using a shared trace ID.
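As a sketch of that correlation, the client can generate a W3C Trace Context traceparent header and send it with each request so the backend can attach its spans to the same trace. The helper below is illustrative, not part of any SDK, and uses Math.random for brevity; real instrumentation should use crypto.getRandomValues.

```typescript
// Sketch: build a W3C Trace Context `traceparent` header
// (format: version-traceid-spanid-flags) so a frontend request can be
// tied to the backend trace that serves it.
function randomHex(byteCount: number): string {
  let out = '';
  for (let i = 0; i < byteCount; i++) {
    out += Math.floor(Math.random() * 256).toString(16).padStart(2, '0');
  }
  return out;
}

export function makeTraceparent(): string {
  const traceId = randomHex(16); // 16 bytes -> 32 hex chars
  const spanId = randomHex(8);   // 8 bytes -> 16 hex chars
  return `00-${traceId}-${spanId}-01`; // version 00, sampled flag 01
}

// Attach it to outgoing requests, e.g.:
// fetch('/api/cart', { headers: { traceparent: makeTraceparent() } });
```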
Sampling and privacy
You can’t record everything without impacting users and costs. Sampling at the client level (for example, 10% of sessions) is common. You should also avoid logging PII and carefully consider fingerprinting. Be transparent about telemetry in your privacy policy.
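The session-level decision can be as simple as a cached coin flip: sample once, remember the result, and either record the whole session or none of it. The sketch below is illustrative; the storage key and the injectable random/storage parameters are my own conventions, not SDK features.

```typescript
// Sketch: sample a fixed fraction of sessions, and cache the decision so a
// session is either fully recorded or fully skipped (never half of each).
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

export function isSessionSampled(
  rate: number,
  random: () => number = Math.random,
  storage?: KVStore // pass window.sessionStorage in the browser
): boolean {
  const key = 'telemetry-sampled';
  const stored = storage?.getItem(key);
  if (stored === '1') return true;
  if (stored === '0') return false;
  const sampled = random() < rate;
  storage?.setItem(key, sampled ? '1' : '0');
  return sampled;
}
```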
Signal hygiene and noise reduction
- Ignore third-party script errors you cannot action.
- Group errors by stack trace fingerprint.
- Use release tags to track regressions by version.
- Rate-limit repetitive errors to avoid floods.
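Rate limiting can happen client-side before events are even sent. A minimal sketch, keyed by an error fingerprint; the function names and default limits are illustrative, and you would wire the returned predicate into your SDK's beforeSend hook:

```typescript
// Sketch: allow at most `maxPerWindow` reports per fingerprint per time window,
// dropping the floods that a render loop or retry storm can produce.
export function createErrorRateLimiter(windowMs = 60_000, maxPerWindow = 5) {
  const seen = new Map<string, { count: number; windowStart: number }>();
  return function shouldReport(fingerprint: string, now: number = Date.now()): boolean {
    const entry = seen.get(fingerprint);
    if (!entry || now - entry.windowStart >= windowMs) {
      // First occurrence, or the previous window has expired: start a new window.
      seen.set(fingerprint, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= maxPerWindow;
  };
}
```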
When to alert vs. when to record
Alert on spikes in unique errors per release, and on critical flows like checkout. Record low-frequency errors to a log for trend analysis without waking anyone.
A practical setup: project structure and configuration
Below is a minimal but realistic project structure you might adopt. It's frontend-focused, with configuration for error tracking, performance, and environment control.
project/
├── public/
│   └── robots.txt
├── src/
│   ├── app/
│   │   ├── components/
│   │   ├── pages/
│   │   └── lib/
│   ├── instrumentation/
│   │   ├── browser-tracing.ts
│   │   ├── error-handling.ts
│   │   └── reporting.ts
│   ├── types/
│   ├── main.tsx
│   └── env.ts
├── .env.development
├── .env.production
├── vite.config.ts
├── sentry.release.config.js
└── package.json
Key environment file examples:
# .env.development
VITE_ENVIRONMENT=development
VITE_SENTRY_DSN=https://public-key@o123456.ingest.sentry.io/123456
VITE_SAMPLE_RATE=1.0
VITE_RELEASE=1.4.0-dev
# .env.production
VITE_ENVIRONMENT=production
VITE_SENTRY_DSN=https://public-key@o123456.ingest.sentry.io/123456
VITE_SAMPLE_RATE=0.25
VITE_RELEASE=1.4.0
Wiring the error tracking SDK
We’ll use Sentry as an example, focusing on patterns you can adapt to other providers. The goal is to initialize early, attach context, and set up integrations that reduce noise. If you’re using OpenTelemetry, you’d configure a browser exporter and span processors, but for typical frontend apps, vendor SDKs provide better ergonomics for stack trace symbolication.
// src/instrumentation/reporting.ts
import React from 'react';
import {
  useLocation,
  useNavigationType,
  createRoutesFromChildren,
  matchRoutes,
} from 'react-router-dom';
import * as Sentry from '@sentry/react';
// On recent SDK versions, BrowserTracing is exported from '@sentry/react' directly.
import { BrowserTracing } from '@sentry/tracing';
import { env } from '../env';

export function initMonitoring() {
  if (!env.SENTRY_DSN) {
    console.warn('Sentry DSN missing; error reporting disabled.');
    return;
  }
  Sentry.init({
    dsn: env.SENTRY_DSN,
    environment: env.ENVIRONMENT,
    release: env.RELEASE,
    // Adjust sampling by environment; production uses a lower rate to manage costs.
    sampleRate: env.SAMPLE_RATE,
    // Drop noisy breadcrumbs before they are attached to events.
    beforeBreadcrumb(breadcrumb) {
      // Example: ignore console breadcrumbs produced by analytics scripts
      if (breadcrumb.category === 'console' && breadcrumb.message?.includes('analytics')) {
        return null;
      }
      return breadcrumb;
    },
    // Attach user context only if available and compliant with privacy policy.
    integrations: [
      new BrowserTracing({
        tracePropagationTargets: ['localhost', /^\//],
        // Custom routing instrumentation for React Router v6
        routingInstrumentation: Sentry.reactRouterV6Instrumentation(
          React.useEffect,
          useLocation,
          useNavigationType,
          createRoutesFromChildren,
          matchRoutes
        ),
      }),
    ],
    // Error filters to avoid reporting expected browser extension errors.
    denyUrls: [
      // Extensions often inject scripts that throw; we rarely control these.
      /extensions\//,
      /^chrome:\/\//,
    ],
  });
}
Integrating with React Router v6
Router instrumentation helps trace navigation spans and tie errors to the route where they occurred. This is invaluable when users report issues without clear steps to reproduce.
// src/main.tsx
import React from 'react';
import ReactDOM from 'react-dom/client';
import { BrowserRouter, Routes, Route } from 'react-router-dom';
import { App } from './app/App';
import { initMonitoring } from './instrumentation/reporting';

initMonitoring();

const root = ReactDOM.createRoot(document.getElementById('root') as HTMLElement);
root.render(
  <React.StrictMode>
    <BrowserRouter>
      <Routes>
        <Route path="/" element={<App />} />
        {/* Additional routes here */}
      </Routes>
    </BrowserRouter>
  </React.StrictMode>
);
Error boundary for graceful handling
Error boundaries capture render errors and prevent entire pages from breaking. Use them around critical sections.
// src/app/components/ErrorBoundary.tsx
import React from 'react';
import * as Sentry from '@sentry/react';

interface Props {
  children: React.ReactNode;
  fallback?: React.ReactNode;
}

interface State {
  hasError: boolean;
  error?: Error;
}

export class ErrorBoundary extends React.Component<Props, State> {
  constructor(props: Props) {
    super(props);
    this.state = { hasError: false };
  }

  static getDerivedStateFromError(error: Error): State {
    return { hasError: true, error };
  }

  componentDidCatch(error: Error, errorInfo: React.ErrorInfo) {
    Sentry.withScope((scope) => {
      // setContext expects a plain object; pass the component stack explicitly.
      scope.setContext('react', { componentStack: errorInfo.componentStack });
      scope.setLevel('error');
      Sentry.captureException(error);
    });
  }

  render() {
    if (this.state.hasError) {
      return (
        this.props.fallback ?? (
          <div role="alert">
            <h2>Something went wrong</h2>
            <button onClick={() => window.location.reload()}>Reload</button>
          </div>
        )
      );
    }
    return this.props.children;
  }
}
Environment-aware configuration
Avoid hardcoding DSNs or sampling rates. Use env vars to control behavior across environments, and never bundle secrets in client code. DSNs are public by design, but you should still restrict project permissions in the provider console.
// src/env.ts
export const env = {
  ENVIRONMENT: import.meta.env.VITE_ENVIRONMENT as string,
  SENTRY_DSN: import.meta.env.VITE_SENTRY_DSN as string,
  SAMPLE_RATE: Number(import.meta.env.VITE_SAMPLE_RATE || '1.0'),
  RELEASE: import.meta.env.VITE_RELEASE as string,
};
Build and release mapping
Source maps and release tags allow the provider to map minified stacks back to your source code. In Vite, you can generate source maps for production if you store them securely. Avoid publicly exposing source maps to the internet; upload them to the error tracking provider instead.
Example Vite config snippet:
// vite.config.ts
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
  plugins: [react()],
  build: {
    sourcemap: true, // Needed for symbolication
  },
});
When deploying, upload source maps to Sentry using their CLI and attach the release tag. This ensures error grouping works correctly and helps avoid noisy, duplicated issues.
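A CI step for this might look like the following sketch. It assumes the Sentry CLI is installed, SENTRY_AUTH_TOKEN, SENTRY_ORG, and SENTRY_PROJECT are set in the environment, and the build output lands in ./dist; adjust the paths and release version to your pipeline.

```shell
# Sketch of a CI release step: create the release, upload source maps to the
# provider, then strip the maps from the artifact that ships to users.
RELEASE="1.4.0"

sentry-cli releases new "$RELEASE"
# On older CLI versions: sentry-cli releases files "$RELEASE" upload-sourcemaps ./dist
sentry-cli sourcemaps upload --release="$RELEASE" ./dist
sentry-cli releases finalize "$RELEASE"

# Do not serve the maps publicly: remove them from the deployed artifact.
find ./dist -name '*.map' -delete
```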
Real-world code context: instrumenting a data layer and handling async errors
Most bugs hide in asynchronous code: fetch calls, promise chains, and event handlers. Below is a pattern I use frequently: a wrapper that centralizes error reporting and adds request IDs for trace correlation.
// src/app/lib/api.ts
import * as Sentry from '@sentry/react';
import { env } from '../../env';

// Generate a unique ID per request; could match a backend trace ID if provided.
function generateId() {
  return Math.random().toString(36).slice(2) + Date.now().toString(36);
}

// Lightweight fetch wrapper with error tracking
export async function apiRequest<T>(
  url: string,
  options: RequestInit = {}
): Promise<T> {
  const requestId = generateId();
  const start = performance.now();

  Sentry.addBreadcrumb({
    category: 'http',
    data: { url, method: options.method || 'GET', requestId },
    level: 'info',
  });

  try {
    const response = await fetch(url, {
      ...options,
      headers: {
        // Spread caller headers first so the request ID cannot be clobbered.
        ...options.headers,
        'X-Request-ID': requestId,
      },
    });
    const duration = Math.round(performance.now() - start);
    if (!response.ok) {
      const error = new Error(`HTTP ${response.status} at ${url}`);
      Sentry.withScope((scope) => {
        scope.setTag('http.status_code', response.status);
        scope.setTag('request_id', requestId);
        scope.setExtra('duration_ms', duration);
        scope.setLevel(response.status >= 500 ? 'error' : 'warning');
        Sentry.captureException(error);
      });
      throw error;
    }
    const data: T = await response.json().catch(() => ({} as T));
    return data;
  } catch (error) {
    // Network errors or aborts; report only in production to keep dev consoles readable.
    if (env.ENVIRONMENT === 'production') {
      Sentry.withScope((scope) => {
        scope.setTag('request_id', requestId);
        scope.setLevel('error');
        Sentry.captureException(error);
      });
    } else {
      console.error('API Error:', error);
    }
    throw error;
  }
}
Usage in a React component:
// src/app/pages/ProductPage.tsx
import React, { useEffect, useState } from 'react';
import { apiRequest } from '../lib/api';
import { ErrorBoundary } from '../components/ErrorBoundary';

interface Product {
  id: string;
  name: string;
  price: number;
}

export function ProductPage() {
  const [product, setProduct] = useState<Product | null>(null);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    async function load() {
      try {
        const data = await apiRequest<Product>('/api/product/123');
        setProduct(data);
      } catch {
        // apiRequest already reported the error; fall through to the "Not found" state.
      } finally {
        setLoading(false);
      }
    }
    load();
  }, []);

  if (loading) return <div>Loading...</div>;
  if (!product) return <div>Not found</div>;

  return (
    <ErrorBoundary>
      <h1>{product.name}</h1>
      <p>${product.price}</p>
    </ErrorBoundary>
  );
}
This pattern gives you:
- A consistent place to enrich errors with request IDs and durations.
- A breadcrumb trail in Sentry showing the HTTP call that led to an issue.
- Separation of concerns between data fetching and error handling.
Performance instrumentation: Core Web Vitals and custom spans
Performance issues often present as slow LCP (Largest Contentful Paint) or high CLS (Cumulative Layout Shift). Using the web-vitals library is a pragmatic way to capture these metrics and send them to your observability backend.
// src/instrumentation/browser-tracing.ts
// Note: web-vitals v4 removed onFID; INP replaced FID as a Core Web Vital in 2024.
import { onCLS, onINP, onLCP, type Metric } from 'web-vitals';
import * as Sentry from '@sentry/react';

function sendToSentry(metric: Metric) {
  // Record the numeric value as a distribution, tagged with path and rating
  // ('good', 'needs-improvement', 'poor').
  // Note: Sentry's metrics API evolves; if unavailable in your SDK version,
  // use captureEvent with custom context instead.
  Sentry.metrics.distribution(`vitals.${metric.name}`, metric.value, {
    tags: { path: window.location.pathname, rating: metric.rating },
  });
  // Also capture an event for alerting on poor vitals
  if (metric.rating === 'poor') {
    Sentry.captureMessage(`Poor web vital: ${metric.name}`, {
      level: 'warning',
      tags: { vital: metric.name },
      extra: { value: metric.value },
    });
  }
}

export function initPerformance() {
  onCLS(sendToSentry);
  onINP(sendToSentry);
  onLCP(sendToSentry);
}
You can also add custom performance spans for critical interactions. For example, measuring the time to render a product list after a data fetch:
// src/app/lib/performance.ts
export function measure(name: string, startMark: string, endMark: string) {
  const measureName = `${name}-${startMark}-${endMark}`;
  try {
    performance.mark(endMark);
    performance.measure(measureName, startMark, endMark);
    const entries = performance.getEntriesByName(measureName);
    return entries[0]?.duration ?? 0;
  } finally {
    // Clean up only our own entries; clearing without arguments would wipe
    // every measurement on the page.
    performance.clearMarks(startMark);
    performance.clearMarks(endMark);
    performance.clearMeasures(measureName);
  }
}
Usage:
// src/app/pages/ProductList.tsx
import { useEffect } from 'react';
import { measure } from '../lib/performance';

// Measures from mount to unmount; for per-render timing, mark inside render instead.
function useMeasureRender(tag: string) {
  useEffect(() => {
    performance.mark(`${tag}-start`);
    return () => {
      const duration = measure(tag, `${tag}-start`, `${tag}-end`);
      // Report duration to your telemetry backend
      console.log(`Duration for ${tag}: ${duration.toFixed(2)}ms`);
    };
  }, [tag]);
}

export function ProductList() {
  useMeasureRender('product-list');
  // ... rest of component
  return <div>Products</div>;
}
Honest evaluation: strengths, weaknesses, and tradeoffs
Strengths
- Immediate visibility: You’ll know about errors before users report them.
- Faster triage: Stack trace symbolication, breadcrumbs, and release tags reduce debugging time.
- Performance awareness: Core Web Vitals tie directly to user experience and SEO.
Weaknesses
- Noise: Without careful filtering and grouping, dashboards become overwhelming.
- Cost: Ingest volume and retention can get expensive quickly.
- Privacy: Recording user interactions must be handled carefully to comply with regulations like GDPR or CCPA.
Tradeoffs
- Sampling: Aggressive sampling might miss rare but critical errors. A balanced approach uses high sampling for error tracking (e.g., 50–100%) and low sampling for performance telemetry (e.g., 5–10%).
- Source maps: Uploading source maps is essential but raises security considerations. Store them securely and restrict access.
- Alerting: Alert on trends rather than single occurrences. A single error might be harmless; a spike indicates a real issue.
When not to use
- If your app handles highly sensitive data and you cannot guarantee anonymization, consider on-prem or self-hosted solutions with strict controls.
- If you’re building a static site with minimal interactivity, basic analytics might be enough. However, even then, unhandled script errors from third-party embeds can degrade UX, so a lightweight error tracker could still help.
Personal experience: lessons from production
A few years ago, I introduced error tracking to a React e-commerce app. The first week felt like drinking from a firehose: thousands of errors grouped under generic stack traces. The fix wasn’t more alerts, it was better context. We added:
- Release tags tied to our CI build, so regressions were obvious after deployments.
- Source map uploads with a CI step that ran only on production builds.
- A simple Sentry filter to ignore errors from a chat widget we couldn’t control.
- An error boundary around checkout that captured user actions as breadcrumbs.
One memorable issue came from an iOS Safari bug where a CSS animation caused intermittent layout shifts and script timeouts. The stack trace pointed to a third-party library. Without the performance metrics and user agent context, we would have spent days chasing our code. With the data, we updated the library, added a fallback for the problematic animation, and watched the CLS metric drop.
Another lesson: alerts must have clear owners. We tied error alerts to the owning team’s Slack channel and set a threshold rule: only alert when unique issues per release exceed a baseline by 10x. That cut false alarms while keeping real incidents visible.
Getting started: workflow and mental models
1. Define what matters
List critical user flows: login, checkout, search. Decide which errors and performance thresholds matter for each.
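One way to make this concrete is a small declarative map the whole team can review in code. The flow names, field names, and threshold values below are purely illustrative:

```typescript
// Sketch: critical flows and their alerting budgets, kept in one reviewable place.
export interface FlowBudget {
  maxErrorRatePerSession: number; // e.g. 0.005 = 0.5% of sessions may error
  lcpBudgetMs: number;            // Largest Contentful Paint budget
}

export const criticalFlows: Record<string, FlowBudget> = {
  login:    { maxErrorRatePerSession: 0.01,  lcpBudgetMs: 2500 },
  search:   { maxErrorRatePerSession: 0.01,  lcpBudgetMs: 2500 },
  checkout: { maxErrorRatePerSession: 0.005, lcpBudgetMs: 2000 },
};

export function exceedsBudget(flow: string, errorRate: number, lcpMs: number): boolean {
  const budget = criticalFlows[flow];
  if (!budget) return false; // flows without a budget are recorded but not alerted on
  return errorRate > budget.maxErrorRatePerSession || lcpMs > budget.lcpBudgetMs;
}
```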
2. Choose your tooling
If you’re new, start with a hosted provider like Sentry for error tracking and the web-vitals library for performance. If you have platform resources and data residency needs, explore OpenTelemetry with a self-hosted collector. For a comparison of approaches, see Sentry’s documentation on performance and OpenTelemetry’s browser instrumentation docs:
- https://docs.sentry.io/platforms/javascript/performance/
- https://opentelemetry.io/docs/instrumentation/js/
3. Instrument early
Initialize your monitoring in the app entry point. Capture errors globally, instrument routing, and wrap data fetches. Keep configuration environment-aware.
4. Normalize context
Add consistent tags: release, environment, route, user ID (if appropriate and privacy-compliant). This turns raw events into useful data.
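In practice this can be one helper called at init time. The tag names and the hashing below are my own conventions, not SDK requirements; the djb2 hash is for brevity, and production code should prefer a cryptographic hash.

```typescript
// Sketch: build a consistent, privacy-aware tag set to attach to every event.
// A non-reversible hash stands in for the raw user ID.
function hashId(id: string): string {
  let h = 5381;
  for (let i = 0; i < id.length; i++) {
    h = ((h << 5) + h + id.charCodeAt(i)) >>> 0; // djb2, kept in uint32 range
  }
  return h.toString(16);
}

export function buildEventTags(opts: {
  release: string;
  environment: string;
  route: string;
  userId?: string; // only pass this when your privacy policy allows it
}): Record<string, string> {
  const tags: Record<string, string> = {
    release: opts.release,
    environment: opts.environment,
    route: opts.route,
  };
  if (opts.userId) tags['user.hash'] = hashId(opts.userId);
  return tags;
}
```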
5. Build a feedback loop
- Review new issues during release windows.
- Correlate errors with support tickets.
- Use dashboards to track trends by release and route.
6. Iterate and refine
Start with broad capture, then refine sampling and filters. Use groups and fingerprints to deduplicate. Tune alert thresholds based on real noise levels.
Free learning resources
- MDN Web Docs: Performance API
  - https://developer.mozilla.org/en-US/docs/Web/API/Performance_API
  - Why: Foundational browser APIs for timing and measurement.
- web.dev: Core Web Vitals
  - https://web.dev/vitals/
  - Why: Practical guidance on metrics that impact user experience.
- Sentry JavaScript SDK documentation
  - https://docs.sentry.io/platforms/javascript/
  - Why: Hands-on examples for error and performance instrumentation.
- OpenTelemetry JavaScript documentation
  - https://opentelemetry.io/docs/instrumentation/js/
  - Why: For teams exploring standardized observability and cross-service tracing.
- web.dev: Best practices for performance
  - https://web.dev/fast/
  - Why: Holistic strategies beyond instrumentation.
Summary and takeaway
Frontend monitoring and error tracking are essential when you need to understand how your app behaves in the wild. For most teams, a hosted error tracking service plus Core Web Vitals instrumentation provides the right mix of depth and usability. For teams with strict compliance or custom needs, OpenTelemetry with self-hosted backends can be a strong alternative.
Who should use it:
- Teams running production apps with active users.
- Products where UX stability and performance tie directly to business metrics.
- Engineering orgs that want faster incident response and clearer release quality signals.
Who might skip it:
- Prototypes or internal tools with limited user base and low risk.
- Apps handling sensitive data where compliance barriers are insurmountable without heavy investment.
- Static sites with minimal interactivity and no third-party scripts.
The real value of monitoring isn’t in dashboards; it’s in the feedback loop that informs decisions. Start small, focus on critical flows, refine context, and keep your signal-to-noise ratio healthy. When you ship the next feature, you’ll know within minutes whether users are hitting the happy path or the one that ends in a red alert. And that confidence is worth the effort.




