Performance Monitoring in Production
Why keeping an eye on live systems is the difference between quiet releases and chaotic nights

You ship the feature, close the laptop, and five minutes later the notifications start flooding in. CPU spikes, p99 latency jumps, and your database pool is exhausted. This is where performance monitoring in production earns its keep. It is not just about dashboards; it is about moving from guessing to knowing, from "it worked on my machine" to "here is the exact request path and payload causing the slowdown." In my experience, adding the right monitoring early has turned potential incidents into quiet fixes and made postmortems less about blame and more about clarity.
In this post, I will walk through what performance monitoring in production really looks like for modern teams. We will cover why it matters right now, where it fits in the ecosystem, and how to implement it with practical patterns. I will show code examples in Go and Python because those are common in production services, and share a small project structure you can adapt. I will also highlight strengths and tradeoffs, share a few personal lessons learned, and list resources that helped me. If you have ever felt blind after a deploy, this is for you.
Context: where performance monitoring fits today
Most teams building web APIs, background workers, or data pipelines rely on observability to keep systems healthy. Performance monitoring is a pillar of observability, alongside logging and tracing. In production, the goal is not to collect everything, but to collect enough signals to answer three questions fast: Is the system healthy? Where is it slow? And why did it slow down?
Common roles that rely on this:
- Backend engineers building services in Go, Python, Java, Node.js, or Rust.
- SREs and platform engineers who own reliability and capacity.
- Data engineers running batch or streaming jobs where throughput and backpressure matter.
Alternatives typically fall into three buckets:
- Built-in metrics and logs on cloud platforms (for example, AWS CloudWatch, GCP Cloud Monitoring).
- Open standards like OpenTelemetry that are vendor-agnostic.
- Dedicated observability tools, either self-hosted open source (for example, Prometheus and Grafana for metrics, Jaeger or Tempo for traces) or SaaS platforms (for example, Honeycomb or Datadog for all-in-one).
Compared to these, the approach I emphasize here is open-source first with OpenTelemetry for instrumentation, Prometheus for metrics, and optionally a tracing backend. This stack is flexible, portable, and avoids lock-in while scaling from a small service to a distributed system.
What performance monitoring means in practice
Performance monitoring in production is about tracking three signal types in context:
- Metrics: aggregations over time (for example, request rate, error rate, latency percentiles, CPU and memory).
- Logs: structured, queryable events with request IDs for correlation.
- Traces: end-to-end paths across services, with spans marking operations.
Each has a role. Metrics tell you if something is off and how bad it is. Logs tell you what happened for a specific request. Traces tell you where time is spent across services.
A mental model I use:
- Metrics are your heartbeat, read periodically.
- Logs are your notes, written when notable things happen.
- Traces are your journey map, following a single request through the system.
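To make logs joinable with the other two signals, emit them as structured JSON with a request ID you can correlate against traces. Here is a minimal sketch using only Python's standard logging module; the field names and the "profile-service" logger name are illustrative:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line."""
    def format(self, record):
        payload = {
            "level": record.levelname,
            "message": record.getMessage(),
            # request_id is attached per call via the `extra` kwarg
            "request_id": getattr(record, "request_id", None),
        }
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("profile-service")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Every log line now carries the ID needed to join it with a trace
logger.info("profile fetched", extra={"request_id": "req-42"})
```

With consistent fields like request_id, a query in Loki or Elasticsearch can pull every event for one slow request in seconds.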
Core building blocks: instrumentation, collection, visualization, and alerting
Instrumentation is adding code that emits metrics, logs, and traces. Collection is pulling or pushing data to a central place. Visualization is dashboards that show signals clearly. Alerting is rules that page you only when necessary.
OpenTelemetry gives you a single way to instrument across languages. For Go and Python, it integrates well with popular frameworks. Prometheus scrapes metrics on HTTP endpoints. Grafana visualizes Prometheus metrics and can also show traces. For logs, Loki or Elasticsearch can store and query them.
If you want a minimal local setup, you can run:
- Prometheus for metrics
- Grafana for dashboards
- Tempo or Jaeger for traces
- Loki for logs
This setup is practical for development and small teams. For larger scale, managed services can reduce operational burden.
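For local experimentation, a Docker Compose file along these lines gets the stack running. This is a sketch: the image tags and port mappings are common defaults, and Tempo and Loki usually need their own config files mounted in as well.

```yaml
# docker-compose.yml - illustrative; adjust images, ports, and configs to your setup
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
  tempo:
    image: grafana/tempo
    ports:
      - "4317:4317"   # OTLP gRPC ingest
  loki:
    image: grafana/loki
    ports:
      - "3100:3100"
```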
Real-world patterns and code examples
A minimal Go service with Prometheus metrics
Imagine you are building a service that fetches user profiles. You want to track request rate, error rate, and latency. Below is a minimal setup using the Prometheus Go client. This is the kind of code I add early to new services.
Project structure:
cmd/server/main.go
internal/handlers/profile.go
internal/metrics/metrics.go
go.mod
prometheus.yml
internal/metrics/metrics.go defines metrics once, so all packages reuse them:
package metrics

import (
    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
)

var (
    RequestsTotal = promauto.NewCounterVec(
        prometheus.CounterOpts{
            Name: "http_requests_total",
            Help: "Total number of HTTP requests by method, path, and status.",
        },
        []string{"method", "path", "status"},
    )

    RequestDuration = promauto.NewHistogramVec(
        prometheus.HistogramOpts{
            Name:    "http_request_duration_seconds",
            Help:    "Duration of HTTP requests in seconds.",
            Buckets: prometheus.DefBuckets, // or custom buckets like []float64{.01, .05, .1, .25, .5, 1, 2.5, 5, 10}
        },
        []string{"method", "path"},
    )
)
internal/handlers/profile.go instruments an HTTP handler:
package handlers

import (
    "context"
    "encoding/json"
    "net/http"
    "time"

    "github.com/yourorg/yourproject/internal/metrics"
)

type ProfileStore interface {
    FetchProfile(ctx context.Context, userID string) (string, error)
}

func ProfileHandler(store ProfileStore) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        start := time.Now()
        // Record duration for every request, including the error paths,
        // so latency percentiles are not biased toward successes.
        defer func() {
            metrics.RequestDuration.WithLabelValues(r.Method, "/profile").Observe(time.Since(start).Seconds())
        }()

        userID := r.URL.Query().Get("id")
        if userID == "" {
            metrics.RequestsTotal.WithLabelValues(r.Method, "/profile", "400").Inc()
            http.Error(w, "missing id", http.StatusBadRequest)
            return
        }

        profile, err := store.FetchProfile(r.Context(), userID)
        if err != nil {
            metrics.RequestsTotal.WithLabelValues(r.Method, "/profile", "500").Inc()
            http.Error(w, "internal error", http.StatusInternalServerError)
            return
        }

        // Simulate work to demonstrate metrics
        time.Sleep(20 * time.Millisecond)

        metrics.RequestsTotal.WithLabelValues(r.Method, "/profile", "200").Inc()
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(map[string]string{"user_id": userID, "profile": profile})
    }
}
cmd/server/main.go exposes a /metrics endpoint for Prometheus:
package main

import (
    "context"
    "log"
    "net/http"
    "os"
    "os/signal"
    "syscall"
    "time"

    "github.com/prometheus/client_golang/prometheus/promhttp"
    "github.com/yourorg/yourproject/internal/handlers"
)

type simpleStore struct{}

func (s simpleStore) FetchProfile(ctx context.Context, userID string) (string, error) {
    // In real code, fetch from DB or cache
    return "active", nil
}

func main() {
    mux := http.NewServeMux()
    mux.Handle("/metrics", promhttp.Handler())
    mux.HandleFunc("/profile", handlers.ProfileHandler(simpleStore{}))

    server := &http.Server{Addr: ":8080", Handler: mux}

    go func() {
        log.Println("server listening on :8080")
        if err := server.ListenAndServe(); err != nil && err != http.ErrServerClosed {
            log.Fatalf("server error: %v", err)
        }
    }()

    // Block until SIGINT or SIGTERM, then shut down gracefully
    // with a deadline so in-flight requests can finish.
    quit := make(chan os.Signal, 1)
    signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
    <-quit

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    if err := server.Shutdown(ctx); err != nil {
        log.Printf("server shutdown error: %v", err)
    }
}
Prometheus configuration (prometheus.yml) to scrape the service:
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'profile-service'
    static_configs:
      - targets: ['localhost:8080']
With this, Prometheus collects request counts and latencies. You can graph p95 or p99 latency in Grafana and alert when error rates spike. For alerting, Prometheus Alertmanager is a good fit. A simple rule might be:
groups:
  - name: service-health
    rules:
      - alert: HighErrorRate
        # sum() aggregates away the method/path/status labels; without it,
        # Prometheus would try to divide matching series pairwise and the
        # ratio would be wrong (5xx series divided by themselves).
        expr: sum(rate(http_requests_total{status=~"5.."}[5m])) / sum(rate(http_requests_total[5m])) > 0.05
        for: 5m
        labels:
          severity: page
        annotations:
          summary: "High error rate on profile service"
          description: "Error rate is above 5% for 5 minutes"
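For the Grafana panels mentioned earlier, queries along these lines work against the metrics defined in the Go example; the 5-minute window and label choices are just reasonable starting points:

```promql
# p99 latency per path
histogram_quantile(0.99, sum by (le, path) (rate(http_request_duration_seconds_bucket[5m])))

# Error rate as a fraction of all requests
sum(rate(http_requests_total{status=~"5.."}[5m])) / sum(rate(http_requests_total[5m]))

# Throughput in requests per second
sum(rate(http_requests_total[5m]))
```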
A Python service with OpenTelemetry traces and metrics
For Python, I often use FastAPI with OpenTelemetry to emit traces and metrics. This pattern is great for spotting slow endpoints or database calls.
Project structure:
app/
main.py
metrics.py
traces.py
requirements.txt
requirements.txt lists dependencies:
fastapi
uvicorn
prometheus-client
opentelemetry-api
opentelemetry-sdk
opentelemetry-instrumentation-fastapi
opentelemetry-exporter-otlp
app/metrics.py sets up Prometheus counters and histograms:
from prometheus_client import Counter, Histogram, generate_latest, CONTENT_TYPE_LATEST
from fastapi import Response

REQUEST_COUNT = Counter(
    "http_requests_total",
    "Total HTTP requests",
    ["method", "path", "status"],
)

REQUEST_LATENCY = Histogram(
    "http_request_duration_seconds",
    "HTTP request latency",
    ["method", "path"],
)

def metrics_endpoint():
    # CONTENT_TYPE_LATEST includes the version and charset Prometheus expects
    return Response(content=generate_latest(), media_type=CONTENT_TYPE_LATEST)
app/traces.py configures OpenTelemetry to send traces to a local collector:
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

def init_tracing(service_name: str = "profile-service"):
    # The service name belongs on the Resource so tracing backends
    # can group spans by service
    resource = Resource.create({"service.name": service_name})
    provider = TracerProvider(resource=resource)
    # In production, point the OTLP endpoint at your collector
    exporter = OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True)
    provider.add_span_processor(BatchSpanProcessor(exporter))
    trace.set_tracer_provider(provider)
    return trace.get_tracer(service_name)
app/main.py wires it together and adds instrumentation:
import time
from contextlib import asynccontextmanager

from fastapi import FastAPI, HTTPException
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor

from app.metrics import REQUEST_COUNT, REQUEST_LATENCY, metrics_endpoint
from app.traces import init_tracing

tracer = init_tracing()

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Startup: nothing special here in this example
    yield
    # Shutdown: cleanup if needed

app = FastAPI(lifespan=lifespan)

# Instrument FastAPI to create spans automatically
FastAPIInstrumentor.instrument_app(app)

@app.get("/profile")
def get_profile(user_id: str = ""):
    # Defaulting to "" lets a missing parameter reach our 400 branch
    # instead of FastAPI's automatic 422 validation response
    start = time.time()
    with tracer.start_as_current_span("fetch_profile") as span:
        if not user_id:
            REQUEST_COUNT.labels(method="GET", path="/profile", status="400").inc()
            raise HTTPException(status_code=400, detail="missing user_id")
        # Simulate DB call
        time.sleep(0.02)
        span.set_attribute("user.id", user_id)
        span.set_attribute("profile.status", "active")
    REQUEST_COUNT.labels(method="GET", path="/profile", status="200").inc()
    REQUEST_LATENCY.labels(method="GET", path="/profile").observe(time.time() - start)
    return {"user_id": user_id, "profile": "active"}

@app.get("/metrics")
def prom_metrics():
    return metrics_endpoint()
Run with uvicorn app.main:app --host 0.0.0.0 --port 8000. Prometheus can scrape http://localhost:8000/metrics. For traces, run an OpenTelemetry Collector locally and point the exporter to it. The collector can then forward traces to Tempo or Jaeger.
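A minimal collector configuration for that local setup might look like the following. This is a sketch: the `tempo:4317` endpoint assumes a Tempo container reachable under that hostname, so adjust receivers and exporters to your environment.

```yaml
# otel-collector-config.yaml - illustrative endpoints
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
processors:
  batch: {}
exporters:
  otlp:
    endpoint: tempo:4317
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```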
Handling errors and backpressure in background workers
Performance issues often come from background workers processing queues. I once migrated a Python worker from synchronous processing to async with bounded queues, and tail latency dropped by 60%. Here is a pragmatic pattern using asyncio with Prometheus metrics.
worker.py:
import asyncio
import random

from prometheus_client import Counter, Histogram, start_http_server

JOBS_PROCESSED = Counter("worker_jobs_total", "Total jobs processed", ["status"])
PROCESS_LATENCY = Histogram("worker_job_duration_seconds", "Job duration seconds")

async def process_job(job: dict):
    with PROCESS_LATENCY.time():
        # Simulate CPU or I/O work
        await asyncio.sleep(random.uniform(0.01, 0.05))
        if random.random() < 0.05:
            raise ValueError("simulated failure")
    JOBS_PROCESSED.labels(status="ok").inc()

async def worker(queue: asyncio.Queue):
    while True:
        job = await queue.get()
        try:
            await process_job(job)
        except Exception:
            JOBS_PROCESSED.labels(status="error").inc()
        finally:
            queue.task_done()

async def main():
    # Start Prometheus metrics server on port 8000
    start_http_server(8000)
    queue = asyncio.Queue(maxsize=100)  # Backpressure bound
    workers = [asyncio.create_task(worker(queue)) for _ in range(4)]

    # Producer: simulate incoming jobs
    async def producer():
        for i in range(1000):
            # If the queue is full, put() blocks and provides backpressure
            await queue.put({"id": i})
            await asyncio.sleep(0.001)

    await producer()
    await queue.join()
    for w in workers:
        w.cancel()

if __name__ == "__main__":
    asyncio.run(main())
This pattern shows how metrics can guide tuning: if the producer frequently blocks on a full queue, you might scale workers or adjust batch sizes. The bounded queue protects downstream systems from overload.
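To make that tuning signal concrete, you can count how often the producer finds the queue full before each put. The sketch below is dependency-free so it runs anywhere; in a real worker you would expose queue_full as a Prometheus counter and queue depth as a gauge instead of a dict:

```python
import asyncio

async def run_pipeline(n_jobs: int = 20, maxsize: int = 2) -> dict:
    """Push n_jobs through a bounded queue and count how often the
    producer observed a full queue (a proxy for backpressure)."""
    stats = {"queue_full": 0, "processed": 0}
    queue: asyncio.Queue = asyncio.Queue(maxsize=maxsize)

    async def consumer():
        while True:
            await queue.get()
            await asyncio.sleep(0.005)  # deliberately slower than the producer
            stats["processed"] += 1
            queue.task_done()

    task = asyncio.create_task(consumer())
    for i in range(n_jobs):
        if queue.full():
            # In production, increment a counter here; a rising rate means
            # you should add consumers or shrink batch sizes
            stats["queue_full"] += 1
        await queue.put(i)  # blocks when full: backpressure
    await queue.join()
    task.cancel()
    return stats

stats = asyncio.run(run_pipeline())
```

With a deliberately slow consumer and a tiny queue, queue_full climbs quickly, which is exactly the signal that tells you to scale before users notice.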
Infrastructure metrics and OS-level visibility
Sometimes the bottleneck is not your code but the environment. I recall investigating a service that looked slow but was actually I/O bound due to disk contention. Tools like vmstat, iostat, and pidstat helped pinpoint it.
Basic OS metrics to collect:
- CPU utilization and saturation
- Memory usage and swap
- Disk I/O and latency
- Network throughput and retransmits
For containerized workloads, cgroup metrics matter. Prometheus node_exporter can expose OS-level metrics. For Kubernetes, kube-state-metrics and cAdvisor fill gaps.
Example node_exporter scrape config:
scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['node-exporter:9100']
Honest evaluation: strengths, weaknesses, and tradeoffs
Strengths:
- OpenTelemetry provides consistent instrumentation across languages and reduces vendor lock-in.
- The Prometheus + Grafana stack is mature, widely adopted, and cost-effective for metrics.
- Distributed traces dramatically reduce mean time to resolution when issues span services.
Weaknesses:
- Instrumentation requires discipline and consistent conventions. Without it, data is noisy and hard to query.
- Tracing adds overhead if not sampled wisely; high-cardinality labels can blow up metric cardinality and memory usage.
- Logs are easy to over-collect, increasing storage costs and making queries slower.
When to use:
- Use this stack for microservices and any API-heavy system where latency and error rates matter.
- Great for teams that want to own their observability stack and avoid lock-in.
- Suitable for data pipelines when you add worker-specific metrics like queue depth, batch size, and processing latency.
When to skip:
- If you are prototyping a single script or local CLI, heavy instrumentation is overkill. Simple logging may suffice.
- If your organization already has a managed, standardized observability platform and you are not allowed to run your own, align with the existing tooling.
Key tradeoffs:
- Cardinality: Labels like user_id or request_id are useful but can explode metrics cardinality. Use them in logs and traces, not high-frequency counters.
- Sampling: Tracing all requests can be expensive. Start with head-based sampling and adjust based on error rates and latency spikes.
- Cost: Self-hosted stacks save license fees but require maintenance. Managed stacks reduce ops but can be expensive at scale.
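One practical way to keep cardinality bounded is to normalize raw request paths into route templates before using them as label values. Here is a sketch; the regex rules are illustrative, and if your framework (FastAPI, chi, and so on) exposes the matched route template, prefer that over pattern matching:

```python
import re

# Order matters: match UUID segments before bare numeric segments,
# since a UUID can start with digits. Prefer the router's matched
# route template when your framework exposes it.
_NORMALIZERS = [
    (re.compile(
        r"/[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}"
        r"-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}(?=/|$)"), "/{uuid}"),
    (re.compile(r"/\d+(?=/|$)"), "/{id}"),
]

def normalize_path(path: str) -> str:
    """Collapse per-entity path segments so the metric label set stays small."""
    for pattern, replacement in _NORMALIZERS:
        path = pattern.sub(replacement, path)
    return path
```

With this in place, a million distinct user URLs collapse into a handful of label values, keeping Prometheus memory usage flat while the raw IDs stay available in logs and traces.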
Personal experience: lessons from the trenches
I learned the hard way that dashboards without context are decorative. Early in a project, I had panels showing request rate and latency, but I lacked a correlation field to join metrics with logs. When latency spiked, I could not find the specific request path causing it. Adding a trace ID or request ID to logs and traces solved this. It turned a vague alert into a clear root cause.
Another lesson: start small and evolve. I often begin with just request counts and latencies, plus error rates. Then I add resource metrics (CPU, memory). Finally, I introduce traces for critical endpoints. This incremental approach avoids paralysis and helps the team learn the tooling gradually.
Common mistakes I see:
- Logging too much without structure. JSON logs with a few consistent fields (request_id, user_type, route) are easier to query.
- Missing alert hygiene. Alerts should be actionable, testable, and tied to runbooks.
- Ignoring background jobs. Workers often dominate latency. Instrument them early.
Moments where monitoring proved invaluable:
- During a deployment, we saw p99 latency increase by 200 ms. Traces pointed to a new database query missing an index. A quick schema fix resolved it.
- In a queue-backed pipeline, monitoring queue depth and consumer lag helped us adjust worker count before users felt the delay.
- When a memory leak appeared in a Python worker, heap metrics plus periodic profiles revealed a cache that never rotated keys.
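For the memory-leak case, Python's standard tracemalloc module is often enough to locate the offending allocation site before reaching for heavier profilers. A sketch, with an unbounded cache standing in for the real leak:

```python
import tracemalloc

tracemalloc.start()

# Simulate an unbounded cache -- the kind of leak described above
cache = {}
for i in range(10_000):
    cache[i] = [0] * 50  # each entry allocates a fresh list

snapshot = tracemalloc.take_snapshot()
# The top entries by allocated size point at the leaking line
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)
```

Taking a snapshot periodically and diffing against a baseline (snapshot.compare_to) narrows growth down to specific lines of code.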
Getting started: setup and workflow
Workflow and mental model:
- Identify critical user journeys (for example, login, checkout, search).
- For each, define RED metrics: Rate, Errors, Duration.
- Instrument entry points and slow dependencies (DB, cache, HTTP clients).
- Emit traces for cross-service calls.
- Stand up Prometheus and Grafana locally or in a dev environment.
- Create a minimal dashboard with p50, p95, p99 latencies, error rate, and throughput.
- Write one or two alerts and test them by injecting faults.
- Iterate: add resource metrics and traces for the next critical path.
Folder structure for a service:
cmd/service/main.go # or app/main.py
internal/handlers/ # HTTP endpoints
internal/metrics/ # Prometheus metrics definitions
internal/tracing/ # OpenTelemetry setup
internal/store/ # DB or cache access
deploy/prometheus/ # Prometheus config and alerts
deploy/grafana/ # Dashboard JSON
Local dev stack:
- Run Prometheus with the config file pointing to your service.
- Run Grafana and connect Prometheus as a data source.
- Optional: run Jaeger or Tempo for traces, and Loki for logs.
- Start with a single endpoint, verify metrics appear, then expand.
What makes this approach stand out
- Developer experience: Instrumentation is code, which means you can unit test and review it. For example, you can assert that certain code paths increment counters in tests.
- Maintainability: OpenTelemetry provides semantic conventions, so teams can share conventions across services and reduce cognitive load.
- Real outcomes: Faster incident resolution, better capacity planning, and clearer performance budgets. In practice, a good dashboard plus a few well-placed alerts will beat a dozen noisy ones every time.
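As a concrete illustration of instrumentation being testable code, here is a dependency-free sketch: a plain dict stands in for a labeled Prometheus counter so the test pattern is visible without the client library (with the real client, you would assert via REGISTRY.get_sample_value instead):

```python
# A minimal stand-in for a labeled counter, so the test pattern is
# visible without pulling in the real Prometheus client
counters: dict = {}

def inc(name: str, **labels) -> None:
    key = (name, tuple(sorted(labels.items())))
    counters[key] = counters.get(key, 0) + 1

def handle_profile(user_id: str) -> int:
    """Return an HTTP-style status code and record a request metric."""
    if not user_id:
        inc("http_requests_total", path="/profile", status="400")
        return 400
    inc("http_requests_total", path="/profile", status="200")
    return 200

# The "unit test": drive the handler, then assert on counter state
handle_profile("")
handle_profile("u-1")
```

A code review that catches a missing increment here is far cheaper than a 3 a.m. page caused by a dashboard silently undercounting errors.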
Free learning resources
- OpenTelemetry documentation: https://opentelemetry.io/docs/
- Comprehensive guides for Go, Python, and other languages with examples.
- Prometheus documentation: https://prometheus.io/docs/
- Clear explanations on metric types, PromQL, and alerting.
- Grafana University: https://grafana.com/tutorials/
- Practical tutorials for dashboards and visualization.
- CNCF Observability WG resources: https://github.com/cncf/tag-observability
- Community-curated content and best practices.
- Honeycomb Field Guides: https://www.honeycomb.io/field-guide
- Excellent, vendor-neutral guidance on tracing and observability culture.
For a deeper dive into SRE practices and alerting, see Google’s SRE book: https://sre.google/sre-book/table-of-contents/
Summary and final takeaways
Who should use this approach:
- Teams building APIs or distributed systems who need visibility into performance and reliability.
- Engineers who want a portable, open-source-first observability stack.
- Organizations aiming to reduce MTTR and set performance budgets.
Who might skip it:
- Solo developers building small scripts where logging is sufficient.
- Projects with strict organizational mandates to use a single managed vendor platform.
- Extremely low-latency systems where every microsecond counts and you need specialized profiling tools beyond standard metrics and traces.
The core takeaway: performance monitoring in production turns uncertainty into action. Start with RED metrics, add traces where complexity grows, and keep your dashboards lean. Instrument code where it matters, not everywhere. With a consistent approach, you can ship confidently and sleep better after deploys.
If you want a starting point, take the Go or Python examples and stand them up locally. Add a single dashboard and one alert. You will feel the difference the first time something goes wrong and you know exactly where to look.




