Media Streaming Implementation Strategies


Scaling video delivery and real-time playback in a world of rising user expectations

Illustration of adaptive streaming: a client player requesting segmented video chunks from an edge CDN, with bandwidth fluctuations and codec variants displayed

Every time you click play, a distributed system springs to life: the client negotiates with an edge server, a manifest guides codec selection, chunks flow over unpredictable networks, and the player juggles buffers to keep the picture moving. The experience feels simple to users, but it is complex under the hood. If you are building a streaming feature today, you are balancing video quality, latency, device diversity, and infrastructure costs in a single feature set. I have shipped streaming experiences on web, mobile, and smart TVs, and the moments that stuck with me were not the glamorous UI polish but the quiet reliability wins, like a player recovering from a 4G network swap without a spinner, or a live stream holding 3 seconds of glass-to-glass latency during a product launch.

In this post, I will share practical strategies for implementing media streaming, covering both adaptive HTTP delivery and real-time streaming. The focus is on architectural decisions, client behavior, codec choices, and infrastructure patterns that show up in real products. We will include configuration examples and project structure so you can see how these choices look in code. If you are deciding between HLS and DASH, trying to understand why your CDN bill spiked, or wondering when to reach for WebRTC, this guide should help you avoid common pitfalls and build a resilient playback experience.

Where streaming fits today and who uses it

Streaming touches nearly every product that shows moving pictures, from on-demand courses to live auctions. In modern teams, the responsibility often sits across three roles: client engineers implementing the player UI and buffering logic, platform or backend engineers building the ingest and packaging pipeline, and SREs tuning CDN and observability. The client side is where user experience lives and dies, but the server and CDN decisions determine what is possible in terms of scale, quality, and cost.

Adaptive bitrate streaming dominates long-form video on demand, with HLS and MPEG-DASH as the workhorses. Apple’s ecosystem strongly favors HLS, and the platform guidelines shape app store approvals and device compatibility. For interactive or low-latency needs, WebRTC is the tool you reach for, even as it introduces infrastructure complexity. Newer entrants like Low-Latency HLS and CMAF-based chunked transfer encoding try to close the gap between HLS/DASH and real-time, and they are increasingly viable for events that need a few seconds of latency without abandoning traditional CDN delivery.

Compared to alternatives, the tradeoffs are fairly clear. HLS and DASH deliver broad compatibility, mature caching, and robust ABR logic, but they are not built for true real-time. WebRTC can deliver sub-second latency, but it is sensitive to NAT traversal and network quality and requires a different operational footprint. RTMP still shows up in ingest pipelines, but it is not a modern delivery format for clients. Custom UDP approaches exist for specialized use cases, but they rarely beat the economics of CDN-backed HTTP delivery unless you are solving a niche problem.

Core concepts and practical patterns

If you are new to streaming, the mental model starts with a manifest and segments. The player downloads a manifest (an m3u8 for HLS or an mpd for DASH) that describes the available variants, then fetches small media segments. As bandwidth changes, the client switches variants to maintain smooth playback. That is adaptive streaming in a nutshell.
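
To make that concrete, a master playlist is just a short text file. Here is an illustrative HLS master playlist for two variants; the variant names, bandwidth values, and codec strings are representative examples, not taken from a specific encode.

#EXTM3U
#EXT-X-VERSION:6
#EXT-X-STREAM-INF:BANDWIDTH=4400000,RESOLUTION=1280x720,CODECS="avc1.64001f,mp4a.40.2"
stream_0.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1650000,RESOLUTION=854x480,CODECS="avc1.64001e,mp4a.40.2"
stream_1.m3u8

The player picks one of the variant playlists, which in turn lists the media segments to download.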

HLS basics with a working example

HLS works well across most platforms and is often the easiest to integrate for on-demand and live. For on-demand, you can create a simple HLS stream with FFmpeg. Here is a minimal command that produces an HLS master playlist with two bitrate variants and 6-second segments, which helps balance startup time and CDN cache hit ratio.

# On-demand HLS packaging with two variants
# Input: input.mp4
# Outputs (under output/): index.m3u8 (master), stream_0.m3u8, stream_1.m3u8, and segments
# Create the output/ directory before running

ffmpeg -i input.mp4 \
  -filter_complex "[0:v]split=2[s1][s2]; \
    [s1]scale=w=1280:h=720:force_original_aspect_ratio=decrease[v1]; \
    [s2]scale=w=854:h=480:force_original_aspect_ratio=decrease[v2]" \
  -map "[v1]" -c:v:0 libx264 -b:v:0 4000k -maxrate:v:0 4400k -bufsize:v:0 8000k \
  -map "[v2]" -c:v:1 libx264 -b:v:1 1500k -maxrate:v:1 1650k -bufsize:v:1 3000k \
  -map 0:a:0 -map 0:a:0 -c:a aac -b:a 128k \
  -f hls \
  -hls_time 6 \
  -hls_playlist_type vod \
  -hls_segment_filename "output/segment_%v_%03d.ts" \
  -master_pl_name index.m3u8 \
  -var_stream_map "v:0,a:0 v:1,a:1" \
  output/stream_%v.m3u8

This produces a master playlist (index.m3u8) and two variant playlists, each with its own segment files. Note that the audio is mapped once per variant so that each entry in -var_stream_map gets its own audio stream. In practice, you will want to set -hls_time between 2 and 6 seconds depending on whether you prioritize fast startup or stable caching. For live streaming, you would drop -hls_playlist_type vod, use -hls_list_size to keep a sliding window of recent segments, and enable chunked packaging or transfer so clients do not have to wait a full segment duration before requesting new media.

When you deploy these files, a typical folder structure looks like this:

webroot/
  vod/
    index.m3u8            # master playlist
    stream_0.m3u8         # variant playlists
    stream_1.m3u8
    segment_0_000.ts
    segment_0_001.ts
    ...
  live/
    index.m3u8
    segment_0_000.ts
    segment_0_001.ts
    ...

For live streams, the server must continually write new segments and update the playlist. The client polls the playlist and downloads new segments as they appear. For low latency, many teams use LL-HLS with partial segments and preload hints, though client support varies by platform.
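
For a sense of what the player polls, here is an illustrative live media playlist with a sliding window; the sequence numbers and durations are made up for the example.

#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:1043
#EXTINF:6.000,
segment_0_1043.ts
#EXTINF:6.000,
segment_0_1044.ts
#EXTINF:6.000,
segment_0_1045.ts

As new segments are written, EXT-X-MEDIA-SEQUENCE advances and old entries fall off the front of the list.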

Client player patterns and error handling

Most teams use a battle-tested player library rather than writing raw MediaSource Extensions code. In the browser, hls.js is a solid choice for HLS, while dash.js provides robust DASH support. On native mobile, you often use AVPlayer on iOS and ExoPlayer on Android. The player choice matters less than the patterns you apply around it.

A practical pattern is to implement a reconnection strategy for transient network failures and to expose ABR metrics for observability. Below is a small JavaScript example using hls.js that wires up basic error handling and logs network and level switching events. This is the type of code that reduces time-to-resolution for user reports and helps tune segment length and bitrate ladders.

// web/client.js
// Demonstrates hls.js integration with error handling and analytics
import Hls from 'hls.js'; // installed via npm and bundled, or load the dist build with a <script> tag

const video = document.querySelector('video#player');
const sourceUrl = 'https://cdn.example.com/vod/index.m3u8';

function setupPlayer() {
  if (video.canPlayType('application/vnd.apple.mpegurl')) {
    // Native HLS on Safari and some mobile browsers
    video.src = sourceUrl;
    video.addEventListener('loadedmetadata', () => video.play());
  } else if (Hls.isSupported()) {
    const hls = new Hls({
      enableWorker: true,
      lowLatencyMode: false, // Enable LL-HLS if supported by your stream and client
      backBufferLength: 90,
      maxBufferLength: 30,
      maxMaxBufferLength: 60,
      liveSyncDurationCount: 3,
    });

    hls.on(Hls.Events.MEDIA_ATTACHED, () => {
      console.log('Media attached');
    });

    hls.on(Hls.Events.MANIFEST_PARSED, (event, data) => {
      console.log('Manifest parsed with variants:', data.levels.length);
    });

    hls.on(Hls.Events.LEVEL_SWITCHED, (event, data) => {
      const level = hls.levels[data.level];
      console.log('Level switched', {
        level: data.level,
        bitrate: level?.bitrate,
        height: level?.height,
      });
      // Emit to your analytics pipeline
      // sendMetric('abr_switch', { level: data.level, bitrate: level?.bitrate });
    });

    hls.on(Hls.Events.ERROR, (event, data) => {
      // Decide if error is fatal
      if (data.fatal) {
        console.warn('Fatal error', data);
        switch (data.type) {
          case Hls.ErrorTypes.NETWORK_ERROR:
            // Attempt recovery
            hls.startLoad();
            break;
          case Hls.ErrorTypes.MEDIA_ERROR:
            hls.recoverMediaError();
            break;
          default:
            // Destroy and re-initiate or fall back to a different stream
            hls.destroy();
            break;
        }
      } else {
        // Non-fatal but informative
        console.warn('Non-fatal error', data);
      }
    });

    hls.loadSource(sourceUrl);
    hls.attachMedia(video);
    video.addEventListener('loadedmetadata', () => video.play());
  } else {
    console.error('HLS is not supported in this browser.');
  }
}

setupPlayer();

This snippet is intentionally small, but in a real app you would tie ABR decisions to user preferences, device capability, and battery state. For mobile, you may want to default to a lower top bitrate to conserve data, and you should expose a manual quality selector. On TVs, the player should avoid switching too aggressively to prevent visible oscillation on a large screen.
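
As an example of those controls, here is a minimal sketch of a manual quality selector and a data-saver cap built on the hls.js level APIs (levels, currentLevel, autoLevelCapping); the function names and the capping policy are invented for illustration.

// web/quality.js (sketch; assumes the `hls` instance from the snippet above)
function listQualities(hls) {
  // Each level exposes resolution and bitrate, enough to build a quality menu
  return hls.levels.map((level, index) => ({
    index,
    label: `${level.height}p (${Math.round(level.bitrate / 1000)} kbps)`,
  }));
}

function setQuality(hls, index) {
  // The index of a specific variant pins it; -1 restores automatic ABR
  hls.currentLevel = index;
}

function applyDataSaver(hls) {
  // Hypothetical policy: cap automatic ABR at the second-lowest variant
  hls.autoLevelCapping = Math.min(1, hls.levels.length - 1);
}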

Packaging considerations and codec selection

Your codec and container choices have ripple effects on compatibility, packaging time, and CDN caching efficiency. For on-demand, AVC/H.264 is a safe choice with wide support. HEVC/H.265 reduces bandwidth but has licensing complexities and patchy browser support. AV1 is gaining traction for web, with good support in Chrome and Firefox, but requires significant compute and is not yet universal. For audio, AAC-LC is ubiquitous; Opus shines in real-time contexts like WebRTC.
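
Before committing to a newer codec, it helps to probe the device. The Media Capabilities API can report whether a given codec, resolution, and bitrate is likely to decode smoothly and power-efficiently; the AV1 codec string and numbers below are representative values, not recommendations.

// web/codec-support.js (sketch)
// Probe whether this device can handle a 1080p30 AV1 stream before offering it.
async function canPlayAv1Smoothly() {
  if (!('mediaCapabilities' in navigator)) return false;
  const info = await navigator.mediaCapabilities.decodingInfo({
    type: 'media-source',
    video: {
      contentType: 'video/mp4; codecs="av01.0.08M.08"', // representative AV1 string
      width: 1920,
      height: 1080,
      bitrate: 4000000,
      framerate: 30,
    },
  });
  return info.supported && info.smooth && info.powerEfficient;
}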

CMAF (Common Media Application Format) is worth mentioning because it allows a single set of video chunks to be used for both HLS and DASH, reducing storage and cache overhead. This is particularly valuable for live events where you want to minimize duplication across formats. In practice, you can generate CMAF chunks with a packaging tool and serve them with chunked transfer encoding for LL-HLS or DASH low-latency modes.

Real-world projects often maintain two or three renditions per adaptive ladder for on-demand (e.g., 480p, 720p, 1080p) and add a 144p or 240p for constrained networks. For live, you may reduce the number of renditions to lower encoding costs. The key is to pick bitrates that align with your audience’s devices and network profiles. A 720p stream at 2–3 Mbps is often a safe middle ground, while 1080p can sit around 5–6 Mbps. The specific numbers depend on your encoder and content complexity; test with short clips and measure rebuffering rates.
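
A ladder like this often lives in version control next to the packaging scripts. Here is one way such a profile file might look; the schema is invented for illustration, and the bitrates follow the rough guidance above.

{
  "ladders": {
    "vod_default": [
      { "name": "1080p", "width": 1920, "height": 1080, "video_kbps": 5500, "audio_kbps": 128 },
      { "name": "720p",  "width": 1280, "height": 720,  "video_kbps": 2800, "audio_kbps": 128 },
      { "name": "480p",  "width": 854,  "height": 480,  "video_kbps": 1200, "audio_kbps": 96 },
      { "name": "240p",  "width": 426,  "height": 240,  "video_kbps": 400,  "audio_kbps": 64 }
    ],
    "live_default": [
      { "name": "720p", "width": 1280, "height": 720, "video_kbps": 3000, "audio_kbps": 128 },
      { "name": "480p", "width": 854,  "height": 480, "video_kbps": 1200, "audio_kbps": 96 }
    ]
  }
}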

Live streaming with low latency

Low-latency streaming requires moving from segment-based delivery to chunked transfer. The idea is to cut segments into smaller chunks and stream them as soon as they are encoded, without waiting for a full segment. The player can request the tail of a segment before it is complete, reducing end-to-end latency.

Below is a conceptual FFmpeg command for LL-HLS chunked delivery. This is not a turnkey command for production; it demonstrates the core ideas. You would typically sit behind a server that supports chunked transfer and playlist updates, like nginx with proper HLS configuration or a dedicated packager.

# Conceptual low-latency live packaging: 1-second, keyframe-aligned segments
# from an RTMP ingest. True LL-HLS partial segments and preload hints come from
# a packager/origin that supports them; this command only shows the encoder side.

ffmpeg -i rtmp://ingest.example.com/live/stream \
  -c:v libx264 -preset veryfast -tune zerolatency -b:v 3500k \
  -force_key_frames "expr:gte(t,n_forced*1)" \
  -c:a aac -b:a 128k \
  -f hls \
  -hls_time 1 \
  -hls_list_size 5 \
  -hls_flags delete_segments+program_date_time+independent_segments \
  -hls_segment_type mpegts \
  output/live/index.m3u8

From the client perspective, you can enable LL-HLS in hls.js via lowLatencyMode: true and ensure the server delivers partial segments with preload hints. In a production deployment, we have seen latency drop to around 3–6 seconds using LL-HLS, which is often acceptable for live events where interaction is not required. For sub-second interaction, WebRTC is the better choice.
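
On the client, the LL-HLS-oriented knobs live in the hls.js constructor config. The values below are starting points to tune against your own measurements, not recommendations.

// web/client-ll.js (sketch of an LL-HLS oriented hls.js configuration)
const llHls = new Hls({
  lowLatencyMode: true,         // request partial segments and honor preload hints
  liveSyncDuration: 1.5,        // target distance from the live edge, in seconds
  liveMaxLatencyDuration: 4,    // beyond this drift, hls.js jumps back toward the edge
  maxLiveSyncPlaybackRate: 1.1, // gentle speed-up to catch up instead of hard seeks
});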

WebRTC for real-time use cases

WebRTC is ideal for video chat, interactive streaming, and any scenario where latency must be as low as possible. The tradeoff is complexity and infrastructure. You need signaling servers, STUN/TURN for NAT traversal, and careful bandwidth management. Here is a minimal example of a WebRTC peer connection that negotiates audio and video with a remote peer. This code is intended for learning; a production system adds ICE server selection, renegotiation, and robust error handling.

// web/webrtc.js
// Minimal peer connection for local camera to remote peer

async function startWebRTC() {
  const localVideo = document.querySelector('video#local');
  const remoteVideo = document.querySelector('video#remote');

  const constraints = { video: true, audio: true };
  const localStream = await navigator.mediaDevices.getUserMedia(constraints);
  localVideo.srcObject = localStream;

  const pc = new RTCPeerConnection({
    iceServers: [
      { urls: 'stun:stun.l.google.com:19302' },
      // In production, include TURN servers for reliability behind NAT
    ],
  });

  localStream.getTracks().forEach(track => pc.addTrack(track, localStream));

  pc.ontrack = event => {
    const [stream] = event.streams;
    remoteVideo.srcObject = stream;
  };

  pc.onicecandidate = event => {
    if (event.candidate) {
      // Send candidate to remote peer via your signaling channel
      // signaling.send({ type: 'candidate', candidate: event.candidate });
    }
  };

  // Signaling exchange would go here. For brevity, assume you exchange
  // offer/answer via a WebSocket or fetch-based signaling endpoint.

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // signaling.send({ type: 'offer', sdp: pc.localDescription.sdp });
}

startWebRTC();

In practice, you integrate a signaling service (often a WebSocket) and a TURN server (e.g., Coturn) for robust connectivity. For scale, you will look into SFU (selective forwarding unit) architecture to relay streams to multiple viewers without duplicating bandwidth at the source. Media servers like mediasoup or Janus provide the SFU logic, while platforms like AWS Chime, Agora, or Twilio Video abstract some of the operational burden.
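
To complete the offer/answer picture, here is a sketch of the answering side. The signaling object is a placeholder for whatever transport you use; only the RTCPeerConnection calls are standard WebRTC.

// web/webrtc-answer.js (sketch; signaling transport is assumed)
async function handleOffer(offerSdp, signaling) {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
  });

  // Render whatever the offering peer sends
  pc.ontrack = event => {
    document.querySelector('video#remote').srcObject = event.streams[0];
  };

  // Trickle our ICE candidates back over the signaling channel
  pc.onicecandidate = event => {
    if (event.candidate) {
      signaling.send({ type: 'candidate', candidate: event.candidate });
    }
  };

  await pc.setRemoteDescription({ type: 'offer', sdp: offerSdp });
  const answer = await pc.createAnswer();
  await pc.setLocalDescription(answer);
  signaling.send({ type: 'answer', sdp: pc.localDescription.sdp });
  return pc;
}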

Strengths, weaknesses, and tradeoffs

It is useful to evaluate streaming strategies by the product requirements rather than the technology hype. Here are tradeoffs I have faced repeatedly:

  • HLS/DASH

    • Strengths: Broad compatibility, CDN-friendly, proven ABR logic, lower operational complexity for large audiences.
    • Weaknesses: Latency typically in the 6–30 second range without LL features; LL-HLS support varies.
    • Best for: On-demand libraries, live events without real-time interaction, high-scale broadcasts.
  • LL-HLS and CMAF chunked streaming

    • Strengths: Latency in the 3–6 second range, shared packaging for HLS and DASH, leverages existing CDN caching.
    • Weaknesses: Requires careful playlist management, partial segment support on clients, and server-side chunked transfer.
    • Best for: Live events where a few seconds of latency is acceptable and CDN economics matter.
  • WebRTC

    • Strengths: Sub-second latency, robust for interactive use cases.
    • Weaknesses: Complex signaling and NAT traversal, higher operational cost, less efficient for massive viewership without SFU and tiered distribution.
    • Best for: Video chat, auctions, remote collaboration, live sports commentary with real-time interaction.
  • RTMP ingest and HLS/DASH delivery

    • Strengths: Stable ingest protocol, widely supported by encoders and media servers.
    • Weaknesses: RTMP is not suitable for modern client delivery; you need to transcode and package to ABR formats.
    • Best for: Getting video from an encoder or mobile app into your pipeline.

A common mistake is starting with WebRTC when the product needs are met by LL-HLS. Conversely, launching an interactive feature on HLS alone will frustrate users. Always write down your latency target first; if it is under 2 seconds, plan for WebRTC or a custom UDP pipeline. If it is above 5 seconds, HLS/DASH will be easier to maintain.

Real-world implementation structure and workflows

In production, you rarely run a single ffmpeg command from a laptop. Instead, you build a pipeline that handles ingest, transcoding, packaging, and distribution. Below is a typical project layout for a streaming service that supports both on-demand and live. This is a simplified view, but it reflects how teams separate concerns and automate workflows.

streaming-service/
  ingest/
    rtmp/
      ingest_server.conf        # nginx-rtmp or similar config
      ingest_stream_keys.json   # Mapping keys to content IDs
    srt/
      srt_receiver.py           # SRT ingest for high-reliability scenarios
  encode-packager/
    ffmpeg/
      templates/
        ondemand_hls.sh         # On-demand packaging script
        live_ll_hls.sh          # Live LL-HLS packaging script
      profiles.json             # Bitrate, resolution, codec profiles
    cmaf/
      packager_config.json      # Chunked packaging for CMAF
  storage/
    raw/                        # Source files before encoding
    encoded/                    # Packaged HLS/DASH folders
  cdn/
    origins.conf                # Origin shielding and cache rules
    purge_api.sh                # Script for cache invalidation
  client/
    web/
      public/
        index.html
      src/
        player.js               # Player setup and error handling
        analytics.js            # ABR and buffering metrics
    ios/
      Player/Player.swift       # AVPlayer wrapper
    android/
      app/src/main/java/.../PlayerViewModel.kt
  observability/
    logs/
      ingest.log
      packaging.log
    metrics/
      rebuffering.csv
      abr_switches.csv
  docs/
    runbooks/
      live_event.md
      incident_response.md

Here is a simplified packaging script used for on-demand HLS. In the wild, you would pass environment variables for profiles, input paths, and output destinations, and you would run this in a container to ensure consistent FFmpeg versions.

#!/usr/bin/env bash
# encode-packager/ffmpeg/templates/ondemand_hls.sh
# Usage: ./ondemand_hls.sh <input_file> <output_dir>

set -euo pipefail

INPUT="$1"
OUTPUT_DIR="$2"
mkdir -p "$OUTPUT_DIR"

ffmpeg -i "$INPUT" \
  -filter_complex "split=2[s1][s2]; \
    [s1]scale=w=1280:h=720:force_original_aspect_ratio=decrease[v1]; \
    [s2]scale=w=854:h=480:force_original_aspect_ratio=decrease[v2]" \
  -map "[v1]" -c:v:0 libx264 -b:v:0 4000k -maxrate:v:0 4400k -bufsize:v:0 8000k \
  -map "[v2]" -c:v:1 libx264 -b:v:1 1500k -maxrate:v:1 1650k -bufsize:v:1 3000k \
  -map a:0 -c:a aac -b:a 128k \
  -f hls \
  -hls_time 6 \
  -hls_playlist_type vod \
  -hls_segment_filename "$OUTPUT_DIR/segment_%v_%03d.ts" \
  -master_pl_name index.m3u8 \
  -var_stream_map "v:0,a:0 v:1,a:0" \
  "$OUTPUT_DIR/%v/index.m3u8"

echo "Packaging complete: $OUTPUT_DIR/index.m3u8"

For live events, you will orchestrate ingest (RTMP or SRT) to a packager that writes segments and updates the playlist continuously. You may use a message queue to trigger cache purges and playlist updates. A common reliability pattern is origin shielding, where a small set of edge caches pull from your origin, and the rest of the CDN pulls from the shield. This reduces origin load during viral spikes.

CDN configuration and cache strategy

Content delivery is as much about caching as it is about encoding. For VOD, long-lived segments benefit from aggressive caching and immutable segment names. For live, you need short TTLs and proper Cache-Control headers for playlists and partial segments.

If your segments are versioned or immutable, you can cache them for a long time. If you update segments in place, you will run into stale delivery issues. A good practice is to include an identifier in the segment path or query for live streams. Here is a simple nginx snippet to illustrate cache behavior for HLS playlists and segments. This is not a full production config, but it captures the intent.

# nginx snippet for HLS caching
server {
  listen 80;
  server_name cdn.example.com;

  location /vod/ {
    root /var/www/streaming;
    # VOD playlists and segments are immutable once published; cache them long term
    location ~* \.(ts|m3u8)$ {
      add_header Cache-Control "public, max-age=31536000, immutable";
    }
  }

  location /live/ {
    root /var/www/streaming;
    # Live playlists update every few seconds; keep the TTL short
    location ~* \.m3u8$ {
      add_header Cache-Control "public, max-age=10";
    }
    # Live segments are short-lived but can be cached briefly
    location ~* \.ts$ {
      add_header Cache-Control "public, max-age=30";
    }
  }
}

For LL-HLS and CMAF chunked transfer, the server must support streaming partial responses. Modern CDNs support this, but you should test thoroughly, as cache key design becomes critical. Avoid caching partial segments unless you understand the implications for seek behavior and player robustness.

Observability and instrumentation

Streaming issues are often intermittent and hard to reproduce. Instrumentation helps you diagnose whether a problem is network, encoding, or client-side. Key metrics include:

  • Startup time: Time from play click to first frame.
  • Rebuffering rate: Percentage of time the player is waiting for data.
  • ABR behavior: Bitrate distribution and frequency of switches.
  • Error rate: Fatal errors and recovery success.
  • CDN hit ratio: Cache misses can cause latency spikes.

Collect these metrics client-side and aggregate them in your observability stack. In the player code, you can subscribe to ABR events and track buffer health. For a concrete example, the hls.js snippet above logs level switches; you can add buffering counters like this:

// web/client.js (continued)
let rebufferCount = 0;
let bufferingStart = null;

video.addEventListener('waiting', () => {
  if (!bufferingStart) bufferingStart = performance.now();
  rebufferCount++;
});

video.addEventListener('playing', () => {
  if (bufferingStart) {
    const duration = performance.now() - bufferingStart;
    // sendMetric('rebuffer', { duration, count: rebufferCount });
    bufferingStart = null;
  }
});

On the server side, collect packager logs and track segment generation latency. If segments are late, players will stall. In live, aim to generate segments at a consistent cadence. If you notice variability, look at encoder load, I/O bottlenecks, or network congestion to the origin.
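
A cheap way to watch that cadence during development is to time how quickly new segments land on disk. This Node.js sketch assumes a local output directory and uses fs.watch, whose semantics vary by platform and can fire more than once per file, so treat it as a debugging aid rather than production monitoring.

// observability/segment-cadence.js (sketch; run with: node segment-cadence.js output/live)
const fs = require('fs');

const liveDir = process.argv[2] || 'output/live';
let lastSegmentAt = null;

fs.watch(liveDir, (eventType, filename) => {
  // Only react to new media segments, ignoring playlist rewrites
  if (!filename || !/\.(ts|m4s)$/.test(filename)) return;
  const now = Date.now();
  if (lastSegmentAt !== null) {
    console.log(`New segment ${filename} after ${((now - lastSegmentAt) / 1000).toFixed(2)}s`);
  }
  lastSegmentAt = now;
});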

Personal experience: lessons from the trenches

I once tuned a live event stream that started with 12-second glass-to-glass latency. The product team wanted under 5 seconds. Rather than immediately jumping to WebRTC, we evaluated LL-HLS against the audience profile. The viewers were spread globally with a heavy mobile mix. Switching to LL-HLS with chunked segments and a 1-second segment size got us to 4–5 seconds. We added a conservative ABR ladder (three variants) and capped the top bitrate to avoid choking mobile networks. The result was a stable stream that did not spike rebuffering despite a 10x traffic spike.

Another lesson came from CDN misconfiguration. We had a manifest and segments on the same path with identical caching rules. During a live show, the manifest updated every second, but our CDN cached it for 60 seconds. Players kept requesting segments that no longer existed, causing stalls. Fixing the TTL on playlists to 10 seconds and using immutable segment names resolved the issue. The cost was minimal; the value in user experience was significant.

On the client side, I learned that aggressive ABR switching looks flashy in metrics but can degrade perceived quality, especially on TVs. We implemented a hysteresis strategy: switch up only if bandwidth stays above the next rung's threshold for a few seconds, but switch down immediately when it drops. We also gave users a manual quality selector, which reduced support tickets about "grainy video."
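
Here is a simplified sketch of that hysteresis idea. Real players, including hls.js, ship their own ABR controllers, so in practice you would tune their configuration or plug in a custom controller rather than drive levels by hand; the headroom factor and hold time below are illustrative.

// web/abr-hysteresis.js (sketch; assumes a periodic bandwidth estimate in kbps)
const UP_SWITCH_HOLD_MS = 5000; // bandwidth must clear the next rung for this long

let upCandidateSince = null;

function chooseLevel(hls, estimatedKbps, now = Date.now()) {
  const levels = hls.levels;
  const current = hls.currentLevel === -1 ? hls.loadLevel : hls.currentLevel;
  const next = Math.min(current + 1, levels.length - 1);

  // Switch down immediately if the estimate no longer covers the current rung
  if (estimatedKbps < levels[current].bitrate / 1000) {
    upCandidateSince = null;
    return Math.max(current - 1, 0);
  }

  // Switch up only after the estimate has covered the next rung (with headroom) for a while
  if (next !== current && estimatedKbps > (levels[next].bitrate / 1000) * 1.2) {
    if (upCandidateSince === null) upCandidateSince = now;
    if (now - upCandidateSince >= UP_SWITCH_HOLD_MS) {
      upCandidateSince = null;
      return next;
    }
  } else {
    upCandidateSince = null;
  }
  return current;
}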

Getting started: workflow and mental models

If you are building a streaming feature, start with your target latency and audience size. For most projects, HLS is the right baseline. Here is a general workflow:

  • Choose your codecs: AVC/H.264 for video, AAC for audio. Consider HEVC or AV1 if your audience and devices align.
  • Design your ABR ladder: 3–4 variants, with sensible resolutions and bitrates. Test with representative content.
  • Package: Use FFmpeg or a packager to produce HLS or DASH. For on-demand, generate variants and segment files. For live, set up chunked delivery.
  • Distribute: Configure CDN caching and origin shielding. Ensure playlists have short TTLs for live; segments can be cached briefly.
  • Implement the client: Use hls.js or dash.js for web and native players on mobile. Add error handling, ABR analytics, and manual quality controls.
  • Observe: Track startup time, rebuffering, ABR behavior, and CDN performance. Iterate.

A simple project setup in a team might look like this for the client:

client/
  web/
    src/
      player/
        hls-setup.js          # Player initialization and ABR tuning
        errors.js             # Error handling and recovery
        analytics.js          # Metrics emission
      ui/
        QualitySelector.jsx   # Manual quality control
    public/
      index.html              # Video element and player script tags

A minimal build setup (no step-by-step commands) might rely on a bundler like Vite or Webpack to include hls.js, and you would deploy the static assets alongside the HLS manifests. On native platforms, you would wrap AVPlayer and ExoPlayer in a shared interface to expose quality selection and analytics events.

Free learning resources

  • HLS Specification (Apple): Apple’s official HLS authoring specification provides guidance on segment sizes, encryption, and live behaviors. See the Apple Developer documentation on HLS.
  • MPEG-DASH Standard (ISO/IEC 23009-1): The DASH standard is essential for understanding adaptation sets and segment timing.
  • hls.js repository: A practical reference for client-side HLS implementation, including error events and ABR controls. See the hls.js GitHub project.
  • dash.js reference: The reference player for DASH with examples of configuration and metrics.
  • WebRTC documentation (MDN): Client-side API overview and samples for peer connections and media constraints.
  • nginx-rtmp module and nginx HLS modules: Useful for understanding server-side packaging and caching behavior.
  • SRT (Secure Reliable Transport) overview: SRT is increasingly used for ingest in noisy or long-haul networks.
  • CMAF overview (media community articles): Helpful for understanding shared chunked packaging for HLS and DASH.

Summary: who should use what, and when

If you are building on-demand video or large-scale live events where latency above 5 seconds is acceptable, HLS and DASH are your most reliable tools. They offer predictable CDN behavior, mature players, and broad device coverage. If you need lower latency but can tolerate a few seconds, LL-HLS or CMAF chunked streaming are strong contenders and reduce operational duplication. If your product demands real-time interaction, WebRTC is the right choice, but you should budget time for signaling, NAT traversal, and infrastructure tuning.

In practice, many teams start with HLS and only reach for WebRTC when product requirements explicitly call for sub-second latency. The tools and ecosystem around HLS and DASH are well understood, and the performance gains from careful packaging, ABR design, and CDN configuration are substantial. Build observability early, be deliberate about segment sizes and cache rules, and keep the client experience resilient with solid error handling. The result is a streaming system that feels effortless to users, even when the underlying engineering is complex.