Event Sourcing vs. Traditional Persistence

Why choosing the right data model matters for modern, event-driven systems

[Diagram: an append-only stream of events flowing into a read model projection, with a snapshot database alongside for fast queries]

When teams start building new features, the first architectural choice that often sneaks up on them is how to persist state. In many projects, the default path is straightforward: model the current state in a database table, update rows in place, and occasionally add audit columns. That works well for simple CRUD applications, but as systems grow, teams begin asking uncomfortable questions: How do we know why a record changed? How do we rebuild state for a new use case? What happens when we want to time-travel to a specific moment and reproduce a decision? These questions push us toward Event Sourcing, a different way of thinking about data that records changes as a series of events rather than snapshots of current state.

In this article, I will walk you through the practical differences between Event Sourcing and traditional persistence (often called snapshot or CRUD-based persistence). I will explain where each approach fits, show real-world code examples, and share tradeoffs based on experience in production systems. If you are evaluating how to structure data for auditability, temporal queries, or resilient domain modeling, this guide will help you decide whether to embrace event streams or stick with familiar table rows.

Where Event Sourcing fits today

Event Sourcing is no longer an exotic pattern. It is used in fintech to meet compliance requirements, in logistics to track shipments through each status change, in retail to model inventory movements, and in collaborative tools to preserve editing history. The pattern has matured alongside the rise of event-driven architectures, CQRS (Command Query Responsibility Segregation), and streaming platforms such as Apache Kafka. You will also find it in systems that require deterministic replay, like simulation engines or fraud detection pipelines.

In practice, Event Sourcing is adopted by teams who need more than just a current state snapshot. They want a durable record of all changes that occurred in the domain. This record serves multiple purposes: audit trails, time-travel debugging, domain analytics, and data recovery. While traditional persistence remains the dominant choice for most CRUD applications, Event Sourcing shines where the history of changes carries business meaning and where new projections or read models are expected to be built over time.

Comparing the two at a high level:

  • Traditional persistence stores current state. To understand how the state arrived at its current form, you often rely on additional audit columns or separate logging mechanisms.
  • Event Sourcing stores state transitions. The current state is derived by replaying events, and you can derive multiple projections for different consumers by applying the same events in different ways.

The choice is not just technical; it influences how teams discuss the domain. Events tend to align with domain language (e.g., OrderPlaced, PaymentAuthorized), while tables align more with data structures. In many projects, that linguistic alignment helps developers and domain experts communicate better.

Core concepts and practical patterns

At its heart, Event Sourcing is simple: every change to the system is captured as an immutable event and appended to an event log. The event log is the single source of truth. The current state is a fold over the event stream. The same event stream can be projected to multiple read models for fast queries.
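
To make the fold concrete, here is a minimal sketch; apply is assumed to be any function that takes the current state and an event and returns the next state:

# Sketch: "current state is a fold over the event stream"
from functools import reduce

def current_state(apply, initial_state, events):
    # Left-fold the state-transition function over the ordered events
    return reduce(apply, events, initial_state)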

Traditional persistence in practice

In traditional systems, you store the current state directly. A row in an orders table might contain columns like id, customer_id, status, total, and updated_at. When an order is placed, you insert a row. When payment is authorized, you update the status and set updated_at. This works well for quick reads and simple writes, but history is secondary. If you need auditability, you might add a history table, trigger-based audit rows, or write logs to a separate system. This often leads to duplicated data and fragile mechanisms.

Below is a simple traditional approach using a relational database with an ORM. It models an order with basic lifecycle updates.

# Example: Traditional persistence with SQLAlchemy (Python)
# This stores current state; history is not preserved in the same model.

from sqlalchemy import create_engine, Column, Integer, String, DateTime, Numeric
from sqlalchemy.orm import declarative_base, Session
from datetime import datetime

Base = declarative_base()

class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    customer_id = Column(Integer, nullable=False)
    status = Column(String(50), nullable=False)  # e.g., "placed", "paid", "shipped"
    total = Column(Numeric(10, 2), nullable=False)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)

# Setup (in-memory SQLite for demonstration)
engine = create_engine("sqlite:///:memory:", echo=False)
Base.metadata.create_all(engine)

session = Session(engine)

# Place an order
order = Order(id=1, customer_id=101, status="placed", total=99.99)
session.add(order)
session.commit()

# Update status on payment
order.status = "paid"
session.commit()

# Read current state
current = session.query(Order).filter(Order.id == 1).first()
print("Current status:", current.status)  # Output: paid

Notice how each update overwrites the previous state. If you need to know when the order was placed, or what its status was before payment, you need separate logging. In many teams, that becomes an afterthought.

Event Sourcing in practice

In Event Sourcing, you append events like OrderPlaced, PaymentAuthorized, and OrderShipped. The current state is built by applying these events in order. The event store is append-only and immutable. You typically maintain a small projection (read model) for fast queries, but the events themselves are the canonical source.

Let’s build a small event-sourced model for the same domain. We will define event classes, an aggregate that applies events, and an event store. For simplicity, we will use an in-memory store, but the pattern maps directly to event stores like EventStoreDB or Kafka.

# Example: Basic Event Sourcing in Python
# Event classes, an aggregate, and an in-memory event store.

from dataclasses import dataclass
from typing import Dict, List
from datetime import datetime

@dataclass(frozen=True)
class Event:
    stream_id: str
    version: int
    timestamp: datetime

@dataclass(frozen=True)
class OrderPlaced(Event):
    customer_id: int
    total: float

@dataclass(frozen=True)
class PaymentAuthorized(Event):
    transaction_id: str

@dataclass(frozen=True)
class OrderShipped(Event):
    carrier: str
    tracking_number: str

class OrderAggregate:
    def __init__(self, stream_id: str):
        self.stream_id = stream_id
        self.version = 0
        self.customer_id = None
        self.total = None
        self.status = None  # Derived state
        self.payment_tx = None
        self.shipping = None

    def apply(self, event: Event):
        # Update state based on event type
        if isinstance(event, OrderPlaced):
            self.customer_id = event.customer_id
            self.total = event.total
            self.status = "placed"
        elif isinstance(event, PaymentAuthorized):
            self.payment_tx = event.transaction_id
            self.status = "paid"
        elif isinstance(event, OrderShipped):
            self.shipping = (event.carrier, event.tracking_number)
            self.status = "shipped"
        self.version = event.version

class InMemoryEventStore:
    def __init__(self):
        self.events: Dict[str, List[Event]] = {}

    def append(self, event: Event):
        stream = self.events.setdefault(event.stream_id, [])
        # Optimistic concurrency check: the event must carry the next
        # expected version (1 for an empty stream)
        expected = stream[-1].version + 1 if stream else 1
        if event.version != expected:
            raise ValueError(f"Version mismatch: expected {expected}, got {event.version}")
        stream.append(event)

    def load_stream(self, stream_id: str) -> List[Event]:
        return self.events.get(stream_id, [])

# Usage
store = InMemoryEventStore()
stream_id = "order-123"

# Place an order (append events)
e1 = OrderPlaced(stream_id=stream_id, version=1, timestamp=datetime.utcnow(), customer_id=101, total=99.99)
e2 = PaymentAuthorized(stream_id=stream_id, version=2, timestamp=datetime.utcnow(), transaction_id="tx-abc")
store.append(e1)
store.append(e2)

# Build current state by replaying events
order = OrderAggregate(stream_id)
for ev in store.load_stream(stream_id):
    order.apply(ev)

print("Derived status:", order.status)  # Output: paid
print("Total:", order.total)           # Output: 99.99

The important idea is that the events are never mutated. If you want to change how you compute the status, you can change the apply logic and replay events into a new projection. In a real system, you would persist events to a durable store and maintain separate read models for queries.

CQRS and projections

Event Sourcing pairs well with CQRS. Commands (e.g., PlaceOrder) validate intent and produce events. Queries read from projections, which are built from events. This separation allows you to optimize writes and reads independently.

Consider a simple projection that maintains a view of orders by customer for fast queries:

# Example: Building a read model projection from events
from collections import defaultdict

class CustomerOrdersView:
    def __init__(self):
        self.customer_orders = defaultdict(list)

    def handle(self, event: Event):
        if isinstance(event, OrderPlaced):
            self.customer_orders[event.customer_id].append({
                "stream_id": event.stream_id,
                "total": event.total,
                "placed_at": event.timestamp.isoformat()
            })

# Apply events to the projection
view = CustomerOrdersView()
view.handle(e1)
view.handle(e2)

print("Orders for customer 101:", view.customer_orders[101])

In production, you might build projections in multiple ways: application-level in-memory handlers that persist to a relational database, stream processors like Kafka Streams, or specialized projection libraries. For example, EventStoreDB has built-in subscriptions that can feed projection logic, and library ecosystems like Axon Framework (Java) or NEventStore (C#) provide scaffolding for aggregates and projections.
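
As a rough illustration of the application-level approach, the sketch below polls the in-memory store from earlier and applies only events it has not yet seen. The checkpoint bookkeeping is illustrative; a real system would read from a durable subscription instead of polling:

# Sketch: a checkpoint-based projection runner over the in-memory store
class ProjectionRunner:
    def __init__(self, store, view):
        self.store = store
        self.view = view
        self.checkpoints = {}  # stream_id -> count of events already applied

    def catch_up(self, stream_id: str):
        events = self.store.load_stream(stream_id)
        start = self.checkpoints.get(stream_id, 0)
        for event in events[start:]:
            self.view.handle(event)  # apply only the new events
        self.checkpoints[stream_id] = len(events)

runner = ProjectionRunner(store, CustomerOrdersView())
runner.catch_up("order-123")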

Schema evolution and versioning

One of the hardest parts of Event Sourcing is evolving the event schema over time. Since events are immutable, you cannot change historical events. Instead, you apply migration strategies:

  • Upcasting: read old events into a new structure by transforming them on the fly during replay.
  • Event versioning: include a schema version in each event and handle multiple versions in your apply logic.
  • Shadow fields: add new optional fields to events gradually and default them in the apply function.

For example, imagine we need to add a discount field to OrderPlaced:

@dataclass(frozen=True)
class OrderPlacedV2(Event):
    customer_id: int
    total: float
    discount: float  # New field

def apply(order: OrderAggregate, event: Event):
    if isinstance(event, OrderPlaced):
        order.customer_id = event.customer_id
        order.total = event.total
        order.status = "placed"
    elif isinstance(event, OrderPlacedV2):
        order.customer_id = event.customer_id
        order.total = event.total - event.discount
        order.status = "placed"
    # ... other events

During replay, you would route both OrderPlaced and OrderPlacedV2 to the same apply logic. If you have many old events, you might upcast them once at load time to normalize the structure.
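
A minimal upcaster for this scenario might look like the following; defaulting discount to 0.0 is an assumption about what makes sense for historical orders:

# Sketch: upcast v1 OrderPlaced events to OrderPlacedV2 during load
def upcast(event: Event) -> Event:
    if isinstance(event, OrderPlaced):
        return OrderPlacedV2(
            stream_id=event.stream_id,
            version=event.version,
            timestamp=event.timestamp,
            customer_id=event.customer_id,
            total=event.total,
            discount=0.0,  # assumed default for historical events
        )
    return event

def load_upcasted(store, stream_id: str):
    # Normalize the whole stream to the latest schema before replay
    return [upcast(ev) for ev in store.load_stream(stream_id)]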

Idempotency and deduplication

Commands may be retried, so processing must be idempotent. Include an operation ID in commands and track processed IDs in the event store or projection to avoid appending duplicate events. This is particularly important when integrating with external systems, such as payment gateways.

Evaluating tradeoffs: strengths and weaknesses

Strengths of Event Sourcing

  • Auditability: You have a complete, immutable history of changes. This is invaluable for compliance and debugging.
  • Temporal queries: You can reconstruct the state at any point in time. This is useful for backtesting, reporting, and resolving disputes (see the sketch after this list).
  • Multiple projections: Different consumers can build tailored views from the same event stream without affecting the write model.
  • Domain alignment: Events often match business language, improving communication with domain experts.
  • Replay and simulation: You can replay events to test new logic or to reproduce production issues.
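
For the temporal-query point, a sketch of point-in-time reconstruction, reusing the aggregate and store from earlier, can be as simple as replaying events up to a cutoff timestamp:

# Sketch: reconstruct state as of a given moment
def state_as_of(store, stream_id: str, as_of: datetime) -> OrderAggregate:
    order = OrderAggregate(stream_id)
    for ev in store.load_stream(stream_id):
        if ev.timestamp > as_of:
            break  # ignore everything recorded after the cutoff
        order.apply(ev)
    return order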

Weaknesses of Event Sourcing

  • Complexity: You must manage event schemas, projections, and eventual consistency. Debugging can be harder when writes and reads are separate.
  • Storage growth: Event logs can grow large. You need strategies for archiving and snapshotting aggregates to avoid long replays (a snapshot sketch follows this list).
  • Learning curve: Teams new to Event Sourcing often struggle with concepts like aggregates, idempotency, and eventual consistency.
  • Tooling: While mature, the ecosystem is less standardized than traditional relational databases. You may need to build custom tooling for projections or migrations.
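
To address the storage-growth point, a common mitigation is aggregate snapshotting: persist the state every N events and replay only the tail. The sketch below reuses the in-memory structures from earlier; SNAPSHOT_EVERY and the snapshots dict are illustrative stand-ins for durable storage:

# Sketch: snapshotting to bound replay length
SNAPSHOT_EVERY = 100
snapshots = {}  # stream_id -> (version, state dict)

def save_snapshot(order: OrderAggregate):
    if order.version % SNAPSHOT_EVERY == 0:
        snapshots[order.stream_id] = (order.version, vars(order).copy())

def load_with_snapshot(store, stream_id: str) -> OrderAggregate:
    order = OrderAggregate(stream_id)
    version, state = snapshots.get(stream_id, (0, None))
    if state is not None:
        order.__dict__.update(state)  # restore snapshot fields
    for ev in store.load_stream(stream_id)[version:]:
        order.apply(ev)  # replay only the post-snapshot tail
    return order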

Strengths of Traditional Persistence

  • Simplicity: Modeling current state in tables is intuitive and maps directly to UI needs.
  • Mature tooling: ORMs, migration systems, and analytics tools are widely available and well understood.
  • Performance: Direct queries against indexed tables are often faster for read-heavy workloads without the overhead of building projections.

Weaknesses of Traditional Persistence

  • Limited history: Audit trails require extra mechanisms, which are often bolted on and can be inconsistent.
  • Rigid schema changes: Evolving schemas can be costly and require data migrations that touch large tables.
  • Harder to time-travel: Reconstructing historical state is possible with snapshots and versioned rows, but it’s more complex and error-prone than with event streams.

When to choose Event Sourcing

  • You need strong auditability and traceability (finance, healthcare, compliance-heavy industries).
  • Your domain is event-driven by nature (logistics, collaborative editing, workflow engines).
  • You anticipate multiple read models or need to reproject data for new analytics.
  • You want to support deterministic replay for testing or simulation.

When to choose traditional persistence

  • Your application is primarily CRUD with simple queries and low audit requirements.
  • You have a small team and need to move quickly with minimal architectural overhead.
  • Your data model is stable, and you do not expect frequent new projections.
  • You are building internal tools where detailed history is not a primary concern.

Real-world code context: order fulfillment with both patterns

Below, I will show a more realistic scenario that includes command handling, validation, and error handling. The example is in Python for readability, but the patterns are language-agnostic.

Event Sourcing example with command handling

This example illustrates how commands produce events, how optimistic concurrency is enforced, and how to handle errors. It also shows a simple projection to a relational read model for queries.

# Example: Command handling and projection in an Event Sourced system

from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class Command:
    order_id: str

@dataclass(frozen=True)
class PlaceOrder(Command):
    customer_id: int
    total: float

@dataclass(frozen=True)
class AuthorizePayment(Command):
    transaction_id: str

@dataclass(frozen=True)
class ShipOrder(Command):
    carrier: str
    tracking_number: str

class OrderFSM:
    """Finite state machine for order lifecycle."""

    VALID_TRANSITIONS = {
        None: {"OrderPlaced"},
        "placed": {"PaymentAuthorized"},
        "paid": {"OrderShipped"},
    }

    def __init__(self):
        self.state = None

    def can_transition(self, event_name: str) -> bool:
        allowed = self.VALID_TRANSITIONS.get(self.state, set())
        return event_name in allowed

    def transition(self, event_name: str):
        if not self.can_transition(event_name):
            raise ValueError(f"Invalid transition: {self.state} -> {event_name}")
        # Update state based on event name
        if event_name == "OrderPlaced":
            self.state = "placed"
        elif event_name == "PaymentAuthorized":
            self.state = "paid"
        elif event_name == "OrderShipped":
            self.state = "shipped"

class OrderProjector:
    """Projection to a simple relational read model (SQLite in-memory)."""

    def __init__(self, engine):
        self.engine = engine
        self._ensure_schema()

    def _ensure_schema(self):
        from sqlalchemy import text
        with self.engine.connect() as conn:
            conn.execute(text("""
                CREATE TABLE IF NOT EXISTS orders_read (
                    order_id TEXT PRIMARY KEY,
                    customer_id INTEGER,
                    status TEXT,
                    total REAL,
                    updated_at TEXT
                )
            """))
            conn.commit()

    def handle(self, event: Event):
        from sqlalchemy import text
        if isinstance(event, OrderPlaced):
            with self.engine.connect() as conn:
                conn.execute(text("""
                    INSERT INTO orders_read (order_id, customer_id, status, total, updated_at)
                    VALUES (:order_id, :customer_id, :status, :total, :updated_at)
                """), {
                    "order_id": event.stream_id,
                    "customer_id": event.customer_id,
                    "status": "placed",
                    "total": event.total,
                    "updated_at": datetime.utcnow().isoformat()
                })
                conn.commit()
        elif isinstance(event, PaymentAuthorized):
            with self.engine.connect() as conn:
                conn.execute(text("""
                    UPDATE orders_read
                    SET status = :status, updated_at = :updated_at
                    WHERE order_id = :order_id
                """), {
                    "order_id": event.stream_id,
                    "status": "paid",
                    "updated_at": datetime.utcnow().isoformat()
                })
                conn.commit()
        elif isinstance(event, OrderShipped):
            with self.engine.connect() as conn:
                conn.execute(text("""
                    UPDATE orders_read
                    SET status = :status, updated_at = :updated_at
                    WHERE order_id = :order_id
                """), {
                    "order_id": event.stream_id,
                    "status": "shipped",
                    "updated_at": datetime.utcnow().isoformat()
                })
                conn.commit()

class OrderCommandHandler:
    def __init__(self, event_store, projector):
        self.event_store = event_store
        self.projector = projector

    def handle(self, cmd: Command):
        stream_id = cmd.order_id
        # Build a fresh FSM per command; a shared FSM would replay events
        # on top of an already-advanced state and reject valid commands
        fsm = OrderFSM()
        # Load existing events to validate state transitions
        events = self.event_store.load_stream(stream_id)
        for ev in events:
            fsm.transition(ev.__class__.__name__)

        if isinstance(cmd, PlaceOrder):
            if fsm.state is not None:
                raise ValueError("Order already exists")
            event = OrderPlaced(
                stream_id=stream_id,
                version=len(events) + 1,
                timestamp=datetime.utcnow(),
                customer_id=cmd.customer_id,
                total=cmd.total
            )
        elif isinstance(cmd, AuthorizePayment):
            if fsm.state != "placed":
                raise ValueError("Order must be placed before payment authorization")
            event = PaymentAuthorized(
                stream_id=stream_id,
                version=len(events) + 1,
                timestamp=datetime.utcnow(),
                transaction_id=cmd.transaction_id
            )
        elif isinstance(cmd, ShipOrder):
            if fsm.state != "paid":
                raise ValueError("Order must be paid before shipping")
            event = OrderShipped(
                stream_id=stream_id,
                version=len(events) + 1,
                timestamp=datetime.utcnow(),
                carrier=cmd.carrier,
                tracking_number=cmd.tracking_number
            )
        else:
            raise ValueError("Unknown command")

        # Append event and project
        self.event_store.append(event)
        self.projector.handle(event)
        return event

# Setup
from sqlalchemy import create_engine
engine = create_engine("sqlite:///:memory:", echo=False)
store = InMemoryEventStore()
projector = OrderProjector(engine)
handler = OrderCommandHandler(store, projector)

# Execute commands
handler.handle(PlaceOrder(order_id="order-123", customer_id=101, total=149.50))
handler.handle(AuthorizePayment(order_id="order-123", transaction_id="tx-777"))
handler.handle(ShipOrder(order_id="order-123", carrier="UPS", tracking_number="1Z999AA"))

# Query the read model
from sqlalchemy import text
with engine.connect() as conn:
    result = conn.execute(text("SELECT order_id, status, total FROM orders_read WHERE order_id = :oid"), {"oid": "order-123"}).fetchone()
    print("Read model:", dict(result._mapping))  # Output: {'order_id': 'order-123', 'status': 'shipped', 'total': 149.5}

This example shows:

  • Command validation based on current state (FSM).
  • Optimistic concurrency by tracking versions.
  • A projection to a relational table for fast queries.
  • Clear separation between write model (events) and read model (table).

Traditional persistence example with the same domain

For comparison, here’s how the same scenario might look using only current-state persistence. Note how audit and history become secondary concerns.

# Example: Traditional CRUD-based order service

from sqlalchemy import create_engine, Column, Integer, String, Numeric, DateTime, Text
from sqlalchemy.orm import declarative_base, Session
from datetime import datetime

Base = declarative_base()

class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    order_id = Column(String(50), unique=True, nullable=False)
    customer_id = Column(Integer, nullable=False)
    status = Column(String(50), nullable=False)
    total = Column(Numeric(10, 2), nullable=False)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)

class OrderAudit(Base):
    __tablename__ = "order_audit"
    id = Column(Integer, primary_key=True)
    order_id = Column(String(50), nullable=False)
    action = Column(String(50), nullable=False)
    details = Column(Text)
    created_at = Column(DateTime, default=datetime.utcnow)

engine = create_engine("sqlite:///:memory:", echo=False)
Base.metadata.create_all(engine)

def place_order(session, order_id, customer_id, total):
    order = Order(order_id=order_id, customer_id=customer_id, status="placed", total=total)
    session.add(order)
    session.commit()
    audit = OrderAudit(order_id=order_id, action="placed", details=f"total={total}")
    session.add(audit)
    session.commit()

def authorize_payment(session, order_id, transaction_id):
    order = session.query(Order).filter(Order.order_id == order_id).first()
    if not order or order.status != "placed":
        raise ValueError("Order must be placed before payment authorization")
    order.status = "paid"
    session.add(OrderAudit(order_id=order_id, action="paid", details=f"tx={transaction_id}"))
    session.commit()

def ship_order(session, order_id, carrier, tracking_number):
    order = session.query(Order).filter(Order.order_id == order_id).first()
    if not order or order.status != "paid":
        raise ValueError("Order must be paid before shipping")
    order.status = "shipped"
    session.add(OrderAudit(order_id=order_id, action="shipped", details=f"carrier={carrier}, tracking={tracking_number}"))
    session.commit()

# Usage
session = Session(engine)
place_order(session, "order-456", 202, 299.99)
authorize_payment(session, "order-456", "tx-xyz")
ship_order(session, "order-456", "FedEx", "FX123456")

# Query current state
order = session.query(Order).filter(Order.order_id == "order-456").first()
print("Traditional read:", {"order_id": order.order_id, "status": order.status, "total": float(order.total)})

While this works, notice how we added a separate audit table to capture history. In practice, keeping audit records consistent with state changes can be tricky, especially under concurrent loads and failures. Event Sourcing avoids this by design.

Personal experience: lessons from the trenches

I have used both patterns in production systems. In one project, we migrated from a traditional audit log to Event Sourcing for a payments service. The initial learning curve was real: the team had to rethink how to model commands and events, and we underestimated the effort needed to manage schema evolution. Our first attempt had too many fine-grained events, which made replay slow and projections noisy. Consolidating events into higher-level domain events (e.g., PaymentAuthorized instead of multiple low-level status changes) simplified the model and improved clarity for non-technical stakeholders.

Another lesson was around projections. We built a projection to a relational database for reporting and dashboards. In early stages, we struggled with eventual consistency: the UI sometimes showed stale data because the projection lagged behind the event stream. Adding version stamps to read models and using a small “write-after-read” cache solved most issues, but it required deliberate design. In contrast, traditional persistence made UI updates immediate, which felt simpler.

Event Sourcing proved invaluable during a production incident. We could replay the event stream for a specific account and reproduce the exact sequence that led to a faulty state. This time-travel capability dramatically shortened the mean time to resolution. Traditional audit logs would have required piecing together scattered rows and external logs, which is slower and less reliable.

One more pragmatic observation: Event Sourcing adds operational complexity. You need monitoring for event store throughput, projection lag, and error handling in stream processors. Traditional persistence has fewer moving parts and is easier to reason about for small teams. So while Event Sourcing offers powerful capabilities, it also introduces new responsibilities.

Getting started: workflow and mental models

If you are considering Event Sourcing, start small and focus on mental models rather than tooling. The core loop is: command → validation → event → append → project → query.

Folder structure and workflow

Here’s a typical project layout for an event-sourced service. The focus is on clear separation between domain, application, infrastructure, and projections.

orders-service/
├── domain/
│   ├── events.py          # Event definitions (immutable)
│   ├── aggregates.py      # Aggregates and state transitions
│   └── errors.py          # Domain-specific exceptions
├── application/
│   ├── commands.py        # Command definitions
│   ├── handlers.py        # Command handlers
│   └── services.py        # Domain services
├── infrastructure/
│   ├── event_store.py     # Persistence layer for events
│   ├── projections.py     # Projection builders
│   └── messaging.py       # Optional: event bus or stream client
├── projections/
│   ├── read_models.py     # Read models for queries
│   └── migrations/        # Schema changes for read models
├── tests/
│   ├── test_events.py     # Unit tests for aggregates
│   └── test_projections.py
├── docker-compose.yml     # Local dependencies (event store, db)
└── README.md

Workflow:

  • Define events that capture business intent with clear names and minimal fields.
  • Build aggregates that apply events and enforce invariants. Avoid storing transient state in events.
  • Implement command handlers that validate against current state (by replaying events) and produce new events.
  • Create projections tailored to specific query needs. Keep them denormalized for performance.
  • Test by replaying events. Write deterministic tests that assert final state and side effects (see the sketch below).
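
A deterministic replay test over the aggregate from earlier might look like this (plain asserts; the same shape works under pytest):

# Sketch: deterministic replay test for the aggregate
def test_order_reaches_paid_state():
    now = datetime.utcnow()
    history = [
        OrderPlaced(stream_id="order-1", version=1, timestamp=now,
                    customer_id=7, total=50.0),
        PaymentAuthorized(stream_id="order-1", version=2, timestamp=now,
                          transaction_id="tx-1"),
    ]
    order = OrderAggregate("order-1")
    for ev in history:
        order.apply(ev)
    # Replaying the same fixed events always yields the same state
    assert order.status == "paid"
    assert order.version == 2

test_order_reaches_paid_state()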

Configuration and tooling

For a real project, you will likely rely on an event store and a relational database for projections. A local setup can include EventStoreDB and Postgres via Docker. The event store is responsible for durability and subscriptions, while the relational DB supports fast, indexed queries.

# docker-compose.yml (simplified)
version: "3.8"
services:
  eventstoredb:
    image: eventstore/eventstore:latest
    ports:
      - "2113:2113"  # Admin UI
      - "1113:1113"  # TCP
    environment:
      - EVENTSTORE_RUN_PROJECTIONS=ALL
      - EVENTSTORE_INSECURE=true

  postgres:
    image: postgres:15
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=app
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=orders_read

In Python, you might use a library like eventstore-client or a simple HTTP client to write and read events. For projections, use SQLAlchemy or asyncpg depending on concurrency needs. If you are in the .NET ecosystem, NEventStore provides a mature abstraction over event persistence. In Java, Axon Framework offers a comprehensive solution for aggregates, sagas, and projections.

Handling failures and idempotency

Idempotency is essential. Pass an idempotency key (or operation ID) in commands and track processed keys. If a command is retried, ignore duplicates. This prevents double payment authorization or duplicate order placements.

# Example: Idempotency check wrapping the command handler from earlier.
# A plain dict stands in for a durable store such as Redis or a database table.

class IdempotentHandler:
    def __init__(self, inner_handler, idempotency_store=None):
        self.inner_handler = inner_handler
        self.idempotency_store = idempotency_store if idempotency_store is not None else {}

    def handle(self, command: Command, op_id: str):
        # If this operation ID was already processed, return the prior result
        if op_id in self.idempotency_store:
            return self.idempotency_store[op_id]

        # Process the command exactly once
        event = self.inner_handler.handle(command)

        # Record the result under the idempotency key
        self.idempotency_store[op_id] = event
        return event

In high-throughput systems, consider a message broker like Kafka to distribute events to multiple projections. Ensure at-least-once delivery and build handlers that can cope with duplicates.

What makes Event Sourcing stand out

  • Reconstructability: You can rebuild any projection from scratch by replaying events. This is a superpower for analytics and migrations.
  • Time-travel debugging: Bugs that depend on state transitions become easier to reproduce and fix.
  • Domain clarity: Events encourage modeling business processes rather than database schemas.
  • Flexibility: New read models can be added later without touching the write path.

Developers often report improved maintainability when the domain is complex. However, the developer experience depends on good tooling for event storage, schema evolution, and projection management. Without this, teams may feel slowed down.

Free learning resources

  • EventStoreDB documentation: https://developers.eventstore.com/ — Practical guidance on event storage, subscriptions, and projections.
  • Martin Fowler’s article on Event Sourcing: https://martinfowler.com/eaaDev/EventSourcing.html — A clear conceptual overview that holds up well.
  • Axon Framework reference: https://docs.axoniq.io/ — Comprehensive patterns for aggregates, sagas, and projections in Java.
  • NEventStore documentation: https://neventstore.org/ — .NET-focused guidance for event persistence and storage engines.
  • Greg Young’s classic talks on Event Sourcing (available on YouTube) — Although dated, they remain valuable for understanding the core mindset shift.

Summary: who should use Event Sourcing and who might skip it

Event Sourcing is a strong choice when you need auditability, temporal queries, and the ability to create new read models over time. It aligns well with domains where changes are meaningful business events, and it excels in systems that require deterministic replay. If your team is building compliance-heavy features or complex workflows with multiple stakeholders, Event Sourcing can be transformative.

Traditional persistence remains a better fit for straightforward CRUD applications, small teams with tight deadlines, or systems where the current state is sufficient and history is secondary. The simplicity and tooling maturity of relational databases speed up development and lower operational overhead.

A balanced takeaway: adopt Event Sourcing when the cost of not knowing “why” and “when” is high, and when you can invest in the additional complexity. Otherwise, stick with traditional persistence and add audit logs selectively. In many organizations, a hybrid approach works well, using Event Sourcing for core domain workflows and traditional persistence for auxiliary data. Whatever you choose, keep the domain language at the center, and design your data model to serve the questions your business needs to answer.