Mobile App Architecture for Offline-First Apps
Users don’t care about your backend when the elevator has no signal.

Offline-first is not a buzzword anymore. It’s an expectation. People open apps on subways, in airplanes, in rural areas, and in crowded venues where networks collapse. If your app shows a spinner and then fails, they uninstall. The architecture of an offline-first mobile app is not just about caching; it is about how you model data, how you handle synchronization, and how you keep the UI responsive while the network is absent or intermittent. In this post, I’ll walk through the decisions that matter, patterns that work in production, and the tradeoffs that bite you later if you ignore them. We will focus on cross-platform stacks because that is where most teams ship today, but the principles apply to native apps too. Expect practical code you can adapt, a clear mental model for sync, and an honest look at where offline-first is and isn’t the right choice.
Why offline-first matters more than ever
Real-world connectivity is spotty. Even with 5G expanding, switching between Wi-Fi and cellular, entering elevators, and traveling through areas with poor coverage is common. Enterprise apps often operate in warehouses, plants, or field sites with blocked signals. Travel and logistics apps need to function on the road. When you design for offline-first, you move from treating the network as a dependency to treating it as an optimization. The app writes to local storage immediately, allowing the user to keep working. Sync runs in the background to reconcile with the server when possible.
This approach reduces latency, increases perceived performance, and improves reliability. It also introduces complexity: conflict resolution, schema migrations, and more state to manage. But this complexity is manageable with the right architecture. Offline-first has matured. Tools like WatermelonDB, RxDB, Realm, and libraries like MongoDB Realm Sync or Firebase’s offline capabilities have made it easier to adopt. In native Android, Room with WorkManager is a strong combination. On iOS, Core Data with background tasks and Swift Concurrency can get you there. The ecosystem has converged around similar patterns: local-first data stores, evented writes, and asynchronous sync engines.
Where offline-first fits today
Offline-first apps are common in field service, logistics, retail, healthcare, travel, and collaboration tools. You will find it in apps like Notion, which keeps a local copy and syncs changes, and in ride-hailing apps that cache maps and routes. In enterprise settings, apps often run on managed devices with unreliable networks. Developers choose offline-first when user experience and data integrity are critical, even if the network is unstable.
Compared to a pure online-first approach, offline-first shifts the write path to the device, and moves sync to a background process. An online-first app typically sends a request, waits for a server response, and then updates the UI. That is simpler for basic CRUD apps, but it fails in weak networks. In contrast, offline-first apps write to a local database immediately, then push changes via a sync engine that handles retries, batching, and conflict resolution.
At a high level, offline-first architectures use:
- A local database on the device as the source of truth for the UI.
- A synchronization layer that mirrors server state into local tables and pushes local changes out.
- A conflict resolution strategy that defines what wins when edits collide.
- A network-aware job queue that retries and backs off gracefully.
For mobile teams, this implies a mental shift from request-response to event-driven flows. The UI observes local data, the user writes locally, and the sync engine deals with the server. This pattern is similar to how distributed systems work, because your app is effectively a node in a distributed system that is regularly partitioned from the server.
Core concepts and practical patterns
The source of truth problem
In offline-first, the local database is the immediate source of truth for the UI. The server is the eventual source of truth for the organization. This matters because the user should not wait on the network to see and edit data. But you must still reconcile changes. A common design is to keep server IDs, timestamps, and version fields to track changes. Local rows have a sync state: synced, dirty, conflicting, or deleted.
A simple approach is to maintain two tables or sets of tables:
- Local mirror of server data.
- Local pending changes queue.
For example, you might store todos as:
- todos table: server data with id, title, completed, updated_at, version.
- todos_pending table: unsynced changes, with operation type (create, update, delete), and client-side timestamps.
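To make that concrete, here is a rough sketch of those row shapes in TypeScript. The field names are illustrative, not a prescribed schema; the important part is the sync metadata riding alongside the user data.
// Hedged sketch: approximate row shapes for an offline-first todos store.
type SyncState = 'synced' | 'dirty' | 'conflicting' | 'deleted';

interface TodoRow {
  id: string;          // server id once assigned, temporary client id before that
  title: string;
  completed: boolean;
  updatedAt: number;   // epoch millis of the last local edit
  version: number;     // bumped on every change, used to detect conflicts
  syncState: SyncState;
}

interface TodoPendingChange {
  todoId: string;                           // references TodoRow.id
  operation: 'create' | 'update' | 'delete';
  payload: Record<string, unknown>;         // the fields being written
  clientTimestamp: number;                  // when the user made the edit
}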
Conflict resolution strategies
Conflicts occur when the same record is edited on the device and the server. Strategies include:
- Last write wins: Use timestamps to pick the newest change. Simple but can lose data.
- Server wins: Always accept server changes. Safe for reference data, frustrating for user edits.
- Merge: Combine changes when possible. For example, merging fields from different edits. This requires field-level granularity and careful logic.
- User prompt: Present conflicts to the user and let them choose. Good for collaboration, complex to implement.
In practice, most apps use a hybrid: last write wins for simple fields, user prompt for high-stakes edits. For example, a field service app might use last write wins for “status” updates but ask the user when there are overlapping edits to notes.
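Here is a minimal sketch of that hybrid idea, assuming you track a per-field timestamp plus the timestamp captured at the last successful sync (a three-way comparison). The types and names are illustrative, not a specific library API.
// Hedged sketch: per-field last-write-wins with escalation for high-stakes fields.
interface FieldVersion<T> {
  value: T;
  updatedAt: number; // epoch millis of the last edit to this field
}

function mergeField<T>(
  base: FieldVersion<T>,   // value as of the last successful sync
  local: FieldVersion<T>,  // current value on the device
  remote: FieldVersion<T>, // current value on the server
  escalate: boolean,       // true for high-stakes fields like notes
): { value: FieldVersion<T>; conflict: boolean } {
  const localChanged = local.updatedAt > base.updatedAt;
  const remoteChanged = remote.updatedAt > base.updatedAt;

  if (localChanged && remoteChanged && escalate) {
    // Both sides edited since the last sync: surface a conflict instead of guessing
    return { value: local, conflict: true };
  }
  // Otherwise, last write wins
  return {
    value: local.updatedAt >= remote.updatedAt ? local : remote,
    conflict: false,
  };
}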
Sync engine architecture
Your sync engine is the glue. It typically includes:
- A change tracker: Watch local writes and mark records dirty.
- A network queue: Batch pending changes and push to the server with retry and backoff.
- A pull process: Fetch server changes since last sync token, apply to local store.
- Conflict handler: Detect conflicts, resolve or escalate.
In a cross-platform app using React Native, I often see a pattern like this:
- Use a local database like WatermelonDB or SQLite via react-native-sqlite-storage.
- Maintain a sync queue table for operations.
- Run sync using a background task or a foreground service triggered by network changes.
Network awareness and backoff
Blind retries kill batteries and frustrate users. Use exponential backoff and jitter. On mobile, batch operations to reduce radio wakeups. Android’s WorkManager and iOS background tasks can help, but you still need to control frequency. When the network returns, sync aggressively for critical workflows and lazily for non-urgent data.
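A minimal backoff helper might look like the sketch below; syncOnce stands in for whatever function actually pushes and pulls your data, and the delay caps are placeholders you would tune.
// Hedged sketch: exponential backoff with full jitter around a sync function.
async function runSyncWithBackoff(
  syncOnce: () => Promise<void>,
  maxAttempts = 5,
  baseDelayMs = 1000,
  maxDelayMs = 60_000,
): Promise<void> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      await syncOnce();
      return; // success, stop retrying
    } catch (err) {
      if (attempt === maxAttempts - 1) throw err; // give up and surface the error
      const cap = Math.min(maxDelayMs, baseDelayMs * 2 ** attempt);
      const delay = Math.random() * cap; // full jitter avoids synchronized retries
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}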
Schema migrations and data integrity
Offline-first means long-lived local data. Your schema will evolve. You must ship migrations alongside server changes. Consider:
- Versioned schemas in the client and server.
- Backfill logic when fields change.
- Safe rollback strategies.
For example, if you rename a field, plan to:
- Support both old and new fields during migration.
- Migrate local data at app startup.
- Update server API to accept both fields temporarily.
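Client-side, the "support both fields" step can be as small as a normalization function at the API boundary. The dueDate/deadline names below are hypothetical; the point is that the app tolerates both spellings during the migration window.
// Hedged sketch: tolerate both old and new field names while clients migrate.
type ServerTodo = { id: string; title: string; dueDate?: number; deadline?: number };

function normalizeTodo(raw: ServerTodo) {
  return {
    id: raw.id,
    title: raw.title,
    // Prefer the new field, fall back to the old one during the migration window
    dueDate: raw.dueDate ?? raw.deadline ?? null,
  };
}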
Real-world code examples
Let’s look at a practical offline-first todo app using React Native and WatermelonDB. This setup is intentionally minimal but reflects production patterns. We will define a schema, implement a local-first repository, and sketch a sync engine that handles dirty flags and basic conflict resolution using last write wins.
Project structure
src/
  components/
    TodoList.tsx
    TodoItem.tsx
  db/
    index.ts
    schema.ts
    migrations.ts
    models/
      Todo.ts
      TodoPendingOp.ts
    repository.ts
    sync.ts
  services/
    network.ts
    queue.ts
  types/
    todo.ts
Install dependencies
yarn add @nozbe/watermelondb
yarn add axios
WatermelonDB ships its SQLite adapter and its decorators with the main package, so this example needs no separate SQLite dependency. You do need to enable @babel/plugin-proposal-decorators in your Babel config so the decorator syntax compiles.
Define the database schema and migrations
We will keep a simple todos table with a sync state and version. The pending table stores unsynced operations.
// src/db/schema.ts
import { appSchema, tableSchema } from '@nozbe/watermelondb';

export const appDbSchema = appSchema({
  version: 1,
  tables: [
    tableSchema({
      name: 'todos',
      columns: [
        { name: 'server_id', type: 'string', isIndexed: true },
        { name: 'title', type: 'string' },
        { name: 'completed', type: 'boolean' },
        { name: 'updated_at', type: 'number' },
        { name: 'version', type: 'number' },
        { name: 'dirty', type: 'boolean' },
        { name: 'synced_at', type: 'number', isOptional: true },
        { name: 'deleted', type: 'boolean' },
      ],
    }),
    tableSchema({
      name: 'todo_pending_ops',
      columns: [
        { name: 'todo_id', type: 'string', isIndexed: true },
        { name: 'operation', type: 'string' }, // 'create' | 'update' | 'delete'
        { name: 'payload', type: 'string' }, // JSON string
        { name: 'created_at', type: 'number' },
      ],
    }),
  ],
});
// src/db/migrations.ts
import { schemaMigrations } from '@nozbe/watermelondb/Schema/migrations';

export const migrations = schemaMigrations({
  migrations: [
    // Future: add steps here as the schema evolves, e.g. a step to version 2
    // that adds a 'priority' column via addColumns().
  ],
});
Database adapter and layer setup
This example focuses on structure; in production you would wrap the adapter creation in a platform-specific module and add logging.
// src/db/index.ts
import { Database } from '@nozbe/watermelondb';
import SQLiteAdapter from '@nozbe/watermelondb/adapters/sqlite';
import { appDbSchema } from './schema';
import { migrations } from './migrations';
import Todo from './models/Todo'; // model defined next
import TodoPendingOp from './models/TodoPendingOp';

const adapter = new SQLiteAdapter({
  dbName: 'offline_todo_db',
  schema: appDbSchema,
  migrations, // optional, needed when the schema evolves
});

export const database = new Database({
  adapter,
  modelClasses: [Todo, TodoPendingOp],
});
Model definitions
We use decorators for WatermelonDB models. These models encapsulate field access and provide convenience methods.
// src/db/models/Todo.ts
import { Model } from '@nozbe/watermelondb';
import { field } from '@nozbe/watermelondb/decorators';

export default class Todo extends Model {
  static table = 'todos';

  @field('server_id') serverId!: string;
  @field('title') title!: string;
  @field('completed') completed!: boolean;
  @field('updated_at') updatedAt!: number;
  @field('version') version!: number;
  @field('dirty') dirty!: boolean;
  @field('deleted') deleted!: boolean;
  @field('synced_at') syncedAt?: number;

  // Update local todo and mark dirty (call inside database.write)
  async updateLocal(changes: Partial<Pick<Todo, 'title' | 'completed'>>) {
    await this.update(record => {
      if (typeof changes.title === 'string') record.title = changes.title;
      if (typeof changes.completed === 'boolean') record.completed = changes.completed;
      record.updatedAt = Date.now();
      record.dirty = true;
      record.version = record.version + 1;
    });
  }

  // Soft delete and mark dirty (call inside database.write)
  async softDelete() {
    await this.update(record => {
      record.deleted = true;
      record.dirty = true;
      record.updatedAt = Date.now();
      record.version = record.version + 1;
    });
  }
}
// src/db/models/TodoPendingOp.ts
import { Model } from '@nozbe/watermelondb';
import { field } from '@nozbe/watermelondb/decorators';

export default class TodoPendingOp extends Model {
  static table = 'todo_pending_ops';

  @field('todo_id') todoId!: string;
  @field('operation') operation!: 'create' | 'update' | 'delete';
  @field('payload') payload!: string; // JSON string
  @field('created_at') createdAt!: number;
}
Repository layer for local writes
The repository encapsulates the local-first API. It writes to the database and records pending operations.
// src/db/repository.ts
import { Q } from '@nozbe/watermelondb';
import { database } from './index';
import Todo from './models/Todo';
import TodoPendingOp from './models/TodoPendingOp';

export class TodoRepository {
  private todos = database.get<Todo>('todos');
  private pending = database.get<TodoPendingOp>('todo_pending_ops');

  // Observe todos for UI rendering (hide soft-deleted rows)
  observeTodos() {
    return this.todos.query(Q.where('deleted', false)).observe();
  }

  // Create a new todo locally
  async createTodo(title: string) {
    await database.write(async () => {
      const id = `todo_${Date.now()}_${Math.random().toString(36).slice(2, 8)}`;
      const newTodo = await this.todos.create(record => {
        record.serverId = id; // temporary local id; server will assign the real id later
        record.title = title;
        record.completed = false;
        record.updatedAt = Date.now();
        record.version = 1;
        record.dirty = true;
        record.deleted = false;
      });
      await this.pending.create(op => {
        op.todoId = newTodo.id;
        op.operation = 'create';
        op.payload = JSON.stringify({
          title: newTodo.title,
          completed: newTodo.completed,
          tempId: newTodo.serverId,
        });
        op.createdAt = Date.now();
      });
    });
  }

  // Update an existing todo
  async updateTodo(id: string, changes: Partial<Pick<Todo, 'title' | 'completed'>>) {
    const todo = await this.todos.find(id);
    await database.write(async () => {
      await todo.updateLocal(changes);
      await this.pending.create(op => {
        op.todoId = id;
        op.operation = 'update';
        op.payload = JSON.stringify({
          title: changes.title ?? todo.title,
          completed: changes.completed ?? todo.completed,
          updatedAt: Date.now(),
        });
        op.createdAt = Date.now();
      });
    });
  }

  // Soft delete
  async deleteTodo(id: string) {
    const todo = await this.todos.find(id);
    await database.write(async () => {
      await todo.softDelete();
      await this.pending.create(op => {
        op.todoId = id;
        op.operation = 'delete';
        op.payload = JSON.stringify({ deleted: true, updatedAt: Date.now() });
        op.createdAt = Date.now();
      });
    });
  }

  // Mark as synced
  async markAsSynced(todoId: string, serverId?: string) {
    const todo = await this.todos.find(todoId);
    await database.write(async () => {
      await todo.update(record => {
        record.dirty = false;
        record.syncedAt = Date.now();
        if (serverId) record.serverId = serverId;
      });
    });
  }

  // Fetch pending ops for sync
  async getPendingOps() {
    return this.pending.query().fetch();
  }

  // Clear a pending op after it has been pushed successfully
  async clearPendingOp(opId: string) {
    const op = await this.pending.find(opId);
    await database.write(async () => {
      await op.destroyPermanently();
    });
  }
}
Sync engine with conflict handling
The sync engine pulls server changes and pushes local pending operations. For conflicts, we use last write wins based on updatedAt.
// src/db/sync.ts
import axios from 'axios';
import { Q } from '@nozbe/watermelondb';
import { TodoRepository } from './repository';
import { database } from './index';
import Todo from './models/Todo';

export class SyncEngine {
  private repo = new TodoRepository();
  private baseUrl = 'https://api.example.com';

  // Push pending ops
  async push() {
    const pendingOps = await this.repo.getPendingOps();
    for (const op of pendingOps) {
      try {
        if (op.operation === 'create') {
          const payload = JSON.parse(op.payload);
          const res = await axios.post(`${this.baseUrl}/todos`, {
            title: payload.title,
            completed: payload.completed,
          });
          // Update the local record with the server-assigned id and mark it synced
          await this.repo.markAsSynced(op.todoId, res.data.id);
        } else if (op.operation === 'update') {
          const payload = JSON.parse(op.payload);
          // Use the server id, not the local Watermelon id, when talking to the API
          const todo = await database.get<Todo>('todos').find(op.todoId);
          await axios.patch(`${this.baseUrl}/todos/${todo.serverId}`, payload);
          await this.repo.markAsSynced(op.todoId);
        } else if (op.operation === 'delete') {
          const todo = await database.get<Todo>('todos').find(op.todoId);
          await axios.delete(`${this.baseUrl}/todos/${todo.serverId}`);
          await this.repo.markAsSynced(op.todoId);
        }
        await this.repo.clearPendingOp(op.id);
      } catch (e) {
        // In a real app, implement a backoff and retry policy here
        console.warn('Sync push failed, will retry later', e);
        break;
      }
    }
  }

  // Pull server changes and apply with last-write-wins
  async pull() {
    const lastSyncedAt = 0; // In production, store this token per user/device
    const res = await axios.get(`${this.baseUrl}/todos`, {
      params: { since: lastSyncedAt },
    });
    const serverTodos = res.data as Array<{
      id: string;
      title: string;
      completed: boolean;
      updated_at: number;
      deleted: boolean;
    }>;

    await database.write(async () => {
      for (const serverTodo of serverTodos) {
        const [local] = await database
          .get<Todo>('todos')
          .query(Q.where('server_id', serverTodo.id))
          .fetch();

        if (!local) {
          // Insert a new record mirrored from the server
          await database.get<Todo>('todos').create(record => {
            record.serverId = serverTodo.id;
            record.title = serverTodo.title;
            record.completed = serverTodo.completed;
            record.updatedAt = serverTodo.updated_at;
            record.version = 1;
            record.dirty = false;
            record.deleted = serverTodo.deleted;
            record.syncedAt = Date.now();
          });
        } else {
          // Conflict resolution: last write wins
          const localIsNewer = local.updatedAt > serverTodo.updated_at;
          if (!localIsNewer) {
            await local.update(record => {
              record.title = serverTodo.title;
              record.completed = serverTodo.completed;
              record.updatedAt = serverTodo.updated_at;
              record.dirty = false;
              record.deleted = serverTodo.deleted;
              record.syncedAt = Date.now();
            });
          } else if (local.dirty) {
            // We keep local changes, but we might log a conflict
            console.warn('Conflict detected: local changes kept', local.id);
          }
        }
      }
    });
  }

  // Run both push and pull
  async run() {
    await this.push();
    await this.pull();
  }
}
Usage in a React Native component
This component observes local data and allows offline edits. Sync runs on demand or via background tasks.
// src/components/TodoList.tsx
import React, { useEffect, useMemo, useState } from 'react';
import { View, TextInput, Button, FlatList } from 'react-native';
import { TodoRepository } from '../db/repository';
import { SyncEngine } from '../db/sync';
import Todo from '../db/models/Todo';
import TodoItem from './TodoItem';

export const TodoList = () => {
  const [todos, setTodos] = useState<Todo[]>([]);
  const [title, setTitle] = useState('');
  // Memoize so we don't create new instances on every render
  const repo = useMemo(() => new TodoRepository(), []);
  const sync = useMemo(() => new SyncEngine(), []);

  useEffect(() => {
    const subscription = repo.observeTodos().subscribe(setTodos);
    return () => subscription.unsubscribe();
  }, [repo]);

  const addTodo = async () => {
    if (!title.trim()) return;
    await repo.createTodo(title.trim());
    setTitle('');
    // Try to sync immediately if the network is available
    sync.run().catch(console.warn);
  };

  const toggleTodo = async (id: string, completed: boolean) => {
    await repo.updateTodo(id, { completed });
    sync.run().catch(console.warn);
  };

  const deleteTodo = async (id: string) => {
    await repo.deleteTodo(id);
    sync.run().catch(console.warn);
  };

  return (
    <View>
      <TextInput
        value={title}
        onChangeText={setTitle}
        placeholder="What needs to be done?"
      />
      <Button title="Add" onPress={addTodo} />
      <FlatList
        data={todos}
        keyExtractor={item => item.id}
        renderItem={({ item }) => (
          <TodoItem
            item={item}
            onToggle={() => toggleTodo(item.id, !item.completed)}
            onDelete={() => deleteTodo(item.id)}
          />
        )}
      />
      <Button title="Sync Now" onPress={() => sync.run().catch(console.warn)} />
    </View>
  );
};
Sync patterns that scale
Batching and queuing
In the example above, we push one op at a time. In production, batch operations to reduce overhead. Maintain a queue with an operation ID and a timestamp. Group by endpoint when possible. Consider idempotency keys for retries. For example:
// Example queued batch payload
{
  "batchId": "batch_12345",
  "operations": [
    { "op": "create", "endpoint": "/todos", "body": { "title": "Buy milk" }, "idempotencyKey": "create_todos_12345" },
    { "op": "update", "endpoint": "/todos/abc", "body": { "completed": true }, "idempotencyKey": "update_todos_abc" }
  ]
}
Server-side, process the batch and return per-operation status codes. Client-side, clear only successful operations and keep failed ones for retry.
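A client-side sketch of that flow might look like the following; the /batch endpoint and its response shape are assumptions about your backend, not a real API.
// Hedged sketch: push pending ops as one batch and clear only the successful ones.
import axios from 'axios';

interface PendingOp {
  id: string;
  operation: 'create' | 'update' | 'delete';
  endpoint: string;
  body: Record<string, unknown>;
}

async function pushBatch(ops: PendingOp[], clearOp: (id: string) => Promise<void>) {
  if (ops.length === 0) return;

  const res = await axios.post('https://api.example.com/batch', {
    batchId: `batch_${Date.now()}`,
    operations: ops.map(op => ({
      op: op.operation,
      endpoint: op.endpoint,
      body: op.body,
      idempotencyKey: `${op.operation}_${op.id}`, // lets the server dedupe retries
    })),
  });

  // Assume the server returns one status per operation, in order
  const results = res.data.results as Array<{ status: number }>;
  for (let i = 0; i < ops.length; i++) {
    if (results[i] && results[i].status >= 200 && results[i].status < 300) {
      await clearOp(ops[i].id); // success: drop the op from the queue
    }
    // Failed ops stay queued and are retried on the next sync pass
  }
}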
Network-aware scheduling
Do not run sync every time the user taps. Debounce triggers or schedule sync when the network state changes. On Android, you can listen for connectivity changes with ConnectivityManager and schedule work through WorkManager. On iOS, BGTaskScheduler and background URLSession transfers cover the equivalent ground. In a React Native app, @react-native-community/netinfo exposes network state. For example:
import NetInfo from '@react-native-community/netinfo';

NetInfo.addEventListener(state => {
  if (state.isConnected && state.isInternetReachable) {
    // Trigger sync with backoff (runSyncWithBackoff is a hypothetical helper,
    // e.g. the sketch shown earlier)
    runSyncWithBackoff();
  }
});
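To avoid firing a sync for every NetInfo event or button tap, a small debounce wrapper goes a long way. This is a sketch, not a library API; the 2-second window is an arbitrary placeholder.
// Hedged sketch: collapse bursts of sync triggers into a single run.
function debounceSync(run: () => Promise<void>, waitMs = 2000) {
  let timer: ReturnType<typeof setTimeout> | null = null;
  return () => {
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => {
      timer = null;
      run().catch(console.warn);
    }, waitMs);
  };
}

// Usage: const triggerSync = debounceSync(() => new SyncEngine().run());
// then call triggerSync() from NetInfo listeners and UI actions alike.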
Conflict resolution in practice
A field service app I worked on kept an “offline snapshot” of a work order. Several technicians could edit the same order while offline. When they reconnected, last write wins caused lost updates to notes. We switched to a merge strategy: server fields were merged by field-level timestamps. Notes were concatenated with a header indicating the contributor. We also added a UI for conflicts when merges were ambiguous. This increased development effort but improved user trust.
Strengths, weaknesses, and tradeoffs
Strengths
- Resilience: Apps work under weak or no connectivity.
- Speed: UI responds immediately because writes hit local storage.
- Better UX: No spinners for basic edits; sync is background.
- Testable flows: You can simulate offline scenarios and write deterministic tests for the sync engine.
Weaknesses
- Complexity: Sync, conflict resolution, and migrations add overhead.
- Storage growth: Local databases can grow large and need pruning and indexing.
- Battery and network cost: Poorly designed sync can be chatty and drain battery.
- Data consistency: Offline-first is distributed by nature; you trade strict consistency for eventual consistency.
Tradeoffs
- Source of truth: Local for UX, server for org truth. Choose which wins in conflicts.
- Sync frequency: Aggressive sync gives freshness but costs resources; lazy sync saves battery but risks stale reads.
- Data scope: Not everything needs offline. Cache mission-critical data, stream the rest.
- Schema changes: Plan migrations carefully. Coordinate with server and release windows.
When offline-first might not be the right choice
- Real-time collaboration with strong consistency needs (e.g., collaborative cursors, multiplayer). Offline-first can add lag and complex merges.
- Apps with ephemeral UIs (e.g., single-session forms) where data lives server-side and local persistence is unnecessary.
- Very small data sets where network reliability is high; the added complexity may not pay off.
Personal experience and common mistakes
In a retail inventory app, we underestimated the size of the catalog. We used a single large table without indexes on search fields. Queries slowed down after a few thousand items, making the UI laggy. The fix was pragmatic: index critical columns, avoid SELECT *, and paginate large lists. We also introduced a background refresh that pruned old records from the local store.
Another common mistake is ignoring the first sync. When a user installs the app, they have nothing local. If you present an empty state and then lazily sync, it feels broken. We added a “seed” flow that pulls a small subset immediately and shows a progress indicator. It improved perception even though the network hadn’t changed.
A third pitfall is forgetting to test offline conflicts. Developers often test with perfect networks. We introduced chaos testing: we used a proxy to throttle, drop packets, and simulate offline windows. This revealed several edge cases where pending operations were not persisted correctly. We fixed these by wrapping writes in transactions and ensuring pending ops were saved before network calls.
On the learning curve, offline-first requires a mindset shift. If you come from request-response, you will initially try to “await” sync before updating the UI. Resist that. Embrace local-first writes. The UI should update immediately; sync should be a background activity. Also, be careful with optimistic updates when the server has constraints. If the server rejects a duplicate, you need to roll back the local change or show an error. We implemented an “error state” on the local record, which the UI can render differently to prompt the user.
Getting started: workflow and mental model
Start with the data model. Identify which entities are read-heavy, write-heavy, and reference. Decide which need offline persistence. Draw a small diagram showing the flow: user edit -> local database -> pending queue -> sync -> server -> pull -> local merge.
Then set up your local database. Choose one that fits your stack: Room for Android native, Core Data for iOS native, WatermelonDB or SQLite for cross-platform. Define a schema with fields for sync metadata: server ID, version, updated_at, dirty, and deleted. Plan migrations early.
Build a repository layer that wraps database operations. Keep the repository simple and focused on local reads/writes. Do not mix network logic here. Create a sync engine that uses the repository to fetch pending ops and apply server changes. Add a backoff policy and batching. Expose sync triggers based on network state and app lifecycle.
In your UI, observe local data. Use local writes for immediate responsiveness. Use optimistic updates only when you can handle rollbacks. Show sync status subtly: an icon or a small badge indicating unsynced changes. Do not block user actions on sync.
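With the WatermelonDB schema from the example, an "unsynced changes" badge can be close to a one-liner: observe the count of dirty rows and render it. A sketch, assuming the db module and column names from earlier:
// Hedged sketch: observe how many local records are still unsynced, for a badge.
import { Q } from '@nozbe/watermelondb';
import { database } from '../db';

export function observeUnsyncedCount() {
  return database
    .get('todos')
    .query(Q.where('dirty', true))
    .observeCount(); // emits a number whenever the count changes
}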
Finally, test offline behavior aggressively. Simulate weak networks. Verify that pending ops survive app restarts. Check that schema migrations work. Measure storage usage. Profile battery consumption. It’s easier to fix these issues early than to rewrite the sync engine later.
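A sketch of that kind of test, assuming the database module is wired to a test-friendly adapter in your Jest environment; it checks that an offline write enqueues a pending op rather than silently dropping it.
// Hedged sketch: a Jest-style check that offline writes enqueue pending ops.
import { TodoRepository } from '../db/repository';

test('creating a todo offline enqueues a pending create op', async () => {
  const repo = new TodoRepository();
  await repo.createTodo('Write offline tests');

  const ops = await repo.getPendingOps();
  expect(ops.some(op => op.operation === 'create')).toBe(true);
});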
Free learning resources
- WatermelonDB docs: https://watermelondb.dev/ - Practical guide to local-first databases in React Native.
- Android Room guide: https://developer.android.com/training/data-storage/room - Official documentation for the Room persistence library.
- WorkManager guide: https://developer.android.com/topic/libraries/architecture/workmanager - Background tasks for Android.
- Apple Core Data: https://developer.apple.com/documentation/coredata - Official docs for iOS local persistence.
- Firebase offline docs: https://firebase.google.com/docs/firestore/manage-data/enable-offline - Reference for Firebase offline patterns.
- React Native NetInfo: https://github.com/react-native-netinfo/react-native-netinfo - Network state detection for mobile.
- CRDT reading (optional but insightful): https://crdt.tech/ - A portal on conflict-free replicated data types, useful for advanced conflict resolution.
Summary and takeaways
Offline-first is about making your app resilient and responsive. You move from a request-response model to a local-first, event-driven one. The local database becomes the UI’s source of truth, and the sync engine handles reconciliation with the server. This architecture benefits users who travel, work in challenging environments, or simply expect instant feedback. It adds complexity but pays off in reliability and perceived performance.
Who should adopt offline-first? Teams building apps for logistics, field service, retail, healthcare, travel, and collaboration should strongly consider it. If your app handles data that users need to create or edit regardless of connectivity, offline-first is a solid choice. Who might skip it? Apps that rely on real-time strict consistency, ephemeral forms, or very small datasets may not need the overhead. If your network is stable and data is short-lived, a simpler approach may suffice.
At the end of the day, the best architecture is the one that matches your constraints. Start small. Pick one critical feature to make offline-first. Build the repository, add a sync engine, and test it under flaky networks. Iterate from there. Your users will feel the difference when they can keep working on the bus, in the airport, or in the middle of nowhere.