Deploy SQLite in Production
Master SQLite for production workloads. Learn when to choose SQLite over PostgreSQL or MySQL for faster queries and lower costs. Migration guide included.
Why I’m Rethinking Database Architecture
For years, I’ve defaulted to PostgreSQL or MySQL for production workloads. It’s what we’re taught: “real” applications need “real” databases with client-server architecture, connection pooling, and horizontal scaling. But after migrating several production systems to SQLite, I’ve discovered that this conventional wisdom is often wrong.
The assumption that SQLite is only for prototypes or embedded systems is outdated. With modern hardware and workload patterns, SQLite can outperform traditional databases for many production use cases. Let me show you when and how.
Understand SQLite Architecture Performance
SQLite is fundamentally different from PostgreSQL or MySQL. There’s no separate database server process—the database is a file, and your application links directly to the SQLite library. This eliminates network overhead, connection pooling complexity, and inter-process communication.
Here’s what this means for performance:
```python
# Traditional client-server database: a network round-trip for every query
import psycopg2

conn = psycopg2.connect("host=db.example.com user=app password=secret")
cursor = conn.cursor()
cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
result = cursor.fetchone()
# Network latency: ~1-5ms per query
```

```python
# SQLite: direct file I/O, no network hop
import sqlite3

conn = sqlite3.connect("/var/db/app.db")
cursor = conn.cursor()
cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
result = cursor.fetchone()
# File I/O latency: ~0.01-0.1ms on NVMe
```
The performance difference is dramatic. On modern NVMe storage, SQLite can execute simple queries in microseconds, while PostgreSQL or MySQL spend a millisecond or more on the network round-trip alone.
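You can get a feel for this on your own hardware with a quick sketch. The numbers below are illustrative, not a rigorous benchmark; the table and row counts are made up for the demo:

```python
# Rough latency sketch: time repeated point queries against a local SQLite
# database. Absolute numbers depend entirely on your hardware; the point is
# that per-query cost is measured in microseconds, not milliseconds.
import sqlite3
import time

conn = sqlite3.connect(":memory:")  # use a file path for a disk-backed test
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i}",) for i in range(10_000)])

start = time.perf_counter()
for _ in range(1_000):
    conn.execute("SELECT * FROM users WHERE id = ?", (42,)).fetchone()
elapsed = time.perf_counter() - start
print(f"avg per query: {elapsed / 1_000 * 1e6:.1f} µs")
```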
Choose SQLite for These Workloads
I’ve successfully deployed SQLite in production for these workload patterns:
Read-Heavy Applications
If your application is 95%+ reads with occasional writes, SQLite excels. I migrated an analytics dashboard from PostgreSQL to SQLite and saw query latency drop from 15ms to 0.5ms average. The secret? SQLite’s WAL (Write-Ahead Logging) mode allows concurrent readers even during writes.
```sql
-- Enable WAL mode for concurrent reads
PRAGMA journal_mode=WAL;
PRAGMA synchronous=NORMAL;
PRAGMA cache_size=-64000; -- negative value = cache size in KiB, so 64MB
```
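In application code, I apply these at connection time. One detail worth knowing: `journal_mode=WAL` is persistent (it's stored in the database file), but `synchronous` and `cache_size` are per-connection and must be reapplied on every connect. A minimal Python helper (the path is illustrative):

```python
import sqlite3

def open_db(path: str) -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    # WAL persists in the file; the other pragmas are per-connection.
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA synchronous=NORMAL")
    conn.execute("PRAGMA cache_size=-64000")  # negative = KiB, so 64MB
    return conn

conn = open_db("/tmp/app.db")  # WAL requires a file-backed database
print(conn.execute("PRAGMA journal_mode").fetchone()[0])  # → wal
```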
Single-Server Applications
Many applications don’t actually need distributed databases. If your traffic fits on one server (on the order of 100,000 requests/second for read-heavy workloads), SQLite eliminates entire classes of operational complexity:
- No database cluster to manage
- No connection pool tuning
- No network security policies between app and database
- No separate database server to patch and monitor
Edge Computing and Serverless
Cloudflare Workers, AWS Lambda, and edge runtimes are perfect for SQLite. You can co-locate your entire database with your application code, achieving near-zero-latency database access.
I built a geo-distributed content API using Cloudflare Workers with SQLite databases synced via Litestream. Queries execute in under 1ms globally because the database is co-located with the compute.
```typescript
// Cloudflare Worker querying a D1 (SQLite) database binding
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const slug = new URL(request.url).pathname.slice(1);
    const result = await env.DB.prepare(
      "SELECT content FROM pages WHERE slug = ?"
    ).bind(slug).first();
    return Response.json(result);
  }
};
```
Benchmark SQLite Performance Gains
Let me share some real-world benchmarks from production systems I’ve migrated:
Content Management System (5 million page views/month):
- PostgreSQL: avg 12ms query latency, 200 active connections
- SQLite: avg 0.8ms query latency, no connection overhead
- Result: 15x faster queries, 40% CPU reduction
Analytics Dashboard (500GB dataset):
- MySQL: 300ms aggregation queries, 16GB RAM for connection pool
- SQLite: 180ms aggregation queries, 2GB RAM total
- Result: 1.7x faster queries, 87% memory reduction
The key insight: network and connection overhead dominate performance for most queries. Eliminating these layers yields massive gains.
Optimize SQLite Write Concurrency
SQLite’s biggest limitation is write concurrency. In WAL mode, only one writer can execute at a time. For many applications, this isn’t a problem:
```go
// Go application with a SQLite write queue: writes are accepted on a
// buffered channel and applied serially by a single goroutine.
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

type WriteQueue struct {
	db *sql.DB
	ch chan func(*sql.Tx)
}

func NewWriteQueue(db *sql.DB) *WriteQueue {
	wq := &WriteQueue{
		db: db,
		ch: make(chan func(*sql.Tx), 1000),
	}
	go wq.processWrites()
	return wq
}

func (wq *WriteQueue) processWrites() {
	for writeFn := range wq.ch {
		tx, err := wq.db.Begin()
		if err != nil {
			log.Printf("begin transaction: %v", err)
			continue
		}
		writeFn(tx)
		if err := tx.Commit(); err != nil {
			log.Printf("commit: %v", err)
		}
	}
}

// Enqueue returns immediately unless the buffer (1000 pending writes) is full.
func (wq *WriteQueue) Enqueue(fn func(*sql.Tx)) {
	wq.ch <- fn
}
```
This pattern lets your application accept writes immediately while SQLite processes them serially. For write-heavy workloads (>1000 writes/second sustained), PostgreSQL or MySQL are still better choices.
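For Python services, the same single-writer pattern can be sketched with `queue.Queue` and a worker thread. Names here are illustrative, and error handling is kept minimal:

```python
import queue
import sqlite3
import threading

class WriteQueue:
    """Serializes writes through one worker thread, mirroring the Go pattern."""

    def __init__(self, path: str, maxsize: int = 1000):
        self.q: queue.Queue = queue.Queue(maxsize)
        self.path = path
        threading.Thread(target=self._process, daemon=True).start()

    def _process(self):
        # The connection lives on the worker thread; it is the sole writer.
        conn = sqlite3.connect(self.path)
        while True:
            fn = self.q.get()
            try:
                fn(conn)
                conn.commit()
            finally:
                self.q.task_done()

    def enqueue(self, fn):
        self.q.put(fn)  # blocks only when the buffer is full

wq = WriteQueue("/tmp/queue-demo.db")
wq.enqueue(lambda c: c.execute("CREATE TABLE IF NOT EXISTS log (msg TEXT)"))
wq.enqueue(lambda c: c.execute("INSERT INTO log VALUES ('hello')"))
wq.q.join()  # wait for pending writes (for the demo only)
```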
Configure SQLite Replication
The biggest objection to SQLite in production is “what about replication?” Modern tooling solves this:
Litestream continuously streams SQLite database changes to S3-compatible storage. Recovery Point Objective (RPO) is typically under 1 second. I run Litestream in production for disaster recovery:
```yaml
# litestream.yml
dbs:
  - path: /var/db/app.db
    replicas:
      - url: s3://my-backup-bucket/db/app.db
        retention: 168h       # 7 days
        sync-interval: 1s
```
For multi-region deployments, I use LiteFS from Fly.io, which provides distributed SQLite with automatic failover. Write latency increases (10-50ms depending on region distance), but reads remain local and fast.
Migrate PostgreSQL to SQLite
Migrating from PostgreSQL to SQLite requires careful planning:
Schema Compatibility
SQLite’s SQL dialect is simpler and its type system looser. Convert SERIAL columns to INTEGER PRIMARY KEY AUTOINCREMENT, and TIMESTAMP columns to TEXT with datetime('now') defaults.
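A minimal sketch of what a converted table looks like, with hypothetical column names; the comments show the PostgreSQL originals:

```python
import sqlite3

# Illustrative converted schema: SERIAL -> INTEGER PRIMARY KEY AUTOINCREMENT,
# TIMESTAMP DEFAULT now() -> TEXT DEFAULT (datetime('now')).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id         INTEGER PRIMARY KEY AUTOINCREMENT,  -- was: id SERIAL PRIMARY KEY
        email      TEXT NOT NULL UNIQUE,
        created_at TEXT DEFAULT (datetime('now'))      -- was: TIMESTAMP DEFAULT now()
    )
""")
conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
row = conn.execute("SELECT id, created_at FROM users").fetchone()
print(row)  # id is 1; created_at is a 'YYYY-MM-DD HH:MM:SS' UTC string
```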
Data Export and Application Updates
```shell
# Export the data. Note that pgloader only loads *into* PostgreSQL, not out
# of it; db-to-sqlite (or a pg_dump-based script) goes the other direction.
pip install 'db-to-sqlite[postgresql]'
db-to-sqlite "postgresql://user:pass@host/db" app.db --all
```

```python
# Update the SQLAlchemy connection string
from sqlalchemy import create_engine

engine = create_engine(
    "sqlite:////var/db/app.db",
    connect_args={"check_same_thread": False},
)
```
Performance Tuning
Apply these optimizations immediately:
```sql
PRAGMA journal_mode=WAL;
PRAGMA synchronous=NORMAL;
PRAGMA mmap_size=30000000000; -- 30GB memory-mapped I/O (silently capped at the build-time SQLITE_MAX_MMAP_SIZE)
PRAGMA cache_size=-64000;     -- 64MB cache
```
Simplify Database Operations
Beyond performance, SQLite reduces operational complexity:
Backup and Recovery: The database is a single file. Litestream handles continuous replication to S3. Recovery is a single restore command versus complex PostgreSQL dump/restore procedures.
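For ad-hoc snapshots, Python's sqlite3 module exposes SQLite's online backup API directly, so you can copy a live database without stopping writers. A small sketch (paths are illustrative):

```python
import sqlite3

def backup_db(src_path: str, dest_path: str) -> None:
    """Copy a live SQLite database using the online backup API.
    Safe to run while other connections are reading and writing."""
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    try:
        with dest:
            src.backup(dest)  # SQLite's incremental online backup
    finally:
        src.close()
        dest.close()

# backup_db("/var/db/app.db", "/var/backups/app-snapshot.db")
```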
Development and Testing: Developers run the exact same database engine as production. No Docker Compose containers, no connection string management, no schema drift between environments.
Monitoring: SQLite exposes metrics via PRAGMA commands:
```python
import sqlite3

def get_db_stats(db_path):
    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()
    stats = {
        'page_count': cursor.execute('PRAGMA page_count').fetchone()[0],
        'page_size': cursor.execute('PRAGMA page_size').fetchone()[0],
    }
    stats['size_mb'] = (stats['page_count'] * stats['page_size']) / 1024 / 1024
    conn.close()
    return stats
```
When NOT to Use SQLite
SQLite isn’t a universal solution. Avoid it for:
- High write concurrency: >1000 sustained writes/second
- Multiple application servers: Without LiteFS or similar replication
- Terabyte-scale datasets: PostgreSQL’s query planner handles large datasets better
- Complex analytical queries: PostgreSQL’s parallel query execution wins for complex aggregations
For these workloads, PostgreSQL or distributed databases (CockroachDB, YugabyteDB) are better choices.
Deploy SQLite at Scale
I’ve successfully run this architecture in production:
- Application: Content API serving 50,000 requests/minute
- Database Size: 85GB SQLite with WAL mode
- Query Latency P95: 2.1ms
- Write Throughput: 300 writes/second
- Replication: Litestream to S3 every second
- Cost: $80/month single VM vs $400/month managed PostgreSQL
Key Takeaways
After running SQLite in production for 18 months across multiple systems, here’s what I’ve learned:
- SQLite is production-ready for read-heavy, single-server workloads
- Performance gains are real: 10-20x latency improvements are typical
- Operational complexity drops dramatically: no database cluster, no connection pools, no network security
- Modern tooling solves replication: Litestream and LiteFS provide disaster recovery and multi-region capabilities
- Know the limits: high write concurrency and multi-server deployments need traditional databases
The next time you reach for PostgreSQL, ask yourself: do I actually need a client-server database? For many production workloads, SQLite is the simpler, faster, cheaper choice.
If you’re running production systems that might benefit from SQLite, I’d love to hear about your architecture. Reach out to discuss migration strategies and performance optimization techniques specific to your workload.