In 2026, avoiding CGO when using SQLite in Go is a very common and recommended practice. Bypassing standard CGO-based drivers (like mattn/go-sqlite3) allows you to easily cross-compile your binaries (e.g., building a Linux binary from a Mac) without needing a C toolchain (GCC/Musl) and significantly speeds up build times.
Currently, there are two major, production-ready "Pure Go" approaches to solve this: the Transpiled approach and the WebAssembly (WASM) approach.
Here is the breakdown of the best ways to use SQLite without CGO today.
1. The WebAssembly Approach: ncruces/go-sqlite3 (Highly Recommended)
This has become a massive favorite in the Go community over the last couple of years. Instead of translating SQLite's C code to Go, this library compiles the original C SQLite code into WebAssembly (WASM) and runs it entirely inside Go using the blazing-fast pure-Go wazero runtime.
Why choose it:
- It runs the exact, original C SQLite source code (compiled to WASM), so behavior matches upstream SQLite bug-for-bug.
- Strong performance that often beats the transpiled alternative in benchmarks.
- Very active maintenance and frequent updates matching new SQLite releases.
Installation:
go get github.com/ncruces/go-sqlite3
Standard database/sql Usage:
package main

import (
	"database/sql"
	"log"

	// Import the database/sql driver and embed the WASM build
	_ "github.com/ncruces/go-sqlite3/driver"
	_ "github.com/ncruces/go-sqlite3/embed"
)

func main() {
	// Note the driver name is "sqlite3"
	db, err := sql.Open("sqlite3", "file:my_database.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Use standard db.Query, db.Exec, etc.
	_, err = db.Exec(`CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)`)
	if err != nil {
		log.Fatal(err)
	}
}
Note: If you use GORM, ncruces provides a built-in driver via github.com/ncruces/go-sqlite3/gormlite.
2. The Transpiled Approach: modernc.org/sqlite (The Battle-Tested Standard)
This is the most mature pure-Go SQLite driver. The author wrote a custom C-to-Go transpiler that converts the SQLite C source code directly into Go code.
Why choose it:
- Drop-in replacement for the standard library (database/sql).
- Incredibly mature, with a massive ecosystem of wrappers built around it.
- Requires no external WASM runtimes; just standard Go execution.
Installation:
go get modernc.org/sqlite
Standard database/sql Usage:
package main

import (
	"database/sql"
	"log"

	_ "modernc.org/sqlite"
)

func main() {
	// Note the driver name is "sqlite" (different from mattn's "sqlite3")
	db, err := sql.Open("sqlite", "my_database.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	_, err = db.Exec(`CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)`)
	if err != nil {
		log.Fatal(err)
	}
}
How to use Pure-Go SQLite with GORM?
If you are using GORM, you should not use the standard gorm.io/driver/sqlite because it relies on CGO under the hood. Instead, use pure-Go GORM drivers.
Option A (using modernc): github.com/glebarez/sqlite. This is the CGO-free driver officially recommended by the GORM team.
import (
	"github.com/glebarez/sqlite" // Pure Go modernc wrapper
	"gorm.io/gorm"
)

db, err := gorm.Open(sqlite.Open("gorm.db"), &gorm.Config{})
Option B (Using Ncruces/WASM): github.com/ncruces/go-sqlite3/gormlite
import (
	"github.com/ncruces/go-sqlite3/gormlite"
	"gorm.io/gorm"
)

db, err := gorm.Open(gormlite.Open("gorm.db"), &gorm.Config{})
Honorable Mentions for Extreme Performance
If you need maximum throughput and are willing to write non-standard code:
- zombiezen.com/go/sqlite: This library wraps modernc but ditches the standard database/sql API in favor of a custom, low-level interface. It is significantly faster and uses far fewer allocations than database/sql wrappers.
- github.com/cvilsmeier/sqinn-go: This avoids CGO by spinning up a completely separate native C-compiled SQLite child process (sqinn) and communicating with it via stdin/stdout. It has fantastic benchmark results for raw throughput, but requires you to manage the external binary.
Summary Recommendation
- Use github.com/ncruces/go-sqlite3 if you want the closest behavior to standard C SQLite, great speed, and don't mind the wazero dependency.
- Use modernc.org/sqlite (or glebarez/sqlite for GORM) if you want the oldest, most battle-tested pure-Go transpilation without any WebAssembly sandbox overhead.
DISTRIBUTED SQLITE
Here are the best ways to build a distributed SQLite architecture in Go today:
1. LiteFS (The WAL-Streaming Standard)
How it works: LiteFS runs as a background process (a FUSE file system) on your servers. When your Go application executes a write transaction, SQLite writes to the WAL. LiteFS intercepts these WAL pages at the file-system level and rapidly streams them to all other connected replica nodes.
Architecture: Single-Primary, Multi-Replica. Only the primary node can write, but all nodes have a full, up-to-date local copy of the database for lightning-fast reads. If the primary goes down, the cluster automatically elects a new primary via HashiCorp Consul leases (or a statically configured primary).
The Best Part: You don't have to change your Go code at all. You just use modernc.org/sqlite or ncruces/go-sqlite3 to connect to a local /mnt/litefs/db.sqlite file. The Go app thinks it's a normal local file; LiteFS handles the distributed magic underneath.
Written in: 100% Go.
2. rqlite (The Raft-Consensus Approach)
How it works: Instead of shipping SQLite WAL pages at the OS level, rqlite sits above SQLite. When you send an INSERT or UPDATE, rqlite uses the HashiCorp Raft implementation to achieve cluster-wide consensus on the SQL statement before committing it to the local SQLite database on each node.
Architecture: Multi-node cluster, highly fault-tolerant. In a 3-node cluster, one node can die and the database keeps operating flawlessly.
The Catch: You don't interact with a local .db file using standard database/sql. Instead, you use the github.com/rqlite/gorqlite Go client to talk to the rqlite cluster over an HTTP/JSON API.
Written in: 100% Go.
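A minimal sketch of what the gorqlite client looks like in practice. This assumes a running rqlite node on localhost:4001 (the default HTTP port); the table and values are illustrative, and the exact gorqlite API should be checked against the library's docs:

```go
package main

import (
	"fmt"
	"log"

	"github.com/rqlite/gorqlite"
)

func main() {
	// Connect to any node; the client talks to the cluster over HTTP.
	conn, err := gorqlite.Open("http://localhost:4001/")
	if err != nil {
		log.Fatal(err)
	}

	// Writes go through Raft consensus before being applied on every node.
	if _, err := conn.WriteOne(`CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)`); err != nil {
		log.Fatal(err)
	}
	if _, err := conn.WriteOne(`INSERT INTO users (name) VALUES ('alice')`); err != nil {
		log.Fatal(err)
	}

	// Reads use the same HTTP API rather than a local file handle.
	rows, err := conn.QueryOne(`SELECT id, name FROM users`)
	if err != nil {
		log.Fatal(err)
	}
	for rows.Next() {
		var id int64
		var name string
		if err := rows.Scan(&id, &name); err != nil {
			log.Fatal(err)
		}
		fmt.Println(id, name)
	}
}
```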
3. Marmot (The Multi-Master / NATS Approach)
How it works: Marmot sits alongside your Go application. It uses standard SQLite triggers and the WAL to capture data changes, then broadcasts those changes over NATS (a highly performant distributed messaging system) to all other nodes in the cluster.
Architecture: Masterless (Multi-Master). You can write locally to Node A and Node B simultaneously, and Marmot resolves the replication in the background.
The Best Part: Like LiteFS, your Go application just interacts with standard local .db files.
Written in: 100% Go.
4. Turso / libSQL (Embedded Replicas)
How it works: You run a central libSQL server (or use the Turso cloud). Your Go application uses the libSQL driver, which maintains a full local replica of the remote database on disk.
Architecture: When your Go app does a read, it reads from the local disk instantly (near-0ms latency). When it does a write, the driver seamlessly forwards the write to the primary server, which then syncs the updated WAL frames back to all connected Go clients in the background.
The Catch: To use the embedded-replica feature, the official Go libSQL driver has traditionally relied on CGO. (Pure-Go network clients exist, but the local-syncing engine leans heavily on C.)
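For completeness, here is what the pure-Go network client mentioned above looks like: every query goes over the network (no embedded replica), but it stays CGO-free and uses plain database/sql. The URL and token are placeholders for your own libSQL server or Turso database:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	// Pure-Go network client; registers the "libsql" driver name.
	// Embedded replicas need the CGO-based github.com/tursodatabase/go-libsql instead.
	_ "github.com/tursodatabase/libsql-client-go/libsql"
)

func main() {
	// Placeholder URL/token: replace with your own database credentials.
	db, err := sql.Open("libsql", "libsql://your-db.turso.io?authToken=YOUR_TOKEN")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Queries are sent to the remote server rather than a local file.
	var now string
	if err := db.QueryRow(`SELECT datetime('now')`).Scan(&now); err != nil {
		log.Fatal(err)
	}
	fmt.Println("server time:", now)
}
```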
Summary: Which should you choose?
- If you want zero code changes, purely local database files, and a single primary with auto-failover, use LiteFS.
- If you want absolute fault tolerance and don't mind connecting over HTTP rather than direct disk access, use rqlite.
- If you need to write from anywhere (multi-master), use Marmot.
DISTRIBUTED SQLITE APPS
The idea of running a production website on SQLite used to be considered a joke, but over the last few years, it has become one of the hottest architectural trends in software engineering.
1. Expensify (Millions of users, Billions of dollars)
How they do it: They built a WAN-replicated, highly available consensus layer on top of SQLite[1]. It handles millions of users across multiple global data centers[2]. If one data center goes entirely offline, the others keep humming along perfectly. They chose SQLite because of its sheer speed and reliability[3].
2. Tailscale (Global Networking Infrastructure)
How they do it: They use a single-writer model with SQLite, using tools like Litestream to continuously stream the Write-Ahead Log to cloud storage for near-real-time backup and replication[4]. They noted that reads from SQLite are so fast that it vastly out-performed their old network-based database[5].
3. Bluesky (Decentralized Social Network)
How they do it: Because Bluesky is federated, user data is hosted on Personal Data Servers (PDS) and App Views. Instead of using a massive, monolithic Postgres database, Bluesky spins up individual SQLite databases for user shards and data feeds, allowing them to scale horizontally and distribute data incredibly efficiently.
4. 37signals (Creators of Basecamp & Ruby on Rails)
How they do it: Instead of requiring customers to spin up complicated MySQL or Postgres databases, the entire backend runs natively on SQLite[6]. They rely heavily on the speed of local disk writes and modern replication tools to make the database enterprise-ready without the traditional client-server database overhead.
5. EpicWeb.dev (Kent C. Dodds)
How he does it: The site is hosted on Fly.io and uses LiteFS. When a user in London visits the site, they read data instantly (0ms latency) from a local SQLite replica in London. If they buy a course, the write is routed to the primary node (e.g., in Chicago) and then the SQLite WAL changes are instantly streamed back to the London replica.
6. Any app running on Cloudflare D1
What it is: D1 is literally just distributed SQLite at the edge. If you use D1, Cloudflare automatically replicates your SQLite database across their global edge network. Thousands of modern web apps and APIs are currently running on Cloudflare D1, allowing users to query data locally no matter where they are in the world.
Why are all these companies switching?
- 0ms Read Latency: In a traditional app, querying a database requires a network call to the DB server (adding 2ms-20ms of delay). With distributed SQLite, the DB is literally a file on the application server's hard drive, and reads take microseconds, so the classic "N+1 query problem" becomes largely harmless[6].
- Cheaper: You don't have to pay AWS or Google Cloud thousands of dollars for managed RDS/Postgres instances.
- Simplicity: Managing a .db file that automatically syncs over the network is vastly easier than tuning a massive PostgreSQL cluster.