Tuesday, March 31, 2026

glTF: "JPEG of 3D", Sketchfab design models search

glTF (GL Transmission Format) is a royalty-free, open-standard file format designed by the Khronos Group for the efficient transmission, loading, and runtime rendering of 3D models and scenes. Often called the "JPEG of 3D", it streamlines 3D workflows with a standardized format that minimizes both file size and the processing needed to load assets, and it supports PBR (physically based rendering) for high-quality visuals.

 glTF - Wikipedia

glTF (Graphics Library Transmission Format or GL Transmission Format and formerly known as WebGL Transmissions Format or WebGL TF) is a standard file format for three-dimensional scenes and models. A glTF file uses one of two possible file extensions: .gltf (JSON/ASCII) or .glb (binary). 


It is JSON-based, renders instantly in web browsers (using Three.js or Babylon.js), and—most importantly for you—it supports Custom Extensions.

You can take a dumb 3D model in a glTF file and inject your own custom semantic JSON data directly into the node tree.



The core of glTF is a JSON file that describes the structure and composition of a scene containing 3D models, which can be stored in a single binary glTF file (.glb). The top-level elements of the file include: Scenes and nodes, cameras, meshes, buffers, materials, textures, skins and animations.
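One concrete way to read injected semantic data back out: the glTF spec reserves an `extras` property on every object (nodes included) for application-specific JSON. A minimal sketch in Go using only the standard library — the node name and the `assetID`/`maintenanceDue` fields are hypothetical, just to illustrate the idea:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// gltfDoc is a minimal slice of a .gltf file: one node carrying custom
// semantic data in the spec's "extras" property, which glTF reserves for
// application-specific JSON. The field names here are made up.
const gltfDoc = `{
  "nodes": [
    {
      "name": "pump_01",
      "mesh": 0,
      "extras": { "assetID": "PMP-001", "maintenanceDue": "2026-06-01" }
    }
  ]
}`

// nodeExtras pulls the custom "extras" object off the first node.
func nodeExtras(doc []byte) (name string, extras map[string]string, err error) {
	var g struct {
		Nodes []struct {
			Name   string            `json:"name"`
			Extras map[string]string `json:"extras"`
		} `json:"nodes"`
	}
	if err = json.Unmarshal(doc, &g); err != nil {
		return "", nil, err
	}
	return g.Nodes[0].Name, g.Nodes[0].Extras, nil
}

func main() {
	name, extras, err := nodeExtras([]byte(gltfDoc))
	if err != nil {
		panic(err)
	}
	fmt.Println(name, extras["assetID"]) // pump_01 PMP-001
}
```

The same round-trip works for the `extensions` property if you want namespaced, vendor-prefixed data instead of free-form extras.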


HTTP / fetch from node.js; skip Axios!

How to make an HTTP request in Node.js

...using the request package (now deprecated) to make HTTP requests in Node.js.
Then promises became mainstream and I switched to request-promise (also deprecated).
In more recent times I moved to axios and I thought I would never look back… and yet here we are.
The HTTP story in Node.js keeps evolving, and for good reasons!

Making HTTP requests is one of the most common tasks in Node.js development.
Whether you’re calling a REST API, fetching data from an external service, or building a web scraper, you need to know how to do it effectively.

Quick Answer: Use fetch()
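A minimal sketch of the zero-dependency approach (Node 18+, where fetch() is a built-in global, so there is no npm package and no supply chain to compromise; the endpoint URL is a placeholder):

```javascript
// Node 18+ ships fetch() as a global — no install, no axios, no
// third-party dependency to get hijacked.
async function getJSON(url) {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`HTTP ${res.status} for ${url}`);
  return res.json();
}

// Usage (hypothetical endpoint):
// const users = await getJSON("https://api.example.com/users");
```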


npm just got HACKED (supply chain attacks explained) - YouTube
NetworkChuck


What happened, are you affected & how to prevent - axios supply chain attack - YouTube
Maximilian Schwarzmüller



axios Compromised on npm - Malicious Versions Drop Remote Access Trojan - StepSecurity

axios is the most popular JavaScript HTTP client library with over 100 million weekly downloads. On March 30, 2026, StepSecurity identified two malicious versions of the widely used axios HTTP client library published to npm: axios@1.14.1 and axios@0.30.4. The malicious versions inject a new dependency, plain-crypto-js@4.2.1, which is never imported anywhere in the axios source code.



Monday, March 30, 2026

animation: How AI works @3Blue1Brown

Neural networks - YouTube

Transformers, the tech behind LLMs | Deep Learning Chapter 5 - YouTube

3Blue1Brown - YouTube

3Blue1Brown - 3Blue1Brown

about 3Blue1Brown

3Blue1Brown name

3b1b/manim: Animation engine for explanatory math videos @GitHub

Manim is an engine for precise programmatic animations, designed for creating explanatory math videos.
Note, there are two versions of manim. This repository began as a personal project by the author of 3Blue1Brown for the purpose of animating those videos, with video-specific code available here. In 2020 a group of developers forked it into what is now the community edition, with a goal of being more stable, better tested, quicker to respond to community contributions, and all around friendlier to get started with. See this page for more details.

3b1b/manim: Animation engine for explanatory math videos

3b1b/videos: Code for the manim-generated scenes used in 3blue1brown videos

How I animate 3Blue1Brown | A Manim demo with Ben Sparks - YouTube


A Mathematical Framework for Transformer Circuits





SQLite.c => WASM => GoLang + distributed

In 2026, avoiding CGO when using SQLite in Go is a very common and recommended practice. Bypassing standard CGO-based drivers (like mattn/go-sqlite3) allows you to easily cross-compile your binaries (e.g., building a Linux binary from a Mac) without needing a C toolchain (GCC/Musl) and significantly speeds up build times.

Currently, there are two major, production-ready "Pure Go" approaches to solve this: the Transpiled approach and the WebAssembly (WASM) approach.

Here is the breakdown of the best ways to use SQLite without CGO today.


1. The WASM Approach: github.com/ncruces/go-sqlite3 (The Modern Favorite)

This has become a massive favorite in the Go community over the last couple of years. Instead of translating SQLite's C code to Go, ncruces/go-sqlite3 compiles the original C SQLite source into WebAssembly (WASM) and runs it entirely inside Go using the fast, pure-Go wazero runtime.

Why choose it:

  • It runs the exact, original C SQLite source code, guaranteeing 100% bug-for-bug compatibility with upstream SQLite.
  • Extremely fast performance that often beats transpiled alternatives.
  • Very active maintenance and frequent updates matching new SQLite releases.

Installation:

go get github.com/ncruces/go-sqlite3

Standard database/sql Usage:

package main

import (
	"database/sql"
	"log"

	// Import the database/sql driver and embed the WASM build
	_ "github.com/ncruces/go-sqlite3/driver"
	_ "github.com/ncruces/go-sqlite3/embed"
)

func main() {
	// Note the driver name is "sqlite3"
	db, err := sql.Open("sqlite3", "file:my_database.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Use standard db.Query, db.Exec, etc.
	_, err = db.Exec(`CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)`)
	if err != nil {
		log.Fatal(err)
	}
}

Note: If you use GORM, ncruces provides a built-in driver via github.com/ncruces/go-sqlite3/gormlite.


2. The Transpiled Approach: modernc.org/sqlite (The Battle-Tested Standard)

This is the most mature pure-Go SQLite driver. The author wrote a custom C-to-Go transpiler that converts the SQLite C source code directly into Go code.

Why choose it:

  • Drop-in replacement for the standard library (database/sql).
  • Incredibly mature with a massive ecosystem of wrappers built around it.
  • Requires no external WASM runtimes; just standard Go execution.

Installation:

go get modernc.org/sqlite

Standard database/sql Usage:

package main

import (
	"database/sql"
	"log"

	_ "modernc.org/sqlite"
)

func main() {
	// Note the driver name is "sqlite" (different from mattn's "sqlite3")
	db, err := sql.Open("sqlite", "my_database.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	_, err = db.Exec(`CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)`)
	if err != nil {
		log.Fatal(err)
	}
}

How to use Pure-Go SQLite with GORM?

If you are using GORM, you should not use the standard gorm.io/driver/sqlite because it relies on CGO under the hood. Instead, use pure-Go GORM drivers.

Option A (Using ModernC): github.com/glebarez/sqlite This is the officially recommended CGO-free driver by the GORM team.

import (
    "github.com/glebarez/sqlite" // Pure Go ModernC wrapper
    "gorm.io/gorm"
)

db, err := gorm.Open(sqlite.Open("gorm.db"), &gorm.Config{})

Option B (Using Ncruces/WASM): github.com/ncruces/go-sqlite3/gormlite

import (
    "github.com/ncruces/go-sqlite3/gormlite"
    "gorm.io/gorm"
)

db, err := gorm.Open(gormlite.Open("gorm.db"), &gorm.Config{})

Honorable Mentions for Extreme Performance

If you need maximum throughput and are willing to write non-standard code:

  1. zombiezen.com/go/sqlite: This library wraps modernc but ditches the standard database/sql API in favor of a custom, low-level interface. It is significantly faster and uses far fewer allocations than database/sql wrappers.
  2. github.com/cvilsmeier/sqinn-go: This avoids CGO by spinning up a completely separate native C-compiled SQLite child process (sqinn) and communicating with it via stdin/stdout. It has fantastic benchmarking results for raw throughput, but requires you to manage the external binary.

Summary Recommendation

  • Use github.com/ncruces/go-sqlite3 if you want the absolute closest behavior to standard C-SQLite, great speed, and don't mind the wazero dependency.
  • Use modernc.org/sqlite (or glebarez/sqlite for GORM) if you want the oldest, most battle-tested pure Go transpilation without any WebAssembly sandbox overhead.





DISTRIBUTED SQLITE

Here are the best ways to build a distributed SQLite architecture in Go today:

1. LiteFS (The WAL-Streaming Standard)

Created by Ben Johnson (the original author of BoltDB) and backed by Fly.io, LiteFS is a distributed file system built specifically for SQLite.

  • How it works: It runs as a background Go process (or FUSE file system) on your servers. When your Go application executes a write transaction, SQLite writes to the WAL. LiteFS intercepts these WAL pages at the file-system level and rapidly streams them to all other connected replica nodes.

  • Architecture: It uses a Single-Primary, Multi-Replica model. Only the primary node can write, but all nodes have a full, up-to-date local copy of the database for lightning-fast reads. If the primary goes down, the cluster automatically elects a new primary using distributed leases (typically via HashiCorp Consul).

  • The Best Part: You don't have to change your Go code at all. You just use modernc.org/sqlite or ncruces/go-sqlite3 to connect to a local /mnt/litefs/db.sqlite file. The Go app thinks it's a normal local file; LiteFS handles the distributed magic underneath.

  • Written in: 100% Go.

2. rqlite (The Raft-Consensus Approach)

If you want an inherently distributed database engine built around SQLite rather than just syncing files, rqlite is the industry standard.

  • How it works: Instead of shipping SQLite WAL files at the OS level, rqlite sits above SQLite. When you send an INSERT or UPDATE, rqlite uses the HashiCorp Raft algorithm to achieve cluster-wide consensus on the SQL statement before committing it to the local SQLite database on each node.

  • Architecture: Multi-node cluster. Highly fault-tolerant. If you have a 3-node cluster, 1 node can die and the database keeps operating flawlessly.

  • The Catch: You don't interact with a local .db file using standard database/sql. Instead, you use the github.com/rqlite/gorqlite Go driver to talk to the rqlite cluster over an HTTP/JSON API.

  • Written in: 100% Go.
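Because rqlite speaks plain HTTP/JSON, you don't even strictly need the gorqlite driver. A minimal sketch against rqlite's documented /db/execute endpoint, using only the Go standard library — the localhost URL assumes a node running on rqlite's default HTTP port (4001):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// executeBody marshals SQL statements into the JSON array of statements
// that rqlite's /db/execute endpoint expects as a request body.
func executeBody(stmts ...string) ([]byte, error) {
	return json.Marshal(stmts)
}

func main() {
	body, err := executeBody(
		`CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)`,
		`INSERT INTO users(name) VALUES('alice')`,
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))

	// Hypothetical local node; this POST only succeeds if an rqlite
	// cluster is actually running on port 4001.
	resp, err := http.Post("http://localhost:4001/db/execute",
		"application/json", bytes.NewReader(body))
	if err == nil {
		resp.Body.Close()
	}
}
```

In practice the gorqlite driver wraps exactly this API (plus read consistency levels and node discovery), so the raw-HTTP route is mostly useful for scripts and debugging.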

3. Marmot (The Multi-Master / NATS Approach)

If you need Bi-directional / Multi-Master replication (meaning you want to be able to write to any node at any time and have the changes sync everywhere), look at Marmot.

  • How it works: Marmot sits alongside your Go application. It uses standard SQLite triggers and the WAL to capture data changes. It then broadcasts those raw changes over NATS (a highly performant distributed messaging system) to all other nodes in the cluster.

  • Architecture: Masterless (Multi-Master). You can write locally to Node A and Node B simultaneously, and Marmot will resolve the replication in the background.

  • The Best Part: Like LiteFS, your Go application just interacts with standard local .db files.

  • Written in: 100% Go.

4. Turso / libSQL (Embedded Replicas)

While libSQL (an open-source, edge-optimized fork of SQLite) is written in C, its ecosystem provides a radically modern way to distribute SQLite using Embedded Replicas.

  • How it works: You run a central libSQL server (or use the Turso cloud). In your Go application, you use the libSQL Go driver. The driver maintains a full local replica of the remote database on disk.

  • Architecture: When your Go app does a Read, it reads from the local disk instantly (0ms latency). When your Go app does a Write, the driver seamlessly forwards the write to the primary server, which then syncs the updated WAL frame back to all connected Go clients in the background.

  • The Catch: To use the embedded replica feature, the official Go libSQL driver has traditionally relied on CGO. (Though pure-Go network clients exist, the local-syncing engine heavily relies on C).

Summary: Which should you choose?

  • If you want zero code changes, purely local database files, and a single primary with auto-failover, use LiteFS.

  • If you want absolute fault tolerance and don't mind connecting over HTTP rather than direct disk access, use rqlite.

  • If you need to write from anywhere (Multi-master), use Marmot.

  • If you want a managed primary with fast local embedded replicas (and can live with the CGO dependency), use Turso / libSQL.



DISTRIBUTED SQLITE APPS


The idea of running a production website on SQLite used to be considered a joke, but over the last few years, it has become one of the hottest architectural trends in software engineering.

Because of tools like LiteFS, Litestream, and libSQL, some massively popular websites and global infrastructure companies now run their production systems entirely on distributed SQLite.

Here are some of the most prominent real-world examples:

1. Expensify (Millions of users, Billions of dollars)

Expensify is perhaps the most famous and extreme example. This massive expense management company processes billions of dollars in financial transactions, and their entire global backend is powered by a custom distributed SQLite architecture called Bedrock.

  • How they do it: They built a WAN-replicated, highly available consensus layer on top of SQLite[1]. It handles millions of users across multiple global data centers[2]. If one data center goes entirely offline, the others keep humming along perfectly. They chose SQLite because of its sheer speed and reliability[3].

2. Tailscale (Global Networking Infrastructure)

Tailscale is a massively popular zero-trust VPN and networking company. They manage the secure connections for millions of devices globally. In 2022, they migrated their core control-plane database to SQLite.

  • How they do it: They use an active/active single-writer model with SQLite, using tools like Litestream to continuously stream the Write-Ahead Log to cloud storage for near-real-time backup and replication[4]. They noted that reads from SQLite are so fast that it vastly out-performed their old network-based database[5].

3. Bluesky (Decentralized Social Network)

The popular Twitter/X alternative, Bluesky, heavily utilizes SQLite for its underlying architecture (the AT Protocol)[6].

  • How they do it: Because Bluesky is federated, user data is hosted on Personal Data Servers (PDS) and App Views. Instead of using a massive, monolithic Postgres database, Bluesky spins up individual SQLite databases for user shards and data feeds, allowing them to scale horizontally and distribute data incredibly efficiently.

4. 37signals (Creators of Basecamp & Ruby on Rails)

The company behind Basecamp and HEY recently launched a new line of web products under the brand "ONCE" (such as their Campfire chat app).

  • How they do it: Instead of requiring customers to spin up complicated MySQL or Postgres databases, the entire backend runs natively on SQLite[6]. They rely heavily on the speed of local disk writes and modern replication tools to make the database enterprise-ready without the traditional client-server database overhead.

5. EpicWeb.dev (Kent C. Dodds)

While maybe not as massive as Expensify, EpicWeb is the canonical "modern web application" showcase for distributed SQLite. Kent C. Dodds is a highly influential web developer, and his entire educational platform runs on a global SQLite distributed cluster.

  • How he does it: The site is hosted on Fly.io and uses LiteFS. When a user in London visits the site, they read data instantly (0ms latency) from a local SQLite replica in London. If they buy a course, the write is routed to the primary node (e.g., in Chicago) and then the SQLite WAL changes are instantly streamed back to the London replica.

6. Any app running on Cloudflare D1

Cloudflare runs a massive chunk of the internet. Recently, they launched a serverless database offering called Cloudflare D1.

  • What it is: D1 is literally just distributed SQLite at the edge. If you use D1, Cloudflare automatically replicates your SQLite database across their global edge network. Thousands of modern web apps and APIs are currently running on Cloudflare D1, allowing users to query data locally no matter where they are in the world.

Why are all these companies switching?

  1. 0ms Read Latency: In a traditional app, querying a database requires a network call to the DB server (adding 2ms-20ms of delay). With distributed SQLite, the DB is literally a file on the application server's hard drive. Reads take microseconds, so the "N+1 query problem" becomes largely harmless[6].

  2. Cheaper: You don't have to pay AWS or Google Cloud thousands of dollars for managed RDS/Postgres instances.

  3. Simplicity: Managing a .db file that automatically syncs over the network is vastly easier than tuning a massive PostgreSQL cluster.