We Shipped beads v1.0.0 Support. It Took a Rollback, a Flock Bug, and Six Hotfixes.
On April 2, beads shipped v1.0.0. The headline feature was embedded Dolt: a zero-config backend that runs the database in-process, no separate server to manage. For solo developers, this was the promise of bd init and you're done. No ports, no daemons, no configuration.
We started adding support in Beadbox immediately. Six hotfix releases, a public rollback, and a deep dive into bd's source code later, we came out the other side with a resilience layer we probably should have built months ago.
The morning before everything broke
The day started clean. We'd been running a dead code hunt across the codebase and shipped v0.20.0 with 5,350 lines removed and a 2-second improvement on cold launch. Forty-two beads closed. A good morning.
Then we upgraded bd to 0.63.3, the first release built on beads v1.0.0's embedded Dolt backend.
Beadbox couldn't find the database. Embedded mode stores data in .beads/embeddeddolt/ instead of .beads/dolt/. The database name changed too, from hardcoded beads to a project prefix read from metadata.json. And bd sql, which our WebSocket server used for O(1) change detection via DOLT_HASHOF_TABLE, isn't supported in embedded mode at all.
Three assumptions broken in the first ten minutes.
Six releases in one day
Discover, fix, ship, discover again.
v0.20.1 added credential persistence using the OS keychain (six beads' worth of work already in progress), fixed a custom status filter bug, and patched Windows-specific issues.
v0.20.2 taught Beadbox to read dolt_database from metadata.json so it could find the renamed database.
v0.20.3 added embedded mode guards. Every bd sql call got wrapped with a check: if we're in embedded mode, fall back to CLI-based polling instead of direct SQL queries. The getDoltDir function learned to look in embeddeddolt/ first.
v0.20.4 fixed --db path normalization for the embedded layout. Paths that worked with the old directory structure broke with the new one.
Each fix revealed the next problem.
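The v0.20.2 and v0.20.3 changes boil down to a few lines. Here's a minimal sketch of the idea; the function names and the metadata shape beyond dolt_database are illustrative assumptions, not Beadbox's actual API:

```typescript
import { existsSync, readFileSync } from "node:fs";
import { join } from "node:path";

// v0.20.2: read the database name from metadata.json instead of assuming
// the pre-1.0 hardcoded name "beads".
function resolveDatabaseName(workspaceRoot: string): string {
  try {
    const raw = readFileSync(join(workspaceRoot, ".beads", "metadata.json"), "utf8");
    const meta = JSON.parse(raw) as { dolt_database?: string };
    return meta.dolt_database ?? "beads";
  } catch {
    return "beads"; // no metadata.json: older workspace, legacy default
  }
}

// v0.20.3: look in embeddeddolt/ first, then fall back to the server layout.
function getDoltDir(workspaceRoot: string): string {
  const embedded = join(workspaceRoot, ".beads", "embeddeddolt");
  return existsSync(embedded) ? embedded : join(workspaceRoot, ".beads", "dolt");
}

// v0.20.3: embedded mode doesn't support bd sql, so callers check this
// before querying and fall back to CLI-based polling when it returns true.
function isEmbeddedMode(workspaceRoot: string): boolean {
  return getDoltDir(workspaceRoot).endsWith("embeddeddolt");
}
```

The point is that all three broken assumptions live behind two small functions, so every caller resolves the layout the same way.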
The flock
After v0.20.4, we thought we were stable. Then we ran a simple concurrency test: five bd list calls at the same time.
Four of them failed.
Embedded Dolt acquires an exclusive file lock (flock) on the database for the entire lifetime of every command. From PersistentPreRun to PersistentPostRun, nothing else can touch it. This is by design. Without it, concurrent engine initialization causes a nil-pointer panic (beads#2571). The flock prevents the crash. But it also means that in embedded mode, bd is effectively single-process.
Beadbox is not single-process. Our WebSocket server polls for changes every second. The UI fires multiple server actions on page load. A user clicking through the app while the background poller runs will generate concurrent bd calls. The flock blocks all of them except the first.
The DoltHub blog post about the embedded implementation described the intended behavior: concurrent callers should "queue up naturally with exponential backoff." But arch reviewed the shipped source code and found that bd uses TryLock with LOCK_NB (non-blocking). It doesn't wait. It fails immediately. There are two lock layers: bd's flock at the top, and Dolt's driver-level backoff underneath. The first layer short-circuits the second. The retry logic exists in the codebase, but it never executes because the flock rejects the connection before Dolt's backoff gets a chance to run.
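To see why the documented backoff never fires, here's a deliberately simplified toy model of the two layers (not bd's actual code): an outer non-blocking try-lock and an inner connect path that would retry with backoff if it ever ran.

```typescript
// Toy model: outer flock with LOCK_NB semantics, inner backoff underneath.
let held = false;

function tryLockNonBlocking(): boolean {
  if (held) return false; // LOCK_NB: fail immediately, never wait
  held = true;
  return true;
}

// Inner layer: would absorb contention by waiting and retrying...
async function connectWithBackoff(attempts = 5): Promise<string> {
  for (let i = 0; i < attempts; i++) {
    await new Promise((r) => setTimeout(r, 100 * 2 ** i)); // exponential backoff
    return "connected";
  }
  return "gave up";
}

async function runCommand(): Promise<string> {
  // ...but the outer layer short-circuits: if the flock is taken, we never
  // reach the inner backoff at all.
  if (!tryLockNonBlocking()) return "flock held: failing immediately";
  try {
    return await connectWithBackoff();
  } finally {
    held = false;
  }
}
```

Run two commands concurrently and the second one fails instantly, exactly the behavior we saw with five parallel bd list calls.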
The fix (shared locks for read operations via FlockSharedNonBlock) exists in bd's source. It just isn't wired up yet.
We could keep shipping hotfixes against a moving target, or pull back and build a proper resilience layer. We pulled back.
All v0.20.x releases came down from the public repo. v0.19.0 went back up as the recommended version. We posted a discussion explaining what happened and what to do, and added a banner to beadbox.app. Thirty minutes from decision to done.
Every hour a broken release stays up is an hour where someone downloads it, hits the flock issue, and blames the product. We'd rather explain a rollback than debug someone else's bad first experience.
We weren't the only ones
While we were debugging, a beads user named Kevin posted beads#2938: "Beads feels painful to use." He'd spent 9.5 hours debugging issues that included the exact embedded-to-server confusion we were hitting. The upgrade to v1.0.0 had silently switched his workspace from server mode to embedded mode (beads#2949), hiding his existing issues behind a fresh empty database.
9.5 hours. An experienced user, not someone new to the tool. If that's the experience for someone who knows beads well, the problem isn't the user. It's the migration path.
What we built for v0.21.0
Instead of patching around individual failures, we built a layer that treats lock contention as a normal operating condition.
Flock retry with exponential backoff. Every bd CLI call retries up to 5 times, 100ms to 1.6 seconds between attempts. Lives in one place in lib/bd.ts, so every command gets it for free. This covers the common case: two calls collide, one waits briefly, both succeed.
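The shape of that layer looks roughly like this. It's an illustrative sketch, not Beadbox's exact lib/bd.ts, and how lock contention is detected here (matching on "lock" in the error message) is an assumption; the real code inspects bd's actual error output.

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const exec = promisify(execFile);
const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

// Retry a flock-contended operation up to 5 times, backing off 100ms -> 1.6s.
async function withFlockRetry<T>(op: () => Promise<T>, retries = 5): Promise<T> {
  let delay = 100;
  for (let attempt = 0; ; attempt++) {
    try {
      return await op();
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      const contended = /lock/i.test(message); // assumed contention signature
      if (!contended || attempt >= retries) throw err;
      await sleep(delay);
      delay = Math.min(delay * 2, 1600); // 100, 200, 400, 800, 1600 ms
    }
  }
}

// Every bd invocation funnels through one place, so each command
// gets the retry behavior for free.
function runBd(args: string[]): Promise<{ stdout: string; stderr: string }> {
  return withFlockRetry(() => exec("bd", args));
}
```

Keeping the wrapper generic also makes it trivial to unit-test without a real bd binary on the path.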
Graceful degradation UI. Lock contention no longer means an error screen. The app shows stale data with a refresh indicator. If contention persists past 30 seconds, an amber banner explains the situation. When the lock clears, the banner disappears and data refreshes automatically.
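That escalation can be modeled as a tiny state reducer. A sketch under assumed names (the real UI wiring is more involved):

```typescript
// Degradation states: show stale data on contention, escalate to an amber
// banner after 30 seconds, clear everything on the next successful poll.
type ViewState = {
  stale: boolean;
  banner: boolean;
  contendedSince: number | null; // ms timestamp of first failed poll
};

const BANNER_AFTER_MS = 30_000;

function onPollResult(state: ViewState, ok: boolean, now: number): ViewState {
  if (ok) {
    // Lock cleared: refresh data, drop any indicator or banner.
    return { stale: false, banner: false, contendedSince: null };
  }
  const since = state.contendedSince ?? now;
  return {
    stale: true,
    banner: now - since >= BANNER_AFTER_MS,
    contendedSince: since,
  };
}
```

Because the reducer is pure, the 30-second threshold is testable with fake timestamps instead of real waits.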
Auto-promote suggestion. Repeated contention triggers a suggestion to migrate to server mode: backup, reinitialize with --server, restore. One click. This is the right answer for anyone running Beadbox alongside other bd consumers, and now the app tells you that instead of making you figure it out.
Embedded mode detection. getDoltDir checks for embeddeddolt/ and routes accordingly. bd sql calls are guarded. The WebSocket pipeline falls back to CLI-based polling in embedded mode (slower, but it respects the single-process constraint).
What we learned
Embedded Dolt is single-process by design. Not a bug. The flock prevents real panics. Any tool consuming a beads workspace concurrently needs to serialize access or run in server mode. For Beadbox, server mode is the right default. Embedded works for light usage with the retry layer absorbing the occasional collision.
The docs described intent, not implementation. The DoltHub blog said backoff. The code said TryLock with LOCK_NB. We spent time assuming concurrent reads should work because the documentation said they would. Reading the source resolved the confusion in minutes. When behavior doesn't match docs, read the code.
Test concurrency before you ship. We didn't run concurrent bd calls until after v0.20.4 was public. A one-liner like for i in {1..5}; do bd list & done; wait would have caught the flock issue before any release. Five seconds of testing would have saved us a rollback.
Roll back early. The instinct to keep pushing forward is strong. You're close, you can see the fix, one more release. But every broken release that stays public is a trust withdrawal you can't easily undo. Pulling back to v0.19.0 gave us room to build the resilience layer properly instead of shipping it in panicked increments.
Check your environment variables. We lost hours to BEADS_DIR pointing bd at the wrong workspace. bd was discovering a different database than the one Beadbox was monitoring, and the symptoms looked like data corruption. If your bd commands return unexpected results, env | grep BEADS before anything else.
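A preflight check would have surfaced this immediately. A hypothetical sketch (checkBeadsEnv is not a real Beadbox function): warn when BEADS_DIR points bd at a workspace other than the one being monitored.

```typescript
import { relative, resolve } from "node:path";

// Returns a warning string if BEADS_DIR points outside the monitored
// workspace, or null if the override is absent or harmless.
function checkBeadsEnv(
  workspaceRoot: string,
  env: Record<string, string | undefined> = process.env,
): string | null {
  const beadsDir = env.BEADS_DIR;
  if (!beadsDir) return null; // no override: bd discovers the workspace itself
  const rel = relative(resolve(workspaceRoot), resolve(beadsDir));
  if (!rel.startsWith("..")) return null; // inside the workspace: fine
  return `BEADS_DIR=${beadsDir} is outside ${workspaceRoot}; bd may be reading a different database`;
}
```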
Where things stand
v0.21.0 is out with beads v1.0.0 support, the resilience layer, and credential persistence via the OS keychain. The release discussion has the full details.
If you're on beads v1.0.0 with embedded mode and hitting intermittent failures, v0.21.0's retry layer should handle it. If you're running Beadbox alongside other tools that hit the same workspace, switch to server mode. The auto-promote flow makes it one click.
And if you're Steve or anyone on the beads team reading this: shared flocks for reads would fix the root cause upstream. beads#2939 (Unix domain sockets) would make local connections cleaner too. We'll keep building around whatever ships.
Try it yourself
Start with beads for the coordination layer. Add Beadbox when you need visual oversight.
Free while in beta. No account required. Works natively with Dolt.