You're running multiple AI coding agents on the same codebase. Maybe three, maybe thirteen. They need to track their own work: create issues, update statuses, check dependencies, report progress. Dozens of writes per minute across the fleet.
This is agentic engineering: humans coordinating fleets of AI agents to ship software. The workflow is new, but the first thing everyone does is reach for the tool they already know. Jira. Linear. GitHub Issues. Notion. Whatever your team uses for project management.
It doesn't work. And the mismatch isn't superficial. It's architectural.
Latency kills throughput
A Jira API call takes 200-800ms. A Linear API call is faster but still 100-300ms. Creating a single issue, reading its dependencies, updating its status: that's three round-trips through HTTPS, DNS resolution, TLS handshake, and JSON serialization. Call it 500ms on a good day.
A local CLI write to a SQLite database takes under 50ms. Often under 10ms.
That sounds like a rounding error until you multiply it by the number of operations. An agent working through a task might create 2-3 sub-issues, update the parent status, check for blockers, and comment its progress. Six operations. At 500ms each, that's 3 seconds of pure waiting. At 10ms each, it's 60 milliseconds. The agent that could finish a task cycle in 30 seconds now spends 10% of its time waiting on HTTP instead of writing code.
Scale that to 13 agents and the overhead is measured in minutes per hour.
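The local-write half of that claim is easy to check. Here is an illustrative micro-benchmark (schema and row counts are made up) that commits after every insert to mimic independent CLI invocations:

```python
import os
import sqlite3
import tempfile
import time

# Illustrative micro-benchmark: time issue writes against a local SQLite
# file, committing after every insert to mimic separate CLI calls.
path = os.path.join(tempfile.mkdtemp(), "issues.db")
db = sqlite3.connect(path)
db.execute("CREATE TABLE issues (id INTEGER PRIMARY KEY, title TEXT, status TEXT)")
db.commit()

n = 100
start = time.perf_counter()
for i in range(n):
    db.execute("INSERT INTO issues (title, status) VALUES (?, 'open')", (f"task-{i}",))
    db.commit()  # one transaction per write, worst case for SQLite
per_write_ms = (time.perf_counter() - start) / n * 1000
print(f"{per_write_ms:.2f} ms per write")
```

A real CLI invocation adds process startup on top of the raw write, but there is still no DNS, no TLS, and no server on the other end.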
Auth infrastructure is fragile glue
Every agent needs an API token. Tokens expire. Rate limits exist. One agent's burst of 20 rapid-fire updates triggers a 429 Too Many Requests. Now it's stuck in a retry loop with exponential backoff instead of doing its job.
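That retry loop looks something like this. `RateLimited` is a stand-in for whatever exception the agent's HTTP client raises on a 429; the point is that this machinery exists at all:

```python
import random
import time

class RateLimited(Exception):
    """Stand-in for an HTTP 429 Too Many Requests from the tracker's API."""

def with_backoff(call, max_retries=5, base=1.0):
    # Exponential backoff with jitter: boilerplate every agent carries
    # just to survive a SaaS rate limiter. Every sleep is dead time.
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimited:
            time.sleep(base * 2 ** attempt + random.uniform(0, base))
    raise RuntimeError("rate-limit retries exhausted")
```

Usage would be something like `with_backoff(lambda: client.update_issue(issue_id, status="done"))`, wrapping every single call the agent makes.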
You've added an entire failure mode that has nothing to do with the work itself. Token rotation, secret management, rate limit budgeting across agents. That's operational overhead for a capability that should be trivial: writing a record to a local database.
When the issue tracker is a file on disk, there's nothing to authenticate against. If the agent can read the filesystem, it can read and write issues. One less thing to break.
The data model assumes humans
Open Jira. You see sprints. Story points. Assignees with profile photos and email addresses. Workflows with states like "In Review" and "Ready for Grooming." The entire data model was designed for a team of humans doing standups, sprint planning, and retrospectives.
Agents don't do standups. They don't estimate in story points. They don't need a workflow with seven states and four approval gates.
What agents need is a dependency graph. This task is blocked by that task. This epic has 12 children and 7 are complete. This agent claimed this issue 45 seconds ago and hasn't reported back. The fundamental data structure is a tree of tasks with blocking relationships, not a board of cards moving through columns.
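That model fits in two tables. Here is a minimal sketch (table and column names are illustrative, not any particular tool's schema): issues plus blocking edges, and the one query an agent actually needs, "what can I work on right now?"

```python
import sqlite3

# Sketch of the agent-shaped data model: tasks plus blocking edges.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE issues (id INTEGER PRIMARY KEY, title TEXT, status TEXT);
    CREATE TABLE deps (issue_id INTEGER, blocked_by INTEGER);
""")
db.executemany("INSERT INTO issues VALUES (?, ?, ?)", [
    (1, "design schema", "done"),
    (2, "write migration", "open"),
    (3, "update docs", "open"),      # blocked until the migration lands
])
db.executemany("INSERT INTO deps VALUES (?, ?)", [(2, 1), (3, 2)])

# Ready = open, with no blocker that is still unfinished.
ready = db.execute("""
    SELECT i.id, i.title FROM issues i
    WHERE i.status = 'open' AND NOT EXISTS (
        SELECT 1 FROM deps d JOIN issues b ON b.id = d.blocked_by
        WHERE d.issue_id = i.id AND b.status != 'done')
""").fetchall()
print(ready)  # only "write migration" is unblocked
```

No sprints, no assignee avatars, no workflow states: a graph and a readiness query.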
SaaS tools bolt on "automation" features, but the core model underneath is still a Kanban board for humans. You can write a Jira plugin that lets agents create issues. You can't change the fact that Jira thinks your agent is a person on a sprint team.
Cloud dependency is a single point of failure
Your agents run locally. They read local files, write local code, and commit to local git repos. They can work offline, on a plane, or on a network with 2000ms latency. They don't care.
But if your issue tracker is a SaaS product, every agent operation requires internet access. Linear goes down for 10 minutes? Your entire fleet stalls. Your home internet hiccups for 30 seconds? Every agent retries in a loop. The issue tracker, the thing that's supposed to coordinate work, becomes the single point of failure for the whole system.
Local-first means the issue tracker is as reliable as the filesystem. It's always available, always fast, always under your control.
The write volume is orders of magnitude wrong
SaaS project management tools are designed for a team of 5-10 humans making a handful of updates per day. Maybe 50-100 writes across the whole team.
13 agents updating issues every few minutes produce hundreds of API calls per hour from a single project. That's not a marginal increase in usage. It's a different usage pattern entirely. Rate limits that seem generous for human teams become hard walls for agent fleets.
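Back-of-envelope, with hypothetical but conservative numbers:

```python
agents = 13
ops_per_task = 6     # sub-issues, parent update, blocker check, comments (from above)
tasks_per_hour = 8   # hypothetical per-agent throughput
api_calls_per_hour = agents * ops_per_task * tasks_per_hour
print(api_calls_per_hour)  # 624 calls/hour from one project
```

That is roughly a human team's week of tracker activity, every hour, sustained.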
And it's not just volume. It's concurrency. Three agents updating the same epic's children simultaneously. Race conditions on status fields. Optimistic locking failures on comment threads. These are problems SaaS tools never had to solve because humans don't update the same issue from three terminals at the same instant.
Collaboration means giving up your data
To share a Jira project with a teammate, both of you need Jira accounts. The data lives on Atlassian's servers. You're paying per seat, per month, for the privilege of accessing your own project data through their API.
Want to move to a different tool? Export what you can as CSV and abandon the rest. Comments, attachments, custom fields, audit history: good luck getting that out in a usable format. The SaaS model trades data ownership for convenience.
But collaboration doesn't require a vendor in the middle. If your issue database is backed by something like Dolt (Git for databases), you push it to a remote and your teammate pulls it. Branch your issue database the same way you branch code. Merge it the same way too. Resolve conflicts with the same tools and mental model. Your data stays yours. Collaboration works like git, not like a subscription.
What actually works
Strip away the brand names and think about what agents actually need from an issue tracker:
- Local-first. No network dependency. The database is a file on disk.
- CLI-native. Agents live in the terminal. The interface should too.
- Git-backed. Versioned, mergeable, auditable. No vendor lock-in.
- No auth overhead. If the agent can read the filesystem, it can track issues.
- Low latency. Under 50ms per operation, not 500ms.
- Syncable without a middleman. Push and pull like a git repo, not through API webhooks.
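To make the checklist concrete, here is a toy tracker that satisfies most of it in one file. Everything here is hypothetical (names, paths, schema); real tools do far more, but this is the shape of the idea:

```python
import argparse
import os
import sqlite3
import tempfile

# Toy local-first, CLI-native tracker: a file on disk, no auth, no network.
DB = os.path.join(tempfile.mkdtemp(), "issues.db")  # in practice: a known path in the repo

def main(argv=None):
    p = argparse.ArgumentParser(prog="issues")
    sub = p.add_subparsers(dest="cmd", required=True)
    sub.add_parser("create").add_argument("title")
    sub.add_parser("list")
    args = p.parse_args(argv)

    db = sqlite3.connect(DB)
    db.execute("CREATE TABLE IF NOT EXISTS issues "
               "(id INTEGER PRIMARY KEY, title TEXT, status TEXT DEFAULT 'open')")
    if args.cmd == "create":
        cur = db.execute("INSERT INTO issues (title) VALUES (?)", (args.title,))
        db.commit()
        print(cur.lastrowid)
    elif args.cmd == "list":
        for row in db.execute("SELECT id, status, title FROM issues"):
            print("\t".join(map(str, row)))

main(["create", "wire up CI"])
main(["list"])
```

An agent uses this the way it uses `git` or `grep`: spawn a process, read stdout, move on.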
This is the setup I use daily: beads, a Git-native issue tracker built for exactly this workflow. It stores everything in a local SQLite database, backed by Dolt for versioning and sync. The CLI is the primary interface, so agents create, update, and query issues the same way they run any other command.
Beadbox is the visual layer I built on top of it. It watches the local database for changes and renders dependency trees, epic progress, and agent activity in real time. The agents use the CLI. I use the dashboard. Both read from the same local database.
The old tools aren't the problem
Jira is excellent at what it does: coordinating human teams through structured workflows. Linear is beautiful for small teams that want speed and polish. GitHub Issues is frictionless for open-source collaboration.
None of them are bad. They're solving a different problem. If your workflow is a team of five humans doing two-week sprints, keep using them.
But if you're running 5, 10, or 13 AI agents coordinating in real time on the same codebase, you've outgrown the SaaS model. Agentic engineering needs tooling built for agentic engineering, not human workflows with an API bolted on.
