Last year, we saw unification dominate database workloads. This year, scale beyond data volume is quietly emerging as a key theme.
The shift is marked by explosive growth in metadata volume, cluster counts, and the pace of change. In this landscape, it is hard to miss how AI agents are driving databases to juggle millions of schemas, contexts, branches, and lightweight instances.
Even a quick look suggests that multidimensional scale will change database architecture at its foundations, and we might have to rethink what a “database” means in the years to come.
This is the transformation we must confront in 2026:
Database for AI Agents
Human developers spent the last decade wrestling with scale, consistency, and cost. Now a new type of user has entered the system, and it is rewriting the ground rules faster than most teams realize.
AI agents. Thousands of them. Running 24 hours a day. Generating code, shipping features, spinning up infrastructure, mutating schemas, and discarding resources as casually as a human developer refreshes a browser tab.
If the cloud era was about elasticity, the agent era is about hyper-elasticity.
If the cloud era was about efficiency, the agent era is about automation at overwhelming scale.
1. Scalability Becomes the First Principle
For years the industry talked about “scale” as a comforting abstraction. Add more shards. Add more replicas. Add more cache. But the LLM and AI agents give us another possibility:
every user action, every trace, every log line, every event becomes a potential context for an AI agent to provide a customized experience.
And if that data can produce value for enterprises, the only rational strategy is simple:
store everything.
This is not a philosophical stance. It is an economic one. Enterprises are learning that with agents in the loop, value does not come from pre-defined dashboards. It comes from personalized, contextualized insight generated at the moment of need; a genuinely customized user experience is the value.
To make that possible, databases must scale orders of magnitude beyond classic OLTP patterns. Not just more rows, but more clusters, more branches, more independent contexts, created and destroyed at machine speed.
2. Databases Now Serve a New User: AI Agents
The most important shift here is not technical.
It’s behavioral.
Agents don’t behave like developers. They don’t throttle themselves. They don’t batch work. They don’t wait for off-peak periods. They create thousands of databases per day because that is the natural unit of work for them.
More than 90 percent of new daily TiDB Cloud clusters are created not by humans, but by AI agents.
This is the beginning of a historic transition:
The primary users of databases are no longer humans.
Developers and DBAs are becoming supervisors of fleets of autonomous systems that generate SQL, mutate schemas, and perform migrations automatically.
This is why extreme flexibility is no longer optional. Agents don’t negotiate with you about schema-blocking DDL. They just start another experiment. Agents don’t consolidate workloads. They branch.
And this leads directly to the next challenge.
3. The Agent Explosion and the XYZ Dilemma
Let’s imagine a platform with 100,000 users (not a big number), each running 10 agent-driven tasks, each testing 10 branches → 10,000,000 databases.
This is not hypothetical math. This is how Manus 1.5 runs today with TiDB X. Agents treat the database not as a single shared global resource, but as a programmable substrate:
create a database, evolve it, test it, deploy it, delete it.
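The create, evolve, test, delete loop can be sketched in a few lines. The `AgentDB` class below is a hypothetical in-memory stand-in, not a real TiDB or TiDB Cloud API; it only illustrates the shape of the loop an agent runs for every task.

```python
# Hypothetical sketch: an agent treating a database as a disposable
# unit of work. AgentDB is an invented in-memory stand-in, not a real API.

class AgentDB:
    """Ephemeral database instance an agent creates for one task."""

    def __init__(self, name: str):
        self.name = name
        self.schema: dict[str, list[str]] = {}   # table -> column names
        self.rows: dict[str, list[tuple]] = {}   # table -> row tuples

    def evolve(self, table: str, columns: list[str]) -> None:
        # Schema change is just metadata here; agents expect this
        # to be non-blocking in a real system.
        self.schema[table] = columns
        self.rows.setdefault(table, [])

    def insert(self, table: str, row: tuple) -> None:
        self.rows[table].append(row)

    def test(self, table: str) -> bool:
        # A toy "experiment": every row must match the schema width.
        width = len(self.schema[table])
        return all(len(r) == width for r in self.rows[table])


def run_agent_task(task_id: int) -> bool:
    db = AgentDB(f"task-{task_id}")          # create
    db.evolve("events", ["ts", "payload"])   # evolve the schema
    db.insert("events", ("t0", "hello"))     # use it
    ok = db.test("events")                   # test
    del db                                   # discard when done
    return ok


print(run_agent_task(1))  # a passing experiment prints True
```

The point is the lifecycle, not the storage: the whole database lives and dies inside one task, which is exactly why instance creation has to be cheap and fast.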
Traditional shared-nothing systems were simply not built for this. The agent workload demands:
- Second-level creation of databases
- S3-backed compute-storage separation
- Non-blocking, agent-friendly schema evolution
- Unified OLTP + analytics + vector search
- Branching via copy-on-write storage
The old database metaphor was a central warehouse.
The new metaphor is version control: clone, branch, experiment, merge, discard.
4. Cost Becomes the Hard Wall
When a single human creates a database, the cost is trivial, but when an agent creates a thousand in a day, the cost is existential.
Agents operate databases with 1,000× the efficiency of human engineers; cost becomes the dominant constraint.
Agents do not slow down. They do not optimize unless you force them to. Their natural state is combinatorial explosion.
This is why 2025 shifts the conversation from “How fast is your database?” to
“How gracefully does your database shed cost at massive scale?”
The new requirement is explicit:
- Costs must fall to zero when the workload falls to zero
- Costs must track workload precisely, not approximately
- Branch explosions must be cheap (copy-on-write, not copy-on-copy)
- Per-agent and per-tenant metering must be first-class
If you cannot control costs at the statement level in an agent world, you cannot survive.
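A toy model makes the copy-on-write economics concrete. Every number below (the storage price, the base size, the fraction of pages a branch modifies) is invented for illustration; this is a sketch of the shape of the economics, not any vendor's pricing.

```python
# Toy cost model: copy-on-write vs naive copy-on-copy branching.
# All constants are invented for illustration.

PRICE_PER_GB_HOUR = 0.0001  # hypothetical storage price


def branch_storage_gb(base_gb: float, branches: int,
                      dirty_fraction: float, copy_on_write: bool) -> float:
    """Total storage if each branch rewrites `dirty_fraction` of the base."""
    if copy_on_write:
        # One shared base plus only the pages each branch actually modified.
        return base_gb + branches * base_gb * dirty_fraction
    # Naive copy-on-copy: every branch duplicates the full base.
    return base_gb * (1 + branches)


def hourly_cost(storage_gb: float) -> float:
    # Scale-to-zero: no stored data, no cost.
    return storage_gb * PRICE_PER_GB_HOUR


# 1,000 branches of a 100 GB database, each touching 1% of its pages:
cow = branch_storage_gb(100.0, 1000, 0.01, copy_on_write=True)
full = branch_storage_gb(100.0, 1000, 0.01, copy_on_write=False)
print(cow, full)         # 1100.0 vs 100100.0: ~91x less storage to pay for
print(hourly_cost(0.0))  # 0.0 -- cost falls to zero with the workload
```

Under these invented numbers, copy-on-write turns a thousand-branch explosion from a thousand full copies into roughly eleven copies' worth of storage, which is the difference between an experiment and an outage in the billing system.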
5. The Machine Era Is Here
Agents don’t replace developers; they amplify the pressure on the underlying infrastructure.
A database built for humans will collapse under agent-level concurrency, agent-level branching, and agent-level iteration speed.
A database built for agents unlocks something entirely new:
software that builds itself, personalized systems at massive scale, and a world where experimentation costs pennies instead of days of engineering time.
TiDB X was designed around this future because we see it happening every day.
From Manus 1.5 to internal workflows to emerging agent platforms, the message is clear:
2025 is the year databases stop being passive storage and become active substrates for autonomous software creation.
Welcome to the machine. It’s going to be a fun decade.
The author is Ed Huang, Co-founder & CTO, TiDB.
Disclaimer: The views expressed are solely of the author and ETCIO does not necessarily subscribe to it. ETCIO shall not be responsible for any damage caused to any person/organization directly or indirectly.






