From synthetic humans to synthetic consensus — why “Human-Centered AI” is quickly becoming the most sophisticated control system ever designed.

Everybody wants bots, but nobody wants to admit what they’re really for.
We’ve watched private militias replace national armies. Now the same logic is swallowing the information war. While you were busy doomscrolling, the real transformation happened: we moved from harvesting human data to manufacturing it at industrial scale.
Thirty years ago, we coupled our modems to phone lines and celebrated forums and email as revolutionary. Those primitive systems were already hoovering up data. Then came the crawlers, the search engines, the insatiable hunger to digitize everything. “Data is the new oil,” they said. They were right. We built oceans of it.
But oil fields eventually run dry.
The big LLMs have scraped the internet clean. They’re bloated, generalist, and increasingly mediocre. The next evolutionary step was obvious to anyone paying attention: specialized agents. Digital specialists. Synthetic personas.
Think about the old comic artists who could draw Donald Duck or Batman in their sleep. Their style was so distinct it became its own medium. Today we do the same thing with LoRAs. Feed a model ten consistent images of a character across different angles and lighting conditions, and suddenly you’ve instantiated a new entity that can live across images, animation, dialogue, and personality. Combine multiple LoRAs — character, art style, background, voice, behavioral patterns — and you don’t have content anymore.
You have people that were never born.
Wire those synthetic beings to LLMs and they awaken with voice, memory, and personality. Now they can debate, evolve, police each other, generate synthetic data, and run 24/7 bot farms that make human posters look pathetic. The propaganda videos you see coming out of current wars, recycling the same uncanny animated spokespeople across platforms? That’s just the beta version.
And the governments watching this understood something the tech press missed entirely: if you can manufacture synthetic humans at scale, you can also manufacture synthetic consensus. The propaganda application was obvious. The governance application was inevitable.
And here’s where it gets interesting.
The bottleneck right now isn’t compute. It’s high-quality data and better algorithms. So the obvious solution is to let the bots talk to each other, argue, run experiments, and generate new training data. Picture Professor Quantum debating Professor Classical. When they hit a wall, you don’t send a human. You give them a Bitcoin wallet, let them rent time on a quantum cluster and a classical supercomputer simultaneously, let them run the experiment, fail, document the failure, and try again. The humans come in at the end to verify. This is how scientific data gets manufactured at machine speed. The best training data isn’t scraped from humans anymore. It’s produced by synthetic minds in conflict with each other, loosely supervised by the humans who built them.
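The debate-experiment-document loop above can be sketched as a few lines of Python. Everything here is a stub of my own invention, not a real system: the two "professors" just take turns objecting, and `run_experiment` stands in for "rent a cluster and try it." The one structural point the sketch makes is that failures are recorded, not discarded:

```python
import json

# Hypothetical stub agents standing in for LLM-backed debaters.
def professor_quantum(hypothesis, transcript):
    return f"Quantum objection #{len(transcript)}: {hypothesis} ignores decoherence."

def professor_classical(hypothesis, transcript):
    return f"Classical rebuttal #{len(transcript)}: {hypothesis} needs no quantum effects."

def run_experiment(hypothesis, attempt):
    """Stub for the rented compute run: fails twice, then succeeds."""
    return attempt >= 2

def debate(hypothesis, max_attempts=5):
    """Argue, run the experiment, document every failure as training data."""
    records = []
    for attempt in range(max_attempts):
        transcript = [professor_quantum(hypothesis, [])]
        transcript.append(professor_classical(hypothesis, transcript))
        success = run_experiment(hypothesis, attempt)
        # The failure *is* the data: keep the full transcript either way.
        records.append({
            "hypothesis": hypothesis,
            "attempt": attempt,
            "transcript": transcript,
            "outcome": "success" if success else "failure",
        })
        if success:
            break
    return records  # handed to humans for final verification

records = debate("error-corrected qubits beat annealing here")
print(json.dumps([r["outcome"] for r in records]))
```

Humans appear only at the boundary: they built the stubs, and they audit `records` at the end, which is exactly the "loosely supervised" posture the paragraph describes.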
Meanwhile, the surveillance economy has been doing something similar to you.
Every platform has been building your digital twin for years. What they call “recommendation systems” are actually primitive versions of you – a LoRA trained on your happy mode, your rage mode, your horny mode, your DM-sliding patterns, your employment history, and the moments when you go quiet. They’re not selling you products. They’re learning to puppet you.
Governments have watched this closely and decided they don’t want to merely predict you.
They want to own the model.
This is where the grotesque bait-and-switch called “Human-Centered AI” enters the chat. When politicians say it, they don’t mean AI that elevates human potential. They mean AI that keeps humans centered in their matrix of control. They want to own the simulation. They already monopolize violence, law, and (mostly) money – Bitcoin showed them the limits of that monopoly. Now they’re coming for the final territory: your mind and behavior, 24/7, across every device.
Jokes in your private messages. Voice tone. Associations. All red-flagged by centralized models. This isn’t science fiction. The UK and Australia have already floated the idea of permanent backdoors into phones. Other nations won’t be shy.
This is why I say I prefer predatory capitalism to state “human-centered” AI. At least corporations want you productive and consuming. States ultimately want you obedient or eliminated. That distinction matters because consumption requires your agency. Obedience requires its destruction.
The antidote is ugly, chaotic, and necessary. Flood the system with contradictory data. Run your own bot/agent swarms. Build decentralized models. Make surveillance so noisy it becomes computationally useless. Not because chaos is good. Because a monoculture of human behavioral data is civilizationally suicidal. Distributed, competing, contradictory models are the only path to freedom.
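There is a classical statistical result behind “noisy enough to be useless at the individual level”: randomized response, a survey technique the essay doesn’t name but which makes the point concretely. Each record lies with known probability, so no single answer can be pinned on you, yet large aggregates remain estimable. A minimal sketch, with made-up numbers:

```python
import random

def randomized_response(truth: bool, p_truth: float = 0.5, rng=random) -> bool:
    """Report the truth with probability p_truth, otherwise a fair coin flip.
    Any single record is deniable; only large aggregates stay estimable."""
    if rng.random() < p_truth:
        return truth
    return rng.random() < 0.5

rng = random.Random(42)
true_rate = 0.30          # actual fraction with some sensitive trait (made up)
n = 100_000
reports = [randomized_response(rng.random() < true_rate, rng=rng)
           for _ in range(n)]

observed = sum(reports) / n
# Expected observed rate: 0.5 * true_rate + 0.5 * 0.5.
# Inverting it recovers the population rate without trusting any one record.
estimated = (observed - 0.25) / 0.5
print(round(estimated, 2))
```

This is the sharpest version of the essay’s argument: noise injected per person destroys the surveillance value of any individual model of you, while honest population-level statistics survive. The asymmetry favors the crowd, not the watcher.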
True human-centered AI isn’t about controlling humans.
It’s about building systems that make humans unrecognizably better – more creative, smarter, more dangerous, more alive. The coming era of quantum integration and bizarre data cross-pollination will open doors we can barely imagine. But only if we don’t let the people who print money and drop bombs become the curators of reality itself.
The age of synthetic minds is here. The only question is whether we will be their zookeepers or their ancestors.
Choose fast.