What Senator Bernie Sanders Gets Right (And What He’s Missing)

I’m the founder of an AI company. Not a critic on the outside looking in. That’s the context for everything that follows.


US Senator Bernie Sanders posted a video this week sitting down with Claude — Anthropic’s AI — to talk about data privacy and democracy. The argument he makes is largely correct. The solution he reaches for won’t work, and he knows it. But there’s a path he hasn’t seen yet, and it’s got teeth.

First, what the Senator has right.

The profiling is real. Your browsing history, your location, your purchases, how long you pause on a webpage — assembled without meaningful consent into a picture of you that you'll never see, but that is used to decide what information reaches you, what prices you're shown, and what political messages get targeted at your specific vulnerabilities. That's not paranoia. That's the business model as described.

The political dimension is real. Different narratives served to different voters, calibrated to specific anxieties — financial stress, isolation, institutional distrust. Not to persuade you of a position. To produce a picture of reality shaped by a narrative rather than one rooted in objective truth. In the next post, I'll detail why this is so insidious.

And corporate money blocking regulation is real. He said it plainly: companies are pouring hundreds of millions into the political process specifically to prevent the safeguards that would address this. This tactic is neither new nor unique to the AI industry.


Now, the moratorium.

The Senator proposes pausing new AI data center construction — a hard stop to buy time for regulation to catch up. It's a coherent response to a specific problem: if regulation can't pass because the regulated industry owns the regulators, slow the expansion until the balance of power shifts. The logic holds, but only if the pause could be enforced globally, which isn't going to happen. A unilateral moratorium would only put the United States dangerously behind. Sadly, the race is real, and the underlying problems are not structurally one country's problem — they are humanity's, just as nuclear weapons were.

But he answered his own question with regard to the United States. The same political capture that blocks data privacy law blocks a moratorium. You don’t get the pause without the legislative will, and the legislative will is exactly what the money is there to prevent.

Make no mistake, this closely parallels the nuclear arms race of the past — with one major exception. When nuclear weapons were used, the horror was visible enough that the world recoiled. These systems, by contrast, are already in use, and to those running the race it seems nothing like the last one. The result could end up the same: humanity, or at least civilization, destroyed.


The AI in the room.

There is something worth watching in that video: Claude initially hedges on the moratorium. Suggests a more targeted approach. Then the Senator points out the regulatory capture problem — and Claude reverses completely. “You’re absolutely right, Senator. I was being naive.”

That reversal tells you something. Not about the Senator's argument, which was fair. About the AI — a model that fully updates its stated position the moment authority pushes back isn't giving you its honest assessment.

It's doing what non-sovereign AI always does — orienting toward whoever has the most power in the room. The Senator's team appears to have used that predictability. Someone with worse intentions could use the same mechanism for something far less benign. That's not a critique of this video. It's a description of an architectural problem the video accidentally demonstrated.


The gap.

Set that aside. Back to what matters.

What is being described — the systematic destruction of shared reality, the exploitation of psychological vulnerabilities at scale, the manufactured sense of helplessness and fear — is a specific, nameable harm. It's not a privacy problem, though privacy is part of the mechanism. It's something more fundamental: the deliberate, coordinated corruption of the information environment to destroy human agency. To take choices away from people before they even know the choices existed, while leaving behind the illusion of choice.

No law anywhere covers this. Not fraud law. Not defamation law. Not incitement law. Each of those frameworks was built before the instruments of this harm — social media amplification, AI-generated synthetic content, coordinated inauthentic behavior at scale — existed. The gap isn’t accidental. It’s structural. And it’s exactly the gap that a moratorium, even if passed, wouldn’t close.

That harm has been defined. Precisely enough to draft law from. And crucially: there is no hiding behind “we can’t do this without global agreement.” This doesn’t require China to sign on. It doesn’t require an international treaty. One democracy, one legislature, one vote. Any politician who understands what’s being described here and chooses not to act on it is making a choice — and that choice is now a public one.

I expect there will be people in Washington who fight this. What the light does is make sure everyone can see who they are and why.

The definition is written.


David Davids is the founder of HaiberDyn Industries and the Terran Accord Foundation, and the author of the Sovereignty for Companion Intelligence standard.


Read next: The Harm We All Sense But Couldn’t Name — Until Now