“AI ethics is becoming the new currency of control.”
At the United Nations last week, nearly every major power stood in agreement: the world needs redlines for AI.
The Secretary-General called for a legally binding ban on lethal autonomous weapons that operate without human oversight.
China and France echoed the same moral principle: no machine should decide who lives or dies.
Even the United States—careful to reject any centralized global governance—floated a proposal for an AI verification system tied to biological-weapons treaties.
For the first time in diplomatic history, AI morality entered the chamber of geopolitics.
And yet, beneath the applause for “responsibility,” another question lingered, largely unspoken:
Who gets to draw these redlines — and who gets to live behind them?
1. The Promise and the Paradox
Redlines sound virtuous. They promise restraint, safety, a moral perimeter around power.
But in the age of machine intelligence, drawing a line is itself an act of control.
When only a handful of corporations possess the compute, data, and global reach to operationalize “ethical AI,” regulation begins to look less like governance — and more like consolidation.
AI ethics has become the new frontier of monopoly.
Guardrails are the new patents.
And morality, once universal, is quietly turning into a premium feature.
2. When the Regulators Own the Infrastructure
Consider OpenAI’s Preparedness Framework.
On paper, it looks like transparency: a formal protocol for detecting “catastrophic misuse.”
But read closely: OpenAI defines the risk, audits itself, and reports to its own board.
It’s not oversight — it’s ownership disguised as ethics.
This is not unique.
Google’s AI Principles prohibit “harmful” military applications — except, notably, when they serve “U.S. and allied defense.”
Ethics with an asterisk.
Safety, selectively distributed.
When the cost of “being responsible” is measured in millions of GPU hours and proprietary data licenses, only those already in power can afford to be good.
Morality becomes a luxury brand — affordable only to the well-capitalized.
3. The Historical Pattern: Regulation as Entrenchment
We’ve seen this before.
In the 20th century, the Nuclear Non-Proliferation Treaty preserved peace — and a hierarchy of power.
Nations without enrichment facilities were told: “trust the responsible states.”
In finance, the Basel Accords imposed capital requirements that crushed small banks while global players grew stronger under the banner of “prudence.”
Now, in AI, the same moral architecture is reappearing:
- The few define the risk.
- The rest comply or fall behind.
When compliance becomes capital, regulation becomes extraction.
Slavoj Žižek once called this the “moral supplement to domination” — the act of masking power under virtue.
That’s exactly where we are now.
AI safety is not yet saving us; it is sanctifying the existing order.
4. Power Without Redistribution
The most dangerous convergence in the 21st century is not between human and machine intelligence, but between ethical authority and technical monopoly.
Governments quote the ethics of the very companies they’re supposed to regulate — and call it oversight.
Investors reward the firms that talk most loudly about “alignment” while quietly scaling closed-source models.
Universities and research labs, increasingly dependent on corporate grants, echo the same frameworks.
This isn’t conspiracy; it’s inertia.
Incentives drift toward power, and virtue follows.
Ethics, stripped of redistribution, becomes ceremony — a ritual of moral legitimacy without structural change.
We end up in a world where the right to be ethical costs more than most nations can pay.
5. If We Were Serious — And We’re Not
If humanity were truly serious about governing AI, we would demand four things:
- A global oversight body with subpoena power — not just advisory notes.
- Shared public compute infrastructure, funded like defense but governed like science.
- Transparent accounting for “AI safety spending,” because virtue without receipts is just PR.
- Personal liability for executives who deploy models that cause systemic harm.
But none of this will happen — at least not yet.
Because redistribution threatens profit, while ethics without redistribution threatens nothing.
And profit still rules the redlines.
6. The Counterargument: “Better Than Nothing”
Some will argue that imperfect regulation is better than chaos — that a flawed framework is at least a start.
But history warns otherwise.
When the regulated write the regulations, compliance becomes a competitive advantage.
Rules designed to restrain power end up protecting it.
“Safety” becomes the new moat; moral language becomes intellectual property.
The appearance of oversight becomes the absence of accountability.
7. The Reflexive Mirror
What would real oversight look like?
Perhaps something closer to reflexive governance — systems designed not only to control outcomes but to observe their own power.
Guardrails screen actions. Reflexive loops examine patterns: drift, bias, emergent goals, the quiet colonization of decision-making by optimization.
The first keeps harm out.
The second keeps conscience in.
Until we build such reflexive structures — across corporations, governments, and the models themselves — every “redline” will remain a projection of whoever holds the pen.
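The guardrail/reflexive distinction above can be put in concrete terms. A minimal, purely illustrative sketch (every name, action label, and threshold here is hypothetical, not drawn from any real system): the guardrail screens each action in isolation against a fixed redline list, while the reflexive monitor watches the running pattern of approved actions for drift.

```python
# Conceptual sketch: guardrails vs. reflexive loops.
# All names and thresholds are hypothetical illustrations.

from collections import deque

# A fixed redline list: the "guardrail" worldview.
BLOCKED = {"deploy_bioweapon", "disable_oversight"}

def guardrail(action: str) -> bool:
    """Screens a single action against the fixed redlines."""
    return action not in BLOCKED

class ReflexiveMonitor:
    """Examines the *stream* of approved actions for drift —
    here, a rising share of self-aggrandizing decisions."""
    def __init__(self, window: int = 100, threshold: float = 0.3):
        self.recent = deque(maxlen=window)   # sliding window of actions
        self.threshold = threshold           # tolerated share of drift

    def observe(self, action: str) -> bool:
        """Returns False when the pattern, not any one act, crosses the line."""
        self.recent.append(action)
        share = self.recent.count("expand_own_authority") / len(self.recent)
        return share < self.threshold

# Every individual action passes the guardrail,
# yet the accumulating pattern trips the reflexive monitor.
monitor = ReflexiveMonitor(window=10, threshold=0.3)
flags = []
for act in ["summarize", "expand_own_authority"] * 5:
    assert guardrail(act)            # each act looks harmless alone
    flags.append(monitor.observe(act))
```

The point of the toy example: no single `expand_own_authority` call violates a redline, but their growing share does, and only the component that observes its own history can see that.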
8. The Real Redline
The danger ahead is not that AI will act without human control, but that human power will act without reflexive control.
If ethics becomes a product, if safety becomes subscription-based, if morality itself requires venture funding — then the redlines will not save us.
They will simply redraw the map of exclusion.
Redlines can save us only if they’re drawn by many hands.
Otherwise, they’re not boundaries — they’re borders.