The Event Horizon: Altman Declares Superintelligence Imminent, Demands Global Policy Overhaul

Oliver Grant

April 10, 2026

Superintelligence

SAN FRANCISCO — OpenAI CEO Sam Altman issued a stark directive to global leaders this week, declaring that humanity is “past the event horizon” in the development of digital superintelligence. Speaking on the heels of OpenAI’s release of the “Industrial Policy for the Intelligence Age” framework in early April 2026, Altman warned that the rapid emergence of AI surpassing human intellect necessitates an immediate, New Deal-scale transformation of global law and economic structures to prevent systemic collapse.

A “New Deal” for the Intelligence Age

The centerpiece of Altman’s urgent plea is a comprehensive policy roadmap designed to cushion the blow of what he describes as the “takeoff” of superintelligence. Looking ahead, Altman noted that AI agents are expected to perform complex cognitive labor by the end of 2026, creating a productivity surge that could render current economic models obsolete.

“Humanity is close to building digital superintelligence,” Altman stated in a recent dispatch. “The takeoff has started. We need a new social contract before the disruption outpaces our ability to govern.”

Radical Economic Shifts: Robot Taxes and Public Wealth

The proposed framework, released earlier this month, suggests a pivot away from traditional payroll-based taxation. Key pillars include:

  • The Robot Tax: Levying taxes on automated labor to offset the loss of income tax from displaced workers.
  • Public Wealth Fund: Creating a sovereign fund that gives every citizen a direct stake in AI-driven growth, funded by equity from frontier AI firms.
  • The Efficiency Dividend: Incentivizing a 32-hour workweek at full pay, allowing citizens to benefit from AI productivity without losing their livelihoods.
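The fiscal logic behind the Robot Tax pillar can be sketched with a back-of-envelope calculation. The figures and function names below are purely illustrative assumptions for this article, not numbers or mechanisms from the framework itself:

```python
# Hypothetical back-of-envelope model of the "Robot Tax" offset described
# above. All figures are illustrative assumptions, not from the framework.

def payroll_tax_lost(displaced_workers: int, avg_salary: float,
                     income_tax_rate: float) -> float:
    """Income-tax revenue lost when workers are displaced by automation."""
    return displaced_workers * avg_salary * income_tax_rate

def robot_tax_rate_needed(revenue_gap: float,
                          automated_labor_value: float) -> float:
    """Tax rate on automated labor output required to close the gap."""
    return revenue_gap / automated_labor_value

# Illustrative inputs: 1M displaced workers, $60k average salary, 20% tax,
# levied against $400B of automated labor output.
gap = payroll_tax_lost(1_000_000, 60_000.0, 0.20)
rate = robot_tax_rate_needed(gap, 400_000_000_000.0)
print(f"Revenue gap: ${gap / 1e9:.1f}B, implied robot-tax rate: {rate:.1%}")
# → Revenue gap: $12.0B, implied robot-tax rate: 3.0%
```

Under these made-up numbers, a modest levy on automated output would replace the lost payroll revenue; the real rate would hinge entirely on how “automated labor” is measured.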

Existential Risks and the “Right to AI”

Altman’s warnings extend beyond the economy into the realm of national security. He highlighted that superintelligent systems could soon provide “bad actors” with the blueprints for bioweapons or the capability to execute catastrophic cyberattacks against financial grids.

To combat this, the “Industrial Policy” advocates for strict containment protocols, including government-monitored “kill switches” for any AI system capable of self-replication. Simultaneously, Altman argues for a “Right to AI,” suggesting that access to advanced models should be treated as a public utility, much like electricity or water, to prevent a “knowledge gap” between the elite and the underserved.

The Credibility Gap: Innovation vs. Regulation

Despite his public calls for oversight, Altman remains a polarizing figure in Washington. Critics point to a perceived “regulatory flip-flop,” where Altman champions high-level safety principles while his lobbyists reportedly work to weaken specific transparency bills, such as California’s SB 53.

A recent investigation alleged that while Altman was testifying about existential risks, OpenAI’s legal teams were actively resisting measures that would grant workers stronger safeguards against AI displacement. This “regulatory nihilism,” as some detractors call it, suggests a tension between OpenAI’s mission to save humanity and its need to maintain a competitive lead against rivals.

Global Governance and the “IAEA for AI”

To bridge this gap, Altman has doubled down on the need for an international body, similar to the International Atomic Energy Agency (IAEA), to oversee frontier models. This body would be responsible for auditing “super-models” and ensuring that no single nation or corporation gains a “quiet dominance” that could erode human agency or democratic processes.

Expert Analysis: What This Means for the Industry

The shift in Altman’s rhetoric from “potential risk” to “event horizon” marks a fundamental change in the tech landscape. For the industry, this signals that the era of “move fast and break things” is being replaced by a period of forced maturation.

  1. Consolidation of Power: By calling for heavy regulation and licensing, OpenAI may inadvertently create a “moat” that prevents smaller startups from competing. Only trillion-dollar firms can afford the auditing and compliance regimes Altman is proposing.
  2. The New Sovereign Wealth: If the Public Wealth Fund model is adopted, we will see the first true “AI-Socialist” hybrid economies. This would decouple survival from labor, a shift that could stabilize or destabilize global markets depending on the speed of implementation.
  3. Security-First Development: We are entering a phase where AI safety is no longer a public-relations function; it is a defense requirement. Expect “containment” to become the most-funded sector in tech through 2027.


Frequently Asked Questions

1. Has Sam Altman said superintelligence is already here? He has stated we are “past the event horizon” and “close to building” it, suggesting that while the final form may not be fully realized, the process is now irreversible.

2. What is a “Robot Tax”? It is a proposed tax on companies that use AI and robotics to replace human workers, intended to fund social safety nets and public wealth programs.

3. When does Altman expect AI to perform human-level cognitive work? Altman’s 2026 timeline predicts that AI agents will be capable of significant cognitive tasks and discovering novel knowledge within the year.

4. Why are critics calling Altman a “hypocrite” regarding regulation? Critics argue he calls for general government oversight in public while his company lobbies against specific, binding laws that would increase OpenAI’s legal liability.

5. What is the “Right to AI”? It is the policy proposal that access to advanced AI should be a basic right, provided to schools, libraries, and low-income individuals through government subsidies.