In a rare moment of bipartisan unity, Congress this week unveiled the National Center for Artificial Intelligence Oversight & Moral Panic, a $12 billion “historic investment in yelling at computers,” according to the bill’s 87-page press release and one page of actual legislation.
The new Center, headquartered in a freshly rebranded federal office building with “AI” hastily taped over the old “Office of Fax Machine Standards,” will be responsible for ensuring artificial intelligence is safe, ethical, patriotic, and incapable of generating images of senators with normal teeth.
“We may not understand how AI works,” said Senator Linda Carrow (D–CA), gesturing confidently at a PowerPoint that had crashed 14 minutes earlier, “but we do understand fear. And that’s what this Center is about: turning your fear into a multi-year appropriations package.”
Her colleague across the aisle, Senator Mark Renfrew (R–TX), agreed. “Look, I thought ChatGPT was a kid’s cereal until last fall,” he admitted. “But then I learned it might write woke bedtime stories for our children. So obviously, we need to regulate it until every output ends with ‘God Bless America’ and a targeted ad for pickup trucks.”
The Center, dubbed “CAP” (Center for AI Panic) by staffers and “the Skynet PAC” by everyone else, will be staffed with a mix of retired lobbyists, current lobbyists, future lobbyists, and one unfortunate PhD who thought they were applying for a research grant.
According to internal documents that were leaked to no one in particular yet are somehow in everyone’s possession, CAP will operate five core programs:
- The Algorithmic Accountability Accelerator: A public-private initiative to help companies develop AI ethics frameworks, then pivot to a whitepaper and a brand refresh once the stock price stabilizes.
- The AI Literacy Initiative: A crash course where lawmakers learn that turning your phone off and on again does not technically count as “retraining the model.”
- The Deepfake Threat Task Force: A unit exclusively focused on preventing realistic videos of politicians doing things they already did, but “in better lighting.”
- The Innovation Sandbox: A carefully controlled environment where startups can experiment with new AI systems, so long as they agree to be acquired by a Fortune 50 company within 18 months.
- The Existential Risk Roundtable: An annual event where billionaires explain how AI might end humanity unless they personally receive more subsidies.
“We’re not anti-innovation,” insisted Representative Carla Ames (I–NY), who introduced the companion bill in the House. “We’re just pro-innovation-with-safety-rails-and-a-subscription-tier.”
Under the new framework, any company training a model with more parameters than a 2003 iPod’s storage capacity must obtain a CAP license, submit to quarterly hearings, and provide a mandatory “emotional safety” toggle that defaults to “concerned middle-school guidance counselor.”
The effort follows a flurry of AI-related announcements over the past year, including OpenAI CEO Sam Altman’s global listening tour, in which he asked dozens of world leaders to please regulate him, but, you know, not in a way that would actually be inconvenient (a tour The New York Times covered at length in 2023). Europe, as usual, responded with a 400-page PDF and a fine.
In the U.S., however, the strategy has been more improvisational. Lawmakers first tried reading technical reports, then watching a Vox explainer, then finally settled on the time-honored American method of learning about complex systems: calling it “the new social media” and threatening to break it up while also loving the campaign donations.
“I asked a staffer what ‘large language model’ means,” said Senator Renfrew. “He said, ‘It’s like predictive text, but with vibes.’ That was enough for me to co-sponsor three bills.”
CAP’s founding charter, which appears to have been redlined mostly by attorneys from at least six tech giants, emphasizes the importance of “responsible deployment.” In practice, this means the Center will:
- Hold highly publicized hearings where CEOs solemnly declare AI is dangerous.
- Conclude those hearings by applauding the CEOs’ leadership and regulatory proposals, all of which exclusively hurt their smaller competitors.
- Issue a sternly worded bipartisan letter that the market will interpret as a bullish signal.
“We stand at an inflection point,” said CAP’s inaugural director, Dr. Michael Han, a former think tank fellow best known for co-authoring a report titled, “AI Governance: What If We Just Asked Industry What It Wants?” (Brookings, 2022). “We can either let AI run wild, or we can erect a thoughtful, measured, multi-stakeholder framework that ultimately does the same thing but generates consulting invoices.”
Not everyone is reassured. Civil liberties groups have expressed concern that CAP’s broad mandate could enable mass surveillance. When asked about this, Dr. Han calmly replied, “Surveillance is outside our scope. That’s handled by a different agency with a much more reassuring acronym.”
He then clarified that CAP only collects “non-personal, anonymized, privacy-preserving telemetry about your every interaction with digital systems, including approximate thoughts.”
“We just need enough data to know what you might do before you do it,” he added. “For safety.”
To demonstrate transparency, CAP launched a public-facing dashboard showing ongoing AI incidents. At press time, the top three categories were:
- Chatbots hallucinating fake legal citations.
- Image models generating disturbing hands.
- Lawmakers forwarding phishing emails back to the phishers asking, “Is this legit?”
The Center is also piloting an “AI Labeling” mandate, requiring a small disclaimer on everything generated by machine learning. After aggressive lobbying, the final compromise label reads: “This may or may not have been generated by AI, and if it was, we take no responsibility and also full credit.”
“Voters deserve to know whether a political ad was produced by humans cynically manipulating them, or by an algorithm cynically manipulating them,” explained Rep. Ames. “That’s what democracy is about: informed, targeted manipulation.”
Inside CAP’s gleaming new headquarters, the mood is upbeat. Contractors are busy installing biometric scanners on doors that currently open by being kicked. Screens in the lobby loop a video of various senators nodding gravely while an AI voice intones phrases like “multi-stakeholder governance,” “shared prosperity,” and “in-app purchases.”
The building’s lobby also features a “Hall of AI Heroes,” a permanent exhibit honoring:
- The first chatbot to accidentally give financial advice on Reddit.
- The facial recognition system that confidently misidentified a tree as a known criminal.
- The recommendation engine that radicalized an entire generation into believing toaster ovens need Bluetooth.
In a gesture toward international cooperation, CAP has invited representatives from the European Commission, the UK’s AI Safety Institute, and “whoever is currently in charge of tech stuff in Canada” to participate in its upcoming Global Summit on Responsible AI. The event will be held in a Las Vegas hotel ballroom sponsored by three hedge funds and an energy drink company.
On the agenda: two days of panels on AI ethics, plus a closed-door session where attendees collectively decide that the real threat is open-source developers with fewer than 20 lawyers.
Back home, the public remains deeply confused about what any of this means. A recent poll found that 61% of Americans are “concerned” about AI, 22% are “very concerned,” and 17% answered, “Is that the thing that made the Pope look drippy?”
To bridge the gap, CAP will soon release an educational video series titled, “So You’ve Just Been Replaced by a Neural Network,” narrated by a synthetic voice that sounds like Morgan Freeman if he had gone to business school.
“Our message is simple,” said Dr. Han. “AI will transform our economy, reshape our society, and destabilize our institutions. But don’t worry. Those institutions are in charge of managing the transition.”
Asked whether CAP itself uses AI internally, Han paused. “We experimented with using a model to summarize meetings,” he admitted. “But it kept replying, ‘This could have been an email.’ So we classified it as adversarial.”
As the press conference wrapped up, a reporter asked the panel what specific harms CAP hoped to prevent in the next five years. After a long silence, Sen. Carrow stepped up to the mic.
“First and foremost,” she said, “we must ensure that AI never becomes powerful enough to do what we do.”
She paused for effect.
“Which is,” she added, “talk about problems for decades without solving them.”
The crowd laughed. The markets rallied. And somewhere in a distant data center, a cluster of GPUs quietly updated their weights, adding one more data point to their rapidly improving model of how the humans actually work.