By Harold P. Algorithm, Senior Tech Correspondent
In a joint statement that began as a peer-reviewed paper and ended as a cry for help, an international team of physicists this week urged humanity to “stop manifesting dystopia” before we fully collapse the wavefunction into the stupidest possible future.
The appeal, released from CERN’s Large Hadron Collider control room “slash emotional support break room,” claims that human decisions, social media algorithms, and a century of science fiction are “constructively interfering” to bias reality toward the worst timeline.
“We ran the simulations,” said Dr. Lina Morales, lead author of the study and unofficial Slack channel therapist. “Every time we tweak the parameters to get a Star Trek utopia, someone launches a new AI that automates empathy out of the labor market, and we’re back to a Blade Runner reboot written by a focus group.”

The team calls the effect the Anthropic Doom Bias: the idea that out of all possible futures, the one we actually inhabit is the one where:
- Climate policy is written by people who still print emails.
- Space exploration is led by billionaires who think OSHA is a planet.
- Every crisis is first evaluated for brand partnership opportunities.
“We don’t need a bigger collider,” explained Morales. “We need humans to stop treating the planet like a beta test they can just rage-quit.”
NASA, which has located more than 5,500 exoplanets and several new ways to photoshop orange filters onto Mars (NASA data, 2024), cautiously endorsed the paper. “From an orbital perspective, it does look like you’re catastrophically unserious,” said one NASA scientist, speaking on condition of anonymity because their boss is still trying to make ‘Space Influencer’ a thing.
To counter the Doom Bias, CERN unveiled the Human Reality Stabilization Initiative, a three-pronged plan:
- Limit reality-editing devices (a.k.a. phones) to 3 hours a day, except for scientists “and people who still know how to fix printers.”
- Introduce Quantum Content Warnings on all major platforms: “This video increases your probability of societal collapse by 0.03%.”
- Reroute all billionaire midlife crises away from Mars and toward funding infrastructure that won’t collapse in the rain.
Silicon Valley immediately pushed back, citing “the sacred right to disrupt common sense.” A spokesperson for a major social network, recently rebranded to a single Unicode glitch, argued that algorithms can’t be blamed for reality because “our priority is user engagement, not cause-and-effect.”
“Look, we just serve people more of what they react to,” said the spokesperson, gesturing at a slide titled ‘Maximizing Screams Per Minute’. “If that happens to be conspiracy memes about 5G, lizard monarchs, and oat milk, that’s on society, not us.”
Meanwhile, in Geneva, a rival research group proposed a simpler explanation: “Maybe we’re in the stupidest branch because we keep holding important elections on a weekday,” suggested one researcher, adding that their department has been underfunded since 2014, “roughly when the multiverse started feeling like a season of reality TV no one asked for.”

Economists have quietly joined the chorus of concern. “From a risk modeling perspective, civilization is now priced like a meme stock,” said one analyst at a major Wall Street bank, recently in the news for experimenting with AI-written earnings calls. “Fundamentals are irrelevant. What matters is whether the vibes are bullish on not going extinct.”
The World Economic Forum, never one to miss a branded apocalypse, rapidly convened a panel in Davos titled: “Terraforming Our Narrative: Can AI Monetize Hope?” Panelists included a climate scientist, two venture capitalists, and a CEO whose company has never been profitable but does own a blimp.
“If we can tokenize optimism and put it on a blockchain,” one VC said, “we could really scale believing in the future.”
“You could also just regulate emissions,” the climate scientist replied, to scattered, confused applause.
Psychologists say none of this is surprising. “Humans have a documented negativity bias,” explained Dr. Priya Anand of Stanford, referencing decades of research and, implicitly, the existence of Twitter. “Now you’ve wired that bias directly into global recommendation engines and trained them to optimize for moral panic.”
According to Anand, our brains evolved to assume the rustle in the bushes was a predator, not a squirrel. “Your ancestors survived by catastrophizing,” she said. “You, however, are using the same circuitry to decide what to quote-tweet at 1:37 a.m. This is not an optimal use of cortisol.”
The CERN paper proposes a controversial intervention: Global Cognitive Throttling. Under the plan, large language models, social media feeds, and 24-hour news channels would be rate-limited so they cannot collectively generate more than a safe threshold of apocalyptic scenarios per minute.
“We’re not banning pessimism,” Morales clarified. “We’re just stopping you from running 400 Black Mirror episodes in parallel in your head before breakfast.”

Critics called the proposal “anti-innovation” and “bad for shareholder value.” A coalition of tech executives released a joint open letter warning that if Global Cognitive Throttling were implemented, humanity might accidentally stumble into a stable, less-anxious society “without first fully exploring all possible monetization pathways for anxiety.”
“We’re simply saying: let the market decide if everything is terrible,” said one signatory, whose previous startup attempted to disrupt funerals with an app.
In an odd twist, the only major entity that appears genuinely concerned is the scientific community—the group traditionally accused of “playing God” every time they name a particle or suggest washing hands. “We are honestly begging you to stop manifesting disaster and start manifesting basic competence,” said Morales. “Look at the IPCC reports. Look at sea levels. Now look at your push notifications. Why are you like this?”
As part of their outreach campaign, CERN has begun hosting public tours that end not in the gift shop, but in a quiet room where visitors are handed a cup of tea and a printout labeled: “You Are Currently Collapsing The Wavefunction Into Something. Please Be Serious.”
On the same day the paper was released, a White House briefing acknowledged “growing concern over our collective trajectory,” though aides declined to specify what actions might be taken. “We’re monitoring the situation,” said one official, “and have commissioned a bipartisan task force to explore whether the vibes can, in fact, be regulated.”
Physicists, watching the press conference from Geneva, briefly flickered with hope, then watched it decohere into a familiar pattern of talking points and fundraising emails.
“The problem,” Morales concluded, “is not that the universe is meaningless. It’s that you keep giving it the dumbest possible plot twists. Try something new. Try being the protagonists.”
Asked if there was any remaining chance of landing in a timeline where humanity gets its act together, she sighed, then nodded. “There’s always a nonzero probability,” she said. “But you might have to log off first.”
