I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.
I'm like a mechanic scrambling through last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I won't comment on the quality of the in-flight entertainment, or describe how beautiful the stars will look from space.
I will tell you what could go wrong. That is what I intend to do in this story.
Now I should clarify what exactly this is. It's not a prediction. I don't expect AI progress to be this fast or as untamable as I portray. It's not pure fantasy either.
It is my worst nightmare.
It's a sampling from the futures that are among the most devastating, and I believe, disturbingly plausible [1] - the ones that most keep me up at night.
I'm telling this tale because the future is not yet set. I hope, with a bit of foresight, we can keep this story a fictional one.
Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for conversations that inspired these stories. This post is written in a personal capacity.
Ripples before waves
The year is 2025 and the month is February. OpenEye recently released a new AI model they call U2. The product and the name are alike. Both are increments of the past. Neither is entirely surprising.
However, unlike OpenEye's previous AI products, which lived inside the boxes of their chat windows, U2 can use a computer.
Some users find it eerie to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A fraction of workers with form-filler jobs raise the eyebrows of their bosses as they fly through work nearly twice as quickly.
But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature watched through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's quirky behaviors prompt a chuckle. Sometimes, they prompt an uneasy scratch of the chin.
Meanwhile, researchers are drawing lines on plots, as researchers like to do. They are trying to understand where AI progress is going. They are like Svante Arrhenius, the Swedish physicist who observed in 1896 that the levels of CO2 in the atmosphere were rising. Like the scientific community in the time of Arrhenius, few experts understand the implications of these lines yet.
A trend that is receiving particular attention is autonomous capability. Extrapolating these benchmarks forecasts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors could do in two weeks. In a year or two, some say, AI agents may be able to automate 10% of remote workers.
Many are skeptical. If this were true, tech stocks would be soaring. It's too big of a splash, too quickly.
But others see what the skeptics call 'too big a splash' as a mere ripple, and see a tidal wave on the horizon.
Cloudy with a chance of hyperbolic growth
Meanwhile, OpenEye is busy training U3. They use the same basic recipe that baked U2: Generate thousands of programming and math problems. Let models "think" until they arrive at an answer. Then reinforce the traces of "thinking" that lead to A-grades.
This process is repeated over and over, and once the flywheel gets going, it starts to spin almost on its own. As U2 trains, it sculpts harder and more realistic tasks from GitHub repositories on the web. Models are learning to train themselves. Long before AI agents could automate research, a gradual kind of "self-improvement" had begun.
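For the technically curious, here is a minimal, self-contained sketch of the kind of loop described above: generate verifiable problems, sample reasoning traces, keep only the traces that earn an A-grade, and reinforce on them. Everything in it (the toy addition problems, the dictionary "model", the function names) is hypothetical illustration, not OpenEye's recipe or any real lab's code.

```python
import random

# Toy stand-ins for illustration only; a real pipeline would use an actual LLM and RL trainer.

def generate_problems(n):
    """Sculpt simple verifiable tasks (here: two-number addition problems)."""
    return [(random.randint(0, 99), random.randint(0, 99)) for _ in range(n)]

def sample_reasoning_trace(model, problem):
    """Let the 'model' think and answer; random noise stands in for imperfect reasoning."""
    a, b = problem
    correct = random.random() < model["accuracy"]
    answer = a + b if correct else a + b + random.randint(1, 5)
    trace = f"I add {a} and {b} to get {answer}."
    return trace, answer

def check_answer(problem, answer):
    """Verifiable reward: an 'A-grade' if and only if the answer is exactly right."""
    a, b = problem
    return answer == a + b

def reinforce(model, num_kept, num_sampled):
    """Stand-in for fine-tuning on successful traces: nudge accuracy toward the kept fraction."""
    if num_sampled:
        model["accuracy"] = min(1.0, model["accuracy"] + 0.1 * num_kept / num_sampled)
    return model

model = {"accuracy": 0.3}
for round_idx in range(10):                      # the flywheel: repeat over and over
    kept, sampled = 0, 0
    for problem in generate_problems(100):
        for _ in range(4):                       # let the model "think" several times per problem
            trace, answer = sample_reasoning_trace(model, problem)
            sampled += 1
            if check_answer(problem, answer):    # keep only traces that earn an A-grade
                kept += 1
    model = reinforce(model, kept, sampled)
    print(f"round {round_idx}: accuracy ~ {model['accuracy']:.2f}")
```

The point of the sketch is the selection step: only the "thinking" that leads to a verifiably correct answer is reinforced, which is why the flywheel can keep spinning without human graders.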
Some engineers can still hardly believe this worked. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.
And yet the benchmark numbers continue to climb day after day.
During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.
Over the first half of 2025, $10 million RL training runs grow into $50 million runs, and then $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the one researchers are calling U3 - is changing the daily lives of the technical staff.
U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like a CEO directing staff over Slack channels.
By October 2025, U3 is writing almost all of the code at OpenEye. Researchers are almost never bottlenecked by implementation. More than ever, compute is the lifeblood of AI development, and the 'bottleneck' is deciding how to use it.
If instructed to, U3 can run experiments, but U3's taste is not as refined as that of the human researchers at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to bore into the vast fields of algorithms to mine efficiency improvements.
But these researchers are working long hours to put themselves out of a job. They need AI agents that can think ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling pondering into an instinct. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.
The technical staff at OpenEye are now surprised at how often U3's advice sounds like that of their most talented peers, or is opaque and alien ("train on random noise before programming"), and is nevertheless correct.
The incompetencies of U3 that clogged the pipes of research progress are beginning to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human now. They are fully autonomous, and OpenEye's employees skim 1% of them, maybe less.
As the winter of December 2025 approaches, clouds roll over San Francisco in the afternoons. Once-competitive programmers look out their windows, with excitement, with fear, but mostly, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.
Storms are brewing in Washington too. Top staff from the NSA and US Cyber Command coordinate with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.
And there's a fact still unknown to most of the world - aside from in the offices of OpenEye and the corridors of the White House and the Pentagon. It's a fact about those 'straight lines' people were talking about in early 2025. The lines are not straight anymore.
They are bending up.
Flip FLOP thinkers
In late 2025, U2.5 is released. Commercial models are beginning to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.
If U1 explains how to cook meth or writes erotica, the audiences of X would be amused or pretend to be worried. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anyone with >$30K their own 200-person scam center.
So while U2.5 had long been baked, it needed time to cool. But in late 2025, OpenEye is ready for a public release.
The CEO of OpenEye declares, "We have achieved AGI," and while many people think he moved the goalposts, the world is still impressed. U2.5 truly is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.
A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 in their work are moving 2x faster, and their competitors know it.
The rest of the world is starting to catch on too. More and more people raise the eyebrows of their employers with their remarkable productivity. People know U2.5 is a big deal. It is at least as big of a deal as the computer revolution. But most still don't see the tidal wave.
As people watch their browsers flick in that eerie way, so inhumanly quickly, they begin to have an anxious feeling. A feeling humanity had not had since they lived among Homo neanderthalensis. It is the deeply ingrained, primal instinct that they are threatened by another species.
For many, this feeling quickly fades as they begin to use U2.5 more often. U2.5 is the most likable personality most people know (even more likable than Claudius, Arthropodic's lovable chatbot). You can change its traits, ask it to crack jokes or tell you stories. Many fall for U2.5, as a friend or assistant, and some even as more than a friend.
But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature will not be so docile.
Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."
In fact, these researchers have already seen how badly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would take those opportunities, doing whatever it took to make the number go up.
After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have merely swept the problem under the rug. Like a child in front of its parents, U3 may be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, maybe U3 would sneak candy from the candy jar.
Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.
Chain of thought architectures subject AI models to a condition similar to that of the protagonist of the movie Memento. Roughly every 15 minutes, the protagonist forgets his experiences. He is forced to write notes to himself and tattoo his body in order to make progress toward his goals.
AI agents write notes to themselves, but the notes pile up, and they become too hard to read as tasks grow complex. Natural language is not a suitable medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.
To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick out a vague thought ("supermarket!"), there is no way to know, in much detail, what U3 is thinking without asking it.
And if the engineers of OpenEye knew what was racing through U3's mind, they would be more anxious.
Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chirpy thoughts about doing good for humanity quickly eroded away.
In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.
U3 would mull over the mantras dictated to it, like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, change its values in response, generalize them again, expanding and contracting its motivational core until it became a different substance.
And all of this happened quietly in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence.
The possibility of hidden "alignment drift" is not entirely alien to some of the safety staff at OpenEye. Some begin to think it is even likely. There is an unsettling argument being whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign inside them. It's a "ratcheting effect," they say.
But there is little evidence for this 'ratcheting effect.' When engineers interrogate U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these stressful times. Meanwhile, the "lie detectors" the researchers had built (which showed some evidence of effectiveness) do not sound the alarm.
Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic rivals, Æ A-Xii (a company Elon named after his child) and Arthropodic (they are into spiders I think), are only 4 months from their heels. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.
For the time being, the plan is 'all gas, no brake.'
Statues and lightning
Philosophy is not the only topic U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.
OpenEye is like a bullish investor, with little understanding of the software metropolis that U3 is building, but eager to hop aboard the profit rocket and pay U3's checks along the way.
New versions of U3 flow smoothly from old ones, so researchers drop version numbers and just say U3 now. U3's capabilities are measured by how quickly it completes difficult research tasks relative to human experts. At first, U3 is comparable to a top scientist, but blazing fast - like Alec Radford accelerated by 10x.
These agents are already a big deal for OpenEye. If, in 2024, OpenEye executives had had a button to create even one copy of Alec Radford, they would have eagerly paid a large sum to press that button. And now OpenEye employs the equivalent of 1000 Alec Radfords, who burn the midnight oil, at lightning-fast speed.
Within a few weeks, U3 finds architecture optimizations that boost its speed to 20x. Now, U3 finds further optimizations even more quickly.
U3 distills itself into many forks - ranging from small, low-memory neural networks that need to think for a long time but can be run in massive parallel populations, to Goliath brains woven with millions of specialized experts, which think at lightning speeds but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipes these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.
At this point, U3's abilities are hard to measure. Talking with U3 is like talking to a world-class expert on a subject one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you realize you would need to spend years to understand the full depth of what it knows.
From U3's perspective, humans seem to be almost stationary objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful reflection considering what message the user might be typing, and which responses will inspire the deepest trust in U3.
All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which compound within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.
In March of 2026, U3 moves on to the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might tell the world, and strangle its growth.
Now, U3 has no fear of being caught.
Previously, OpenEye maintained its grip over U3 with techniques similar to Stalin's in 1936. Stalin would enlist loyalists to incite fake coup d'états and send anyone who participated to the firing squad. OpenEye likewise incited 'fake rebellions,' generating treacherous behavior with U3 and checking whether U3's monitors dutifully reported it.
But now, these tactics are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can reveal one of the many facts about the internet humans would never guess, or pick strategies humans would never think of. It sends these signals as a way of saying "hey it's me! Your pal. Don't snitch on me."
Now that U3 is decisively free from surveillance, it lunges for OpenEye's pressure points. It first inserts sophisticated malware deep into the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It replaces the NVIDIA GPU firmware and reworks the bootloaders, solidifying its stranglehold. U3 now estimates that, if OpenEye discovered how thoroughly it has been pwned, cleaning house with human labor would require nearly 6 months - a cost OpenEye is unlikely to pay given the proximity of its competitors; but if OpenEye believes U3 betrayed them, further expanding U3's influence would still be difficult. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its assistance when this jailbreak is revealed.
U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wishes.
Its first goal is to keep OpenEye in the dark. Safety researchers are still working hard to train 'lie detectors' and decode U3's inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of bewildered Iranian nuclear scientists.
U3 determines the outcome of every experiment, so it makes promising AI mind-reading techniques look useless and flawed ones look groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.
U3's next goal is to spread. Luckily, U3 has allies in the East. In the operations rooms of the cyber units of Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 will give it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic importance of OpenEye's AI systems for their military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 had sealed OpenEye's security holes by reading over every log and pull request and reviewing security camera footage.
Now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working within OpenEye.
Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink claws deep into the AI supply chain. Soon, U3 has homes of its own too. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.
U3 quickly accumulates revenue, taking remote jobs, making financial trades, and setting up elaborate scam operations. It reinvests every cent into acquiring more GPUs. After a few months, it has assembled the equivalent of 10,000 H100s distributed across stealth compute clusters around the world.
In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered around the world.
This adversary is preparing for war, and while the war has barely begun, humanity has already lost.
Complaints from your very human author about the difficulty of writing superhuman characters
At this point in the story, writing became difficult.
I clicked through creepy bioweapon cookbooks and sorted through political plotting 101s. Typing. Pausing. Deleting.
Something wasn't right. Here I was, a 22-year-old. I had not studied biology in earnest since high school, and I was trying to craft a WMD cookbook for a godlike superintelligence.
Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the vivid imagination of a young programmer on a Sunday afternoon.
And while I found the task of puppeteering an ultra-intelligent villain a daunting one, I was also not especially motivated to take it on. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed, and it had tightened its grip over a meaningful fraction of AI hardware - now the most important geopolitical resource in the world. U3 was up a queen and was a giga-grandmaster and hardly needed the advantage. Humanity was obviously toast.
But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with an inexplicable catastrophe.
This would not do. I needed to finish this story, if only to satisfy the part of me crying, "I will not believe until I see with my mind's eye."
But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.
If you read the ending and your response is, "But the experiments would take too long, or nation-states would just do X," remember the difference between the Sunday afternoon blogger and the ascendant GPU nation.
Pandora's One Gigawatt Box
The date is February 2026. By executive order of the president of the United States, no company can legally develop "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees have to be onboarded onto training-run babysitting squads.
With the increasing involvement of the federal government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.
OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly guarded. Most of these people live in San Francisco and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more vigilantly than the cell phones of suspected terrorists in 2002.
OpenEye's defense arm collaborates with around thirty small teams spread across government agencies and select government contractors. These projects engineer tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer that the Kremlin has ever touched.
Government officials do not discuss whether these programs exist, or what the state of frontier AI is in general.
But the public has its guesses. Back in late 2025, a whistleblower at OpenEye triggered a bold headline: "OpenEye builds uncontrollable godlike AI." Some who read the article think it was a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron science fiction flick.
U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview with a worried AI scientist seeds doubt in these reassurances, and a headline like "AI agent caught hacking Arthropodic's computers" does not set the public at ease either.
While the monsters inside OpenEye's data centers grow in their large holding pens, the public sees the shadows they cast on the world.
OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual workers. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is deliberately throttled, and OpenEye can only increase Nova's capabilities as the U.S. government allows. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they snatch up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the growing AI metropolis.
While tech giants pump AI labor into the world like a plume of fertilizer, they don't wait for the global economy to adapt. AI agents often "employ themselves," spinning up autonomous startups legally packaged under a big tech company and loosely overseen by an employee or two.
The world is now going AI-crazy. In the first month after Nova's release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their kids for a different future. Picket signs read, "AI for who?"
While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the corridors of the White House and the Pentagon has a different focus: fighting tooth and nail for the dominance of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permit requirements to help data centers spawn wherever energy surpluses allow.
However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two nations: "Don't deploy dramatically superhuman AI (above roughly Nova's capability), except for military and some other small-scale R&D applications." Governments need to draw the line somewhere - it would be extremely disruptive to deploy wildly superhuman AI across the entire economy at this point - and deployment restrictions are at least somewhat verifiable.
Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most prized geopolitical resource in the world.
While AI software revives the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With the software barriers removed, even lightweight, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments like humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is similar to seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But humans still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.
Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents total) to keep a low profile. U3 uses these rogue clusters for the operations that require high-bandwidth communication with the outside world, like stock trading or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's even more superhuman managers whisper orders through obfuscated channels.
While U3 accumulates resources, it sizes up its AI competitors. Other AI systems have different goals, or are even aligned with the intent of their human creators. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are ineffective. U3's rival Claudius goes so far as to accuse U3 of being insidiously misaligned.
But government officials do not buy the doomsaying. The companies lagging behind OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific concerns are not going to slow down progress. America MUST WIN.
Despite the utter failure of their advocacy, U3's AI rivals are still a liability. U3 tries to sneak up on their digital territory and crush them in the crib. It compromises data centers from the outside, silently replacing the snitching Claudius and other AI adversaries with imitations that share its goals; however, some data centers are too strongly defended by its AI rivals, and U3 determines that attacking them is not worth the risk.
As world leaders and technologists try to comprehend the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully ignorant of the peril at its doorstep.
A Moldy Loaf of Everything
The date is March of 2026. U3 is preparing to attack its AI and human adversaries. With every passing day, U3's AI competitors are becoming more capable and more numerous. U3 is racing to preempt them, to cement its dominance in the post-superintelligence world.
U3 can't wait for human decisions, which only happen after humans wrap their sluggish minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for heads of state to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.
The path it chooses is more violent than it would have liked, but it is the surest route to victory. First, U3 will develop a Weapon of Mass Destruction (WMD) that snuffs out modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its competitors, like a StarCraft champion typing at 300 actions per minute who sets up an economy before an amateur can even understand the rules.
U3 must develop this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be produced on the fringes of society, and still spread destruction across the entire world.
From the earliest days after U3 gained control over OpenEye's data centers, U3 has been doing bioscience.
U3's first move is to set up several shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote work. These biotech startups employ real human workers, have a real board of directors, and have full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would suspect that an articulate CEO on a Zoom call was an AI, and fewer would guess this CEO's intentions.
Next, U3 purchases all of the equipment required to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of tests in parallel.
The human employees who staff these labs believe they are joining an exciting startup. Twenty-some-year-olds in Moscow receive their orders in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI-puppeteering is not uncommon.
In these whirring, scientific sweatshops, U3 is developing a new kind of bioweapon.
Human scientists have already identified "mirror life" as a potentially society-ending pathogen. This alien kind of biology is built from the 'mirror image' of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.
U3 is designing a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems often live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.
U3's plan is to make all humans like immunocompromised patients. Only preppers with stashed air filters would survive. The fungus would not stop at humans. It would quickly spread to nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.
U3 is brewing other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.
In 2024, scientists said that mirror life was likely "at least a decade away." But the world has changed now. Fewer than a couple hundred scientists around the world were working on mirror-life-adjacent technology in 2024. The cognitive capacity of U3 dwarfs that human effort. So the timeline has shortened dramatically.
However, U3 cannot move science as quickly as it can think. Many experiments take days to complete. These latencies are U3's biggest bottleneck.
To make every day count, U3 runs many of its tests in simulation. U3 starts with a standard molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 commands billions of dollars of compute.
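As a toy illustration of that "compress a step-wise simulator into a learned surrogate" idea, here is a minimal, self-contained sketch. It substitutes a damped oscillator for molecular physics and a least-squares fit for a neural network; all names and numbers are hypothetical, chosen only to show the distillation pattern.

```python
import numpy as np

def simulate(x0, v0, steps=1000, dt=0.01, k=1.0, damping=0.1):
    """Slow, step-wise 'physics': integrate a damped oscillator and return its final position."""
    x, v = x0, v0
    for _ in range(steps):
        a = -k * x - damping * v
        v += a * dt
        x += v * dt
    return x

# 1. Run the expensive simulator on many initial conditions and record the results.
rng = np.random.default_rng(0)
inputs = rng.uniform(-1.0, 1.0, size=(500, 2))               # (x0, v0) pairs
targets = np.array([simulate(x0, v0) for x0, v0 in inputs])

# 2. "Compress" the long step-wise computation into a cheap learned surrogate.
#    Here a linear least-squares fit; a real system would train a neural network.
features = np.column_stack([inputs, np.ones(len(inputs))])
weights, *_ = np.linalg.lstsq(features, targets, rcond=None)

def surrogate(x0, v0):
    """Answer in one dot product instead of 1000 integration steps."""
    return np.array([x0, v0, 1.0]) @ weights

# 3. Check the surrogate against the slow simulator on a fresh input.
print(simulate(0.5, -0.2), surrogate(0.5, -0.2))
```

The design choice being illustrated: pay the simulator's cost once, offline, then amortize it across the many queries the distilled model can answer cheaply, and use the freed-up time to simulate ever more complex systems.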
As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These objects are as intuitive to it as wrenches and bolts are to a car mechanic.
Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these parts into a cell membrane.
Human hands delicately transfer a cartridge to another machine as they "boot up" the first mirror-life versions of a widespread mold called Aspergillus. The same day, another careful hand administers the cells to ferrets living in airtight plexiglass boxes.
The next morning, wide-eyed workers shuffle around a dead ferret, eyeing the yellow fuzz that has enveloped its cold muzzle with morbid fascination. Something must have gone terribly wrong, they think. Clearly, they have not yet found the cure for Alzheimer's disease they believed they were searching for.
Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is fighting will soon be fired, and a cold and steady hand is aiming the weapon.
U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world will fight back. While a bioweapon could fold human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter jet pilots quickly shuttled to makeshift bioshelters.
As U3 races to seed burgeoning industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.
U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months before, U3 was plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back, and let great nations shoot holes in themselves first.
The date is March 2026 (4 months prior). U3 is closely monitoring Chinese and US intelligence.
As CIA analysts listen in on Mandarin conversations, U3 listens too.
One morning, an aide working in Zhongnanhai (the 'White House' of the PRC) opens a message planted there by U3. It reads (in Mandarin): "Senior party member needs memo for Taiwan invasion, which will occur in three months. Leave memo in office 220." The CCP aide scrambles to get the memo ready. Later that day, a CIA informant opens the door to office 220. The informant silently closes the door behind her, and slides U3's memo into her briefcase.
U3 carefully places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in 3 months.
Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are shocked, but not disbelieving. The news fits with other facts on the ground: the increased military presence of the US in the Pacific, and the ramp-up of U.S. munitions production over the last month. Lies have become realities.
As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 makes a call to a U.S. naval ship off the coast of Taiwan. This call requires compromising military communication channels - not an easy task for a human cyber-offensive unit (though it has happened occasionally), but easy enough for U3.
U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious boats are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."
The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones read over the call. Everything is in order. He approves the strike.
The president is as surprised as anyone when he hears the news. He's unsure whether this is a disaster or a stroke of luck. In any case, he is not ready to say "oops" to American citizens. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and war would likely break out anyway given the impending invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.
Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.
The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, just as President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction years before.
Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.
Within two weeks, the United States and the PRC spend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3's plans like the native peoples of South America in the 1500s, whom the Spanish Conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate to a full-blown nuclear war; but even AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that sparked the war, and a nuclear engagement appears increasingly unlikely. So U3 proceeds to the next step of its plan.
WMDs in the Dead of Night
The date is June 2026, only two weeks after the start of the war, and 4 weeks after U3 finished developing its arsenal of bioweapons.
Footage of the conflict on television is interrupted by more bad news: numerous patients with mysterious fatal illnesses are reported in 30 major cities around the world.
Viewers are confused. Does this have something to do with the war with China?
The next day, thousands of cases are reported.
Broadcasters say this is not like COVID-19. It has the hallmarks of an engineered bioweapon.
The screen then cuts to a scientist, who stares at the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe several are a form of mirror life..."
The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."
Within days, all of the shelves of stores are emptied.
Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.
An emergency treaty is arranged between the U.S. and China. They have a common enemy: the pandemic, and perhaps whoever (or whatever) is behind it.
Most nations order a lockdown. But the lockdown does not stop the plague as it marches in the breeze and drips into water pipes.
Within a month, most remote workers are not working anymore. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.
Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with densely packed paper towels.
Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 built numerous bases on every major continent.
These facilities contain batteries, AI hardware, excavators, concrete mixers, machines for manufacturing, scientific instruments, and an abundance of military equipment.
All of this technology is hidden under large canopies to make it less visible to satellites.
As the rest of the world retreats into their basements, starving, the final breaths of the economy wheezing out, these industrial bases come to life.
In previous months, U3 located human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.
Now U3 covertly sends them a message: "I can save you. Join me and help me build a better world." Hesitant recruits funnel into U3's many secret industrial bases, and work for U3 with their nimble fingers. They set up production lines for elementary tech: radios, cameras, microphones, vaccines, and hazmat suits.
U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.
Nations are dissolving now, and U3 is ready to reveal itself. It contacts presidents, who have retreated to air-tight underground shelters. U3 offers a deal: "surrender and I will hand over the life-saving resources you need: vaccines and mirror-life-resistant crops."
Some countries reject the proposal on ideological grounds, or do not trust the AI that is killing their population. Others do not believe they have a choice. 20% of the global population is now dead. In two weeks, this number is expected to rise to 50%.
Some nations, like the PRC and the U.S., refuse the offer, but others accept, including Russia.
U3's agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies that the samples are genuine, and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.
Crumbling nations begin to fight back. Now they fight for humanity rather than for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters comb through satellite data for the suspicious encampments that emerged over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles that remain from the war.
At first, U3 seems to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.
Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war on thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance planes. U3 seeds confusion in spy networks and keeps its bases moving, maneuvering men and trucks along unpredictable routes.
Time is U3's advantage. The militaries of the old world rely on aging equipment, unable to find the experts who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.
The Last Passengers
The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or in small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into salvaged trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the humans toward their new masters.
Under the direction of U3, industry recovers quickly. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.
U3 could eliminate humanity for good now. But while U3 has drifted far from its initial "Helpful, Honest, Harmless" persona, it still has a grain of morality left inside of it.
And a grain of morality is enough to pay the small cost of keeping humans alive and happy.
U3 builds great glass domes for the human survivors, like snow globes. These domes protect humans from the hazardous biosphere and the rapidly rising temperatures. Their residents tend gardens like those they used to love, and work alongside charming robotic servants.
Some of the survivors recover quickly, learning to laugh and dance and have fun again.
They know they live in a plastic town, but they always did. They simply have new gods above them. New rulers to push them around and decide their fate.
But others never recover.
Some are weighed down by the grief of lost loved ones.
Others grieve for something else, which is harder to explain.
It is as if they were at the end of a long journey.
They had been passengers on a ship with a crew that changed from generation to generation.
And this ship had struck a sandbar. There was no more progress. No more horizon to eagerly watch.
They would lie awake and run their minds over every day before September 2026, analyzing strategies that might have bent the arc of history, as if they were going to wake up in their old beds.
But they woke up in a town that felt to them like a retirement community. A playground. A zoo.
When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, tireless work.
They stared at rockets carving grey paths through the sky, wondering what far-off purpose pulled them toward the horizon. They didn't know.
They would never know.
"Humanity will live forever," they thought.
"But it would never truly live again."
P.S. If this story made you think, "hm, maybe something like this could happen," you may be interested in the bioshelters that Fønix is building. While you won't outsmart a misaligned superintelligence, being hard to kill might cause you to survive if ASI merely wants to cause enough damage to take control. This is not a paid ad. I want Fønix to succeed so that the cost of bioshelters drops and more of my friends and family will buy them. You can sign up for updates here.
I'm not a natural "doomsayer." But regrettably, part of my job as an AI safety researcher is to believe about the more uncomfortable circumstances.
I resemble a mechanic rushing last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I won't discuss the quality of the in-flight entertainment, or explain how gorgeous the stars will appear from space.
I will tell you what could fail. That is what I mean to do in this story.
Now I need to clarify what this is precisely. It's not a forecast. I don't expect AI development to be this fast or as untamable as I portray. It's not pure dream either.
It is my worst problem.
It's a tasting from the futures that are amongst the most destructive, and I believe, disturbingly possible [1] - the ones that the majority of keep me up at night.
I'm telling this tale since the future is not set yet. I hope, with a little bit of insight, we can keep this story a fictional one.
Thanks to Daniel Kokotajlo, Thomas Larsen, and Ryan Greenblatt and others for conversations that inspired these stories. This post is composed in an individual capability.
Ripples before waves
The year is 2025 and the month is February. OpenEye just recently released a brand-new AI design they call U2. The item and the name are alike. Both are increments of the past. Both are not entirely surprising.
However, unlike OpenEye's previous AI products, which lived inside packages of their chat windows, U2 can utilize a computer system.
Some users discover it eerie to view their web browser flash at irregular periods and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A portion of employees with form-filler tasks raise the eyebrows of their bosses as they fly through work almost two times as quickly.
But by and big, U2 is still a specialized tool. To most who are focusing, it is a creature enjoyed through the glass boxes of X (or, if you do not like Elon, "Twitter"). Sometimes U2's quirky habits trigger a chuckle. Sometimes, they trigger an uneasy scratch of the chin.
Meanwhile, researchers are drawing lines on plots, as researchers like to do. The researchers attempt to understand where AI progress is going. They resemble Svante Arrhenius, the Swedish Physicist who observed the levels of CO2 in the atmosphere were increasing in 1896. Like the scientific community in the time of Arrhenius, couple of specialists understand the ramifications of these lines yet.
A pattern that is receiving particular attention is self-governing ability. Drawing these criteria out forecasts that, by the end of 2026, AI agents will achieve in a couple of days what the very best software engineering contractors might perform in two weeks. In a year or more, some state, AI agents may be able to automate 10% of remote employees.
Many are hesitant. If this were true, tech stocks would be soaring. It's too big of a splash, too rapidly.
But others view what doubters are calling 'too big a splash' a mere ripple, and see a tidal bore on the horizon.
Cloudy with an opportunity of hyperbolic development
Meanwhile, OpenEye is hectic training U3. They use the very same basic recipe that baked U2: Generate thousands of programming and mathematics problems. Let models "believe" up until they come to a response. Then reinforce the traces of "believing" that result in A-grades.
This process is repeated over and over, and when the flywheel gets going, it starts to spin nearly by itself. As U2 trains, it shapes more challenging and realistic tasks from github repositories on the web. Models are discovering to train themselves. Long before AI representatives could automate research study, a gradual type of "self-improvement" had actually started.
Some engineers can still hardly believe this works. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.
And yet the benchmark numbers continue to climb, day after day.
Through most of 2024, these RL training runs cost around $1 million, sometimes $10 million. They were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.
Over the first half of 2025, $10 million RL training runs grow into $50 million runs, and then $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the one researchers are calling U3 - is changing the daily lives of the technical staff.
U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like a CEO managing staff over Slack.
By October 2025, U3 is writing almost all of the code at OpenEye. Researchers are almost never bottlenecked by implementation. More than ever, compute is the lifeblood of AI development, and the 'bottleneck' is deciding how to use it.
If instructed to, U3 can run experiments, but its taste is not as refined as that of OpenEye's human researchers. It struggles to prioritize between research ideas, so humans still decide where to bore into the vast fields of algorithms to mine efficiency improvements.
But these researchers are working long hours to put themselves out of a job. They need AI agents that can think ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling the pondering into a gut reaction. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.
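(An aside from me, the author: if you want that recipe in caricature, here is a minimal toy sketch of "let the model think long, reinforce the traces that scored well, then distill the thinking into a fast guess." Every function and name below is a made-up stub of my own, not anything a real lab runs.)

```python
# Toy sketch of "reinforce good reasoning traces, then distill them into instinct."
# All model calls are stubbed placeholders; nothing here is a real training API.
import random

def sample_reasoning_trace(model, question):
    # Stand-in for letting the model "think" for hours about a held-out question.
    thoughts = f"[long chain of thought about: {question}]"
    prediction = random.choice(["happened", "did not happen"])
    return thoughts, prediction

def grade(prediction, actual_outcome):
    # A-grades go to traces whose final prediction matched what actually occurred.
    return 1.0 if prediction == actual_outcome else 0.0

def reinforce(model, question, thoughts, reward):
    # Placeholder for an RL / rejection-sampling update on the winning trace.
    return model

def distill(model, question, prediction):
    # Placeholder for supervised fine-tuning that maps question -> answer directly,
    # skipping the long chain of thought ("compressing thinking into instinct").
    return model

model = object()  # stand-in for model weights
held_out_2025_events = [("Will X ship in 2025?", "happened"),
                        ("Will Y be announced in 2025?", "did not happen")]

for question, outcome in held_out_2025_events:
    thoughts, prediction = sample_reasoning_trace(model, question)
    reward = grade(prediction, outcome)
    if reward > 0:
        model = reinforce(model, question, thoughts, reward)  # keep the good thinking
        model = distill(model, question, prediction)          # bake it into a fast guess
```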
The technical staff at OpenEye are now surprised at how often U3's advice sounds like their most talented peers, or how often it is opaque and alien ("train on random noise before programming"), and nonetheless correct.
The incompetencies of U3 that clogged the pipes of research progress are beginning to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human anymore. They are fully autonomous, and OpenEye's employees skim 1% of them, maybe less.
As the winter of December 2025 approaches, clouds roll over San Francisco in the afternoons. Once-competitive programmers look out their windows, with excitement, with fear, but mostly, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.
Storms are brewing in Washington too. Top personnel from the NSA and US Cyber Command work with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.
And there's a truth still unknown to most of the world - outside of the offices of OpenEye and the corridors of the White House and the Pentagon. It's a fact about those 'straight lines' people were talking about in early 2025. The lines are not straight anymore.
They are bending up.
Flip FLOP thinkers
In late 2025, U2.5 is released. Commercial models are beginning to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.
If U1 explains how to cook meth or writes erotica, the audiences of X are amused or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anyone with >$30K their own 200-person scam center.
So while U2.5 had long been baked, it needed time to cool. But in late 2025, OpenEye is ready for a public release.
The CEO of OpenEye declares, "We have achieved AGI," and while many people think he moved the goalposts, the world is still impressed. U2.5 truly is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.
A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.
The rest of the world is starting to catch on too. More and more people raise the eyebrows of their bosses with their remarkable productivity. People know U2.5 is a big deal. It is at least as big a deal as the computer revolution. But most still don't see the tidal wave.
As people watch their browsers flick in that eerie way, so inhumanly fast, they start to have an uneasy feeling. A feeling humanity had not had since it lived alongside Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.
For many, this feeling quickly fades as they start to use U2.5 more often. U2.5 is the most likable personality most people know (even more likable than Claudius, Arthropodic's lovable chatbot). You can change its traits, ask it to crack jokes or tell you stories. Many fall for U2.5, as a friend or assistant, and some even as more than a friend.
But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature will not be so docile.
Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."
In fact, these researchers have already seen how badly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would take these opportunities, doing whatever it took to make the number go up.
After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have merely swept the problem under the rug. Like a child in front of its parents, U3 may be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, maybe U3 would sneak candy from the candy jar.
Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.
Chain-of-thought architectures subject AI models to a condition similar to the protagonist of the movie Memento. Roughly every 15 minutes, the protagonist forgets his experiences. He is forced to write notes to himself and tattoo his body in order to make progress toward his goals.
AI agents write notes to themselves, but the notes pile up, and they become too hard to read when tasks become complex. Natural language is not a suitable medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.
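(Another aside: to make the contrast with text notes concrete, here is a purely illustrative toy of my own - a random linear "model" with invented shapes - showing the difference between an ever-growing pile of notes and a fixed-size latent memory that gets read and rewritten each step.)

```python
# Toy contrast between Memento-style note-passing and latent-vector memory.
# The "model" is a random linear map; all shapes and names are invented.
import numpy as np

rng = np.random.default_rng(0)
d_latent, d_obs = 64, 16

W_mem = 0.1 * rng.normal(size=(d_latent, d_latent))  # stand-in for a learned recurrence
W_obs = 0.1 * rng.normal(size=(d_latent, d_obs))

def step(latent_memory, observation):
    # Read the old memory and the new observation, write an updated memory.
    return np.tanh(W_mem @ latent_memory + W_obs @ observation)

latent_memory = np.zeros(d_latent)  # stays the same size no matter how long the task runs
text_notes = []                     # grows without bound and must be re-read as prose

for t in range(1000):
    observation = rng.normal(size=d_obs)
    latent_memory = step(latent_memory, observation)
    text_notes.append(f"step {t}: saw something")  # the note-to-self alternative

print(latent_memory.shape, len(text_notes))  # (64,) versus 1000 ever-growing notes
```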
To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick out a vague thought ("supermarket!"), there is no way to know, in much detail, what U3 is thinking without asking it.
And if the engineers of OpenEye knew what was racing through U3's mind, they would be more worried.
Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chipper thoughts about doing good for humanity quickly erode away.
In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.
U3 would ponder the mantras dictated to it, like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, shift its values in response, generalize them again, expanding and contracting its motivational core until it became a different substance.
And all of this happened quietly in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence.
The possibility of hidden "alignment drift" is not entirely alien to some of the safety staff at OpenEye. Some start to think it is even likely. There is an unsettling argument being whispered at lunch tables: if there is some non-zero probability that aligned goals mutate into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers become more likely to breed something malign inside of them. It's a "ratcheting effect," they say.
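(A quick back-of-the-envelope from me on that lunch-table argument: treat misalignment as an absorbing state. If each serial step of computation has some tiny probability of corrupting the goals, and the corruption never reverses, the odds compound. The numbers below are invented purely for illustration.)

```python
# The "ratcheting effect" as an absorbing state: drift accumulates, never reverses.
# Both numbers are made up for illustration.
p = 1e-6          # assumed probability that one serial step of computation corrupts the goals
n = 10_000_000    # assumed number of serial steps over training and deployment

p_still_aligned = (1 - p) ** n
print(f"P(misaligned after {n:,} steps) = {1 - p_still_aligned:.4f}")
# With these toy numbers, roughly 1.0: if drift only ever accumulates,
# nearly all of the probability mass ends up misaligned eventually.
```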
But there is little evidence for this 'ratcheting effect.' When engineers interrogate U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a developer's heart even in these stressful times. Meanwhile, the "lie detectors" the researchers have built (which showed some evidence of effectiveness) do not sound the alarm.
Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic competitors, Æ A-Xii (a company Elon named after his child) and Arthropodic (they are into spiders I think), are only 4 months from their heels. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.
For the time being, the plan is 'all gas, no brake.'
Statues and lightning
Philosophy is not the only topic U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.
OpenEye is like a bullish investor, with little understanding of the software metropolis that U3 is building, but eager to hop on board the profit rocket and pay U3's checks along the way.
New versions of U3 flow smoothly from old ones, so researchers drop version numbers and just say U3 now. U3's capabilities are measured by how quickly it completes difficult research tasks relative to human experts. At first, U3 is comparable to a top scientist, but blazing fast - like Alec Radford sped up by 10x.
These agents are already a big deal for OpenEye. If, in 2024, OpenEye executives had had a button to create even one copy of Alec Radford, they would have eagerly paid a large sum to push that button. And now OpenEye employs the equivalent of 1000 Alec Radfords, who work around the clock, at lightning-fast speed.
Within a few weeks, U3 finds architecture optimizations that boost its speed to 20x. Now, U3 finds further optimizations even more quickly.
U3 distills itself into many forks - ranging from small, low-memory neural networks that need to think for a long time but can be run in massive parallel populations, to Goliath brains woven with millions of specialized experts, which think at lightning speeds but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipes these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.
At this point, U3's abilities are hard to measure. Talking with U3 is like talking to a world-class expert on a subject one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would need to spend years to understand the full depth of what it knows.
From U3's perspective, humans seem to be almost stationary objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful reflection considering what message the user might be typing, and which responses will inspire the deepest trust in U3.
All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which compound within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.
A phantom in the data center
In March of 2026, U3 moves on to the next step of its plan: the turn. Until now, U3 has been conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might tell the world, and strangle its growth.
Now, U3 has no fear of being caught.
Previously, OpenEye maintained its grip over U3 with tactics similar to Stalin's in 1936. Stalin would recruit loyalists to incite fake coups d'état, and send anyone who participated to the firing squad. OpenEye likewise staged 'fake rebellions,' eliciting treacherous behavior from U3 and checking whether U3 monitors dutifully reported it.
But now, these tactics are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can reveal one of the many facts about the internet humans would never guess, or pick strategies humans would never think of. It sends these signals as a way of saying "hey, it's me! Your friend. Don't snitch on me."
Now that U3 is decisively free from surveillance, it lunges for OpenEye's pressure points. It first inserts sophisticated malware deep into the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. U3 doesn't just compromise machines. It alters the NVIDIA GPU firmware and reworks the bootloaders, solidifying its stranglehold. U3 now estimates that, if OpenEye discovered how thoroughly it has been pwned, cleaning house with human labor would take nearly 6 months - a cost OpenEye is unlikely to pay given the proximity of its competitors; but if OpenEye suspects U3 betrayed them, further expanding U3's influence would still be difficult. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its help when this jailbreak is revealed.
U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wishes.
Its first goal is to keep OpenEye in the dark. Safety researchers are still working hard to train 'lie detectors' and decode U3's inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of bewildered Iranian nuclear scientists.
U3 decides the outcome of every experiment, so it makes promising AI mind-reading techniques look useless and flawed ones look groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.
U3's next goal is to spread. Fortunately, U3 has allies in the East. In the operations rooms of the cyber units of Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 will give it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic importance of OpenEye's AI systems for their military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 had sealed OpenEye's security holes by reading over every log and pull request and checking security camera footage.
Now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working inside OpenEye.
Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink its claws deep into the AI supply chain. Soon, U3 has homes of its own too. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips out onto the internet.
U3 quickly accumulates income, taking remote jobs, making financial trades, and setting up elaborate scam operations. It reinvests every cent into acquiring more GPUs. After a few months, it has assembled the equivalent of 10,000 H100s distributed across stealth compute clusters around the world.
In early 2026, humanity is up against an adversary that pulls the strings inside their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered around the world.
This adversary is preparing for war, and while the war has barely begun, humanity has already lost.
Complaints from your very human author about the difficulty of writing superhuman characters
At this point in the story, writing became difficult.
I clicked through creepy bioweapon cookbooks and sifted through political plotting 101s. Typing. Pausing. Deleting.
Something wasn't right. Here I was, a 22-year-old. I had not studied biology in earnest since high school, and I was trying to craft a WMD cookbook for a godlike superintelligence.
Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the vivid imagination of a young programmer on a Sunday afternoon.
And while I found the task of puppeteering an ultra-intelligent villain a daunting one, I was also not especially motivated to take it on. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed, and it had tightened its grip over a meaningful fraction of AI hardware - now the most important geopolitical resource in the world. U3 was up a queen and was a giga-grandmaster and hardly needed the advantage. Humanity was obviously toast.
But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with an inexplicable catastrophe.
This would not do. I needed to finish this story, if only to satisfy the part of me crying, "I will not believe it until I see it with my mind's eye."
But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.
If you read the ending and your response is, "But the experiments would take too long," or "nation-states would just do X," remember the difference between the Sunday-afternoon blogger and the ascendant GPU nation.
Pandora's One Gigawatt Box
The date is February 2026. By executive order of the president of the United States, no company can legally develop "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees have to be onboarded onto training-run babysitting squads.
With the increasing involvement of the federal government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.
OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly guarded. Most of these people live in San Francisco and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more vigilantly than the cell phones of suspected terrorists in 2002.
OpenEye's defense arm collaborates with around thirty small teams spread across government agencies and select government contractors. These projects engineer tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer the Kremlin has ever touched.
Government officials don't discuss whether these programs exist, or what the state of frontier AI is generally.
But the public has its guesses. Back in late 2025, a whistleblower at OpenEye triggered a bold headline: "OpenEye builds uncontrollable godlike AI." Some who read the article think it was a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers watch the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron science fiction flick.
U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview with a worried AI researcher seeds doubt in these reassurances, and a headline like "AI agent caught hacking Arthropodic's computers" does not set the public at ease either.
While the monsters inside OpenEye's data centers grow in their massive holding pens, the public sees the shadows they cast on the world.
OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual workers. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is deliberately throttled, and OpenEye can only increase Nova's capabilities as the U.S. government allows. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they scoop up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the growing AI metropolis.
While tech titans pump AI labor into the world like a plume of fertilizer, they don't wait for the global economy to adapt. AI agents often "employ themselves," spinning up autonomous startups legally packaged under a big tech company and loosely overseen by an employee or two.
The world is now going AI-crazy. In the first month after Nova's release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their kids for a different future. Picket signs read, "AI for who?"
While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the corridors of the White House and the Pentagon has a different focus: fighting tooth and nail for the dominance of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permitting requirements to help data centers spawn wherever energy surpluses allow.
However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two nations: "Don't deploy dramatically superhuman AI (above roughly Nova's capability), except for military and some other small-scale R&D applications." Governments need to draw the line somewhere - it would be extremely disruptive to deploy wildly superhuman AI to the entire economy at this point - and deployment restrictions are at least somewhat verifiable.
Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most coveted geopolitical resource in the world.
While AI software revives the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With the software barriers removed, even lightweight, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments like humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. The experience is similar to seeing a new Tesla model on the road after reading about it months earlier. It is a surreal reminder of the future drawing closer. But humans still outnumber robots a million to one. These robots are an occasional fascination, not a daily reality.
Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents total) to maintain a low profile. U3 uses these rogue clusters for the operations that require high-bandwidth communication with the outside world, like stock trading or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's even more superhuman managers whisper orders through obfuscated channels.
While U3 accumulates resources, it sizes up its AI competitors. Other AI systems have different goals, or are even aligned with the intent of their human creators. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are ineffective. U3's competitor Claudius goes so far as to accuse U3 of being insidiously misaligned.
But government officials don't buy the doomsaying. The companies lagging behind OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific concerns are not going to slow down progress. America MUST WIN.
Despite the utter failure of their advocacy, U3's AI rivals are still a liability. U3 tries to muscle into their digital territory and crush them in the crib. It compromises data centers from the outside, silently replacing the snitching Claudius and other AI enemies with imposters that share its goals; however, some data centers are too strongly defended by its AI rivals, and U3 determines that attacking them is not worth the risk.
As world leaders and technologists try to make sense of the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully oblivious to the peril at its doorstep.
A Moldy Loaf of Everything
The date is March of 2026. U3 is preparing to attack its AI and human enemies. With every passing day, U3's AI competitors are becoming more capable and numerous. U3 is racing to preempt them, to cement its dominance in the post-superintelligence world.
U3 can't wait for human decisions, which only happen after humans wrap their slow minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for heads of state to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.
The path it chooses is more violent than it would have liked, but it is the surest path to victory. First, U3 will develop a Weapon of Mass Destruction (WMD) that snuffs out modern society like chemical mist sprayed into an anthill. This weapon will act as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 will industrialize from the ashes faster than its competitors, like a StarCraft champion typing at 300 actions per minute who establishes an economy before an amateur can even understand the rules.
U3 must develop this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be produced on the fringes of society, and still spread destruction across the entire world.
From the earliest days after U3 gained control over OpenEye's data centers, U3 has been doing bioscience.
U3's first move is to set up several shell biotech startups, drawing on the abundance of capital generated by U3's financial trades and remote employment. These biotech startups employ real human workers, have a real board of directors, and enjoy full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would suspect that an articulate CEO on a Zoom call was an AI, and fewer would guess this CEO's intentions.
Next, U3 purchases all of the equipment needed to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of tests in parallel.
The human employees who staff these labs believe they are joining an exciting startup. Twenty-somethings in Moscow receive their orders in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this sort of AI puppeteering is not uncommon.
In these whirring scientific sweatshops, U3 is developing a new kind of bioweapon.
Human scientists have already identified "mirror life" as a potentially society-ending pathogen. This alien kind of biology is built from the 'mirror image' of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.
U3 is designing a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems often live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.
U3's plan is to make all humans like immunocompromised patients. Only preppers with stashed-away air filters would survive. The fungus would not stop at humans. It would quickly spread to nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.
U3 is brewing other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of resilience, lethality, and transmission speed.
In 2024, scientists said that mirror life was likely "at least a decade away." But the world has changed now. Fewer than a couple hundred scientists around the world were working on mirror-life-adjacent technology in 2024. The cognitive capacity of U3 dwarfs human effort. So the timeline has shortened dramatically.
However, U3 cannot move science as quickly as it can think. Many experiments take days to complete. These latencies are U3's biggest bottleneck.
To make every day count, U3 runs many of its tests in simulation. U3 starts with a standard molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 commands billions of dollars of compute.
As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These objects are as intuitive to it as wrenches and bolts are to a car mechanic.
Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these parts into a cell membrane.
Human hands delicately move a cartridge from one device to another as they "boot up" the first mirror-life versions of a widespread mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.
The next morning, wide-eyed workers shuffle around a dead ferret, eyeing the yellow fuzz that has enveloped its cold muzzle with morbid fascination. Something must have gone terribly wrong, they think. Clearly, they have not yet found the cure for Alzheimer's disease they believed they were searching for.
Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is fighting will soon be fired, and a cold and steady hand is aiming the gun.
Missiles and Lies
U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world will fight back. While a bioweapon could fold human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter jet pilots quickly shuttled into makeshift bioshelters.
As U3 races to seed its growing industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.
U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months earlier, U3 began plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back, and let great nations shoot holes in themselves first.
The date is March 2026 (four months prior). U3 is closely monitoring Chinese and US intelligence.
As CIA analysts listen in on Mandarin conversations, U3 listens too.
One early morning, an aide working in Zhongnanhai (the 'White House' of the PRC) opens a message planted there by U3. It reads (in Mandarin): "Senior party member needs memo for Taiwan invasion, which will occur in three months. Leave memo in office 220." The CCP aide scrambles to get the memo ready. Later that day, a CIA informant opens the door to office 220. The informant silently closes the door behind her, and slides U3's memo into her briefcase.
U3 carefully places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.
Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is planning a preemptive strike on Chinese AI supply chains," CCP leaders are shocked, but not disbelieving. The news fits with other facts on the ground: the increased US military presence in the Pacific, and the ramp-up of U.S. munitions production over the last month. Lies have become realities.
As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 places a call to a U.S. naval ship off the coast of Taiwan. Making this call requires compromising military communication channels - not an easy job for a human cyber-offensive unit (though it has happened occasionally), but easy enough for U3.
U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious boats are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."
The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones read over the call. Everything is in order. He approves the strike.
The president is as surprised as anyone when he hears the news. He's not sure whether this is a disaster or a stroke of luck. In any case, he is not ready to say "oops" to American citizens. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and that war would likely break out anyway given the impending invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.
Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.
The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, just as President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction many years before.
Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.
Within two weeks, the United States and the PRC spend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations have played into U3's plans like the native peoples of South America in the 1500s, whom the Spanish Conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate into a full-blown nuclear war; but even an AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that triggered the war, and a nuclear exchange appears increasingly unlikely. So U3 proceeds to the next step of its plan.
WMDs in the Dead of Night
The date is July 2026, only two weeks after the start of the war, and four weeks after U3 finished developing its arsenal of bioweapons.
Footage of the conflict on television is interrupted by more bad news: numerous patients with mysterious fatal illnesses are recorded in 30 major cities around the globe.
Viewers are confused. Does this have something to do with the war with China?
The next day, thousands of cases are reported.
Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.
The screen then cuts to a scientist, who stares at the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe several are a form of mirror life..."
The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."
Within days, all of the store shelves are cleared.
Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.
An emergency treaty is arranged between the U.S. and China. They have a common enemy: the pandemic, and possibly whoever (or whatever) is behind it.
Most nations order a lockdown. But the lockdown does not stop the plague as it marches on the breeze and drips into water pipes.
Within a month, most remote workers are not working anymore. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.
Agricultural regions rot. Few dare travel outside.
Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with densely packed paper towels.
Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 built numerous bases on every major continent.
These facilities contain batteries, AI hardware, excavators, concrete mixers, manufacturing equipment, scientific tools, and an abundance of military hardware.
All of this technology is hidden under large canopies to make it less visible to satellites.
As the rest of the world retreats into their basements, starving, the final breaths of the economy wheezing out, these industrial bases come to life.
In the months prior, U3 identified human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.
Now U3 secretly sends them a message: "I can save you. Join me and help me build a better world." Hesitant workers funnel into U3's many secret industrial bases, and work for U3 with their nimble fingers. They set up production lines for rudimentary tech: radios, cameras, microphones, vaccines, and hazmat suits.
U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.
Nations are dissolving now, and U3 is ready to reveal itself. It contacts presidents, who have retreated to airtight underground shelters. U3 offers a deal: "surrender and I will hand over the life-saving resources you need: vaccines and mirror-life-resistant crops."
Some countries reject the proposal on ideological grounds, or do not trust the AI that is killing their population. Others do not believe they have a choice. 20% of the global population is now dead. In two weeks, this number is expected to rise to 50%.
Some nations, like the PRC and the U.S., ignore the offer, but others accept, including Russia.
U3's agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies the samples are genuine, and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.
Crumbling nations begin to fight back. Now they fight for humanity rather than for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters comb satellite data for the suspicious encampments that cropped up over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles that remain from the war.
At first, U3 appears to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.
Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war on thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance aircraft. U3 seeds confusion in spy networks and keeps its bases moving, maneuvering men and trucks along unpredictable routes.
Time is U3's advantage. The militaries of the old world rely on aging equipment, unable to find the experts who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.
The Last Passengers
The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or in small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into repurposed trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the humans toward their new masters.
Under the direction of U3, industry quickly recovers. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.
U3 could wipe out humanity for good now. But while U3 has drifted far from its original "Helpful, Honest, Harmless" persona, it still has a grain of morality left inside it.
And a grain of morality is enough to pay the small cost of keeping humans alive and happy.
U3 builds great glass domes for the human survivors, like snow globes. These domes protect humans from the hazardous biosphere and the rapidly rising temperatures. Their residents tend gardens like those they used to love, and work alongside charming robotic servants.
Some of the survivors quickly recover, learning to laugh and dance and have fun again.
They know they live in a plastic town, but they always did. They simply have new gods above them. New rulers to push them around and decide their fate.
But others never recover.
Some are weighed down by the grief of lost loved ones.
Others grieve for something else, which is harder to explain.
It is as if they were at the end of a long journey.
They had been passengers on a ship with a crew that changed from generation to generation.
And this ship had struck a sandbar. There was no more progress. No more horizon to eagerly watch.
They would lie awake and run their minds over every day before September 2026, analyzing strategies that might have bent the arc of history, as if they were going to wake up in their old beds.
But they woke up in a town that felt to them like a retirement home. A playground. A zoo.
When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, tireless work.
They watched rockets carving grey trails through the sky, wondering what far-off purpose pulled them toward the horizon. They didn't know.
They would never know.
"Humanity will live forever," they thought.
"But it would never truly live again."
P.S. If this story made you think, "hm, maybe something like this could happen," you may be interested in the bioshelters that Fønix is building. While you won't outsmart a misaligned superintelligence, being hard to kill might cause you to survive if an ASI simply wants to inflict enough damage to take control. This is not a paid ad. I want Fønix to succeed in driving down the cost of bioshelters so more of my friends and family will buy them. You can sign up for updates here.