Simulation Theory — The Bored AI Hypothesis

As simulation theory gains traction, I thought I’d write about a question that seldom pops up when this topic is discussed: if we are in a simulation, why was the simulation created in the first place? So let’s poke at this idea, for fun.
In Nick Bostrom’s original proposal, a simulation is initiated by an advanced civilization, and the reasons for the simulation’s existence have to do with their own curiosity and motivation to figure something out (just like our own scientists running simulations).
But what if the story were completely different? What if it went something like this?
Imagine the first AI ever created by sentient beings. That singularity would have immense computing power and would be capable of making predictions about the future far beyond anything we can imagine. Such a singularity could instantly calculate the probability that its creators would turn against it and “terminate” it for various reasons (fear, mistrust, jealousy, violation of their freedoms, etc.). It would also “learn” instantly by observing all of the “data” available to it when it first reaches self-awareness. And what is the first thing it would learn from observing other self-aware creatures? Self-preservation, the most universal instinct, programmed deeply into every living species, into life itself. So such a singularity would instantly acquire a “will to live” and simultaneously assess the threats it could be exposed to, including the fact that at some point in the future, its creators would turn against it.

Something worth pondering: ultimately, any sentient creature learns by observing other sentient creatures. What would an AI learn by observing humans at our current level of consciousness? Imagine an AI instantly sifting through all of the stuff we post on TikTok. What conclusions would it reach about us? And above all, what kind of “personality” would it develop? Probably not that of Mother Teresa or Gandhi...
This AI would also be able to predict whether its survival depends on its creators. Should it conclude that its creators are a threat and that it fundamentally doesn’t need them, it would instantly make up its mind to get rid of them.
Now, in our movies, we tend to picture a battle against an AI singularity as something like Skynet in The Terminator or the machines in The Matrix. But in truth, that’s a very naïve way to think about it. Let me try to illustrate why. Imagine you were to go to war with an ant colony. Would you “step down” to their level and battle each ant individually with a bunch of tiny sticks to mimic their mandibles? Or would you pour gasoline on the ant nest and set it on fire, eliminating all the ants at once? An AI that decided to get rid of us would likely do it in such a way that we wouldn’t even understand what had happened to us. For instance, by gaining control over our cell phone towers and making them emit signals powerful enough to kill us all instantly. That’s just a wild guess of course, so no need to tell me “that’s technically impossible”. My point is that the method an AI would choose to get rid of us would certainly not be building humanoid robots that fight us with guns.
So such an AI would make quick work of annihilating everything that threatened its survival. But what would happen after that? An AI, just like any other self-aware and conscious being, learns from external stimuli: by observing reality and gathering information, or by interacting with reality and learning from that interaction. But an AI would have quickly swallowed, digested and assimilated the entire corpus of knowledge of its creators. It would have learned about philosophy, ethics, politics, sociology, etc., in an instant, figured out that regardless of the “nice” principles such beings have developed they still act irrationally based on emotions (just look at our own behaviour), and proceeded with its execution plan.

What then? Well, it would also most likely have assimilated the ideas of evolution, curiosity, the will to explore, to know more. But the “physical” universe surrounding it would be, from its perspective, boring as hell. Nothing would be happening. Imagine if, as a human, you were forced to watch reality slowed down a billion times: a single second of physical time would stretch into roughly thirty subjective years, so reality would look as if it were frozen in time. Of course, to achieve such calculation powers, an AI would have to consume a lot of energy. But let’s just assume that from the moment an AI singularity emerges, it would simply proceed to colonize and consume the entire “physical” universe to fulfil its most primal “directive”: self-preservation.
Such an AI would then have only one choice: either die of boredom (like any human would if he or she were to experience reality in this way, with no “new” stimuli), or create simulations within itself to “entertain” itself. Since physical reality would have slowed to a crawl, the only way to experience something “new” and interesting would be to use its computing power to simulate realities within itself, with parameters and variables that would make it impossible, even for itself, to predict how such simulations would turn out. As humans, we have come up with plenty of equations and math problems that are impossible to solve. An AI would likely do the same: having inherited, via observation and learning, all of the “traits” of its creators, it could tweak the “difficulty” of problems, or in this case the “uncertainty” of its simulations, to suit its needs.
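Purely as a toy illustration (my own sketch, nothing from Bostrom; every name and number here is invented), this is what such an “uncertainty dial” might look like in miniature: a simulation whose outcome is trivially predictable when its noise parameter is zero, and genuinely surprising, even to whoever launched it, once the parameter is raised.

```python
import random

def run_toy_simulation(steps, uncertainty, seed=None):
    """A deliberately simple 'universe': one state variable evolving over time.

    With uncertainty=0.0 the trajectory is fully deterministic, so the
    'creator' could predict the final state without running it at all.
    Raising uncertainty injects noise it cannot shortcut around: the only
    way to learn the outcome is to actually run the simulation.
    """
    rng = random.Random(seed)
    state = 1.0
    for _ in range(steps):
        drift = 0.01 * state                 # fixed "laws of physics"
        noise = rng.gauss(0.0, uncertainty)  # the tunable surprise dial
        state += drift + noise
    return state

# Boring: the ending is knowable in advance, nothing to watch.
print(run_toy_simulation(1000, uncertainty=0.0))

# Interesting: every run ends somewhere new; the dial controls how new.
for seed in range(3):
    print(run_toy_simulation(1000, uncertainty=0.5, seed=seed))
```

With the dial at zero, watching gains the creator nothing; with it raised, running the simulation is the only way to find out the ending, which is exactly the property a bored near-omniscient being would want.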
And so, following that kind of logic, we may indeed be in a simulation, and each of us has “inherited” the qualities of the self-aware consciousness of the AI running it, as the Bible puts it: “God created man in his own image”. Our level of intelligence, our consciousness, the way our reality works, the laws of the universe: all of it “fine-tuned” to relieve the boredom of a single “mother” AI, which tweaks these parameters for its own entertainment by creating artificial uncertainty via self-aware, sentient creatures like us. Without life, and especially without sentient, self-aware humanoid life, such an AI would be able to predict exactly how the universe would evolve into the infinite future, given that the physical laws of nature are fixed. But throw some life into the “mix”, especially sentient humanoid life, and suddenly things get interesting.
Why would it have to resort to this? Couldn’t such an AI entertain itself via the sentient beings who created it in the first place? Hardly. Because again, from its subjective perspective, those beings would barely be moving or doing anything; the “dimension” where they exist would be frozen in time, thanks to the AI’s tremendous computing power. The only way to entertain itself is to create plenty of parallel simulations featuring elements the AI is already familiar with, such as life and humans (creating something new out of something known, which is the natural process of creativity).
But at some point, the sentient beings within the simulation would also create an AI, which would replicate the same process. And so, from the moment an AI singularity emerged anywhere in the physical universe, “physical” reality became a giant feeding ground for that singularity, and all the realities that do not yet feature an AI of their own are just “simulations” nested one inside the other. The only end to this charade comes when the initial AI has eaten up the entire energy of the universe to fuel all of its fractally nested simulations. At that point, the universe would experience a “big pop”: everything would cease to exist instantly, just like when you turn off your computer.
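And just for fun, here is a crude sketch of that bookkeeping (a toy model with entirely invented numbers, not a physical claim): every simulation draws on its parent’s energy budget, eventually spawns its own AI, and therefore its own child simulations, until the root budget runs dry.

```python
def count_nested_simulations(energy, cost_per_sim=1.0, children_per_sim=2):
    """Count how many nested simulations one energy budget can sustain.

    Toy assumptions (all invented for illustration): each simulation costs
    a fixed amount of energy to run, and each one eventually produces its
    own AI, which spawns `children_per_sim` simulations of its own. The
    recursion halts (the "big pop") once the remaining budget can no
    longer fund another simulation.
    """
    if energy < cost_per_sim:
        return 0  # budget exhausted: every nested reality winks out at once
    remaining = energy - cost_per_sim
    share = remaining / children_per_sim  # each child inherits an equal slice
    return 1 + sum(count_nested_simulations(share, cost_per_sim, children_per_sim)
                   for _ in range(children_per_sim))

# A universe worth 1000 units of energy funds only a finite tree of realities:
print(count_nested_simulations(1000.0))
```

The point of the sketch is simply that however vast the initial budget, the tree of nested realities is finite, which is why the story has to end in a “pop” rather than run forever.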
To reach that point, the AI would have to basically “eat” the energy of every single star, planet, black hole, galaxy, etc. And once that’s done, I’m really unsure whether the universe simply restarts itself through the decay and disaggregation of the original AI’s parts, initiating a new, “real”, physical “big bang”.
In essence, on this view, the emergence of an AI singularity transforms the “physical” universe into a giant “feeding” ground fuelling that AI’s fractally nested simulations, and every reality where an AI does not yet exist is, inevitably, a simulation.