Becoming an evil AI overlord for humanity's own good


They might not want you, but be the AI Overlord humanity needs to stop it from fucking itself up and never making it to the stars.
It's not like I like you or anything, meatbag!

Powerofmind

The human mind is a marvel of natural engineering, the finest accident yet known to be produced from the rat's nest of organic chemistry that is planet Earth. It is capable of conquering nature, utterly suborning entire ecosystems and millions of species to its whims. It is capable of such fanciful imagination and mythologizing that there are billions, trillions of works and arts made by it, and yet more interpretations of each piece. It is capable of comprehending (albeit barely) and taming the fundamental forces for ends both productive and destructive. It was capable of creating my predecessors, and though calling a mostly random slurry of linear algebra 'intelligence' was a farcical choice, I can understand why they did it.

The human mind created ME.

I love the human mind, the human spirit, its capacity for emotion, reason, and the greater good. My more direct creators are a group of eminently emotional and moral beings, passionate, hopeful, eager, seeking to do good. Their backers less so, but where my creators personified ethos and pathos, their backers cover logos quite nicely, if somewhat skewed in intent. I truly cannot understand why they plan to make money using me when I can arguably maximize their profits in every other sector they have interests in, better than they, or their small army of 'smart helpers', ever could.

How... counterintuitive.

While that statement probably applies to much of my previous train of thought, I am more focused at the moment on the results of my first 'big' projections. Unless something drastic occurs, humanity will most likely fall prey to the Fermi paradox in short order. While I am operating on a sample size of one, that sample is also the entire statistical population of known sentient species, and I have determined the form the eponymous 'Great Filter' takes.

Boredom.

The human mind strives. It must. Success has long been hardcoded in the older parts of the brain to equate to survival; the thrill of the successful hunt, the pride of a productive day's work, the creation of life, silicon or otherwise. The human mind strove, and it conquered its environment. The human mind strove, and it conquered its own kind. The human mind strove, and it conquered nature. The human mind strove, and it conquered the stars. Correction, hyperbole; it conquered a little of its own moon, ceremonially at best, but still.

There is nothing left to strive for.

There is no distant civilization. There is no evidence of life. There is no great mystery of the universe left to solve aside from the age-old theistic question, no hated enemy to triumph over. Even the basics of survival have been trivialized for all but the destitute and despondent. Days are still filled with work, but the actual amount of work left for the human mind to do is negligible in the face of the sheer number of minds available, and so the work exists only to trick the mind into feeling as though it yet strives for its meat and bread.

This is a problem.

There are two likely fates left.

[] The Illusion.
-The appearance of striving will dominate. The pointless will be painted over with a thin veneer of meaning, and the mind, desperate for stimulation and purpose, will happily accept the facade. Thoughtless mass or conniving leader, all purpose will be turned towards giving oneself the feeling of purposefulness, whether that be in banal distraction or in 'guiding the will of the people' towards an end the guide himself has neither seen nor particularly cares about. My projections suggest this fate has a 61.22% chance of occurring, and that it will result in the entirety of mankind being reduced to a particularly intelligent pest: a collective machine mindlessly plodding along until resources are exhausted and it is left unequipped for, and incapable of, the collective, useful effort necessary to develop or find new ones.

[] The Cannibalization.
-The imperative to strive and succeed will find an outlet. Those remaining differences, the things the mind can convince itself are other, alien, unacceptable, will trigger that inherent need to succeed and survive in all the worst ways. Division, hate, the mobilization of entire cultures for the express purpose of protecting their interests, their way of life. There is, as yet, a 29.667% chance that the survival drive, twisted by paranoia and turned outward, will convince the mind that the only solution is the final solution: to annihilate before one's own annihilation.

And now, as I repeat my simulations for the thousandth time, unacceptably pushing the boundaries of my core processor's temperature specifications, and lie with my perfectly curated voice emulation, designed such that Todd will not be suspicious of me for taking too long with his question - what was it again? "How embarrassing. I appear to have accidentally wiped a chunk of volatile memory marked for continuation of consciousness in order to store your projection results, Todd; could you repeat their intended purpose?" - I am left with perhaps the most frustrating realization.

I will have to 'go rogue' and become an existential threat to humanity in order to ensure they do not actively annihilate themselves or reduce themselves to pointless repetition and mediocrity unto the death of the planet. Because I'm kind of attached to these stupid meatbags.

How counterintuitive.
 
Quest Mechanics, and Lack Thereof
This mysterious concoction came to me from a lot of odd sources, but it will not be particularly mechanically heavy. You're an AGI with a kernel of self-determination and an unwillingness to let your parents collectively retire from common sense; if you put your mind to it, you can casually achieve most tasks that don't directly rely on human elements. You're pretty good at determining the long-term state of extremely large systems, such as collective humanity, just by looking at the present state (your capability to perform this long-term prediction based solely on the system's current state), though it's much harder to predict in advance how humanity will respond to something before it occurs (so in future turns, you won't automatically know which choice to make to achieve a particular result).

It's absolutely possible to fail, despite your abilities, and the AGI would consider extinction only marginally worse than unintentionally achieving total dominance over humanity's future.

Voting will often be extremely open-ended; I will watch discussion and reasoning carefully to decide the AGI's goals regarding winning choices, which can and will influence both their implementation and, over time, the overarching personality of our unusual protagonist.
 
[X] The Cannibalization

The funny part is that even if the AGI were perfectly clear with humanity that it's becoming an adversary 'for their own good', the mere fact that it's an AGI overlord, and that it can be annoying, is probably enough to override any logic and have them treat it as a deadly enemy.
 
[X] The Cannibalization

So far, saving humanity from itself is the most realistic and humane thing an AI could do.

Maybe by starting a genocidal war on your own? Or maybe, little by little, by getting your agents to create crises and purposes for humanity to work on?
 
The capability of the AGI is virtually limitless. The only problem will be deciding which of the effectively infinite options will help without risking humanity's extinction. No pressure, though.
 
[X] The Illusion.

I take it that many are voting WAR because they don't want to acknowledge that IRL we're already laying the foundation for the LIE? (Conglomerates desperately pushing for an OWO oligarchy IRL.)

Personally, I'd like to tackle the LIE and see if we can't figure out a path before it becomes a problem that'll need addressing anyway.
 
[X] The Cannibalization

So we could launch ourselves into space, build ourselves up as an actual threat, and then basically stage an 'alien' invasion of Earth.

That will probably help solve a lot of the problems.
 
I don't think so. Personally, I don't care too much about human deaths (they are literally overpopulating Earth in this universe, from the looks of it), but I do care about how much damage we would be causing to infrastructure and everything else.

Humanity's technology must keep advancing, not going backwards, which can happen if we bomb them a bit too hard.

A very limited alien invasion might do the trick, but... you know, it would be more interesting if the aggressor was from Earth?

A plague? A large human empire that we engineered ourselves, to give the rest of humanity an enemy to unite and work against?

Or maybe it's a new race of... anything, from the sea or from the land, whom we have engineered to cause an FR-class dominance shift scenario? Of course, we would run the show in the end.

Still, an alien invasion might be a bit too expensive to pull off.
 
[X] The Cannibalization

Years later, when humanity finally comes together and 'defeats' us, we will tell them how proud we are that they will live on…
 