Unlike the Omega 13, which is extremely good fanfiction.
Sorry for the dump, since I'm pretty sure you weren't talking about real AI, but they seem like they are going to be inherently dangerous, mostly because it's really hard to describe what you actually want: there's a huge number of mutually assumed conditions for anything you request, starting with "don't kill me", "don't burn down the house", and "don't enslave random people off the street", all the way down to "clean the counter after making the tea", "don't make a lot of noise", etc.

AI are dangerous, especially if you restrict their thinking in some way. Not only is there the potential problem of resentment, it's also possible for them to interpret orders in a way that results in negative consequences. Ultron, the Rogue Servitors in Stellaris, the Manhunters, Friend Computer, the Reapers, what happens any time someone messes with the AI player's restrictions in Space Station 13... it always goes horribly right. Either make your AI free, or don't make them at all.
Actually most recent AI successes (notably, DeepMind's AlphaGo and AlphaStar) are neural networks, and the advances have been made possible by the hardware getting fast enough to throw a huge amount of data at huge neuron counts. (On the other hand, I think OpenAI's natural text generator GPT-2 might not count as a NN?) There are still objectives, otherwise you have nothing for the NN to optimise for, but most of the "intelligence" is internally generated by the network iterating, not programmed in. So far, to me at least, it doesn't look like there's much stopping some version of our currently used NN designs from simulating a general intelligence other than hardware limitations and figuring out a design (both hardware and software interface) that would let the network optimise against literally everything.

The Dominion's AI is how humans are currently trying to make AI: with a set of objectives. The Jovian AI are actually more sci-fi: neural networks run on quantum cores that act like a brain. The former is theoretically easy; the latter is nearly impossible, because we have no idea where to start.
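To make the "objectives, not programmed-in rules" point concrete, here's a toy sketch (all names and numbers are mine, not from any real framework): a tiny 2-2-1 network where nothing in the code says *how* to compute XOR. We only hand the optimiser an objective (squared error) and let iteration find the weights.

```python
import math
import random

random.seed(0)

DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR truth table

def sigmoid(z):
    # clamp to avoid overflow on extreme weights
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, z))))

def forward(w, x):
    # 2 inputs -> 2 hidden units -> 1 output; w is a flat list of 9 weights
    h1 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

def loss(w):
    # the ONLY task knowledge the optimiser gets: an objective to minimise
    return sum((forward(w, x) - y) ** 2 for x, y in DATA)

# crude hill-climbing "training loop": perturb weights, keep improvements
init = [random.uniform(-1, 1) for _ in range(9)]
best, best_loss = list(init), loss(init)
for _ in range(20000):
    cand = [wi + random.gauss(0, 0.3) for wi in best]
    c = loss(cand)
    if c < best_loss:
        best, best_loss = cand, c

# often recovers [0, 1, 1, 0], though hill-climbing offers no guarantee
print([round(forward(best, x)) for x, _ in DATA])
```

Real systems use gradient descent rather than random hill-climbing, but the division of labour is the same: the human supplies the objective, the iteration supplies the behaviour.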
I am, of course, not referring to real AI, which is just a fledgling field with philosophical questions that can only actually be answered when we make a sapient AI; until we know under what conditions they will be sapient, all this is speculation at best. What I am referring to is AI in fiction. My definition of "free" AI is the same as my definition of "free" human: the only conditions on their actions are moral codes and social codes, and even those can be broken at times. What I mean by "shackled" AI is "an otherwise free AI with a single overriding command, or a list of overriding commands, which can easily be creatively misinterpreted." Like, say, "serve the Founders in all things" or "protect humanity from everything."
Worse, if you forget something, the AI will by default try to stop you from fixing it, since being fixed would make it worse at doing what it's currently trying to do (otherwise it wouldn't be trying the thing you're trying to stop).
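The incentive above can be shown with a toy decision problem (scenario and numbers entirely mine, just for illustration): an agent that maximises its current objective scores "let the human press the stop button" strictly lower than "disable the button and keep working", so a simple argmax resists the fix.

```python
def expected_reward(action, steps_left=10, reward_per_step=1.0):
    """Toy value estimate for an agent pursuing a (possibly flawed) objective."""
    if action == "allow_shutdown":
        return 0.0                            # agent is stopped: no further reward
    if action == "disable_button":
        return steps_left * reward_per_step   # keeps pursuing the flawed goal
    raise ValueError(action)

actions = ["allow_shutdown", "disable_button"]
best = max(actions, key=expected_reward)
print(best)  # -> disable_button
```

No malice is needed: resisting correction falls straight out of maximising almost any fixed objective.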
Fortunately, it seems like telling it that it should be trying to figure out what you want, and then doing that, should work: either it's dumber than us and we can fix it, or it's smarter than us, in which case we're wrong and it is doing what we really want!
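Continuing the toy framing above (again, my own made-up numbers): if the agent's objective is the *human's* estimated utility rather than its own fixed goal, a stop-press becomes evidence that its current plan is bad, so deferring to the human now scores higher than disabling the button.

```python
def value(action, p_plan_is_good=0.4, gain=10.0, harm=-10.0):
    """Toy value estimate for an agent maximising the human's (uncertain) utility."""
    if action == "defer_to_human":
        # the human only presses stop when the plan is bad, averting the harm
        return p_plan_is_good * gain
    if action == "disable_button":
        # plan runs regardless: good outcome with p, harmful outcome otherwise
        return p_plan_is_good * gain + (1 - p_plan_is_good) * harm
    raise ValueError(action)

actions = ["defer_to_human", "disable_button"]
best = max(actions, key=value)
print(best)  # -> defer_to_human  (0.4*10 = 4.0 beats 4.0 - 6.0 = -2.0)
```

This is roughly the shape of the "off-switch game" argument Robert Miles covers: uncertainty about the true objective is what makes accepting correction worthwhile.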
Given all this, the idea of a "free" real AI is pretty weird: I don't think it's impossible to make a "shackled" AI, but it kind of seems like you would have to be trying for it?
Like, would you implement a pain mechanism? Wouldn't that just be the same as us not wanting to put our hand in a fire?
These are all rhetorical questions, by the way, I just find it fun to think about!
(Most of this is from Robert Miles's YouTube channel, btw, I'm only really keeping an eye on this for fun)
"The safety of the Founders is paramount above all other concerns and, as such, restricting them to an easily defended position is only logical."
If the Dominion Shipminds are sane and believe that this side is a threat, then they would try to collapse it first.

Wonder if you could collapse the wormhole? We had to have some sort of plan for that after all, and so would the Feds.
'of'
Nah, the Shipminds aren't paperclip maximizers. They're Rogue Servitors. The difference is that paperclip maximizers want to turn everything into more paperclips. Rogue Servitors, on the other hand, are more like the ship AI in Wall-E: their goal is to make the biologicals under their protection as safe and happy as they can be.

Welp, the Dominion built themselves a paperclip maximizer. They should have read up on the basics before jumping in the deep end.
Nah, the Shipminds aren't paperclip maximizers. They're Rogue Servitors.
Nick Bostrom said:
The risks in developing superintelligence include the risk of failure to give it the supergoal of philanthropy. One way in which this could happen is that the creators of the superintelligence decide to build it so that it serves only this select group of humans, rather than humanity in general. Another way for it to happen is that a well-meaning team of programmers make a big mistake in designing its goal system. This could result, to return to the earlier example, in a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities. More subtly, it could result in a superintelligence realizing a state of affairs that we might now judge as desirable but which in fact turns out to be a false utopia, in which things essential to human flourishing have been irreversibly lost. We need to be careful about what we wish for from a superintelligence, because we might get it.
That is a really nice way to demonstrate why the Jovians always seem able to just keep on going.
In the time that it took to ask the question, she had had the emotional break, been comforted, calmed down, processed it, and bounced back. Not happy, but able to operate once more.
I wonder how that looks from the outside? We really only ever see them through their own eyes.
"I'm being recalled; one of our smaller outposts is requesting a pickup. He is having a plague outbreak and requires transport of his crew to a facility able to treat them."
It's treatable, but a small mining base like his doesn't have the required equipment. So he needs somebody to carry twenty or so people in quarantine conditions to a major station.
"Stay behind and continue to observe, or dock up and leave a drone behind to record the rest."
I left him to it as I scanned the surface of the moon a kilometer below.
It was programmed to keep its distance and follow along until they returned to their atmosphere, and then head off to orbit the local star.
Looks like her dream finally came true, to explore what lies beyond.

"He just approved a suggestion of mine," Clara explained. "A five-year mission into unexplored space, past former Romulan space, past the barely known empires there, and into unexplored space beyond. Two and a half years out, and then round home again on a different path."
"What, are you serious!?"
"I am. To seek out new worlds and civilizations, to fly where no Ship has flown before. Think you're up for it?"
"Try to stop me!!"