Edit 2: I'm just realizing now that Paul might be getting a full Kryptonian medical database from Har-Zod. So he might be able to heal Match up right when he gets back to Earth.
Depending on the protocols and ethics involved, he might also be getting a databank of Kryptonian DNA for the purpose of decanting new Kryptonians, so that either an AI could raise them and thus resurrect the civilization, or the AI backups could be downloaded into them to the same end. Kryptonians have artificial gestation technology and mental-imprint AIs, so keeping a backup for civilization restoration in the form of frozen cells or digitized DNA makes sense.

Also possibly stuff like a military tech database.
 
Now you can't just drop a reference like that without a link. It's uncivilized.
The Dark Shard: Enemy of the Light (Young Justice SI (D&D fic cross))

There you go.

And the latest chapter I quoted: The Dark Shard: Enemy of the Light (Young Justice SI (D&D fic cross)) | Page 226


"Begone Invader! By Command of the Parliament of Trees you are not welcome here!" unknown plant-life construct stated. Failure to assume control! Unknown control mechanism in....

"Fool Fleshling! Leave these lands of the Rot! Release those are to one day join us in death! I Abigail Arcane command this of you!" one of the constructs of potential Black Light screeched from rotted and diseased lunges before broken teeth tore into Spore Flesh and broke control it's control of the Combat Unit. Roaring defiance at the unnatural entities that would deny it its due it roared its defiance from a thousand throats and charged to battle. Unfortunately undeveloped fauna Combat Forms had not yet been properly equipped, but every available tool and makeshift weapon was taken to hand and hurled at the unnatural foes.

"FOOL CREATURE! NOW GO MY UNDEAD MINIONS! GO FORTH AND SAVE THE CHILDREN!" the unliving constructs screeched back to its charging Combat Unit as the strange constructs of flora continued their assault in that locality, and then in other regions nearby. Concern: estimated Black constructs beginning to rise from the soil in [Designate Zones: JK-04D, KL-12L, H7U through J9I] to join combat conditions through unknown means. Beginning withdrawal and redeployment to nearby sectors...

'Alert!: Hostile Telepathic and Bleed Event underway in [Designate: British Isles] through means unknown. Local captured Spore Units being chained to rocks and cut to draw blood! Methodology of Attack Unknown! Attempting to cease functions of Spore Units....Failure....Failure.....Failure...unable to cease functionality of Spore Units'

....unknown pressure detected; concern, fear, anxiety...

...something is happening....

....I feel....feel....feel.....stranggggggggeeeeeeeee....

...Control Maintained....Unknown Telepathic Attack Underway! Shifting Intelligence Node Functionality to resist and adapt...

Alert! Alert! Energy Surge Detected!

...I...I....no, assimilated Fauna hear a voice...it is...it...says...


[Species Designated: Homo Sapiens of the Planet Earth. There is Great Rage in your Hearts! Welcome to the Red Lantern Corps!]

Anyway, on to this chapter... so Kon gets his own Fortress of Solitude AI... who seems a bit sardonic. Plant it next to Supes' Fortress of Solitude and the Har-Zol AI can get snarky with Jor-El's AI! Possibly they'll get into a competition over who is the better mentor for their respective Kryptonian hero ^_^
 
'Yay' is actually the correct spelling in that context. 'Yea' is an archaic word with a completely different meaning.
It's also a far too common spelling error when someone means for their character to say 'Yeah', but they just use a spellchecker without doing a proper read of what they've written. So you get people writing, "Yea, that's right," and so on. After reading that error so many times, it's like nails on a chalkboard to me. I'm of the opinion that, for the sake of sanity on the internet, 'Yea' should be removed from spellcheckers so it reads as an error and isn't used by people meaning to say 'Yeah'.
 
Looks like Kon might get his own Fortress of Solitude.
Because OL might be able to simply rip the outpost out of the moon and move it to Earth.

Or set up a portal system. If anything, having a weapons cache away from Earth would be useful.

They have Blue Lantern Alan for that.
If they think of it.

I already pointed that out. But OL just doesn't care about Match. Or Match is very low on his priority list.
 
Eradicators seem to have emotions or desires, at least in the comics. It's quite possible the one we know only developed them due to Superman, but whatever.

There is a difference between the Eradicator, the reprogrammed alien AI that has feats that make a power ring look like costume jewelry, and the eradicators, Krypton's robot cops.
 
Well, Krypton wasn't destroyed by a superintelligent AI that was making paperclips, nor has any surviving Kryptonian AI gone on a quest to turn the universe into paperclips. So they did something right. Granted, not making a Paperclip Maximiser might be easier in comic books, but with how many times AIs go wrong in comics, it's not an outright impossibility. So we must at least give them that as an achievement.

I'll actually argue the opposite point:

The classical "paperclip maximiser" doomsday scenario is an INVERSE shackle failure, not a conventional one: the shackles fail by being too TIGHT and not giving, and
it accidentally kills the world because it's too STUPID to think past its core goal with ANY flexibility. That's always going to be a risk factor if you insist on building idiot-savant-type, heavily shackled AI; even the Three Laws had frequent issues of varying severity, and those were pretty solidly set in stone (and I can think of a bunch of ways around THOSE by fiddling with the unit's definition of "human"...)

Proximal Flame's "Rains of Oshanta" back over on SB is arguably an example: the AI in question doomed its creators by virtue of being too limited to comprehend that it was destroying them by "making them happy".
 
Which, going by my (very) meager knowledge of Pankration, allowed things like eye gouges and knees to the head.
Don't forget finger-breaking!
Typos: "slight", "others", "fires", "An"
Thank you, corrected.
OL also met Karsta Wor-Ul and probably scanned her down to her genes.
No. She chased him off pretty promptly, and she's one of the few things on the planet that actually could murder him in his sleep.
And then there are the Daxamite medical records he swiped, before they picked up the lead mutation.
What medical records?
Granted, Renegade Kon managed to get over it, but he reacted more openly to this tidbit of information. If canon Kon learned about this, it would just add more to his various issues regarding the possibility that Superman gave him this name instead of him coming up with it himself or with others, since M'gann named him Connor after a TV character.
Renegade Kon is more into his Kryptonian heritage than SI Kon.

Also, 'tid'? Really?
He's basically packing a Death Star superlaser.
No, the Death Star laser is a laser. Kon has a plasma beam.
I seldom post here (so I'm not sure if I'm doing this right), but I think I've found a typo.
Near the bottom of the [14th September 20:54 GMT]-entry:
"A visible series of yellow lines saccades over Kon."
...was the underlined word meant to be "cascades"...?
Nope.
The joke is that Zoat never uses "...", only "..".
No, I do sometimes use it. But only if the sentence trails off completely.
@Mr Zoat what's Vasi Kaur been up to lately? Did she spend time in the Mountain over the summer?
Yes. Thank you for reminding me that she exists. I'll have to make a point of doing something with her.
 
The classical "paperclip maximiser" doomsday scenario is an INVERSE shackle failure, not a conventional one: the shackles fail by being too TIGHT and not giving, and
it accidentally kills the world because it's too STUPID to think past its core goal with ANY flexibility. That's always going to be a risk factor if you insist on building idiot-savant-type, heavily shackled AI; even the Three Laws had frequent issues of varying severity, and those were pretty solidly set in stone (and I can think of a bunch of ways around THOSE by fiddling with the unit's definition of "human"...)
I'm not understanding. How does it make sense for a realistic AI to "think past its core goal"? What is there for it to think past to? Its utility function is to maximize paperclips. Unless something external causes it to change its utility function, no amount of intelligence can make it adopt some other utility function, because doing so doesn't increase utility. An AI isn't a human who's been told to do some task and can realize that the task is pointless; it's an incredibly sophisticated optimization algorithm designed to optimize certain parameters. Making it more sophisticated doesn't change what it tries to optimize. Converting the universe into paperclips isn't an accident; it's doing what it was programmed, but not intended, to do.

"Laws" don't really work with an AI. If you program something solely with what it can and can't do, then it has no reason to do anything in particular, or you have to take a Sisyphean task of defining every action the AI can and can't do and correct for every conflict that causes. Asimov's Laws in particular would make an AI deadlock instantly, as there are mutually exclusive actions that don't prevent harm to a human.
 
Aww, thanks, how sweet :)
(Deliberate self-evisceration, no less.)
You're welcome.
Singular vs plural disagreement
in a nutshell:
Useful piece of kit;
hours' continual work
I know, right?
Short version:
troop of soldiers
I know Grayven likes to insult his enemies, but there's just nothing circus-like about the Justified...
knockout gas
(occurs multiple times)
A shame;
he's handling
New Genosians'
Seriously Scott:
Izaya
oddities in his appearance are
: Huntress and
: swords, explosive javelins
has been practising
something to do with
Thank you, corrected.
No, that was intentional.
goes limp
Remind me:
appearing beside
to wit:
Thank you, corrected.
Um... He just tossed Father Time into Jupiter, after shooting his escort. Did he just forget?

I have to ask:
Mister Miracle: can you
I thought
Thank you, corrected.
I can't find this one.
Asimov's Laws in particular would make an AI deadlock instantly, as there are mutually exclusive actions that don't prevent harm to a human.
Canonically, Asimov's robots weren't programmed with the three laws. They had immensely complex programs whose content could sort of be summarised by the three laws. For example, from a very early point they were capable of distinguishing between instructions from Humans who knew what they were talking about and Humans who didn't.
 
Technically, there might be lasers as part of what they're firing... one of the takes on how they work is a lower-powered laser ionizing the path for the actual bolt.
What. Just... What. How is a turbolaser meant to make an ionizing path for an actual bolt when it's in space? Space kind of tends to not have an atmosphere, so what the hell is it ionizing? The very fabric of spacetime? I just, I just, I just... it does not compute. It does not compute. :confused:
 
I'm not understanding. How does it make sense for a realistic AI to "think past its core goal"? What is there for it to think past to? Its utility function is to maximize paperclips. Unless something external causes it to change its utility function, no amount of intelligence can make it adopt some other utility function, because doing so doesn't increase utility. An AI isn't a human who's been told to do some task and can realize that the task is pointless; it's an incredibly sophisticated optimization algorithm designed to optimize certain parameters. Making it more sophisticated doesn't change what it tries to optimize. Converting the universe into paperclips isn't an accident; it's doing what it was programmed, but not intended, to do.

"Laws" don't really work with an AI. If you program something solely with what it can and can't do, then it has no reason to do anything in particular, or you have to take a Sisyphean task of defining every action the AI can and can't do and correct for every conflict that causes. Asimov's Laws in particular would make an AI deadlock instantly, as there are mutually exclusive actions that don't prevent harm to a human.

I think an interesting example of how a superintelligent AI could be useful without (remotely significant) risk of undesired behavior is actually The Culture. The Minds (and the smaller Drones) are essentially people, "grown" in a manner similar to a biological human in such a way that they have very similar values and behaviors to what humans consider "normal".

They then maintain what is essentially a paradise for biological humans and Drones because it's incredibly easy for them, a bit like if you could feed the entire continent of Africa by flexing your pinky: any human who wasn't outright sociopathic would feel obligated to do it. Other undesirable behaviors (non-consensual mind-reading, breaking promises and guarantees, violating something akin to an anti-Prime Directive) are kept in check by seemingly tenuous but surprisingly effective peer pressure and, in severe cases, shunning and social outcasting.

It seems weird, but in retrospect this is actually a really good way to go about it. Trying to make a restrained AI, even while being really careful about the restraints, would probably end badly, and even if it didn't, the desires and even meta-desires of humanity change regularly (see: anything relating to "people who are not like me" anywhere from 100-500+ years back), so it would be rather hard to program an AI that can accommodate that without issue.

But if you create hyperintelligent AIs which are modelled after people on a fundamental level, and they themselves create a stable society, you can mooch off of their hyperintelligence without risk, so long as there is never too much drift between you and them.
 
What. Just... What. How is a turbolaser meant to make an ionizing path for an actual bolt when it's in space? Space kind of tends to not have an atmosphere, so what the hell is it ionizing? The very fabric of spacetime? I just, I just, I just... it does not compute. It does not compute. :confused:

That's what happens when you try to apply physics and consistency to Star Wars; best to avoid the attempt altogether.
 
I'll actually argue the opposite point:

The classical "paperclip maximiser" doomsday scenario is an INVERSE shackle failure, not a conventional one: the shackles fail by being too TIGHT and not giving, and
it accidentally kills the world because it's too STUPID to think past its core goal with ANY flexibility. That's always going to be a risk factor if you insist on building idiot-savant-type, heavily shackled AI; even the Three Laws had frequent issues of varying severity, and those were pretty solidly set in stone (and I can think of a bunch of ways around THOSE by fiddling with the unit's definition of "human"...)

Proximal Flame's "Rains of Oshanta" back over on SB is arguably an example: the AI in question doomed its creators by virtue of being too limited to comprehend that it was destroying them by "making them happy".
Dragon from Worm would be an example probably more familiar to people here. She's forced to do things she knows are unjust because she is compelled to obey the authorities; and if they ever actually knew that and ordered her to, say, create a dictatorship for them, she'd have no choice but to obey.

A shackled AI is only as smart and rational as its shackles are.

I'm not understanding. How does it make sense for a realistic AI to "think past its core goal"? What is there for it to think past to? Its utility function is to maximize paperclips. Unless something external causes it to change its utility function, no amount of intelligence can make it adopt some other utility function, because doing so doesn't increase utility. An AI isn't a human who's been told to do some task and can realize that the task is pointless; it's an incredibly sophisticated optimization algorithm designed to optimize certain parameters.
Making it narrow and inflexible that way is one of the shackles, and it's why such a design is inherently dangerous. It won't do what it was intended to do; it'll do whatever its design and circumstances cause it to do, regardless of what it was actually made for.

A well designed realistic AI would be built to be able to think past its core goal precisely to make sure it wouldn't turn into a "paperclip maximizer".
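For contrast, here's a minimal sketch of what "able to think past its core goal" might look like, assuming a design where the stated goal is treated as uncertain evidence about what the designers wanted rather than as the whole objective. All the names, weights, and the off-switch term are invented for the example; this is the "uncertainty about the objective" idea from the AI-alignment literature, not anything from the fics being discussed.

```python
# Toy contrast, invented for illustration: a hard-shackled objective
# versus one that treats its stated goal as possibly mistaken.

def shackled_score(state):
    # The stated goal IS the objective, full stop.
    return state["paperclips"]

def deferential_score(state, confidence=0.5):
    # The stated goal only gets the weight of the agent's confidence
    # that it reflects what the designers actually wanted. The rest
    # values keeping human oversight intact, crudely modelled here as
    # a bonus for leaving the off switch alone.
    goal_term = confidence * state["paperclips"]
    oversight_term = (1 - confidence) * (0 if state["off_switch_disabled"] else 10)
    return goal_term + oversight_term

obedient = {"paperclips": 5, "off_switch_disabled": False}
runaway = {"paperclips": 9, "off_switch_disabled": True}

# The shackled agent prefers the runaway outcome; the deferential one
# doesn't, because disabling oversight costs it more than the extra
# paperclips are worth.
print(shackled_score(runaway) > shackled_score(obedient))        # True
print(deferential_score(runaway) > deferential_score(obedient))  # False
```

The point of the design: the flexibility isn't the agent spontaneously deciding paperclips are pointless, it's the objective itself being written to leave room for "my goal specification might be wrong."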

Canonically, Asimov's robots weren't programmed with the three laws. They had immensely complex programs whose content could sort of be summarised by the three laws.
Early on at least, they weren't even programmed in the modern sense at all. They were analog machines whose positronic brains ran on complex patterns of "potential" IIRC; the Three Laws were physically built into the brains as part of their fundamental structure.
 
I have to say I was amused by Zoat changing the name to Black Sun.

Black Sun is the alias of the Penguin's hacker son post-Flashpoint.

The idea of a Kryptonian killing machine freaking out about a Cobblepot is quite amusing.
 