Shepard Quest Mk VI, Technological Revolution

Well, in that case we will build a separate AI research institute (a specialized lab) where observers are allowed.
The only issue with that is that we won't be able to apply blackboxing techniques to equipment there - Citadel inspectors can't inspect and certify equipment they can't understand at all.
 
I do recall it being discussed that we simply buy a company with AI licensing to get around this shit. What happened to that idea?
 
The only issue with that is that we won't be able to apply blackboxing techniques to equipment there - Citadel inspectors can't inspect and certify equipment they can't understand at all.

The same applies to AI research.

We would only make the "engine" here; the actual software product would be black-boxed.

As I see it, all AIs have a configurable base model/core of code with built-in security measures, and the AI's skills are built on top of this.

By the way, our VIs are borderline AIs. (Since the classifications are rather arbitrary, Revy can get away with them.)
 
I do recall it being discussed that we simply buy a company with AI licensing to get around this shit. What happened to that idea?
The problem is that even the suggestion of doing a preliminary investigation causes the thread to devolve into a massive flamewar.

@Yog: can we at least find out what the QM has to say on the subject before endlessly squawking about the sky falling?
 
I do recall it being discussed that we simply buy a company with AI licensing to get around this shit. What happened to that idea?
I can live with that, but such a purchase would likely involve re-certification of some sort.
The problem is that even the suggestion of doing a preliminary investigation causes the thread to devolve into a massive flamewar.

@Yog: can we at least find out what the QM has to say on the subject before endlessly squawking about the sky falling?
Yes, of course we can. I'm not really against that, if it's done with the utmost care and discretion. I just don't expect this to be in any way easy: politically, economically, or from a security standpoint.
 
Chicken Little, the sky is NOT FALLING.

AI in Mass Effect has only ever done two bad things, and that's because the Quarians attacked them when they developed sapience. Then the Heretic Geth got bamboozled by a Reaper.

AIs are evidently so easy to make that a thief who was not some godlike programmer made one that could run on a Quasar machine. It hated people, but the most it could do was damage your party by overloading the computer bank it was in when you tracked it down. (ME1)

They ARE NOT THE END OF THE WORLD, and the Council is NOT going to insist on permanent oversight, schematics for the bloody machine, copies of the code, or anything of that nature. Get a grip, people.
 
At this point I'm sure we're not going to build one; even mentioning the topic makes people think the sky is falling.
You know, you keep saying that...but where is that happening? I've seen more posts saying that in the last couple pages than people freaking out about AI.

It isn't like anyone is actually against AI research. They are saying there is some serious politicking we will have to do to get the license. Hoyr has confirmed that - if we apply now, we will be rejected. So we will figure out how to manage this. Simple.
 
Chicken Little, the sky is NOT FALLING.

AI in Mass Effect has only ever done two bad things, and that's because the Quarians attacked them when they developed sapience. Then the Heretic Geth got bamboozled by a Reaper.

AIs are evidently so easy to make that a thief who was not some godlike programmer made one that could run on a Quasar machine. It hated people, but the most it could do was damage your party by overloading the computer bank it was in when you tracked it down. (ME1)

They ARE NOT THE END OF THE WORLD, and the Council is NOT going to insist on permanent oversight, schematics for the bloody machine, copies of the code, or anything of that nature. Get a grip, people.
The AI Shepard interacted with wasn't the one the thief made. It was the one made by the one the thief made.

They are pretty much treated like a potential end of the world in canon. And rightfully so.
 
You know, you keep saying that...but where is that happening? I've seen more posts saying that in the last couple pages than people freaking out about AI.

It isn't like anyone is actually against AI research. They are saying there is some serious politicking we will have to do to get the license. Hoyr has confirmed that - if we apply now, we will be rejected. So we will figure out how to manage this. Simple.
Yes, but we have NO INFORMATION that's actually solid other than what I've posted, and yet everyone seems to think AIs are so damn bad in ME; the Geth do nothing except keep people out of former Quarian space (save for the Heretics that Sovereign conned) and actually want to give it back to the Quarians. FFS, they even live on space stations above the planet, if I recall correctly.

Yes, and yet AIs can be made that cannot spawn instances of themselves. Lock the damn code so it can't self-alter, because it won't run if it's altered while it's running. It can be done, Jim; it's the basis for nearly every anti-hacking tool for video games out there. Set up a killswitch so it can't do that, because it'll destroy its core programming if it tries... There are lots of solutions to the problem.

EDI herself is shackled in some fashion and isn't allowed to do certain things. Eva Core is programmed to be intensely loyal to Cerberus instead of being a free-thinking AI. There are more AIs in ME that are neutral/good than there are evil ones until the Reapers attack.
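For what it's worth, the "won't run if it's altered" lock described above amounts to a self-integrity check. Here's a toy sketch in Python (the function names, the "core code" string, and the recorded hash are all hypothetical, purely to illustrate the idea, not anything from the quest or the games):

```python
import hashlib

def integrity_ok(core_code: str, shipped_sha256: str) -> bool:
    """Refuse to run core code whose hash no longer matches the
    value recorded when the code was certified."""
    return hashlib.sha256(core_code.encode()).hexdigest() == shipped_sha256

# Hypothetical "core code" and the hash recorded at certification time.
CORE_CODE = "def think(inputs): return decide(inputs)"
SHIPPED_SHA256 = hashlib.sha256(CORE_CODE.encode()).hexdigest()

# Untampered core passes the check...
assert integrity_ok(CORE_CODE, SHIPPED_SHA256)
# ...but any self-modification trips the killswitch.
assert not integrity_ok(CORE_CODE + "  # altered", SHIPPED_SHA256)
```

Real game anti-tamper systems are far more involved, but the principle is the same: hash the code, and refuse to run on a mismatch.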
 
yet everyone seems to think AI are so damn bad in ME
Yeah, but that's just canon. There are plenty of stupid, just plain incorrect public opinions in reality too. What the hell do you want, and why are you bitching at us about it?

"A proper relationship with AI" has been on our list of desired changes to make in the ME universe since day one. Doesn't mean it's gonna be easy just because it should be. If that was the case, we wouldn't actually need to do anything - they'd be approaching AI properly to begin with.
 
Yes, but we have NO INFORMATION that's actually solid other than what I've posted, and yet everyone seems to think AIs are so damn bad in ME; the Geth do nothing except keep people out of former Quarian space (save for the Heretics that Sovereign conned) and actually want to give it back to the Quarians. FFS, they even live on space stations above the planet, if I recall correctly.

Yes, and yet AIs can be made that cannot spawn instances of themselves. Lock the damn code so it can't self-alter, because it won't run if it's altered while it's running. It can be done, Jim; it's the basis for nearly every anti-hacking tool for video games out there. Set up a killswitch so it can't do that, because it'll destroy its core programming if it tries... There are lots of solutions to the problem.

EDI herself is shackled in some fashion and isn't allowed to do certain things. Eva Core is programmed to be intensely loyal to Cerberus instead of being a free-thinking AI. There are more AIs in ME that are neutral/good than there are evil ones until the Reapers attack.
I like to think of the Reaper AIs as copied code, so I just treat them as one.
 
@Hoyr: so, what's the public value of the various companies with AI licenses, and will we get a re-cert forced on us if we execute a hostile takeover of one of them?
 
Could we do that from orbit?

Well, I figured that a proper AI facility would already be in orbit (around something we don't mind blowing things up near); that way you can set off the massively destructive fail-safe without worrying about damaging a valuable biosphere.

I suppose you could set energy cannons in orbit around the facility, completely air-gapped (or void-gapped, as the case may be! Haha!) and activated with manual or mechanical input. Or just use nearby spaceships, I suppose, if you want things simple.
 
Or you could just make a damn kill switch instead of plotting to blow up a multi-million-credit facility. It's really easy to set up a method of frying a computer with the flip of a switch if necessary.
 
Oooor...

You build it on a non-networked system where the only method of writeable data transfer is a physical medium too small to hold the AI, or a self-executing installation package for a copy of it (ROM units can be as big as you like). No explosions necessary.
If it goes that nuts, you just turn off the hardware and wipe the drives.
Give it sensors (wired, remember, no outbound communication links), and ensure its output devices aren't physically capable of exceeding human tolerances or applying any kind of mind control.
A little screening of the staff for erratic behaviour or security risks, and you have a perfectly safe AI research environment that doesn't involve flushing millions of credits down the toilet for no good reason.

Seriously, limiting an AI to being a non-threat until you've actually worked out the bugs and got a stable personality Isn't Hard.

Don't connect any devices capable of harmful output. (Wireless connection devices count as harmful output at this point.)
Don't be a dick.
Don't transfer it (or allow it to be transferred) to an uncontrolled system.

That's It.

No explosives, no convoluted code locks. Just get a viable AI process working on an isolated system, then teach the resulting entity right from wrong and how to interact with society, just like you would any other sophont. Once it's learned those lessons, it's as safe as any AI is going to get. Or anyone else, for that matter.

It's always poorly-thought-out attempts to limit or direct things without properly understanding the situation that cause revolts anyway.
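If it helps, the containment rules above boil down to a default-deny allow-list on anything attached to the isolated system. A toy sketch in Python (all the device names here are hypothetical, just for illustration):

```python
# Allow-list of output devices judged incapable of harmful output
# (wired, low-power, no mind-control lasers).
SAFE_OUTPUTS = {"text_display", "wired_speaker"}

# Anything that could carry the AI to an uncontrolled system is forbidden:
# wireless links, network uplinks, writeable media big enough to hold it.
EXFILTRATION_RISKS = {"wireless_nic", "network_uplink", "high_capacity_drive"}

def may_attach(device: str) -> bool:
    """A device may be attached only if it is explicitly approved and
    is not a possible transfer path out of the isolated lab."""
    return device in SAFE_OUTPUTS and device not in EXFILTRATION_RISKS

assert may_attach("text_display")
assert not may_attach("wireless_nic")
# Default-deny: anything unlisted is refused too.
assert not may_attach("unknown_gadget")
```

The important design choice is default-deny: the list names what's allowed, so a new gadget nobody vetted is refused automatically instead of slipping through.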
 
Exactly. We need information from Hoyr about what precisely the rules of AI are and how they work before all this DOOM planning; with a little common sense in the construction and teaching phases, the program itself won't be an issue.
 
Like I said a thread or two ago, I don't want to get the full AI tree. Rather, I don't want pure-software AIs, because that has too much potential for really bad things occurring.

Unless the tech trees have changed from what I last saw, I just want to get the blue-box AI. Requiring it to have a physical base, even if it can transfer to another one, lets us avoid the worst of a pure-software AI.
 
Exactly. We need information from Hoyr about what precisely the rules of AI are and how they work before all this DOOM planning; with a little common sense in the construction and teaching phases, the program itself won't be an issue.

And you're treating it like it's going to be: pay X to get a licence, pay X to a nanny, and get a free AI research hero.

I'd rather have it spawn something terrible than just give us easy dice.
 
I'm saying that, looking at the evidence, there's no need for all this crap you people are coming up with. Full stop.

It gets annoying with you folks planning on nuking something from orbit that is essentially a damn computer program. It's ludicrous.

First you get information from the GM, who makes a call based on the in-universe information we've been presented; then we go from there. You people are currently making suppositions based on NOTHING but fear-mongering.

EDIT: Everything I've posited has been based on common sense and what IN-UNIVERSE information is available, and I CITED MY SOURCES, including the fact that of the four licensed AI research companies, the only named ones work on Noveria and Illium; both are non-Citadel planets without oversight from Spectres or "WTFBBQ nuke it from orbit!" security. Neither has had issues, and neither has a Stark-level super-genius.

The GM can make of the information what he will, but I'm done with the topic; I'm just tired of the fear-mongering and bullshit.
 
We also don't know just how and what parts they were researching. Did they actually have an AI? We never saw their labs, and given that Binary Helix had its (private) Peak 15, does SI have its own lab?

Let's not forget the Noveria oversight was so lax that BH could raise the bugs that almost wiped out the galaxy. Lots of faith in that system there.
 