Shepard Quest Mk VI, Technological Revolution

There's a little more to it than just having a good security setup. Actually, one part of it I'll consider part of the AI Licensing Prep tech. But yeah, it's not impossible or that time-consuming. I was actually kinda confused when the "investigate what we need to do to get a license" bit didn't pass in the last quarter vote. I was all ready to say something about that.
Yeah, well, it's a mystery to me too. It was one of the things that @Yog and @UberJJK specifically excluded from their votes, after I'd included it in mine, but it kind of got lost in the myriad of other things that were more important to get right at the time, like the corporate loan and factory build-out, new lab complexes, starting up the PMC, expanding into the non-military market... there was a lot of stuff to debate, and going through another round of "Oh noes! The moment we use the word 'AI' in a sentence the Spectres will nuke us from orbit! We need to cure death for everyone in the galaxy before we even think about having people look into AI research!" would have distracted from the more important topics.

I do hope I can get people to be a little less afraid of the subject before too much longer, though. I have tentatively scheduled 2174-Q2 as a good time to put RPs into AI research prep, and 2175-Q1 as the earliest possible time to invest in Blue Box AI research. I'd like to have my ducks in a row on AI research by then anyway; by the time we come out with our dreadnought-destroying Cabira frigates in late 2175 we'll have all the non-Turian Council members trembling in their crystal towers, and we'll have trouble getting any favors at all from them, cure for death or not. I hope getting a few Blue Boxes going will lead to us getting a few AI-based Research Heroes; there are so many new tech options available now that we're going to need an expanded pool of RPs to make decent headway on them before Reapertime.

I will point out that there is a difference between being allowed to do AI research, having a sapient AI, and having a sapient AI that runs around doing whatever it wants (within the laws other sapient beings follow, of course). I'll also tell you that such a vote, if it were held at just this moment in game, probably wouldn't pass. As for why, well, find out in game or not, as you will.
Well, I assume it's due to an Asari veto. The Turians are pretty much legally required to love us at this point, given they must have egg all over their faces from the First Contact War and will for some time, plus PI is giving them better military hardware at a discount when we really don't have to. The Salarians don't ever seem to have trouble with dangerous research options; even after the disaster with the Krogans they are all raring to go with uplifting the yahg.

It's only the Asari who don't love us right now, mostly because their guilty conscience over having a secret Prothean archive that they've outlawed everyone else from having is getting projected onto us.
 
Why don't we just get our planet, then do the very illegal AI research?
Because that would piss off our customers.
Plus, "AI Research Prep" involves things like ensuring we have appropriate security measures in place for researching AI, on the off-chance they are hostile and attempt to murder everyone. This is in addition to getting the license necessary to do it legally in Citadel jurisdiction.
 

Also, the research mechanics don't even allow AI research without getting a license.
 
AI License Prep doesn't actually get us a legal license; it gets us anti-hacking countermeasures so hostile AIs can't ROFL-stomp our servers. The legal research license is going to require a Company Action or three, maybe a Personal Action depending on how much the Asari councilor objects to our existence.
 
On thinking further about it... I'm not that against AI research. But there's a problem - the Council is likely to insist on observers or some such, or some serious concessions. It would, in any case, invite lots and lots of scrutiny. Perhaps even from the Reapers / Collectors (more so than they are already interested in us) - their primary mandate deals with AI, after all. I'm not sure we could afford it while developing a super-secret project (the super ships).
 
Scrutiny is an interesting notion. Proper AI research would actually require us to isolate the AI code from everything else, so other than outside monitors being given occasional eyes-only access to our AI research division there's really nothing else the Council could possibly insist on without potentially breaking quarantine and setting off the very sort of loose-AI situation they're worried about.
 
They could reasonably insist on a constant on-scene observer presence, complete monitoring of all communications between the research complex and the outside world, and full access to the schematics of all research computers (meaning we won't be able to black-box them).
 
No, they couldn't. No one would allow them that much unrestricted access to their technology. You can bet the Alliance didn't.

EDIT: And no business would allow possible corporate espionage of company secrets by some untrusted Council mook. The relationship between the three councilors pretty much amounts to them squabbling like cats and dogs ninety percent of the time, with everyone looking out for number one.

So they'd know damn well no one is going to be willing to give them schematics or a constant presence at a research laboratory. Too much money to be lost, and more than a little chance of the Council's "observer" working for more than one master, or just selling what he sees/knows to the highest bidder.
 
We're also the most rapidly developing company in the galaxy. I wouldn't grant us (if I were the Council) AI licenses to make that growth even faster.

Is there anything that goes into detail about the canon Council-approved AI research? Yeah, it says there are four corporations with approval, but was it research about AIs, or actual AIs working as researchers?

Then there's the question of whether the AI even wants to be a researcher. How do we compensate them? Are they basically enslaved to us by Council edict, since they're illegal everywhere else?
 
Is there anything that goes into detail about the canon Council-approved AI research? Yeah, it says there are four corporations with approval, but was it research about AIs, or actual AIs working as researchers?
They are the only ones allowed to conduct research into the field of true artificial intelligence.
 
No, they couldn't. No one would allow them that much unrestricted access to their technology. You can bet the Alliance didn't.

EDIT: And no business would allow possible corporate espionage of company secrets by some untrusted Council mook. The relationship between the three councilors pretty much amounts to them squabbling like cats and dogs ninety percent of the time, with everyone looking out for number one.

So they'd know damn well no one is going to be willing to give them schematics or a constant presence at a research laboratory. Too much money to be lost, and more than a little chance of the Council's "observer" working for more than one master, or just selling what he sees/knows to the highest bidder.
Which is likely one of the reasons there are so few AI research companies (four in all of known space).

Those requests are reasonable, you have to understand. Very much so.
 
No they aren't.

Quite frankly, I have no idea where you live, but in countries that aren't part of Russia / the former USSR / Red China / North Korea there is very little government interference, especially in research. Unless you have specific contracts, are working FOR the government, and are receiving government funding for your work, you don't have government oversight like that in capitalist countries.
 
Well, there are things like the FDA regulating food and medicine quality and standards. There are various regulations for things like the automotive industry. There are other examples too, but my brain is a bit addled right now.
 
Yes, regulations and inspections, but NOT schematics being handed over for government perusal, or a constant government presence by a non-project scientist, etc.
 
There are standards, and then there's someone asking you to write down the recipe for Coca-Cola in triplicate for them.
 
Exactly; which is essentially what I told Yog. Demanding hardware schematics and a permanent presence in the lab is the equivalent of demanding company secrets from Coca-Cola.

Especially since, as of Mass Effect 1, you can program an AI on a regular computer that'll run in a Quasar machine - an AI that can spawn further instances or make new AIs, as was demonstrated in a mission on the Citadel - so there's no possible reason anyone would be willing to give out such schematics.
 
No they aren't.

Quite frankly, I have no idea where you live, but in countries that aren't part of Russia / the former USSR / Red China / North Korea there is very little government interference, especially in research. Unless you have specific contracts, are working FOR the government, and are receiving government funding for your work, you don't have government oversight like that in capitalist countries.
AI research is sorta like a mix of biological weapon and nuclear weapon research. Of course there will be oversight.

And I live in Russia.
Exactly; which is essentially what I told Yog. Demanding hardware schematics and a permanent presence in the lab is the equivalent of demanding company secrets from Coca-Cola.
AIs aren't like Coca-Cola. They are like bioweapons capable of causing global extinction (and that's a proven historical fact in ME - see Quarians).

If, say, Great Britain, was totally wiped out, with maybe a hundred people surviving, by an outbreak of a plague developed in a medical research laboratory (by accident or on purpose), don't you think all the governments ever will want this level of oversight?

AIs are all that and worse, because they are intelligent and capable of advancement / self-upgrading.
 
The rules on AI in Citadel space were made BEFORE the Quarians and Geth had their issues.

And frankly, even the companies working on vaccines for cholera, anthrax, the Spanish flu, etc. don't have permanent government oversight, and they work with the equivalent of biological weapons EVERY DAY. They have to do things to government specifications for safety, but they don't have permanently assigned CDC personnel.
 
Again, AIs are far more dangerous than biological weapons. AIs advance on their own. They are intelligent. They can be actively malicious. They represent a far larger threat than a bioweapon does. Frankly? There is no correct real-life analogue for how scary AI research is. And no, real-life AI research doesn't count (yet).

And are you seriously arguing that AI laws weren't tightened after the quarian debacle? Seriously? Is that your position? Because we know for a fact that at the time of the geth rebellion there were AIs living openly on the Citadel, who were slaughtered to the last by C-Sec before the law outlawing them even passed. The quarian genocide would have tightened the laws very strongly.
 
If, say, Great Britain, was totally wiped out, with maybe a hundred people surviving, by an outbreak of a plague developed in a medical research laboratory (by accident or on purpose), don't you think all the governments ever will want this level of oversight?

This hypothetical Britain was killed by revolting slaves more like, then whined about not finishing their genocide for 300 years.

Blue Box AI, underground + shielded against transmission, and with wired-only connections to a closed network that isn't connected to the Galactic Network or within reach of any network that has transmission capability. This, guarded by PI, would be fine, maybe with some failsafe 50MT bombs strapped to the lab. Restrict writable drive space to start with, too.

It's why I'd rather wait till we have our own planet. Of course, isolating a being against its will brings its own host of issues.
 
The Quarians weren't a Council race; they were using AI all over the place, and from the Council's point of view they brought it down on themselves. The Council isn't going to change their own verdicts based on a species from the fringe; they're going to point and say, "That's why we have this law here."

You can see it your way and I can see it mine; frankly, I believe mine is much closer to the truth in this instance. You have your cultural experiences from growing up in Russia, where government oversight is handled very differently, but they don't match mine, and there are obviously four FOR PROFIT businesses that are working on AI research.

And all you have to do to make an AI safe is prevent it from duplicating itself with locks, shackle the AI in question so it can't propagate, and provide hardwired rules it can't violate. In Mass Effect they have to be trained much like a VI is, amongst many other hurdles.

EDIT: And before I'd let someone walk around with company secrets in his head, with a company on the scale of PI, he'd have a damn Cortex Bomb with a CASIE unit monitoring him to make damn sure he's not doing Corporate Espionage.

"You want a permanent observer in our research? Sure, to ensure all he does is report what you need to know and not provide you with our patented data or try to steal from us Paragon Industries has fitted him with a cortex bomb. He tries to spread around what he knows or steal tech from us to give to you he goes splat."

In other words, mandatory brain surgery with a nice sample of the strongest explosive in the setting, controlled by a CASIE unit that's trained to make sure he doesn't steal stuff and give it to the Asari/Salarians/Turians/whoever else.
 
The Quarians weren't a Council race; they were using AI all over the place, and from the Council's point of view they brought it down on themselves. The Council isn't going to change their own verdicts based on a species from the fringe; they're going to point and say, "That's why we have this law here."
They did change their laws in canon - AIs were living openly on the Citadel before the quarian genocide. After that, AIs were outlawed and exterminated.
You can see it your way and I can see it mine; frankly, I believe mine is much closer to the truth in this instance. You have your cultural experiences from growing up in Russia, where government oversight is handled very differently, but they don't match mine, and there are obviously four FOR PROFIT businesses that are working on AI research.
1) How does them being for-profit (citation needed, by the way) change anything? Don't you have government-controlled for-profit organizations?
And all you have to do to make an AI safe is prevent it from duplicating itself with locks, shackle the AI in question so it can't propagate, and provide hardwired rules it can't violate. In Mass Effect they have to be trained much like a VI is, amongst many other hurdles.
How do you make such locks? No, seriously. How do you make locks preventing AIs from doing certain things, where the things in question are as broad a concept as "don't replicate, don't modify yourself, don't create new AIs / programs"? You pretty much have to make another separate AI that would evaluate the first AI. And how do you shackle it? It's not a simple task, and you can never be sure it's working correctly.
This hypothetical Britain was killed by revolting slaves more like, then whined about not finishing their genocide for 300 years.

Blue Box AI, underground + shielded against transmission, and with wired-only connections to a closed network that isn't connected to the Galactic Network or within reach of any network that has transmission capability. This, guarded by PI, would be fine, maybe with some failsafe 50MT bombs strapped to the lab. Restrict writable drive space to start with, too.

It's why I'd rather wait till we have our own planet. Of course, isolating a being against its will brings its own host of issues.
1) AIs weren't seen as slaves. Moreover, the geth weren't designed as AIs; they were designed as VIs - sapience and sentience were an emergent function.

And yes, I generally agree that those measures would be sufficient - if one can ensure that no communication gear is brought on site, that the facility is constantly secured, that the AI doesn't have access to any manufacturing instruments, and that the personnel on site won't let it out.
 
I don't know how EDI was shackled, but she was in canon; there were actions she was NOT allowed to take and wasn't able to do.

And we saw no evidence of another AI on the ship in ME 2.
 