Shepard Quest Mk VI, Technological Revolution

Look, the only fearmongering I can recall right now is your posts against the fearmongering. I just checked back three pages, to the start of this, and my opinion remains unchanged.

The closest thing (not very close, but still) was Yog speculating on whether, and how, the Council would require oversight.
 
Oh, and planting a 50-megaton weapon underneath the research facility, or making it in orbit so it can be blown up without destroying the biosphere, isn't fearmongering? Okay... Or planning orbital strikes?

It seems the two of us have very different definitions of the term.
 
Oh, for fuck's sake, it's like you are entirely missing the point.

Even the people suggesting shit like that aren't saying we shouldn't research AI.
 
Van ropen gets it. Fearmongering is saying "If we do X, we'll all DIEEEEEEE!" We're discussing possible (perhaps paranoid) security measures for when we do it.

You are probably thinking of "paranoid" instead.

Definition of fearmongering in English:

noun

The action of deliberately arousing public fear or alarm about a particular issue.

paranoid

adjective para·noid \ˈper-ə-ˌnȯid, ˌpa-rə-\


medical : of, relating to, or suffering from a mental illness that causes you to falsely believe that people are trying to harm you
: having or showing an unreasonable feeling that people are trying to harm you, do not like you, etc. : feeling or showing paranoia
 
Okay, overly paranoid crap instead of fearmongering. I apologize for the poor word choice.
I had a feeling it was something like that. Even if I might disagree with some of your points/conclusions, you did at least bother to cite sources...

And no one is safe from the tyranny of English... At least, amongst those of us who use it.
 
Let me suggest a compromise: instead of building an AI, we dedicate ourselves to studying the divine secrets of the Old Machines?
 
Considering there's a nifty kill switch in every building ever built up to code that instantly cuts all power running to it with the flip of one switch (called the main breaker), I'd say a 50-megaton bomb, an orbital strike, or blowing up a million-credit facility is a poor choice. Set a breaker for the computer/server/blue box and it has an on and off switch.

Don't give it access to the power systems.

AI or not, there's nothing the damn thing can do with no electricity. No power = no possible way for a computer to act up.
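The "cut the power" argument is essentially a dead-man's switch: the breaker stays closed only while an operator keeps affirming that everything is fine. A toy sketch of that logic (class name and timings are hypothetical, not from the thread):

```python
import time

class BreakerInterlock:
    """Dead-man's switch: power flows only while approval keeps being renewed."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_renewal = time.monotonic()
        self.closed = True  # breaker closed = power flowing to the blue box

    def renew(self) -> None:
        """Operator presses the 'all clear' button, resetting the countdown."""
        self.last_renewal = time.monotonic()

    def check(self) -> bool:
        """Poll periodically; trip the breaker if approval has lapsed."""
        if time.monotonic() - self.last_renewal > self.timeout_s:
            self.closed = False  # once tripped, it stays tripped
        return self.closed
```

The point of the design is the failure mode: if the operators go silent for any reason, the default outcome is no power, not business as usual.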
 
There's a nifty thing called a continuous power source (a UPS), used in many office and coding environments to ensure a power surge/blackout won't cause a massive loss of data, especially if autosaving would be inconvenient.

Though, really, a lot of the problems of AI could be solved if certain designs were a tad smarter....

Why does every door need to have frakkin WIFI?! JUST INSTALL A STUPID KEYPAD! And then hackers/rogue AI couldn't just wave an omnitool at it! Or perhaps just remove any electricity from the doors period, even if you have electric locks! Our storage areas already have doors weighted such that they could be lifted with ease by a child! And what stops us from having some sort of metal patio door if really needed!? (Note, these are only for places with easily breathable atmosphere.)

There are problems with the above of course.... but having every door everywhere basically be half an airlock, no matter the planet, struck me as odd and a waste. Civilian housing doesn't need doors that can stand up to power armor and mechs for a time. Usually.

Is there a single doormaker, with a true monopoly on all doors everywhere? The... Door Broker!

Edit: And No Neurotoxin in the Facility at all!
 
So, y'know, don't use those if one of your security measures is "turn off the power"?
Or don't have one on the machine that most needs to be turned off by shutting off the power, at least.

I'm really not clear what the point of bringing up the existence of UPSs was. They're useful, but if they compromise security measures they shouldn't be used.
 
And a UPS unit has less than 15 minutes of power on my personal computer with just the monitor/keyboard/peripherals; unless you make one with an arc reactor, how long is one going to run a high-end system like a blueboxed AI?
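The runtime question has a simple back-of-envelope answer: usable battery energy divided by load power. A quick sketch, where every wattage is an illustrative assumption rather than a figure from the thread:

```python
def ups_runtime_minutes(battery_wh: float, load_w: float, efficiency: float = 0.9) -> float:
    """Minutes of runtime from a `battery_wh` watt-hour battery feeding a `load_w`-watt load."""
    return battery_wh * efficiency / load_w * 60

# A small consumer UPS (~100 Wh battery) on a ~400 W desktop: about 13.5 minutes,
# consistent with the "less than 15 minutes" figure above.
print(ups_runtime_minutes(100, 400))

# Keeping a hypothetical 50 kW blueboxed-AI server room up for three days would
# take roughly 50,000 W * 72 h / 0.9 = 4 MWh of battery, which is why long
# outages are covered by generators rather than batteries.
print(ups_runtime_minutes(4_000_000, 50_000) / 60)  # hours
```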

Hell, if I was building an AI I'd damn sure have a switch that opens and closes the line to allow or disallow it power in case of a Skynet scenario. That goes with not giving the thing access to building systems, the extranet/internet, or production facilities.

EDIT: And make the doors manual with a DNA/Retinal check to get in, along with scans to make sure you don't bring in the wrong sorts of things... The same sort of security measures we have on the other research labs should suffice.
 
Although apparently we don't, because a cracker just managed to ghost their way past all our security doors, past all our cameras, over the moat (yes, my design included a moat, with pressure/wave sensors, specifically to catch people operating under Tactical Cloak), and we only noticed because they managed to hit a tripwire on our research server.
 
We have no idea how they got in or anything of the sort, and won't know until we finish the encounter. *shrug* After that we can patch whichever hole allowed them to get in; I'm personally expecting something of an inside job on that one.

EDIT: You can never discount the human/sentient/employee factor in these sorts of things. They're always going to be the weak point in whatever security setup you come up with.
 
And a UPS unit has less than 15 minutes of power on my personal computer with just the monitor/keyboard/peripherals; unless you make one with an arc reactor how long is one going to run a high end system like a blueboxed AI?
Well, just as an example, the Moscow Duma (parliament/council) building is equipped with a continuous power source capable of lasting several days. The system covers the whole building. It's mandatory to have such systems in all government buildings in Russia.

If a power shutdown / surge can damage / disrupt a multi-billion credit research project, the housing of said project was designed by an idiot.

The issue with "switch off power" is that either it opens you up to sabotage (and sabotage is something one needs to proof an AI research project against) or accidents, or it becomes hard to implement.
 
It's not set up as part of a security system, and it's quite likely running on diesel/gas generators rather than a battery, much like prison power systems in the US: they have gasoline/diesel for several days and then need to refuel.

As part of a security system to stop an AI from going rampant, the ability to interrupt power to the computer in question is ideal. It definitely beats having to set off a nuclear device, or destroy it from orbit and have to rebuild the whole damn facility.

Losing a few days' research to corruption or unsaved data is far preferable to a Skynet situation, or to a smoking hole/scattered debris from setting off a huge bomb, an orbital strike, or blowing a space station.
 
But then we get those pesky environmentalists picketing on our front lawn, and that's not really a situation we can solve by gunning everyone down. Well, we could, but then the government would see and get narky.
 
For turning off a computer in our complex with a kill switch? Why would environmentalists be involved in any fashion? I'm not the one who suggested destroying a lab to handle an AI problem.
 
Well, the whole gas generator rather than fusion thing. Most of them would be reasonable and would see the reasoning, but then we get people like Space-Greenpeace, who would decry us using "fossil fuels" at all.
 
It's not setup as part of a security system and also is quite likely setup on Diesel/Gas generators not a battery. Much like prison power systems in the US. They have gasoline/diesel for several days and then need to refuel.
Both, actually. A generator takes too long to start running; in between, there are batteries.

In the end the problem runs into risk vs. benefit conflict. The easier it is to shut down, the easier it is to sabotage. The harder it is for an AI to access new data, the harder it is to make it grow, and the slower the research (and the more it feels trapped, likely). Etc.

Personally, I'd err on the side of caution in the following way:

1) Secure site - underground, under one of Paragon Industries' more secure facilities, with a full company of security forces in full gear on standby at all times

2) Complete isolation - researchers live on-site for the duration / shifts, contact with the outside world only through a controlled and screened landline for the duration of their contract

3) Completely cut-off networks - no network connection between the "recreational" network in the living quarters, the "work" network in the working area, and the "research" network which the AI has access to

4) No portable data storage devices on site, and that includes implants that can be connected to the outside systems (even if it requires the previous wearer to be killed for this to happen)

5) Passive and active surveillance of AI access points and server rooms via QEC at all times. Automatic and manual self-destruct devices so the AI can't be physically stolen

6) Several self-destruct mechanisms, both physical and data-scrubbing ones.

These would at least make me feel a bit safer about this.
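Point 3's isolation requirement can even be checked mechanically: model the permitted links between network segments and verify that nothing routes between the AI's segment and the outside. A toy sketch, with segment names invented for illustration:

```python
from collections import deque

# Permitted links between network segments. Per the design above, the only
# outside connection is the screened landline into the living quarters, and
# the AI-facing research network links to nothing.
links = {
    "extranet": {"recreational"},
    "recreational": set(),
    "work": set(),
    "research": set(),
}

def reachable(start: str) -> set:
    """All segments reachable from `start` over permitted links (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in links[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# The AI's research network must not reach the extranet, and vice versa.
assert "extranet" not in reachable("research")
assert "research" not in reachable("extranet")
```

A check like this only validates the intended topology, of course; the thread's later point stands that the actual weak link is usually a person carrying data across the gap.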
 
You realize that by saying no one with implants that can connect with outside sources you're pretty much saying Revy and most of the other researchers working for PI can't be involved?

EDIT: Some of those are reasonable precautions; a great many of them are NOT, are merely rampant paranoia, and will result in very few people being willing to work on the AI project. And if Revy wants to be involved, she disappears and can't do anything for the company, etc.

If she's working on the project (which due to your rules she can't, and neither can most of our top researchers) and gets to leave while the rest are prisoners, they're going to be VERY unhappy.

Your utter unwillingness to consider that the one method you can be absolutely sure will work (namely turning everything off) should be the method used just demonstrates the paranoia further.

You'd rather blow it up than have a method of cleanly deactivating it. *shakes head* Which makes absolutely NO SENSE.
 
1) Living on the premises and having to go through a serious security procedure before leaving (if one can leave during the research) is actually quite a normal security measure for high-risk, high-secrecy projects

2) I fully realize the implications, both about the implants and about limiting the pool of available researchers. The number of researchers can be increased with higher salaries and long-term bonuses, even if it will still be very limited.

3) Turning the power off won't, in fact, work. Not to prevent theft or sabotage at the very least.

4) I'd rather have the option of blowing the system up than not have it. Deactivating it is, clearly, the first option. But I'm entirely unwilling for it to be the only option.
 
I didn't say it should be the only option; just that it should be the FIRST option. It doesn't slag months/years of research as the initial option.

You don't want it stolen? Build the damn bluebox into the floor or the wall; it can't be stolen without cutting out a section of duracrete, plus whatever other security is in place, and taking out 20 well-trained power-armor troops silently beforehand is going to be ludicrously difficult. Much less the security from the base above.

Serious security procedures and living on the premises I don't disagree with; making people prisoners I do, and so would everyone else. We already have escorted researchers for other projects, and I don't see that imprisoning scientists because of AI research is in any way reasonable. And this project is actually MUCH less dangerous and high-risk than MANY of the other projects PI does. AIs are already all over the place in ME; most of them aren't even an issue to any organic.

You're a rampant paranoiac on the subject and aren't looking at the context provided by the games. IC, until the heretic geth attack there are literally NO BAD AIs that Revy has a method of knowing about. The geth? They never leave the Perseus Veil, haven't attacked anyone who wasn't attacking them or violating their territory, and drove off the Quarians 300 years ago after the Quarians attacked them first.
 