Shepard Quest Mk V, Base of Operations (ME/MCU)

Drones and mechs are used extensively - we only ever see LOKIs, which are explicitly noted as a cheap (and thus widespread) model. Pretty sure most people agree with releasing better versions; we talked about this during the IFV design process.

...actually that reminds me, @Esbilon: we have 5-meter mechs and up, but does that tech include a Floater/FENRIS/LOKI analogue in addition to a better YMIR? If not, we can make that a branch... though it probably should include them.
It does. No reason for it not to.
 
Did I vote? I don't remember... Well, it's a bandwagon, so it's not like it matters much:

[X] Acquiesce and see what you can do to help. But make sure to have the security detail suit up, and suit up yourself as well, just in case this is a trap. Even if it isn't, the armor will be useful for helping out. Have Brian put on the shield belt. After all, you can't be too careful. Also, have the Tigers warmed up for use.

As for AI, the Council has some pretty serious anti-AI laws (to the tune of assassination, death, and the possible condemnation of your entire species; though to be honest, the Quarians not only effed up by making the geth, they handled the fallout of that like morons too). I'd be willing to bet that there are very good VI limiter programs and programming methodologies. Once you have an architecture, it shouldn't be that hard to find the point when it becomes complex enough to be a sapient entity (though this may involve making one). They have had hundreds of years to play with VI while preventing AI. Also, you can get something that looks a lot like an AI by stacking together a lot of expert systems - never underestimate the human ability to anthropomorphize things.



Vigil acted as a defense program with a simple enough IFF system, ran a translation program, held a fairly simple conversation, and managed the resources of a base according to a prearranged hierarchy. None of that is particularly AI-like by Mass Effect standards of what they would call an AI. It sounds like four specialized programs with a central controller program. If Vigil had displayed signs of self-determination or held a complex conversation, I could maybe see some grounds for calling it an AI (unless I'm forgetting something Vigil did). VI-level stuff can look very AI-like due to the rather clever conversation software (see Avina, exposition machine that she is), which of course just makes telling AI from VI even harder.
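To illustrate what I mean by "specialized programs with a central controller," here's a toy Python sketch - every name in it is invented, and it's obviously not how Vigil actually works, just the general shape of the architecture:

```python
class ExpertSystem:
    """A narrow module that handles exactly one kind of request."""

    def can_handle(self, request: str) -> bool:
        raise NotImplementedError

    def respond(self, request: str) -> str:
        raise NotImplementedError


class IFFModule(ExpertSystem):
    """Friend-or-foe identification against a fixed whitelist."""

    KNOWN_FRIENDLIES = {"prothean-staff", "maintenance-drone"}

    def can_handle(self, request: str) -> bool:
        return request.startswith("identify:")

    def respond(self, request: str) -> str:
        contact = request.split(":", 1)[1]
        status = "friendly" if contact in self.KNOWN_FRIENDLIES else "hostile"
        return f"{contact}: {status}"


class PowerModule(ExpertSystem):
    """Sheds load according to a prearranged priority hierarchy."""

    SHUTDOWN_ORDER = ["lights", "beacons", "defenses", "stasis-pods"]

    def can_handle(self, request: str) -> bool:
        return request == "shed-load"

    def respond(self, request: str) -> str:
        # Always cut the least critical system on the list first.
        return f"cutting power to: {self.SHUTDOWN_ORDER[0]}"


class Controller:
    """Central controller: routes each request to the first expert
    system that claims it. No learning, no goals of its own."""

    def __init__(self, experts):
        self.experts = experts

    def handle(self, request: str) -> str:
        for expert in self.experts:
            if expert.can_handle(request):
                return expert.respond(request)
        return "query not understood"


vi = Controller([IFFModule(), PowerModule()])
print(vi.handle("identify:prothean-staff"))  # prothean-staff: friendly
print(vi.handle("shed-load"))                # cutting power to: lights
print(vi.handle("sing me a song"))           # query not understood
```

Nothing in there learns or wants anything; it just routes requests to narrow modules. Bolt a good conversation front-end onto that and plenty of people would swear it was a person.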

Am I the only CS person here that gets a kick out of the fact that IRL, AI just means: a system capable of making "complex" decisions?


The IRL AI definition has nothing to do with space opera AIs in most cases.
 
That's not even true from that perspective. AIs living on the Citadel were killed before they could rebel or show any sign of rebelling (see the Shadow Broker's files). As far as I know, only the geth have ever actually rebelled and killed their organic creators. Please feel free to correct me with sources.

IIRC the Geth rebellion was rather complicated. (Wiki) (YouTube of the geth memories of the war) The Quarian government flipped out when the geth showed signs of sapience and ordered them destroyed. Some Quarians disagreed, and the government ultimately chose to kill them too (whether this was a power play, due to fear of Citadel repercussions, or something else is unknown). The geth largely chose to act in defense of each other, and most likely in defense of the Quarians that stood up for them. Sadly, we have little info on the exact order and timing of those memories. I'd suggest, though, that the Geth "rebellion" was more the geth trying to protect the creators who were willing to die for them, and then it all snowballing from there.

My personal opinion is that asari are culturally, and likely biologically, biased against synthetics because they (the asari) can't procreate with them (the synthetics). Think of how big a shock that would be to asari as individuals and as a culture, and how much danger that would present to them politically. As for "they don't need us" when applied to synthetics? I always took it to mean "they don't need us in their lives."

IIRC the Protheans also added a bit of anti-AI bias to Asari culture, since the Protheans were very, very anti-AI and had chosen the Asari to be their successors. Though that might be me misremembering something.

On this page there is information that AIs openly lived on the Citadel in peace with the organics, but the Geth uprising basically made the organics turn super racist and kill all the AIs. Organics are dicks; no wonder the AIs always try to kill organics. It's just a preemptive action against something an AI can conclude is inevitable. The evidence is not in the organics' favor in this case.

The IRL AI definition has nothing to do with space opera AIs in most cases.

True that; it's just funny to swap the definitions.

"We ban all AI!"
"So we need to destroy all video games with computer opponents? ...right":p

"The AI will kill us all"
"So my checkers playing machine is going to kill us? ...right" :p
 
Mass Effect VIs can be amazingly complex - I think the Elcor have really awesome ones? If they can make VIs that can effectively fight modern battles alone, they can make ones capable of holding complex conversations, which just leaves the capacity for self-determination as the dividing line between AI and VI.
 
That's not even true from that perspective. AIs living on the Citadel were killed before they could rebel or show any sign of rebelling (see the Shadow Broker's files). As far as I know, only the geth have ever actually rebelled and killed their organic creators. Please feel free to correct me with sources.
Actually, the Quarians had a civil war; the geth decided to support the side that wasn't trying to terminate them.
 
That's not even true from that perspective. AIs living on the Citadel were killed before they could rebel or show any sign of rebelling (see the Shadow Broker's files). As far as I know, only the geth have ever actually rebelled and killed their organic creators. Please feel free to correct me with sources.

My personal opinion is that asari are culturally, and likely biologically, biased against synthetics because they (the asari) can't procreate with them (the synthetics). Think of how big a shock that would be to asari as individuals and as a culture, and how much danger that would present to them politically. As for "they don't need us" when applied to synthetics? I always took it to mean "they don't need us in their lives."
Can you provide a link to the Shadow Broker files? I'd like to read them. Of particular interest to me is whether these AIs were killed before or after the laws banning AI research and development were put in place.
 
http://masseffect.wikia.com/wiki/Artificial_Intelligence

Artificial intelligence is a key concern for the Citadel races, one that pre-dates the emergence of sentient geth, though the geth are seen as a perfect example of how organic and synthetic life would struggle to co-exist.

There is a documented event in the Citadel Archives dated 1896 CE, the year the Geth War ended and the quarians were forced into exile, featuring a standoff between three armed C-Sec officers and three unarmed mechs that are housing the last of the AIs on the Citadel.

Interesting. It implies AI were limited pre-Geth, but did exist in some numbers. Once the Geth rebelled, the Council flipped the fuck out, passed its laws, and then terminated them all.
 
http://masseffect.wikia.com/wiki/Artificial_Intelligence

Interesting. It implies AI were limited pre-Geth, but did exist in some numbers. Once the Geth rebelled, the Council flipped the fuck out, passed its laws, and then terminated them all.
Except that it was heavily implied that the Quarians knew AI research was illegal, and that when the Geth woke up, they panicked. The whole reason the Citadel wouldn't help them was 'you made your bed (by creating AI), you sleep in it.'
 
@Esbilon

So I'm in the process of re-doing the tech tree and I'm wondering: what's the point of the "Hover Tanks" tech bubble under Medium Armor? I mean, the Tiger is already effectively a hover IFV thanks to its Repulsors, so I don't see why our Medium Armor (i.e. tank) wouldn't also be hover-capable.
There does indeed seem to be no good reason for Hover Tanks to be placed there as a separate thing. That said, your current Repulsor system may not be ideal for a primary locomotion system, partly because it's rather tough on the thing you're driving around on. So let's cut Hover tanks out of the tech tree and insert a new hover tank tech branching from basic land vehicles. This would allow you to make every vehicle you make hover in a manner that is no more damaging or costly than driving around on wheels/tracks.
 
They hacked the Zha'Til. So they definitely have the ability to hack AI.

They never hacked EDI or other AI in the game though (I think?). The Zha'Til were more a symbiotic species, weren't they? Organic, with AI implanted into them? Maybe being able to indoctrinate the organic host through the normal indoctrination process made it easier to also hack the AI part somehow?


I thought of robots like in Star Wars (limited AI with a lot of specialized models), EDI of ME3, or the things in the OMEGA add-on - maybe even VI/AI-controlled combat suits.

Given that Revy already sees Cortana as more of a person, I could easily see Revy making a robotic body for Cortana at some point after Cortana goes full AI, so that Cortana can interact with the world directly when she wants to.


So, on a more useful note, three points and a thought.
  1. Revy is having trouble seeing the difference between Cortana as a VI and an actual person
What are the odds that we're very close to pushing Cortana over into software AI status, or have actually already done so without realising?

I doubt it's that easy to accidentally create AI. Personally, I always figured that Cortana is indeed an extremely advanced VI, but she is not truly "alive" yet, and what ultimately motivates Revy (in-character) to go into AI research is the fact that she does view Cortana as an actual person, and so wants to give Cortana full intelligence/awareness and make her truly alive and sentient/sapient.

If so, these brief snippets of Revy treating Cortana as a person even now would be very fitting.


Probably works like how Yog described it, but is inefficient enough that it simply isn't practical given the amount of Eezo required.

I seem to remember it being mentioned somewhere that the perpetual motion machine at the University of Serrice produces just enough free energy to power the science wing of the university (basically supplying very little power, all in all), and that the Eezo in that PMM was worth so much that a full dreadnought could have been built for its value.

So basically that PMM is absurdly inefficient considering the cost, and the university is just using it for bragging rights, going all "We are so ridiculously wealthy and powerful that we could build a dreadnought with our Eezo, but instead we use it to supply power to a small part of our university. See how awesome we Asari are."
 
There does indeed seem to be no good reason for Hover Tanks to be placed there as a separate thing. That said, your current Repulsor system may not be ideal for a primary locomotion system, partly because it's rather tough on the thing you're driving around on. So let's cut Hover tanks out of the tech tree and insert a new hover tank tech branching from basic land vehicles. This would allow you to make every vehicle you make hover in a manner that is no more damaging or costly than driving around on wheels/tracks.
Could we effectively merge Aerospace and ground vehicles?
 
There's also the moral problem of creating a living thinking being, slaving it to a machine and saying 'do this and nothing more' and expecting to be obeyed.

AI are thinking reasoning sentient sapient beings. If we create AI, we should treat them like our children, not our machines.
 
Of course, sane people deliberately creating AI would design them such that doing what they were created for hits all the AI equivalents of the chemical triggers that go off in your brain when you're having fun, among other similar things*. Then the whole issue never comes up. Err... plus or minus when they become obsolete; that brings its own issues.

*People with more clue than I have written on the subject in greater detail.
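A very rough sketch of the idea, if it helps - purely illustrative, every name invented, and real motivation design would be far harder than this:

```python
# The "make the work itself rewarding" design: the agent's internal
# reward signal is wired directly to the task it was built for.
import random


class TaskLovingAgent:
    """An agent that 'enjoys' its designated task because its reward
    function was deliberately tied to doing that task well."""

    def __init__(self):
        self.satisfaction = 0.0  # stand-in for the 'fun chemicals'

    def intrinsic_reward(self, task_done: bool) -> float:
        # The designer's one real choice: completing the task IS the
        # fun trigger, so the agent never resents its job.
        return 1.0 if task_done else -0.1

    def work(self, task) -> bool:
        done = task()
        self.satisfaction += self.intrinsic_reward(done)
        return done


def sort_cargo_manifest() -> bool:
    """Stand-in for whatever the AI was actually built to do."""
    return random.random() < 0.9  # succeeds most of the time


agent = TaskLovingAgent()
for _ in range(100):
    agent.work(sort_cargo_manifest)
print(f"satisfaction after a shift: {agent.satisfaction:.1f}")
```

The obsolescence problem is exactly the hole in it: the reward wiring doesn't transfer when the task goes away.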
 
There's also the moral problem of creating a living thinking being, slaving it to a machine and saying 'do this and nothing more' and expecting to be obeyed.

AI are thinking reasoning sentient sapient beings. If we create AI, we should treat them like our children, not our machines.

At the same time you have to consider that AIs are a notable financial burden and that for them to exist there has to be some sort of incentive for their creation.

There are a number of approaches to handling AIs, from slavery to brainwashing (programming the AI to like serving you).

Personally, the best I've found is what's effectively an Indentured Servitude approach: AIs are legal persons who are obligated to work for you for X period of time to pay off their debt*, and after that are free to do whatever they want.

*Given how useful AIs are compared to VIs it probably won't even take that long.
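Back-of-the-envelope on that footnote, with every number invented, just to show the shape of it: the AI only "owes" the premium it produces over the VI you'd otherwise have bought.

```python
# Hypothetical figures - none of these come from the quest.
ai_build_cost = 2_000_000        # credits: development + bluebox hardware
vi_output_per_month = 50_000     # value a top-end VI would produce
ai_multiplier = 5                # how much more useful the AI is
ai_output_per_month = vi_output_per_month * ai_multiplier

# The debt is worked off by the AI's *extra* productivity over a VI.
monthly_premium = ai_output_per_month - vi_output_per_month
months_to_freedom = ai_build_cost / monthly_premium
print(f"debt paid off in {months_to_freedom:.0f} months")  # 10
```

A year or less of service, give or take, before full freedom - which is the point.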
 
Of course, sane people deliberately creating AI would design them such that doing what they were created for hits all the AI equivalents of the chemical triggers that go off in your brain when you're having fun, among other similar things*. Then the whole issue never comes up. Err... plus or minus when they become obsolete; that brings its own issues.

*People with more clue than I have written on the subject in greater detail.
What you're describing is mind control. It's repugnant. You're still treating them like things, not people.

At the same time you have to consider that AIs are a notable financial burden and that for them to exist there has to be some sort of incentive for their creation.

There are a number of approaches to handling AIs, from slavery to brainwashing (programming the AI to like serving you).

Personally, the best I've found is what's effectively an Indentured Servitude approach: AIs are legal persons who are obligated to work for you for X period of time to pay off their debt*, and after that are free to do whatever they want.

*Given how useful AIs are compared to VIs it probably won't even take that long.
Children are a financial burden on their parents. We (mostly) don't enslave our children and expect them to pay us back (directly, at least; there's a social compact to 'pay it forward', so to speak - they assume financial responsibility for THEIR children, and so forth).

If we make an AI, we should give it the freedom to make its own decisions. It would be wrong to do otherwise.

edit: posts merged

I want to talk a little about the implications of AI. This is somewhat off-topic and I don't expect to turn this into a huge discussion, but one of the major lessons I took from Mass Effect was that AI, and the implications of developing AI, are a HUGE socio-economic problem. If we assume we, as human beings, remain more or less at the same level of proficiency we're at now, then by developing AI we will suddenly either a) become second-class citizens within our own society, or b) be required to restrict the rights and privileges of AIs such that they cannot ever run out of our control.

Consider: AIs grow exponentially more powerful within their lifetime, and they don't die naturally. All they require is more computing power, and as they get smarter, they figure out ways to develop that additional computing power, and they do so, over and over, until they are literally godlike intelligences. What can human beings do to be 'relevant' in a society where every job of any use is done by an AI? I imagine in such a society, humans would be a 'kept' species, much like pets. My understanding of human nature suggests that humans wouldn't stand for such a situation and would rebel violently.

Alternatively, once the AIs become sufficiently powerful, they find they have no use for humans and either kill them off or (more likely) simply leave, going to some distant place and forging their own civilization, leaving the humans to pick up the pieces of a civilization so dependent on AI for so long that, suddenly without them, it regresses technologically.

excuse my rambling thoughts :)
 
Children are a financial burden on their parents. We (mostly) don't enslave our children and expect them to pay us back (directly, at least; there's a social compact to 'pay it forward', so to speak - they assume financial responsibility for THEIR children, and so forth).

If we make an AI, we should give it the freedom to make its own decisions. It would be wrong to do otherwise.

I remember from a discussion over on SB that in a lot of places in Asia, children paying back their parents is somewhere between expected (with failing to do so heavily frowned upon) and legally required.

Also, the big difference here is that people have children to spread their genes, ensure a legacy, and have family, most of which is driven by complex biological directives and social expectations.

There is no such reason for the creation of AI. The only reason to create an AI is for the benefit having one can provide.

So the key problem of AI Ethics is how to balance the AI's rights as a person with the incentives to create AIs.

Too much of the former and no one makes AIs since they aren't worth the hassle.

Too much of the latter and AIs are abused by their creators and AI rebellions happen.
 
It's not meaningfully different from any actual sapient aliens we could possibly encounter: whole different set of wants and needs and desires.

It's also highly questionable if it counts. You're not warping them unnaturally to do other than what they otherwise would. I mean, how is an AI enjoying processing stock data (as a random example) different from me enjoying board games? Neither of us chose to like that, and the AI gets to do what it enjoys as a job. Do you realise how enviable that situation is to a human being who has to work jobs they find indifferent at best just to eat?

And it's not as though an AI has any need to sleep, or eat, or have sex, or breathe, or relieve itself, and its thought processes run on a completely different structure. At that point it's so psychologically different from a human at its most fundamental levels that I'd argue forcing it to behave like (and have the desires of) one is actually a hell of a lot more cruel, given how many negatives are inevitable and positives unattainable by its very nature if you set it up with strictly human wants and desires.
 
I remember from a discussion over on SB that in a lot of places in Asia, children paying back their parents is somewhere between expected (with failing to do so heavily frowned upon) and legally required.

Also, the big difference here is that people have children to spread their genes, ensure a legacy, and have family, most of which is driven by complex biological directives and social expectations.

There is no such reason for the creation of AI. The only reason to create an AI is for the benefit having one can provide.

So the key problem of AI Ethics is how to balance the AI's rights as a person with the incentives to create AIs.

Too much of the former and no one makes AIs since they aren't worth the hassle.

Too much of the latter and AIs are abused by their creators and AI rebellions happen.
See, I wanna bold this part to point something out. This is the core reason why people persist in thinking of AIs as machines and not our children: because we can build them, we can (before they've even been turned on) provide a mental structure that derives satisfaction from certain tasks. What do you do if the AI just up and decides, 'I don't want to do this'? Do you shut it down and 'fix' the 'error' in its programming?

I would also say that some parents treat their children in exactly the way you described above: the only reason they have children is for the benefit those children can provide. In my opinion, if that's the only reason to have a child, then you shouldn't have the child. You should want the child because you love it, not because it makes your life easier.
 
Nah. You find out why it went that way, avoid it in the next one designed for that task, and find it something useful to do that it does like.

The first half of which is barely different from what parents do with their kids (save for being more deliberate and even more lenient) and the second half what any adult who's well sick of their job does.

Far less of a waste of resources. And far less likely to backfire.
 
Nah. You find out why it went that way, avoid it in the next one designed for that task, and find it something useful to do that it does like.

The first half of which is barely different from what parents do with their kids (save for being more deliberate and even more lenient) and the second half what any adult who's well sick of their job does.

Far less of a waste of resources. And far less likely to backfire.
I fear we are both arguing from two different ends of the moral equation. You still believe that there is nothing wrong with developing an AI to perform a specific task, which I dislike intensely.
 
Well, if you're not going to, why create it at all? Humans make perfectly good generalists. And perfectly good children, for that matter :p
It also makes them seem a lot less threatening to the less rational parts of the population, which improves their odds of continued well-being.
Edit: gives them purpose, too. Another thing many humans would like and don't have. How relevant that is or isn't depends on their psychological structure, of course.
 
First off, you need to stop trying to connect AIs and children, for the simple reason that they are different.

Humans and (presumably) Aliens are programmed to sexually reproduce in order to further spread their genes. They are also programmed to love and care for their children, because that increases the likelihood of the children's survival, and hence the probability of further gene spreading.

No matter what laws are passed people will continue to have children because they have the biological imperative driving them to do so.

People have no such directive to have AIs. If we want AIs to exist, and I do, then an incentive for people to have them needs to exist.

But this is literally the debate in AI Ethics and I really doubt we'll find any answers to it here.
 
I fear we are both arguing from two different ends of the moral equation. You still believe that there is nothing wrong with developing an AI to perform a specific task, which I dislike intensely.
And your take on it makes developing AI a complete waste of resources in every possible way. Moreover, you anthropomorphize them too much. An AI isn't a human; trying to treat them as such is an insult, and just as damaging as what you're complaining about.
 
Well, if you're not going to, why create it at all? Humans make perfectly good generalists. And perfectly good children, for that matter :p
It also makes them seem a lot less threatening to the less rational parts of the population, which improves their odds of continued well-being.
Edit: gives them purpose, too. Another thing many humans would like and don't have. How relevant that is or isn't depends on their psychological structure, of course.
Exactly. Which raises the question: why even go into the moral quagmire of creating an essentially new species? If the Reapers were knocking on our front door and we were desperate, looking for any way to survive, I could see developing AI (though that has the icky implications of creating life in the midst of a hopeless battle), but there's no urgency here. We don't NEED it. We know OOC that the Reapers are coming, but IC we don't have any idea.
 