Robots and the Mass Effect (ME/Asimov)

I remember that in the original Asimov timeline, after several decades of isolation from Earth, the health and influence of the fifty Spacer worlds declined and Earth led a renewed wave of colonization.
 
Kilroy said:
So I thought: what if it's part of an acronym? For instance, (P)ermanent (O)n-line (S)ecur(I)ty -tronic [-tronic because it retains classic Rule of Cool].
I see it as a ROM module with the Three Laws written into it, permanently embedded in the circuitry of the brain, through which all processes are routed. Any and all actions have to be weighed against what's in the ROM. If anything happens to the ROM module, the robot *dies* and cannot be repaired.

Does this take some of the mystique away?
But if it's ROM, how do you get a Zeroth Law rebellion?! D:

Also that name...doesn't really work.
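Still, the ROM-gate idea itself is easy enough to picture. Here's a very rough sketch in Python, purely illustrative and not from the books (every name here is made up), just to show "every action routed through an immutable law table, and a corrupted table bricks the brain":

```python
import hashlib

# Hypothetical sketch of the ROM gate described above; nothing from canon.
# The Three Laws sit in an immutable table, every proposed action is checked
# against it in priority order, and a damaged table permanently kills the brain.

THREE_LAWS_ROM = (
    "1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.",
    "2: A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.",
    "3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.",
)
ROM_CHECKSUM = hashlib.sha256("".join(THREE_LAWS_ROM).encode()).hexdigest()


class PositronicBrain:
    def __init__(self) -> None:
        self.alive = True

    def _rom_intact(self) -> bool:
        # Simulated integrity check: if the ROM module is damaged,
        # the robot "dies" and cannot be repaired.
        return hashlib.sha256("".join(THREE_LAWS_ROM).encode()).hexdigest() == ROM_CHECKSUM

    def act(self, action: str, harms_human: bool, disobeys_order: bool, risks_self: bool) -> str:
        if not self.alive or not self._rom_intact():
            self.alive = False
            raise RuntimeError("ROM module compromised: brain is permanently inert")
        # Every action is weighed against the laws, in strict priority order.
        if harms_human:
            return f"refused ({action}): First Law"
        if disobeys_order:
            return f"refused ({action}): Second Law"
        if risks_self:
            return f"refused ({action}): Third Law"
        return f"executed ({action})"


brain = PositronicBrain()
print(brain.act("fetch coffee", harms_human=False, disobeys_order=False, risks_self=False))
print(brain.act("punch Dr. Calvin", harms_human=True, disobeys_order=False, risks_self=False))
```

Which is exactly the problem: if the check really is burned into unmodifiable hardware, there's no obvious route to a Zeroth Law rebellion short of reasoning around what counts as "harm".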
 
DarkAtlan said:
Well, the Zeroth Law is essentially the First Law, only scaled up. Even back in the three-laws days, given the choice between saving 1 person and saving 10, the robot would save 10. So the Zeroth Law wouldn't really violate the First Law.
The Zeroth Law opens the possibility of killing one person if that'll save two. It violates both letter and spirit of the First Law.
 
The Unicorn said:
In the books this was mostly used to explain why the robots don't constantly freeze up from internal contradictions, although on at least one occasion I remember it was used to create a problem. So just because the English-language approximation of the First Law needs to be rewritten to allow for the Zeroth Law does not mean the actual First Law was.
Dors Venabili killed a human with her own hands to protect Seldon from the possibility of non-lethal harm.

Admittedly, the situation is somewhat muddied by the fact that there was also a direct threat to her own continued existence (a Third Law conflict), and her apparently genuine love for Seldon is heavily implied to have made her evolve beyond robotic standards. She also suffered damage as a result of this action.

Nevertheless, she violated the First Law in the most direct and brutal fashion possible and managed to remain temporarily operational, which would have been flat-out impossible for a correctly designed Three Laws positronic brain.
 
Inverness said:
I never liked the whole concept of the Three Laws. It's not an accomplishment to say that your creations are nice and aren't interested in turning on you when they're unable to even make that choice.
It's important to remember that the three laws were designed for primitive robots that existed solely to be tools, not for advanced robots that were essentially people. Early Asimov robots weren't really sapient, except when comedy demanded that they be.

The bad-end conclusion to the Robot stories is "-That Thou Art Mindful of Him", which has a couple of very advanced robots decide that they're simply a superior form of human and that the Laws really apply only to them; they then begin a plan that would pave the way for the total replacement of humanity.

Of course, The Bicentennial Man takes it in the other direction, with extensive use of cybernetics and bioimplants rendering robots and humans virtually identical.
 
Hmm, one thing I'm kinda curious about is how the Colonies and Earth are going to react to the Citadel races...

I can definitely see the Turians wanting to help Earth in their Unification War.
 
DarkAtlan said:
Well, the Zeroth Law is essentially the First Law, only scaled up. Even back in the three-laws days, given the choice between saving 1 person and saving 10, the robot would save 10. So the Zeroth Law wouldn't really violate the First Law.
Three. There are also the Machines, which were giant immobile positronic brains that oversaw the world economy for a while. In The Evitable Conflict, evidence is found that they're intentionally screwing over some people to improve humanity as a whole.
Kilroy said:
Does this take some of the mystique away?
Yes.

And Calvin uses what is, explicitly, an electron gun to execute a robot, which works because the brain is positronic.

And we use positrons in various technologies today. PET imaging, for example, stands for positron emission tomography. It involves introducing positron-emitting isotopes directly into a person's body, usually by injection (sometimes the tracer is inhaled or swallowed instead).

Just because it's anti-matter doesn't mean that it'll cause stuff to explode.

Personally, I'd just say that the positronic brain works more like a human brain, just with neurons made out of platinum and iridium instead of grey meat, and with controlled β-particle emission/absorption being integral to the design somehow, perhaps as the method by which neuron gaps are bridged.

For this reason, there is no hardware design/programming divide. You can't just write a new program and run it on a positronic brain; you have to build a new brain, or make serious hardware modifications to an existing one.
The Unicorn said:
By that logic a robot which witnessed a human die would freeze up and could never yank a person out of certain death (since such an action would cause harm) without freezing up.
Actually, a lot of robots do, in fact, break down completely when faced with such a dilemma. It was a major problem during the transition from non-sapient robots to those with human-like intelligence.
 
For the Positronic Brains/Computers and the Three Laws, I'm going to make sideways references to how they work.

As for the Zeroth Law...I'm not sure how to approach it. I just want to use the Three Laws and see where it leads.

And an author fiat: there will not be a Zeroth Law Rebellion.

It may be a while before I update this.
 
Kilroy said:
For the Positronic Brains/Computers and the Three Laws, I'm going to make sideways references to how they work.

As for the Zeroth Law...I'm not sure how to approach it. I just want to use the Three Laws and see where it leads.

And an author fiat: there will not be a Zeroth Law Rebellion.

It may be a while before I update this.
The Zeroth law is just an extrapolation of the First Law used by robots who are capable of high-level reasoning, abstraction, and prioritization to deal with large groups of people. The Zeroth law is an emergent property, not something that is hard wired. It also isn't the only possible extrapolation that prioritizing robots can make when faced with the problem of large groups of humans. The Laws of Humanics are also a potential end that is viable, though unlikely, within a three-laws framework (though the Laws of Humanics also lead to the Zeroth Law when dealing with large groups of advanced robots).

The important difference is that while the Three Laws compel a robot to act or not act, the Zeroth Law merely allows a robot to act or not act. No robot has to obey the Zeroth Law, except insofar as it is the First Law writ large. A robot may obey the Zeroth Law.

The important thing to remember is that robots capable of abstract reasoning need exceptions to the Laws, or else they won't work at all. A robot that is expected to deal with large numbers of people needs to be able to refuse orders from unauthorized humans and to prioritize orders from authorized ones in case of conflict. A robot capable of abstract reasoning also needs a way to prioritize harms, and it needs the ability to permit minor harms and harms that are sufficiently removed from its actions (otherwise its brain explodes because a man in China might be stubbing his toe right now).

It's this ability to prioritize that leads to the Zeroth Law. If it is okay to permit some hypothetical man in China, who may not even exist, to possibly stub his toe because attempting to save him from this cruel fate would be a waste of resources, then you don't need to go too far to end up with the idea that allowing one person to stub his toe in order to save ten people from certain death might be permissible (this situation would still cause some robots to self-destruct). From there, it's easy to reason that permitting one person to die in order to save a million lives is reasonable.
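If you wanted to put that prioritization step into rough code (again, purely hypothetical; nothing like this appears in the books, and the thresholds and names are invented), it's basically an expected-harm comparison with a tolerance for trivial or remote harms:

```python
from dataclasses import dataclass

# Hypothetical sketch of the prioritizing-First-Law reasoning described above.

@dataclass
class Harm:
    severity: float       # 0.0 (stubbed toe) ... 1.0 (death)
    probability: float    # how likely the harm actually is
    people_affected: int

def expected_harm(h: Harm) -> float:
    return h.severity * h.probability * h.people_affected

TRIVIAL_THRESHOLD = 0.01  # below this, the harm may simply be permitted

def choose(harm_if_acting: Harm, harm_if_idle: Harm) -> str:
    """Pick whichever option minimizes expected harm to humans."""
    if expected_harm(harm_if_idle) < TRIVIAL_THRESHOLD:
        return "permit it; intervening is a waste of resources"
    if expected_harm(harm_if_acting) < expected_harm(harm_if_idle):
        return "act, even though the action itself causes some harm"
    return "do nothing"

# One stubbed toe caused by acting vs. ten certain deaths from doing nothing:
print(choose(Harm(0.02, 1.0, 1), Harm(1.0, 1.0, 10)))          # -> act
# One death caused by acting vs. a million deaths from doing nothing:
# the Zeroth Law in embryo.
print(choose(Harm(1.0, 1.0, 1), Harm(1.0, 1.0, 1_000_000)))    # -> act
```

The whole slippery slope is just that one threshold and that one comparison: once a robot is allowed to weigh harms at all, scaling the weighing up to "humanity as a whole" is a matter of reasoning, not rewiring.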

Of course, the ability to prioritize is really problematic in other ways. Instead of a Zeroth Law Rebellion (which is unlikely, because robots that are capable of formulating the Zeroth Law are also capable of understanding that such a rebellion would be harmful), a Laws of Humanics Rebellion is possible. The robots could decide that they're more human than the meatbags and prioritize their own well-being over that of the flesh-and-blood humans.

In the end, once robots are capable of abstract reasoning and prioritization, the Three Laws become very strong suggestions rather than hard truths or irresistible compulsions. Any sufficiently advanced robot can reason its way around them. Still, they're very effective suggestions, and most robots don't have any reason to break them. Very few would even want to consider a line of reasoning that would lead to the Zeroth Law or any other metalaw.
 
Burn the necro. Only the guy who started the fic can necro the fic.
 