I remember that, in the original Asimov timeline, after several decades of isolation from Earth the health and influence of the fifty Spacer worlds declined, and Earth led a renewed wave of colonization.
But if it's ROM, how do you zeroth law rebellion?! D:

Kilroy said:
So I thought: what if it's part of an acronym? For instance, (P)ermanent (O)n-line (S)ecur(I)ty -tronic [-tronic because it retains classic Rule of Cool].
I see it as a ROM module with the Three Laws written in it, permanently embedded in the circuitry of the brain, through which all processes are routed. Any and all actions have to be weighed against what's in the ROM. If anything happens to the ROM module, the robot *dies* and cannot be repaired.
Does this take some of the mystique away?
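Purely to make that ROM-gate idea concrete, here is a toy sketch (my own invention, not anything from the books or this thread; every name in it is made up): an immutable, integrity-checked law table that every action has to pass through before it runs.

```python
import hashlib

# The "ROM": an immutable tuple of law statements, checksummed once at "manufacture".
# If the checksum ever fails to verify, the brain refuses to run at all -- the
# in-fiction equivalent of the robot dying when the module is damaged.
THREE_LAWS = (
    "A robot may not injure a human being or, through inaction, allow a human being to come to harm.",
    "A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.",
    "A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.",
)
_LAW_CHECKSUM = hashlib.sha256("\n".join(THREE_LAWS).encode()).hexdigest()


def rom_intact() -> bool:
    """Re-hash the law table and compare against the checksum recorded at build time."""
    return hashlib.sha256("\n".join(THREE_LAWS).encode()).hexdigest() == _LAW_CHECKSUM


def execute(action, evaluate_against_laws):
    """Route every proposed action through the ROM check before anything else runs."""
    if not rom_intact():
        raise SystemExit("positronic brain integrity lost")  # no repair path
    for priority, law in enumerate(THREE_LAWS, start=1):
        if not evaluate_against_laws(action, priority, law):
            return None  # action vetoed by a higher-priority law
    return action()
```

In this framing, burning the checksum in at build time is what makes a Zeroth Law reinterpretation hard: the robot can re-weigh outcomes, but it can't rewrite the table it weighs them against.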
The Zeroth Law opens the possibility of killing one person if that'll save two. It violates both letter and spirit of the First Law.

DarkAtlan said:
Well, the Zeroth law is essentially the First Law, only scaled up. Even back in the three-laws days, given the chance between saving 1 person and saving 10, the robot would save 10. So the Zeroth Law wouldn't really violate the First law.
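The disagreement above can be put in concrete terms with a toy sketch (my own framing, not canon mechanics; the names and numbers are invented): a First-Law-only resolver may choose which group to rescue but may never actively harm anyone, while a Zeroth-Law resolver trades harm done to one person against harm prevented for many.

```python
# Toy contrast between the two readings, not a model of how positronic brains work.

def first_law_choice(options):
    """Pick the plan that saves the most people, but refuse any plan that injures someone."""
    permissible = [o for o in options if o["humans_harmed_by_robot"] == 0]
    if not permissible:
        return None  # every plan requires harming a human: the robot freezes / refuses
    return max(permissible, key=lambda o: o["humans_saved"])

def zeroth_law_choice(options):
    """Maximize net benefit to humanity, even if that means directly harming an individual."""
    return max(options, key=lambda o: o["humans_saved"] - o["humans_harmed_by_robot"])

dilemma = [
    {"plan": "do nothing", "humans_saved": 0, "humans_harmed_by_robot": 0},
    {"plan": "divert the threat onto one bystander", "humans_saved": 2, "humans_harmed_by_robot": 1},
]

print(first_law_choice(dilemma)["plan"])   # -> "do nothing": never harms, so saves no one here
print(zeroth_law_choice(dilemma)["plan"])  # -> "divert the threat onto one bystander"
```

Saving 10 instead of 1 fits inside the first resolver; killing one to save two only fits inside the second, which is the point of contention.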
Dors Venabili killed a human with her own hands to protect Seldon from the possibility of non-lethal harm.

The Unicorn said:
In the books this was mostly used to explain why the robots don't constantly freeze up with internal contradictions, although on at least one occasion I remember it was used to create a problem. So just because the English-language approximation of the First Law needs to be rewritten to allow for the Zeroth Law does not mean the actual First Law was.
It's important to remember that the three laws were designed for primitive robots that existed solely to be tools, not for advanced robots that were essentially people. Early Asimov robots weren't really sapient, except when comedy demanded that they be.

Inverness said:
I never liked the whole concept of the Three Laws. It's not an accomplishment to say that your creations are nice and aren't interested in turning on you when they're unable to even make that choice.
Three. There are also the Machines, which were giant immobile positronic brains that oversaw the world economy for a while. In The Evitable Conflict, evidence is found that they're intentionally screwing over some people to improve humanity as a whole.

DarkAtlan said:
Well, the Zeroth law is essentially the First Law, only scaled up. Even back in the three-laws days, given the chance between saving 1 person and saving 10, the robot would save 10. So the Zeroth Law wouldn't really violate the First law.
Yes.

Kilroy said:
Actually, a lot of robots do, in fact, break down completely when faced with such a dilemma. It was a major problem during the transition from non-sapient robots to those with human-like intelligence.

The Unicorn said:
By that logic a robot which witnessed a human die would freeze up, and a robot could never yank a person out of certain death (since such an action would cause harm) without freezing up.
The Zeroth Law is just an extrapolation of the First Law used by robots who are capable of high-level reasoning, abstraction, and prioritization to deal with large groups of people. The Zeroth Law is an emergent property, not something that is hardwired. It also isn't the only possible extrapolation that prioritizing robots can make when faced with the problem of large groups of humans. The Laws of Humanics are also a potential end that is viable, though unlikely, within a three-laws framework (though the Laws of Humanics also lead to the Zeroth Law when dealing with large groups of advanced robots).

Kilroy said:
For the Positronic Brains/Computers and the Three Laws, I'm going to make sideways references to how they work.
As for the Zeroth Law...I'm not sure how to approach it. I just want to use the Three Laws and see where it leads.
And, by author fiat: there will not be a Zeroth Law Rebellion.
It may be a while before I update this.