So, I saw a very interesting presentation today from David Gunkel on the question of whether robots should be given rights. Gunkel raised some very interesting notions: first, that 'rights' may be the wrong conceptualization, as we tend to naturalistically presume a dichotomy between 'full human rights' and 'no rights.' Instead, Gunkel argued, we can conceive of giving robots rights on their own terms, distinct from human rights and ontologically distinct from the categories of person and property. In practice, we routinely grant personhood (that is, some sense of rights) to non-human entities, from corporations to Lake Erie, for reasons that have nothing to do with their sentience or humanity.

The bulk of his talk was spent trying to escape this binary, but he raised some interesting points (mostly in the Q&A) about why we might grant rights to actual robots now, not 100 years from now: not to Wall-E, Mr. Smith, or HAL, but to actual hardware we have in the real world.

Some of these are:

1. When an AI creates something that is not the product of its creators (AlphaGo's 'most creative' moves, computationally generated poetry), who should own the IP? Assigning it to the AI in question is one means of resolving this, with somewhat interesting implications.
2. Privacy: Granting, say, Alexa, a right to privacy offers a fairly novel way of resolving a lot of issues of modern surveillance.
3. Lastly (and most interestingly), Gunkel cited the media equation research to suggest that we often feel empathy for and with robots, pointing to the case of soldiers feeling sympathy and empathy for bomb-detecting robots assigned to their squads. Here, the argument for granting moral protections is framed as bringing our treatment of actual robots in line with our unconscious perceptions of them as social actors. Since we instinctively feel that robots should have rights in many contexts, why not legitimize this?

This is a very interesting framing, as it rests on distinctly anthropocentric motivations, rather than anthropomorphic ones, for giving robots moral rights. It is a distinctly practical and current framework that sidesteps endless trolley-problem debates and science fiction.
 
Didn't Vox have an article about this? I vaguely remember reading about this exact argument.

I have to say no, non-sapient robots should not have rights. There is little point to giving rights to a construct that cannot and will not ever understand them. They don't even need an animal-welfare equivalent, since there's no reason to believe that robots feel anything, much less pain or misery.

So, as long as we're talking about modern robots that are definitely non-sapient, giving them rights of any kind is pointless. If they aren't intelligent, they aren't workers; they're tools. And tools don't need rights: neither a wrench nor AlphaGo is a person, and there isn't any reason to act as if they were.
 
That's kind of... completely missing the arguments that Shadell was posting about. The basic idea this thinker seems to be positing is that it's not only possible to grant rights (i.e. socially recognized privileges) to robots that are completely separate from what we would normally accord to sentient or even living beings, but also to do so for reasons based on their utility to the humans who use and are affected by them, rather than on inherent properties of the robots themselves.
 
1. When an AI creates something that is not the product of its creators (AlphaGo's 'most creative' moves, computationally generated poetry), who should own the IP? Assigning it to the AI in question is one means of resolving this, with somewhat interesting implications.
Making them directly into common goods would probably be better; if the AI owns the IP, no one gets to use it, since the AI can't actually sell or license it.
 
That's kind of... completely missing the arguments that Shadell was posting about. The basic idea this thinker seems to be positing is that it's not only possible to grant rights (i.e. socially recognized privileges) to robots that are completely separate from what we would normally accord to sentient or even living beings, but also to do so for reasons based on their utility to the humans who use and are affected by them, rather than on inherent properties of the robots themselves.
What benefit would that possibly offer?
 
If you actually read the OP, you'd know.
Yes, I read the OP. I don't find those benefits to be especially compelling.

If our tendency to anthropomorphize or zoomorphize inanimate objects were enough reason to give them rights, then we'd give literally everything rights and the concept would be diluted to meaninglessness, returning us right back to where we started.

Robots are neither sapient nor sentient; giving them rights would be pointless.
 

Well, if a tree can legally own itself, then it makes sense for an AI to own its own artistic creations. The issue then becomes that said intellectual property is in a state of limbo, because the AI can't actually allow other people to use it, unless the programmers managing it are seen as caretakers, I guess.
 