So, I saw a very interesting presentation today from David Gunkel on the question of giving robots rights. Gunkel raised several interesting points. First, that 'rights' may be the wrong conceptualization, as we tend to naturalistically presume a dichotomy between 'full human rights' and 'no rights.' Instead, Gunkel argued, we can conceive of giving robots rights on their own terms, distinct from human rights and ontologically distinct from the categories of person and property. In practice, we routinely grant personhood (that is, some sense of rights) to non-human entities, from corporations to Lake Erie, for reasons that have nothing to do with their sentience or humanity.
The bulk of his talk was spent trying to escape this binary, but he raised some interesting notions (mostly in the Q&A) about why we might grant rights to actual robots now: not to Wall-E, Mr. Smith, or HAL 100 years from now, but to hardware we have in the real world today.
Some of these were:
1. When an AI creates something that is not the product of its creators (AlphaGo's 'most creative' moves, computationally generated poetry), who should own the IP? Assigning it to the AI in question is one way of resolving this, with somewhat interesting implications.
2. Privacy: granting, say, Alexa a right to privacy offers a fairly novel way of resolving many issues of modern surveillance.
3. Lastly (and most interestingly), Gunkel cited the media equation research to suggest that we often feel empathy for and with robots, citing the case of soldiers feeling sympathy and empathy for bomb-detecting robots assigned to their squads. Here, the argument for granting moral protections was framed as bringing our treatment of actual robots in line with our unconscious perceptions of them as social actors. Since in many contexts we instinctively treat robots as social actors who should have rights and feelings, why not legitimize this?
This is a very interesting line of argument, as it presupposes distinctly anthropocentric motivations, rather than anthropomorphic ones, for giving robots moral rights. It is a distinctly practical and current framework, one that sidesteps endless trolley problems and science fiction scenarios.