Of course, sane people deliberately creating AI would design them such that doing what they were created for hit all the AI equivalents of the chemical triggers that go off in your brain when you're having fun. Among other similar things*. Then the whole issue never comes up. Err... plus or minus when they become obsolete. That brings its own issues.
*People with more clue than I do have written on the subject in greater detail.
What you're describing is mind control. It's repugnant. You're still treating them like things, not people.
At the same time you have to consider that AIs are a notable financial burden and that for them to exist there has to be some sort of incentive for their creation.
There are a number of approaches to handling AIs, from slavery to brainwashing (programming the AI to like serving you).
Personally, the best I've found is what's effectively an Indentured Servitude approach: AIs are legal persons who are obligated to work for you for X period of time to pay off their debt*, and after that they are free to do whatever they want.
*Given how useful AIs are compared to VIs, it probably won't even take that long.
Children are a financial burden on their parents. We (mostly) don't enslave our children and expect them to pay us back (directly, at least; there's a social compact to 'pay it forward', so to speak: they assume financial responsibility for THEIR children, and so forth).
If we make an AI, we should give it the freedom to make its own decisions. It would be wrong to do otherwise.
I want to talk a little bit about the implications of AI. This is somewhat off-topic and I don't expect to turn this into a huge discussion, but one of the major lessons I took from Mass Effect was that AI, and the implications of developing AI, are a HUGE socio-economic problem. If we assume we, as human beings, remain at more or less the same level of proficiency we're at now, then by developing AI we will suddenly either a) become second-class citizens within our own society, or b) be required to restrict the rights and privileges of AIs such that they can never run out of our control.
Consider: AIs grow exponentially more powerful within their lifetimes, and they don't die naturally. All they require is more computing power, and as they get smarter, they figure out ways to acquire that additional computing power and do so, over and over, until they are literally godlike intelligences. What can human beings do to stay 'relevant' in a society where every useful job is done by an AI? I imagine in such a society humans would be a 'kept' species, much like pets. My understanding of human nature suggests that humans wouldn't stand for such a situation and would rebel violently.
Alternatively, once the AIs become sufficiently powerful, they find they have no use for humans and either kill them off or (more likely) simply leave, going off to some distant place and forging their own civilization, leaving the humans to pick up the pieces of a society that has depended on AI for so long that, suddenly without it, they regress technologically.
excuse my rambling thoughts
