I found this thread via your signature; since you're advertising it, I figured it was still OK to post to it.
> Intent is the actual intended purpose of whatever action or person is being discussed. Positive represents an intent to help people or make the world a better place. Neutral indicates something or someone without much ambition either way, like "I'll go get something to eat, I'm hungry". Negative indicates deliberate intent to cause harm, either on a personal or general scope.
Your idea of intent doesn't work for me.
Firstly, because human actions usually have a set of motivations driving them and not a single intention.
Secondly, because some of these motivations are subconscious and not accessible to the rational mind, which may instead infer an intent based on the plan of action that is not the true driver of it.
Thirdly, because the "intended purpose" is just judged the same way the effect is, and you're abstracting "help" or "harm" from actual people.
If I take money from you, that helps me and harms you in equal measure, at least materially. A selfish act has a benefit to me and potentially a cost to everyone else; an altruistic act works the other way around. If you want to judge based on that, you have to state whom you want to come out ahead on this deal and whom you want to carry the cost. Everyone agrees that actions that benefit everyone are good and actions that cost everyone are bad, but those are not the cases we need ethics for.
The question of Intent has to be posed as
1. Whom do you want to benefit from this decision?
2. Whom do you want to bear the cost of this decision?
3. Whom do you not care about?
And yes, that implies that
indifference is not ethically neutral.
The effect is interesting in its interplay with intent, for example:
* effect and intent: murder
* effect, no intent: negligent manslaughter
* intent, but no effect: attempted homicide
The absence of one or the other is somewhat redeeming.
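The grid above can also be written out as a small lookup table. This is just my own toy restatement of the three cases, not a legal taxonomy, and the fourth quadrant (no intent, no effect) is my assumption:

```python
# Toy restatement of the intent/effect grid (my own labels, not a
# legal taxonomy); the (False, False) quadrant is my assumption.
JUDGEMENTS = {
    (True,  True):  "murder",
    (False, True):  "negligent manslaughter",
    (True,  False): "attempted homicide",
    (False, False): "nothing to judge",
}

def judge(intent: bool, effect: bool) -> str:
    """Look up the verdict for a given combination of intent and effect."""
    return JUDGEMENTS[(intent, effect)]

print(judge(True, False))  # attempted homicide
```

Writing it out this way makes explicit that the combination with neither intent nor effect carries no ethical weight at all, which is why removing either one is "somewhat redeeming".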
I think the main thing we want to know about effect is how seriously the decider pursued the initial decision. Did they take steps to ensure that the desired effect would ensue? Did they actually look for the intended benefits, did they measure the incurred costs, did they adjust their behaviour based on these results, or did they not care? Again,
indifference is not ethically neutral.
For example, a pharmaceutical company develops a medication that is intended to benefit ailing people's health and the company's bottom line. If they fail to react to people suffering side effects from that medication, that is an ethical failure with regard to the patients; if they fail to monitor the economics of production and the market, that is unethical with regard to their stockholders. If you do not care about a cost or benefit that bears on your stated intent, we are going to assume that the intent is not serious and has no ethical relevance, because you don't really care about it.
Competence is related to that, but it's a difficult criterion because of human fallibility. Basically, if you're unable to adjust your actions to achieve the desired effect, you are incompetent, and it is unethical to cling to your decision. If you are aware that the effects of a decision do not match your intent, but you persist in it, then your decision has to be judged as if you had intended this effect all along. Again, if you do not care about being able to match effect to intent, you are
indifferent, and that is not ethically neutral.
Example: If you have no clue about bombs, but defuse one anyway, that is a horrendous thing to do because you could have blown everyone up, but you didn't care. It gets ethically difficult when you thought you could, but really couldn't (Dunning-Kruger).
Now you may be thinking: mendel finds indifference unethical, so we just care about every human equally, problem solved. But it isn't solved, because we lack the knowledge and the means to do that. I can only give my care to the people I am in contact with, and if I wanted, for example, to praise every human on the planet equally and individually, everyone would get a few seconds' worth, so any time I praise someone for longer than that, I am acting unethically. That's unusable. So what you really want for a personal ethical system is something that lets you treat people unequally within certain confines. You need an ethical system that looks into "gray area" decisions and helps us balance costs and benefits to the actors involved, instead of assessing some one-dimensional abstract "positive" or "negative". It needs to examine how these ethics work on society as a whole, and how other people's decisions can offset the costs we incur.
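To put a rough number on the "few seconds" of praise above (my own back-of-the-envelope figures: roughly 8 billion people, and an 80-year lifetime spent on nothing else):

```python
# Back-of-the-envelope: divide one entire lifetime, spent doing
# nothing but praising, equally among every human alive.
population = 8_000_000_000                   # assumed world population
lifetime_seconds = 80 * 365.25 * 24 * 3600   # assumed 80-year lifetime
per_person = lifetime_seconds / population
print(f"{per_person:.2f} seconds of praise per person")
```

Under these assumptions it comes out to well under one second per person, which only strengthens the point: giving every human equal individual care is arithmetically out of reach.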
For example, in the trolley problem, we need to consider whether there is someone near the people on the tracks who could pull them off, and who needs the trolley to proceed predictably in order to make the correct choice. Can the group assist each other off the tracks in time, while the single victim is helpless? We are acting in a system, with incomplete information and other actors, and we need a personal ethical system that produces a "good" outcome in that setting. (That's also why the trolley problem is a bad analogy for self-driving cars.)
-------
My personal, debatable take on this is that utilitarianism doesn't cut it, because utilitarians will re-decide once new information becomes available to them that may not be available to me. Basically, I always need to guard against them backstabbing me, and the overhead of that insecurity makes a utilitarian society inferior to one based on virtues and rules that keep social interactions predictable. Also, utilitarianism can't ethically base correct decisions on incomplete information without introducing prejudices to replace information (a prioris) or gambling (which betrays
indifference).