Talk about your personal ethical systems

Some sort of deontology. It's something I still need to do more research on when I have more money and time to focus on it.

All I know is I find the ethical theories of Locke, Kant, Rawls and Nozick the most compelling. Even I know there are a helluva lot of differences between these four, but at the core of each of them is the idea of individuals and rights. You cannot do an injustice - whatever that injustice may be, as it obviously varies from philosopher to philosopher - to another person, no matter what motivates you or what end you desire. I'm also fond of the self-ownership proposal of Locke, Nozick and others. My body is my property and I may do with it as I please, just so long as it doesn't hurt anyone else.

I still need to read a lot more on this as I said. But human beings deserve respect and that respect can't be grounded in what use they have to others or the state.
 
For me, the most interesting part of a moral framework is how it deals with inaction and moral obligation.

The trolley problem is probably the most famous example, and it has a lot of variations that can poke holes in many rigid moral frameworks.

In the simplest expression of the problem: suppose five people are going to die unless you take a specific action. This action will cause one person to die, but will save the five.

The thing that I like about the problem is that you can just keep tilting the scales until you start getting odd results.

For example: according to strictly utilitarian ethics, it is immoral to do nothing (and thus the five people will die), even if the situation is not your fault, and you only just happened upon it.
If the moral framework you are using necessitates taking the action, then you tilt it further.
What if one innocent person will die if you take an action that you believe will save the five, but you are mistaken, and they die anyway? Now six people are dead, instead of five, as a result of your actions.

If you lean the other way, and your moral framework would necessitate you doing nothing (thus allowing the five to die), what if the one is an innocent rabbit instead of a person?

Personally, I lean towards the camp that thinks inaction should not be considered immoral under (most) circumstances, but even then, I'm aware that this starts to break down when you push it far enough.
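
Just to make that "tilting" concrete, here's a toy sketch of the strict expected-casualties view; all the numbers and the chance-of-being-mistaken values are made up purely for illustration, not part of the original problem.

```python
# Toy expected-casualty comparison for the trolley variant above.
# Every number here is an illustrative assumption, not a claim about the real problem.

def expected_deaths_if_acting(p_mistaken: float) -> float:
    """Pull the lever: the one person dies for certain; if you are mistaken,
    the five die anyway."""
    return 1 + p_mistaken * 5

def expected_deaths_if_waiting() -> float:
    """Do nothing: the five die."""
    return 5.0

for p in (0.0, 0.5, 0.8, 0.9):
    print(f"chance of being mistaken = {p:.1f}: "
          f"act -> {expected_deaths_if_acting(p):.1f} expected deaths, "
          f"do nothing -> {expected_deaths_if_waiting():.1f}")

# A strict expected-value utilitarian keeps recommending action until the
# chance of being mistaken passes 0.8; after that, acting becomes worse,
# which is exactly the kind of odd result you get by tilting the scales.
```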
 
Indeed. In a situation like the classic trolley problem, inaction can be attributed to paralysis caused by the complexity of the problem; in other situations, we have the classic saying, "All that is necessary for Evil to prosper is for good men to do nothing."

I suppose I would definitely describe myself as a Voluntarist by that Machiavelli link from earlier, especially since the essay kinda surprised me by noting that there isn't some assumption of universal rules in Virtue Ethics. After all, if there isn't a universal standard for what Good or Evil are, then how can you define Good or Evil intentions? As a Christian, the deontological part of my system works under the postulate that, as God is Omniscient and morally impeccable, He can be assumed to know the consequences of our actions better than we do, and so establishes rules to cover our own ignorance. Those ignorant of those laws, or with inadequate personal reason to believe them, can be attributed good intentions in violating some of them (the ones that people debate over), but the laws exist because there is some consequence God is trying to spare us from. The ultimate goal of the system is to understand the reasoning behind the laws to the point that they become superfluous, but that doesn't do away with the basic deontological condition that Good is good and Evil is bad, and that that difference is important.

Someone said that Plato actually wasn't talking about intent when he laid out Virtue Ethics because he brought up Ignorance as part of culpability, but I don't think that necessarily contradicts intent being the basis of it unless he meant it universally. If there was no way for you to know something, I'd say it's unfair for you to be held responsible for it; the same goes if you could have known but weren't being irresponsible in remaining ignorant. If you really should have known the thing that makes things go wrong, though, that is sufficient to put you at fault. In other words, I think stupidity is more condemning than ignorance, by the definition that stupidity is poor use of the knowledge you have.

Of course, in terms of ethical systems, I am surprised at the blowback you can get for having one. I ended up seeing that when an ethics discussion came up in another thread and people were actually arguing in favor of eschewing morals; when I expressed surprise, they got all Ayn Rand on me and started going off on how Altruism is so unreasonable because why don't you donate your house to starving African kids and yada yada. It was a genuine shock to my system. I don't understand people who don't subscribe to Virtue Ethics on some level, and it was jarring to see people who not only weren't interested in being a good person but genuinely didn't see a paradox in saying "calling something evil is evil." I admit I'm paraphrasing, but if it's a strawman it's unintentional, because that's what I got out of their arguments.

I guess my problem with Utilitarianism is that, at least instinctively, I feel like its arguments are unrealistic. The Trolley Problem is by nature an extreme example; not every situation is like that, and the more you have to tilt it to poke holes in an Idealistic approach, the less compelling it gets. It also seems to assume incompetence in the other systems: of course a Virtue Ethics adherent is going to want to assure the most good for the most people, and if pushed into a Trolley Problem I think many of them would be able to pull the lever. And in world politics, as was mentioned earlier, I don't know that Utilitarians would be able to agree on which course would do the most good any better than Virtuists would be able to agree on which course held the better intentions.

I guess another part of Virtue Ethics is some amount of faith that focusing on good intent will eventually lead to a better outcome than otherwise; if nothing else, because eventually worse intentions will cloud judgement to the point that less good is done even by Utilitarian standards. This is highlighted by the fact that I think most Utilitarians will have a line that they still won't cross for the "greater good," and that line will almost always fall under Virtue Ethics.

I apologize for the big wall of text; I can get very verbose. To make up for it, there is a statement that came to me a while back that I think is suitably profound to cap this off:

If something isn't worth dying for, it isn't worth killing for.
 
I found this thread via your signature; since you're advertising it, I figured it's still OK to post to it.
Intent is the actual intended purpose of whatever action or person is being discussed. Positive represents an intent to help people or make the world a better place. Neutral indicates something or someone without much ambition either way, like "I'll go get something to eat, I'm hungry". Negative indicates deliberate intent to cause harm, either on a personal or general scope.
Your idea of intent doesn't work for me.
Firstly, because human actions usually have a set of motivations driving them and not a single intention.
Secondly, because some of these motivations are subconscious and not accessible to the rational mind, which may instead infer an intent based on the plan of action that is not the true driver of it.
Thirdly, because the "intended purpose" is just judged the same way the effect is, and you're abstracting "help" or "harm" from actual people.

If I take money from you, that helps me and harms you in equal amounts, at least materially. A selfish act has a positive to me and potentially a cost to everyone else; an altruistic act works the other way. If you want to judge based on that, you will want to state whom you want to come out ahead on this deal and whom you want to carry the cost; everyone agrees that actions that benefit everyone are good, and actions that cost everyone are bad, but that's not what we need ethics for.

The question of Intent has to be posed as
1. Who do you want to benefit from this decision?
2. Whom do you want to bear the cost of this decision?
3. Who do you not care about?

And yes, that implies that indifference is not ethically neutral.

The effect is interesting in its interplay with intent, for example:
* effect and intent: murder
* effect, no intent: negligent manslaughter
* intent, but no effect: attempted homicide
The absence of one or the other is somewhat redeeming.
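
For what it's worth, that interplay can be laid out as a little grid. This is just a sketch of the three cases listed above; the fourth cell (no intent, no effect) is filled in as my own assumption.

```python
# Sketch of the intent/effect interplay listed above.
# The (no intent, no effect) label is my own assumption; the other three follow the list.

def classify(intent: bool, effect: bool) -> str:
    labels = {
        (True, True): "murder",                    # effect and intent
        (False, True): "negligent manslaughter",   # effect, no intent
        (True, False): "attempted homicide",       # intent, but no effect
        (False, False): "no offence",              # neither (assumed fourth case)
    }
    return labels[(intent, effect)]

assert classify(intent=True, effect=False) == "attempted homicide"
```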

I think the main thing we want to know about effect is the answer to the question, how important was the initial decision? Did the decider take steps to ensure that the desired effect would ensue? Did they actually look for the intended benefits, did they measure the incurred costs, did they adjust their behaviour based on these results, or did they not care? Again, indifference is not ethically neutral.

For example, a pharmaceutical company develops a medication that is intended to benefit ailing people's health and the company bottom line. If they fail to react to people having side effects from that medication, that is an ethical issue with regard to the patients; if they fail to monitor the economics of production and market, that is unethical with regard to their stockholders. If you do not care about a cost or benefit that impacts your stated intent, we're going to assume that this intent is not serious and has no ethical relevance, because you don't really care about it.

Competence is related to that, but it's difficult because of human fallibility. Basically, if you're unable to adjust your actions to achieve the desired effect, you are incompetent, and it is unethical to cling to your decision. If you are aware that the effects of a decision do not match your intent, but you persist in it, then your decision has to be judged as if you intended this effect all along. Again, if you do not care about being able to match effect to intent, you are indifferent, and that is not ethically neutral.

Example: If you have no clue about bombs, but defuse one anyway, that is a horrendous thing to do because you could have blown everyone up, but you didn't care. It gets ethically difficult when you thought you could, but really couldn't (Dunning-Kruger).

Now you may be thinking: mendel finds indifference unethical, so we just care about every human equally, problem solved. But it isn't, because we lack the knowledge and the means to do that. I can only give my care to the people I am in contact with, and if I wanted, for example, to praise every human on the planet equally and individually, everyone would get a few seconds' worth, so any time I praise someone for longer than that, I am acting unethically. And that's unusable. So what you really want for a personal ethical system is something that lets you treat people unequally within certain confines. You need an ethical system that looks into "gray area" decisions and helps us balance costs and benefits to the actors involved, instead of assessing some one-dimensional abstract "positive" or "negative". It needs to examine how these ethics work on society as a whole, and how other people's decisions can offset the costs we incur.
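
As a rough back-of-the-envelope check of the "praise every human equally" point a few sentences back; the lifespan and population figures are assumptions, chosen only to show the order of magnitude.

```python
# Rough arithmetic behind the "praise every human equally" point above.
# Both figures are assumed round numbers, not measured data.

waking_seconds_in_a_life = 80 * 365 * 16 * 3600   # ~80 years at 16 waking hours a day
world_population = 8_000_000_000                  # roughly 8 billion people

seconds_per_person = waking_seconds_in_a_life / world_population
print(f"{seconds_per_person:.2f} seconds of attention per person")
# Comes out to a fraction of a second each under these assumptions, so the
# equal allotment is even tighter than a few seconds; praising anyone for
# longer than that would break strict equality.
```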

For example, in the trolley problem, we need to consider whether there is someone near the people on the tracks who can pull them off, and who needs the trolley to proceed predictably in order to make a correct choice. Can the group assist each other to get off the tracks in time, while the single victim is helpless? We are acting in a system, with incomplete information and other actors, and we need a personal ethical system that produces a "good" outcome under those conditions. (That's also why the trolley problem is a bad analogy for self-driving cars.)

-------

My personal, debatable take on this is that utilitarianism doesn't cut it, because utilitarians will re-decide once new information becomes available to them that may not be available to me. Basically, I need to always guard against them backstabbing me, and the overhead of that insecurity makes a utilitarian society inferior to one based on virtues and rules that keep social interactions predictable. Also, utilitarianism can't ethically base correct decisions on incomplete information without introducing prejudices to replace information (a prioris) or gambling (which betrays indifference).
 
Ethics of Care
While consequentialist and deontological ethical theories emphasize generalizable standards and impartiality, ethics of care emphasize the importance of response to the individual. The distinction between the general and the individual is reflected in their different moral questions: "what is just?" versus "how to respond?". Gilligan criticizes the application of generalized standards as "morally problematic, since it breeds moral blindness or indifference."

Ethics of care - Wikipedia


One of the most popular definitions of care, offered by Tronto and Bernice Fischer, construes care as "a species of activity that includes everything we do to maintain, continue, and repair our 'world' so that we can live in it as well as possible. That world includes our bodies, ourselves, and our environment". This definition posits care fundamentally as a practice, but Tronto further identifies four sub-elements of care that can be understood simultaneously as stages, virtuous dispositions, or goals. These sub-elements are: (1) attentiveness, a proclivity to become aware of need; (2) responsibility, a willingness to respond and take care of need; (3) competence, the skill of providing good and successful care; and (4) responsiveness, consideration of the position of others as they see it and recognition of the potential for abuse in care.
 