Voting is open
Going to be honest, I didn't expect you of all people to be the spoilsport.

Hey.

That's mean.

:(

I'm totally down for pointless buffoonery, but I'd rather we not intentionally break the fourth wall in a manner that is likely to have IC consequences. I care about the story's integrity first and foremost.

...

[Χ] Bake OPSEC cookies with Ami while discussing WMD plans
  • Observe the strictest OPSEC cookies
 
Remember kids, it's not a fourth wall break if it's a psychotic break!
 
@Velorien

Is spending one FP on a declaration "Hazou/Jiraiya has some Ichiraku Ramen on them." and going through with this bit of trolling something you would allow?
 
I really dislike this. Firstly because it's such a tired trope and doesn't make for good scenes; secondly because it can really be disruptive when taken seriously / bordering on YOUTHSUIT; thirdly because it's out of character for Hazou.
Metafiction is not a tired trope, but fine, fine. No frivolity. Sorry, @Oneiros.
Also, have we considered how we're going to invite Ami to this? If she really is under pressure, going to her with "Hey, let's go to a completely OPSEC place and discuss important things" might not actually fly / might force her to commit in advance.
@Noumero One thing we could do is find another private room as Ami did in our previous meeting. Of course, it's gonna be a lot harder for us to do that...
Neither we nor Ami would be able to trust that "a private room" is actually secure, thanks to the Hyuuga and other unknown sensory specialists. Even the waters aren't completely secure: if, e.g., the Minami were forewarned, they could probably have compromised the location using their prisms.

We can't have Jiraiya bury us, as it would put Ami at his mercy and she'd be unlikely to agree. I suppose we could explain the need for privacy to her and, if she disagrees with our choice of location, listen to her suggestions? I'm not sure we'll be able to trust that her suggestions aren't compromised in her favour, however.
We spend a fate point to declare that we have some ramen from Ichiraku ramen and then share it with Ami, telling her that Teuchi made it for us.
Are we actually able to do that?
I apologize, but can you imagine my reac—
I care about the story's integrity first and foremost.
... I see. I suppose I should cross you off the list of prospective Anafabula Conspiracy members. A shame.
 

Well well well.
Seems like a place just opened up for a prospective member :ninja:
Truly, Ami's master plan is working... now I owe her a favor to win the help of a voter. Truly a... well, that's a secret.

But well, if the problem is the integrity of the story, we can just go back to "time went strange": that doesn't break the fourth wall, and it's just a local outbreak of Out energy caused by a sealing failure.
Probably just an Aspect of [UNPRONOUNCEABLE] entering the mortal plane for an attosecond; He/She/It/[COGNITOHAZARD] loves playing with time.
 
Disclaimer: I'm not an expert on psychology in any form.

You seem to have a strong opinion based at least in part on the results of the Milgram experiment.
If I understand you correctly, you argue that the Milgram experiment is strong-ish evidence for humans being easily swayed by greater-good arguments to commit what an outside-view person would describe as morally wrong.

From what I understand, the Milgram experiment is however an experiment in obedience to authority, attempting to show how people can ignore their moral compunctions in favor of not disobeying. It seems to play into human fears of disappointing/angering a perceived superior, instead of appealing to a sense of greater good.

If you have some psychology research to link that specifically tackles the greater-good argument, and finding/linking it isn't too great an effort, I might be interested in it. (Okay, so I tried to make this sentence not sound antagonising and I'm not sure whether I succeeded, so here's a note that it's intended to sound neutral-ish with a pinch of genuine interest.)

I also found this analysis of the Milgram experiment, which at a glance seems to present a more nuanced picture than what I recall having learned about the experiment in school: obedience seemed to have varied quite a lot depending on factors like peer pressure and distance(?).
The experiment is often misunderstood because people just listen to the short version. The nuanced version is that it was about how humans will listen to people in positions of apparent authority when they make greater good arguments. To make regular people do really evil things one or the other wasn't enough, it had to be both put together.

This should make sense if you think about other less controlled situations where humans engaged in horrible crimes against humanity. Germans in WW2 were told by their government that Jews were a threat to the nation. U.S. soldiers in Vietnam were told by their commander that the civilians of My Lai village were protecting the guerrillas and killing them all was the only way to save American lives, a pattern common to many other war crimes. People in the antebellum South were told by their church and government that blacks weren't capable of looking after themselves and needed to be slaves for their own good.

You see the pattern, I trust.

I feel like this argument assumes your highest priority is avoiding wrong action, rather than maximizing net good. "Greater good" arguments can also persuade people to make sacrifices in pursuit of a legitimately good end - for example, taxation used to fund public works programs is a case of imposing a personal cost on people to produce the greater good of projects requiring central coordination. You could argue that such arguments have on net tended to hurt more than help, but failing to even address their positive aspects seems like an oversight.
The particular kind of greater good argument to be concerned about is the doing a lesser evil to avoid a greater evil kind. Engage in this unethical medical research that will result in the horrible deaths of hundreds now and it will save many thousands later, we promise. That sort of heinous atrocity that people convince themselves is right because it's justified.

The concept of taxation isn't really the sort of lesser evil/greater good I'm talking about. That's just everyday cooperation to create a civilization, and there are other ways of framing it that support the idea besides the greater good style of argument. If everybody pays for services then everybody gets services, and we don't want to live in a world where that doesn't happen. If anyone really doesn't want to participate they're free to leave and live somewhere that doesn't have civilization.

The essence of what I'm suggesting is that greater good arguments should be ignored because they have undue influence on the human brain's ability to make decisions. Decisions should be made exclusively on other types of arguments to avoid accidentally committing horrible wrongs in the mistaken belief that they are justified because the horrible wrongs lead to the best outcome.
 
Greater Good arguments were also used by the good guys, to get their populations to do horrific things for more important outcomes. We largely don't criticize them for it, though, because their Greater Good was real and their path to it was rationalizable.

For all I normally find myself standing on your side of the debate in this thread, this lie does not kill anyone, it does not fund genocides, it does not sink boats, it does not tarnish social norms. When your Greater Good says the "ultimate goal must definitely be the removal of the Jews altogether", sure, be suspicious, but when it's instead "stop a genocide and protect civilians from chakra beasts", it's probably not the kind of argument to throw out.
 
I respect your candor, thank you.

There is a fallback argument that might persuade you if distrust of greater-good arguments doesn't: a more practical, "religion is messy and bites the hand that meddles in it" argument. I really don't like the unfathomable multitude of ways that trying to use Keiko's position of religious significance as leverage to alter the villagers' society to our desires could go wrong.

She could be revealed as a fraud and the villagers violently turn on her. There could be a bloody schism. The pangolins could start telling the villagers to do things we don't like, or make us do it if we want to keep the summoning contract. We might have to resolve questions or disputes with no good answer where people get hurt either way. There could be a variety of unpredictable effects if the religious believers encounter the outside world. We might have to take action to enforce religious unity in nasty ways or let our control fragment.

I want to influence societies with generosity, compassion, and honesty. Not by telling them that yes, I am their messiah, and here's how to live a better life. Even if it would work, which is far from a guarantee.
 
@Velorien

Is spending one FP on a declaration "Hazou/Jiraiya has some Ichiraku Ramen on them." and going through with this bit of trolling something you would allow?
In principle, yes, that is exactly the kind of minor detail you can spend FP on. More of an issue is that there's no reason for Hazō, who's spent a total of a couple of months living in Leaf, to have Ichiraku Ramen promoted to his attention. Jiraiya might plausibly have some for reasons of Naruto, but again, how would Hazō know?
 
There is a fallback argument that might persuade you if distrust of greater good arguments doesn't. It's a more practical argument.
I should have clarified this earlier, but what I said wasn't intended as an endorsement of the idea. It seems pretty half-baked to me, and I don't see a clear path to it solving the Pangolin issue. I agree with your argument modulo unimportant specifics.
 
In principle, yes, that is exactly the kind of minor detail you can spend FP on. More of an issue is that there's no reason for Hazō, who's spent a total of a couple of months living in Leaf, to have Ichiraku Ramen promoted to his attention. Jiraiya might plausibly have some for reasons of Naruto, but again, how would Hazō know?
Didn't Hazou know about Keiko's not!date with Shikamaru, where they ended up eating at Ichiraku Ramen instead of the Yabai Cafe?

Edit: Also, we actually did end up eating at a ramen place back when we first went to Leaf, though the name was never said.
 
The particular kind of greater good argument to be concerned about is the doing a lesser evil to avoid a greater evil kind. Engage in this unethical medical research that will result in the horrible deaths of hundreds now and it will save many thousands later, we promise. That sort of heinous atrocity that people convince themselves is right because it's justified.

The concept of taxation isn't really the sort of lesser evil/greater good I'm talking about. That's just everyday cooperation to create a civilization, and there are other ways of framing it that support the idea besides the greater good style of argument. If everybody pays for services then everybody gets services, and we don't want to live in a world where that doesn't happen. If anyone really doesn't want to participate they're free to leave and live somewhere that doesn't have civilization.

The essence of what I'm suggesting is that greater good arguments should be ignored because they have undue influence on the human brain's ability to make decisions. Decisions should be made exclusively on other types of arguments to avoid accidentally committing horrible wrongs in the mistaken belief that they are justified because the horrible wrongs lead to the best outcome.

"Everyday cooperation to create a civilization" is just another way of phrasing "doing a lesser evil (taking people's stuff) to produce a greater good (civilization)." Yes, you're theoretically free to fuck off into the wilderness and not pay taxes, but you're not free to refuse to cooperate while still reaping the benefits. Considering that the latter situation is theoretically ideal for you personally, being forcibly deprived of that option constitutes an evil being done to you.

Really, what I think it comes down to is that taxation as a prerequisite to existing in civilization isn't the sort of extreme evil (unethical medical experimentation, genocide, etc.) that you're concerned with. To that I'd then say that just banning those particular actions seems more suitable than banning the argument that could lead to them. At least, unless you think there's a reason other than the greater good that such things should be permissible?
 

Not really, no. And I agree, we do have strong moral and ethical rules against things like torturing people or killing innocents. Greater good arguments are just dangerous because they can be used to convince regular people to ignore the usual moral constraints. Keeping to the usual moral constraints regardless of the arguments for violating them is exactly the safe course to avoid committing well-intentioned atrocities.
 
The experiment is often misunderstood because people just listen to the short version. The nuanced version is that it was about how humans will listen to people in positions of apparent authority when they make greater good arguments. To make regular people do really evil things one or the other wasn't enough, it had to be both put together.

This should make sense if you think about other less controlled situations where humans engaged in horrible crimes against humanity. Germans in WW2 were told by their government that Jews were a threat to the nation. U.S. soldiers in Vietnam were told by their commander that the civilians of My Lai village were protecting the guerrillas and killing them all was the only way to save American lives, a pattern common to many other war crimes. People in the antebellum South were told by their church and government that blacks weren't capable of looking after themselves and needed to be slaves for their own good.

You see the pattern, I trust.


The particular kind of greater good argument to be concerned about is the doing a lesser evil to avoid a greater evil kind. Engage in this unethical medical research that will result in the horrible deaths of hundreds now and it will save many thousands later, we promise. That sort of heinous atrocity that people convince themselves is right because it's justified.

The concept of taxation isn't really the sort of lesser evil/greater good I'm talking about. That's just everyday cooperation to create a civilization, and there are other ways of framing it that support the idea besides the greater good style of argument. If everybody pays for services then everybody gets services, and we don't want to live in a world where that doesn't happen. If anyone really doesn't want to participate they're free to leave and live somewhere that doesn't have civilization.

The essence of what I'm suggesting is that greater good arguments should be ignored because they have undue influence on the human brain's ability to make decisions. Decisions should be made exclusively on other types of arguments to avoid accidentally committing horrible wrongs in the mistaken belief that they are justified because the horrible wrongs lead to the best outcome.
I mean, I support what most would consider unethical medical research too, so long as it's not like, homeopathic bullshit medical research. (With the patients' informed consent, of course, but still -- I can guarantee there's plenty of people out there who'd go for it if it were an option.)
 

Well, only because you don't recognise the atrocities you commit by sticking to the status quo until they stop being status quo. If you reject the greater good in general, you can't, say, partake in a civil war to abolish slavery.

(Of course, evaluating whether a civil war to abolish slavery is actually the best course is beyond me, and anyone at any time really; but you can't just reject it a priori because it's a greater-good argument.)
 
For those of you just joining us: Hi there! We're currently experiencing Consequentialism vs Deontology, Episode VII: The Greater Good Awakens.

Episode VIII is scheduled for the same time next month, where once again no one will change anybody's mind and everyone involved will feel disgusted by and disappointed in the other side!
 
I'm willing to do a lot to get Hidden Mountain to win the Best Village award.

 