So, do we want to complete the chakra measurement research proposal for Kabuto now before the tournament?
I'd like that (means we can get started on feedback and revision before the tourney and get to it quicker), but since it's not urgent we can put it off for an update or two. Maybe do it and that truck seal thing in the same update?
 
Assuming I've understood you correctly, this is actually a perfect example of what I'm talking about. You are making the assumption that because someone is doing something that is suboptimal by your standards, they are either stupid or irrational. You are completely missing the much more likely option: They have different standards than you do, or constraints that you are not aware of.
Personal experience suggests very strongly that, no, this probably is a case of incompetence, either theirs or someone else's they're constrained to work under. The more I learn about a subject, generally the closer I get to being convinced that nobody has a clue, and everyone really is just totally confused. On a good day you might be able to convince me that 1-in-1000 computer scientists actually realizes how utterly insane the field is, for example, but numbers that low aren't enough to fix anything.

You can argue that this is just another kind of politics, and everyone's doing their best under their own world-model, but the point holds that if everyone's world models were actually sane none of these problems would exist. So that's where the issue is: in the minds, not the institutions!

We live in a world where over a dozen states use voting machines shown to be trivially insecure, a world where Intel built both the Itanium and iAPX 432, a world where Trump is the president, a world where the US almost nuked North Carolina, a world where worrying about AI risk is considered alarmism... incompetence is a thing all right.
 
Personal experience suggests very strongly that, no, this probably is a case of incompetence, either theirs or someone else's they're constrained to work under. The more I learn about a subject, generally the closer I get to being convinced that nobody has a clue, and everyone really is just totally confused. On a good day you might be able to convince me that 1-in-1000 computer scientists actually realizes how utterly insane the field is, for example, but numbers that low aren't enough to fix anything.

You can argue that this is just another kind of politics, and everyone's doing their best under their own world-model, but the point holds that if everyone's world models were actually sane none of these problems would exist. So that's where the issue is: in the minds, not the institutions!

We live in a world where over a dozen states use voting machines shown to be trivially insecure, a world where Intel built both the Itanium and iAPX 432, a world where Trump is the president, a world where the US almost nuked North Carolina, a world where worrying about AI risk is considered alarmism... incompetence is a thing all right.

It's funny. My personal experience is basically the reverse of that. It's that people are largely competent, as in, capable of solving problems effectively, but that they have no incentive to do so. Talking about incompetence as if it was an inherent trait of a person also seems ungenerous and needlessly fatalistic. Spectacular failures are often results of a critical mass of mistakes being independently made by a collection of people; more often, systemic issues are largely to blame. In either case, making a mistake that proved to have major consequences is not necessarily a sign of incompetence.
 
Here is the issue I have, personally: How do I know whether a character is being stupid or there are circumstances that prevent them from choosing the optimal route? I am talking in general terms now and not about the seal tax in particular since I am fine with the explanations given there.

Basically, I am trying to figure out how we collectively missed the possibility of the fourth-event stampede, which is clearly a case of a stupid actor being reckless and not thinking things through. On the one hand, your argument is that most of the time people are not stupid and we should stop assuming they are. Which is exactly what we did: none of us predicted Chunin candidates would be so stupid as to do this. Yet they did.

So logically that means that people sometimes are stupid, even in the rational setting; what I got from that retconned chapter and the previous ones is basically: "People are acting intelligently most of the time. Except when they are not."

So we have to acknowledge the fact that even when most of the world is being smart, there is always a chance that something or someone is still stupid about something; there is precedent, after all, for someone willing to risk WW4.

Suppose the following game was part of the chuunin exams: each person chooses a number N between 0 and 100 (inclusive), and you get points based on how close your guess was to 2/3 of the average of the Ns.

It's clear that "everyone puts in 0" is the only equilibrium. Do you think that's what happens? If the entrants are us and ISC, maybe. When the New York Times ran this, the average was 30.

(Note that, in addition to expecting some level of imperfect play, there are also sneaky strategic reasons not to put down 0: for example, to mess with Nara stinkers who'll put down 0, or to collude with teammates and friends ("I don't have a shot anyway, I'll put 100 and you put 20"). I think it's possible analogous ideas were in play wrt the stampede.)
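As an aside, the gap between the equilibrium and actual play in this game is often described with "level-k" reasoning: a level-0 player guesses more or less at random (averaging 50), and each higher level best-responds to the level below it by guessing 2/3 of that. A minimal sketch, assuming that level-0 baseline:

```python
# Level-k reasoning for the "guess 2/3 of the average" game.
# Level 0 guesses ~50 on average; level k guesses 2/3 of level k-1.
# Guesses shrink toward the unique equilibrium at 0, but only reach
# it in the limit of infinitely many levels of reasoning.

def level_k_guess(k: int, level0: float = 50.0) -> float:
    """Guess of a level-k reasoner, given a level-0 average guess."""
    return level0 * (2 / 3) ** k

for k in range(6):
    print(f"level {k}: {level_k_guess(k):.1f}")
```

An averaged guess around 30, like the New York Times result, sits between level 1 (33.3) and level 2 (22.2): most people do one or two rounds of "they'll guess X, so I'll guess 2/3 of X" and stop, rather than iterating all the way down to 0.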

Another fun example: Douglas Hofstadter's Luring Lottery, in which the readership of Scientific American suckered themselves out of 52 dollars of EV each (see "Metamagical Themas: Sanity and Survival" on Gwern.net).
 
Feel free to ignore me, but my watsonian justification for the explosive seal regulations is that they are a form of regulatory capture: the clans, or coalitions of clans, are already buying up the vast majority of tower-priced seals in order to build strategic stockpiles and furnish their clan-nin with cheap seals, fucking over clanless-nin in the process. Seems remarkably consistent with the crapsack caste-based society we've all come to love.
 
We live in a world where over a dozen states use voting machines shown to be trivially insecure, a world where Intel built both the Itanium and iAPX 432, a world where Trump is the president, a world where the US almost nuked North Carolina, a world where worrying about AI risk is considered alarmism... incompetence is a thing all right.

All of those things are results of people starting from a different value system than you. (Except maybe the Intel one).
 
All of those things are results of people starting from a different value system than you. (Except maybe the Intel one).
My experience is that most people laugh about AI risk because they misjudge the risk and haven't really thought it through, not because they actually think the short-term gains outweigh the existential risk. Of course, some people merely disagree about the risk, but they don't tend to be the people calling it alarmism.

E: Actually, admittedly I have heard a fair number of people who don't think (or don't think they think) that it would be bad for everyone to die off in the night, but don't get me started on that...
 
As far as I can tell, the major downside to solid shot Macerators is that we can't apply any buffs to the roll?

E: For Hazou's ranged options, to clarify
 
As far as I can tell, the major downside to solid shot Macerators is that we can't apply any buffs to the roll?

E: For Hazou's ranged options, to clarify
Yes. If we adjusted it to fire a single rock and emerge from the seal gradually -- so as to push things above the seal away instead of failing to activate, or worse -- we could use it with another seal (i.e. explosive, banshee, etc.) for the same effect and same action economy (two supplementals) as it takes for Kei to throw, though.
 
Yes. If we adjusted it to fire a single rock and emerge from the seal gradually -- so as to push things above the seal away instead of failing to activate, or worse -- we could use it with another seal (i.e. explosive, banshee, etc.) for the same effect and same action economy (two supplementals) as it takes for Kei to throw, though.
*blinks*

How do we do that again?
 
*blinks*

How do we do that again?
Place the seal to be attached with some sort of sticky substance above the macerator, activate both. Rock emerges, is sent flying with the other seal attached.

This has the added bonus of, to observers, looking as if we performed a hand seal-less jutsu to punch a rock and attached the seal to it on top of that.
 
My experience is that most people laugh about AI risk because they misjudge the risk and haven't really thought it through, not because they actually think the short-term gains outweigh the existential risk. Of course, some people merely disagree about the risk, but they don't tend to be the people calling it alarmism.

E: Actually, admittedly I have heard a fair number of people who don't think (or don't think they think) that it would be bad for everyone to die off in the night, but don't get me started on that...

Even with how people do risk calculations in regards to AI, it's a more complicated answer than just "people are stupid/incompetent"; you have to look at the culture within which this threat is taking place, a culture which habitually puts short-term benefits over long-term societal and environmental impacts. This is also the same culture which tends to enforce a "progress for progress's sake" mindset.
 
Place the seal to be attached with some sort of sticky substance above the macerator, activate both. Rock emerges, is sent flying with the other seal attached.

This has the added bonus of, to observers, looking as if we performed a hand seal-less jutsu to punch a rock and attached the seal to it on top of that.
Ah. Okay, so the draw there is an action economy thing (if I'm reading you correctly).

Hmmm. Thanks.
 
"people are stupid/incompetent"
Let me be a little careful here, I'm not calling people stupid.

stupid: having or showing a great lack of intelligence or common sense
incompetent: not having or showing the necessary skills to do something successfully

And from this clarification I respond to your post: a lack of competence because your culture has bad habits is still a lack of competence.
 
Let me be a little careful here, I'm not calling people stupid.

stupid: having or showing a great lack of intelligence or common sense
incompetent: not having or showing the necessary skills to do something successfully

And from this clarification I respond to your post: a lack of competence because your culture has bad habits is still a lack of competence.

They lack competence by your definition of competence, not their own. I can think of a lot of different justifications for why people should not be worried about AI as a threat, without even going into how likely a rogue AI is. That wouldn't make me incompetent for having a different value system than you (as, by your own account, the vast majority of people do); it would just mean I have different priorities.
 
They lack competence by your definition of competence, not their own. I can think of a lot of different justifications for why people should not be worried about AI as a threat, without even going into how likely a rogue AI is. That wouldn't make me incompetent for having a different value system than you (as, by your own account, the vast majority of people do); it would just mean I have different priorities.
Can you give an example so I know what you're talking about here? I'm struggling to think of a value difference that people regularly hold that would account for people branding AI risk as alarmism. Even the "I don't care if everyone dies" people still tend to understand that not everyone agrees with them...
 