I have decided that it would be funniest if the Angel gets tired of OL's behavior and, instead of doing anything extreme or confrontational, calls in a favor from Eris. Suddenly Paul has to deal with Eris standing over him, trying to convince him to kowtow to the nice angel.

And how is that in any way in-character for the goddess of discord?
 
I'm not 100% sure that's true. They have a clear emphasis on defining one's utility function and they recognize that different agents have different utility functions. That sounds like cultural relativism to me. They just aren't absolute cultural relativists -- they believe in a small, relatively well-defined set of absolute ideals, and while situational things might mean the local optimum doesn't align with those ideals, they have disdain for peoples who don't think it's even worth trying.
Okay, what is cultural relativism, and why is it bad?

I mean, I could just google it, but I probably wouldn't come up with the exact same definition as is being used here.
 
I haven't actually read much LessWrong, so this isn't a rhetorical question: They believe in objective right and wrong, nigh-omnipotent beings (if you mean superintelligences, they don't exist currently), and an afterlife?
They believe that superintelligences might exist outside of the realm that we can actively influence.

EDIT: This is hard to read. Let me substitute "Omega" for "superintelligent beings".

They believe in a principle known as acausal trade wherein if two rational agents can simulate each other sufficiently well then they don't have to actually interact in order to make a deal, because if the simulation is that good then if they DID meet they WOULD agree with the deal that was struck.
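To make that concrete, here's a toy sketch of my own (a depth-limited simulation trick, not anything formal from LessWrong) of two agents that never interact yet land on the same deal because each runs a simulation of the other:

```python
# Toy sketch of the acausal-trade intuition (my own illustration; NOT
# LessWrong's actual formalism). Each agent decides by running a bounded
# simulation of the other, so neither ever "really" interacts.

def alice(model_of_other, depth=3):
    """Cooperate iff a bounded simulation predicts the other party cooperates."""
    if depth == 0:
        return "cooperate"  # assumed default at the recursion floor
    prediction = model_of_other(alice, depth - 1)
    return "cooperate" if prediction == "cooperate" else "defect"

def bob(model_of_other, depth=3):
    """Symmetric agent using the same decision rule."""
    if depth == 0:
        return "cooperate"
    prediction = model_of_other(bob, depth - 1)
    return "cooperate" if prediction == "cooperate" else "defect"

# Each agent only consults its internal model of the other, yet both arrive
# at the same deal -- which is the sense in which the trade is "acausal".
print(alice(bob), bob(alice))  # -> cooperate cooperate
```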

They believe that since Omega can run simulations of high fidelity, then you have to treat simulations as equivalent to reality since this might BE a simulation. And if it IS a simulation then Omega can reward or punish the individuals in the simulation, including rewarding the simulated version of you after your death, which -- from your perspective inside the simulation -- is indistinguishable from actual heaven. And Omega might demand that you contribute to making sure it gets created in the future, and if it does get created and you didn't help it, your simulated self gets punished for it -- indistinguishable from actual hell. So since you want your simulated self to be rewarded or not punished (because you might BE that simulated self and not know it) you have to play nice with Omega.

Objective right and wrong is a somewhat more nebulous thing, not actually tied in with the above. They believe that a moral actor will always act to bring the most benefit to the most people, and they have a concept of utility functions that describe what a person or group considers to be "benefit". They believe that living is better than not living (and by extension that death is an obstacle to be overcome) and that rational thinking is better than superstition. One big difference between the LW idea of morality and that of other religions is that LW recognizes that there is no global rule that says that certain actions are always moral or always immoral; rather, they believe that you can objectively determine the most moral act you can do in a given situation.
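To make the utility-function idea concrete, here's a toy sketch (the weights and scenarios are entirely my invention, not any official LW formula) of "objectively determine the most moral act in a given situation" as expected-utility maximization:

```python
# Minimal sketch of picking the "most moral act" as expected-utility
# maximization. All weights, probabilities, and outcomes are hypothetical.

def utility(outcome):
    # Hypothetical utility function: lives saved count for, suffering against.
    return 10 * outcome["lives_saved"] - 3 * outcome["suffering"]

def expected_utility(action):
    # Each action maps to a list of (probability, outcome) pairs.
    return sum(p * utility(o) for p, o in action["results"])

actions = [
    {"name": "intervene", "results": [(0.8, {"lives_saved": 5, "suffering": 2}),
                                      (0.2, {"lives_saved": 0, "suffering": 6})]},
    {"name": "do_nothing", "results": [(1.0, {"lives_saved": 0, "suffering": 4})]},
]

best = max(actions, key=expected_utility)
print(best["name"], expected_utility(best))  # -> intervene, ~31.6
```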

Okay, what is cultural relativism, and why is it bad?
Cultural relativism ISN'T bad. It's the notion that different cultures can have different ideals instead of all cultures having to conform to the same ideals. On the surface some of their publications look like they're distinctly anti-relativist, but at least from what I've seen that's more of a framing tool to try to get the reader to be able to conceptualize a truly alien mindset. Unfortunately, people who just read them and go with their gut reactions will come away feeling like the stories were portraying the alien mindset as inferior and immoral, when the INTENT is that the reader is supposed to reflect on that gut reaction and then use that to analogize how the other cultures must also feel.
 
Okay, what is cultural relativism, and why is it bad?

I mean, I could just google it, but I probably wouldn't come up with the exact same definition as is being used here.

My understanding is that it's the idea that one culture should not judge or try to impose its values on another culture. Stopping there, it's not bad. Live and let live is a fine idea. The problem comes when it is taken to an extreme, like the Babyeaters in the story referenced earlier. Of course, few people who identify as cultural relativists would support such an absolute position, just like few of the anti-relativists would claim to have an absolutely correct set of values that all beings should follow. Unfortunately, members of both sides do tend to strawman the other.
 
Basically, guesswork. Objectively, the SI has no way to determine which self-proclaimed prophets are actual prophets, and which double-Jonahs are prophets and keeping quiet about it. However, it isn't an unreasonable guess.
Given this is a DC world, it kind of is an unreasonable guess.

Because as far as meta-knowledge is concerned, the Silver City doesn't have prophets, and even the Bible tends to get a lot more wrong than it does right.
 
They believe that superintelligences might exist outside of the realm that we can actively influence.

EDIT: This is hard to read. Let me substitute "Omega" for "superintelligent beings".

They believe in a principle known as acausal trade wherein if two rational agents can simulate each other sufficiently well then they don't have to actually interact in order to make a deal, because if the simulation is that good then if they DID meet they WOULD agree with the deal that was struck.

They believe that since Omega can run simulations of high fidelity, then you have to treat simulations as equivalent to reality since this might BE a simulation. And if it IS a simulation then Omega can reward or punish the individuals in the simulation, including rewarding the simulated version of you after your death, which -- from your perspective inside the simulation -- is indistinguishable from actual heaven. And Omega might demand that you contribute to making sure it gets created in the future, and if it does get created and you didn't help it, your simulated self gets punished for it -- indistinguishable from actual hell. So since you want your simulated self to be rewarded or not punished (because you might BE that simulated self and not know it) you have to play nice with Omega.

Objective right and wrong is a somewhat more nebulous thing, not actually tied in with the above. They believe that a moral actor will always act to bring the most benefit to the most people, and they have a concept of utility functions that describe what a person or group considers to be "benefit". They believe that living is better than not living (and by extension that death is an obstacle to be overcome) and that rational thinking is better than superstition. One big difference between the LW idea of morality and that of other religions is that LW recognizes that there is no global rule that says that certain actions are always moral or always immoral; rather, they believe that you can objectively determine the most moral act you can do in a given situation.


Cultural relativism ISN'T bad. It's the notion that different cultures can have different ideals instead of all cultures having to conform to the same ideals. On the surface some of their publications look like they're distinctly anti-relativist, but at least from what I've seen that's more of a framing tool to try to get the reader to be able to conceptualize a truly alien mindset. Unfortunately, people who just read them and go with their gut reactions will come away feeling like the stories were portraying the alien mindset as inferior and immoral, when the INTENT is that the reader is supposed to reflect on that gut reaction and then use that to analogize how the other cultures must also feel.
Wait, does lesswrong believe that we should treat reality as if it's a simulation? I thought that the general conclusion was, even if simulation theory is true, we should just treat reality as if it's real because it's simpler and there's no way to tell anyway. Also, I would think you'd run into Pascal's Wager issues if you assume reality is a simulation and is run by an entity that will punish/reward. Sure, Omega might reward you for helping the most people. Or it might punish you because the simulation was actually to test the maximum number of paperclips a single person could accumulate, and you scored very poorly.
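To put toy numbers on that last point (all invented, just to show the shape of the problem): once you allow one tiny-probability simulator that rewards a behavior, you can posit another that punishes it, and the exotic terms mostly cancel.

```python
# Toy "many gods" objection to simulation-flavored Pascal's Wager reasoning.
# All probabilities and payoffs are made up for illustration.

HUGE = 10**9  # stand-in for a heaven/hell-sized payoff

hypotheses = [
    # (probability, payoff of devoting your life to "helping the most people")
    (1e-6, +HUGE),       # Omega rewards altruists
    (1e-6, -HUGE),       # the paperclip-test simulator punishes non-maximizers
    (1.0 - 2e-6, -1),    # mundane world: the devotion just costs you a little
]

expected_value = sum(p * payoff for p, payoff in hypotheses)
print(expected_value)  # ~ -1: the two exotic hypotheses wash each other out
```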

Acausal trade makes sense, I'd agree with that.

I'm surprised that they went beyond the concept of utility functions, and went on to describe "morality". I would think the conclusion would be that "morality" simply isn't a sensible descriptor - reality is composed of actors following defined utility functions, and "good", "bad", and the scale you place those words on, "morality", have no true basis in reality.
 
They believe that since Omega can run simulations of high fidelity, then you have to treat simulations as equivalent to reality since this might BE a simulation. And if it IS a simulation then Omega can reward or punish the individuals in the simulation, including rewarding the simulated version of you after your death, which -- from your perspective inside the simulation -- is indistinguishable from actual heaven. And Omega might demand that you contribute to making sure it gets created in the future, and if it does get created and you didn't help it, your simulated self gets punished for it -- indistinguishable from actual hell. So since you want your simulated self to be rewarded or not punished (because you might BE that simulated self and not know it) you have to play nice with Omega.
Roko's Basilisk, right. But didn't Yudkowsky denounce it as a modern version of Pascal's Wager, which it basically is? And it's not any more convincing, either. Besides, you have to assume a number of things about said future AI, and every additional detail required to make Roko's Basilisk a compelling argument makes it less and less likely to actually occur.

A quick search shows a wiki page on the topic, which includes Yudkowsky denouncing the whole idea and a mention that it was mostly rejected on LessWrong.

So it kind of seems like you're making generalizations from false premises.
 
Wait, does lesswrong believe that we should treat reality as if it's a simulation?
Yes. In essence, they take the view that if it is possible to simulate a universe (or as much of a universe as we can detect; given that FTL isn't a thing, you can have lower resolution on things that humans can't ever see anyway) then it is much more likely that we are inside a simulation. The reasoning goes something like this:

1. Assume that simulating a universe is possible.

2. Assume that people who are capable of simulating a universe will do so for any number of reasons. (anthropology, testing economic theories, video games, whatever)

3. Given the above, it is possible that we are in a simulation.

4. Given the above, it is possible that the people simulating us are also in a simulation.

5. Given the above, there is no particular reason to assume that we are at the top of the stack. (ie, that we are not a simulation)

6. Given that any universe stack would by definition have many more simulated instances than real instances, because there is only one real instance while there can be many simulated universes, it is significantly more likely that we are a simulated universe. (See the toy calculation after this list.)

7. Therefore, as it is more likely that we are a simulation, we should treat that as true.
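To put made-up numbers on step 6 (nothing here is canon LW material, just a quick toy calculation):

```python
# Toy version of step 6: even a modest number of simulations per universe,
# nested a few levels deep, makes simulated instances vastly outnumber the
# single real one. The parameters are arbitrary.

sims_per_universe = 3  # hypothetical simulations run at each level
depth = 5              # how deep the nesting goes in this toy stack

real = 1
simulated = sum(sims_per_universe ** level for level in range(1, depth + 1))

p_simulated = simulated / (real + simulated)
print(simulated, round(p_simulated, 4))  # -> 363 0.9973
```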
 
Therefore, as it is more likely that we are a simulation, we should treat that as true.

That sounds like the Nick Bostrom argument. The question becomes: what does "we should treat that as true" mean? It may be something as simple as directing our physics research to look for signs, but otherwise it shouldn't affect anyone's daily life. Another possibility is that we should all commit suicide because life is meaningless, or that we will all get uploaded to the higher-level reality. I think the first scenario is the more likely one for most people, though those are hardly the only two possible reactions, and different people will react differently.
 
Wait, does lesswrong believe that we should treat reality as if it's a simulation? I thought that the general conclusion was, even if simulation theory is true, we should just treat reality as if it's real because it's simpler and there's no way to tell anyway.
If you can't tell the difference, then treating reality as if it were a simulation and treating a simulation as if it were reality are the same thing. In both cases you should act as if your current life is all you have -- because from your own perspective, the end of the simulation is still death to you -- but you should also act as if your actions reflect on some other version of yourself. See below...

Also, I would think you'd run into Pascal's Wager issues if you assume reality is a simulation and is run by an entity that will punish/reward. Sure, Omega might reward you for helping the most people. Or it might punish you because the simulation was actually to test the maximum number of paperclips a single person could accumulate, and you scored very poorly.
Roko's Basilisk, right. But didn't Yudkowsky denounce it as a modern version of Pascal's Wager, which it basically is? And it's not any more convincing, either. Besides, you have to assume a number of things about said future AI, and every additional detail required to make Roko's Basilisk a compelling argument makes it less and less likely to actually occur.
You are both correct about this, and Yudkowsky did denounce it in the sense that you shouldn't give up all of your worldly possessions to try to make the Basilisk real. However, the underlying philosophy that LEADS to the Basilisk is still sound -- the conclusion is "you should resolve to never give in to acausal blackmail, because if the Basilisk determines that nothing it could do would have made you support it, then there would be no point in sending your simulated selves to hell."
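To spell out why that resolution works, here's the payoff logic in toy form (the numbers are mine and purely illustrative):

```python
# Toy payoff sketch of "never give in to acausal blackmail": if the
# blackmailer's simulation of you predicts you won't cave, making (or
# carrying out) the threat costs it something and gains it nothing.

def blackmailer_gain(victim_caves, threat_cost=1, extortion_value=100):
    # The blackmailer only profits if the victim's policy is to give in.
    return (extortion_value if victim_caves else 0) - threat_cost

print(blackmailer_gain(victim_caves=True))   # ->  99: blackmail worth making
print(blackmailer_gain(victim_caves=False))  # ->  -1: no point threatening you at all
```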

I confess to overgeneralizing. I was trying to convey the basic idea in broad strokes, not communicate the full nuances of an idea that I don't personally subscribe to.

A more relevant example might be this: If you're a simulation, then you might be a simulation of someone. And if you're a simulation of someone, it's awfully hard to tell the difference between that person and you. That person might as well BE you. And since you know YOU don't want to suffer, you know that the person you're a simulation of also doesn't want to suffer. So if you don't want the "real" version of you to suffer, you should behave in such a way that someone watching the simulation won't use your actions as evidence against the real you.

Equivalently, you might be the person being judged, and not know it. You want to make sure your mindset is such that any accurate simulation of you would behave in a way that wouldn't get YOU in trouble.
 
They believe in a principle known as acausal trade wherein if two rational agents can simulate each other sufficiently well then they don't have to actually interact in order to make a deal, because if the simulation is that good then if they DID meet they WOULD agree with the deal that was struck.
Broadly correct. For related thought experiments, see Parfit's Hitchhiker and Newcomb's Problem.
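For anyone who hasn't run into Newcomb's Problem, here's the usual expected-value arithmetic with the standard payoffs (the 99% predictor accuracy is just a number I picked):

```python
# Newcomb's Problem, standard payoffs: the opaque box holds $1,000,000 iff the
# predictor expected you to take only that box; a transparent box always holds
# $1,000. Expected winnings given a 99%-accurate predictor (my chosen figure).

def expected_payoff(one_box, predictor_accuracy=0.99):
    p_big_box_full = predictor_accuracy if one_box else 1 - predictor_accuracy
    big = 1_000_000 * p_big_box_full
    small = 0 if one_box else 1_000
    return big + small

print(expected_payoff(one_box=True))   # -> 990000.0
print(expected_payoff(one_box=False))  # ->  11000.0 (why LW-style decision theory one-boxes)
```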

They believe that since Omega can run simulations of high fidelity, then you have to treat simulations as equivalent to reality since this might BE a simulation. And if it IS a simulation then Omega can reward or punish the individuals in the simulation, including rewarding the simulated version of you after your death, which -- from your perspective inside the simulation -- is indistinguishable from actual heaven. And Omega might demand that you contribute to making sure it gets created in the future, and if it does get created and you didn't help it, your simulated self gets punished for it -- indistinguishable from actual hell. So since you want your simulated self to be rewarded or not punished (because you might BE that simulated self and not know it) you have to play nice with Omega.
The idea is more that A] certain forms of decision theory lead to this conclusion (which is basically analogous to converting to Christianity in response to hearing about Pascal's Wager) and that B] that's obviously not the correct answer (even if we can't explicitly articulate what's wrong with Pascal's Wager, most people agree that it's clearly unconvincing), and therefore C] we need to come up with a model of decision theory that responds more appropriately to Pascal's Wager-type situations. They don't endorse the whole 'AI will torture you forever if you don't cave to its demands' thing any more than ethicists endorse running people over with trolleys.

Objective right and wrong is a somewhat more nebulous thing, not actually tied in with the above. They believe that a moral actor will always act to bring the most benefit to the most people, and they have a concept of utility functions that describe what a person or group considers to be "benefit". They believe that living is better than not living (and by extension that death is an obstacle to be overcome) and that rational thinking is better than superstition. One big difference between the LW idea of morality and that of other religions is that LW recognizes that there is no global rule that says that certain actions are always moral or always immoral; rather, they believe that you can objectively determine the most moral act you can do in a given situation.
This is not particularly accurate. I mean, Yudkowsky is himself a utilitarian, so he makes utilitarian assumptions. But objective right and wrong doesn't really come into it - it's just the standard utilitarian line that for any given utilitarian, there ought to be a consistent and decidable utility function which that utilitarian can in principle use to decide which actions are right and which are wrong. Also the assumption that you (the reader) prefer life to death and pleasure to pain and knowledge to ignorance because obviously.

But, moreover: Why are we still talking about this?
 
This is not particularly accurate. I mean, Yudkowsky is himself a utilitarian, so he makes utilitarian assumptions. But objective right and wrong doesn't really come into it - it's just the standard utilitarian line that for any given utilitarian, there ought to be a consistent and decidable utility function which that utilitarian can in principle use to decide which actions are right and which are wrong. Also the assumption that you (the reader) prefer life to death and pleasure to pain and knowledge to ignorance because obviously.
Yudkowsky isn't the be-all and end-all of the community. What I describe is a reasonable generalization of how a lot of members of the community see it, especially considering how it gets pointed out that people who DON'T have life > death in their utility function need to be corrected. Also, "suffering" is explicitly defined as having negative utility, and if it had positive utility it wouldn't be suffering. (The fact that pain != suffering has already been mentioned in this discussion.)

But, moreover: Why are we still talking about this?
Because there's no other subject of discussion distracting us away from it? :p
 
Moved LessWrong discussion to PM to preempt being yelled at for being off topic. Quote/Message me if you want to be involved.
I would actually like to dig in to this with you, if you don't mind.

I don't think that "Genuinely regret that said theft was necessary," is at all what the angel is going for.
  1. Angels are probably deontologists, to a degree that anyone who isn't Kant would consider insane. I think they entirely reject the concept of "necessary evils." You might have heard of the principle of double effect, but that has a bunch of necessary components which aren't present here. The act which causes good and bad effects can't be evil in and of itself, and theft qualifies as evil.
  2. The angel isn't asking for an apology; he is asking OL to repent of his sin. That means
    1. Acknowledging that what you did wasn't just wrong, it was a sin, caused by a moral flaw within you.
    2. You will strive to fix that flaw, so that you won't do it again in the future.
None of these apply to OL right now. I don't know if they can ever apply to him again, because of all the weirdness of orange enlightenment.
To use your example, I don't think the angel would consider killing someone in self defense to be a sin, and doing so would not require repentance.

TLDR: The repentance this angel is looking for requires rejecting/abhorring/denouncing whatever part of yourself led you to sin. OL can't do that ever again, because he has explicitly accepted all of his desires.
I don't think we have enough information on the angels to say this with any certainty as yet.
 
You think just explaining "I needed the fruit to free thousands of souls from a hell plane, as well as remove a powerful demon, and create a powerful force for good" would help?
 
2.10 "You simply have to repent of your sin and ask the Lord's forgiveness. And he shall be merciful and grant it, and your sin will be wiped away."

I stare, the corners of my mouth turning downwards. "Well that's not going to happen."

fucking hell paul. you've bent more for less. You straight up stole powerful magic from them and faked a demonic incursion to get away. You have caused them more than enough bother for what is essentially an apology and a face-saving bit of fiction. The angel with the burning sword has been aggressively reasonable up until now, don't turn this into a life and death fight because you don't like his boss.
 
fucking hell paul. you've bent more for less. You straight up stole powerful magic from them and faked a demonic incursion to get away. You have caused them more than enough bother for what is essentially an apology and a face-saving bit of fiction. The angel with the burning sword has been aggressively reasonable up until now, don't turn this into a life and death fight because you don't like his boss.
As has been mentioned several times, 'repenting' is more than just saying 'sorry, my bad'; it's admitting that what you did was wrong and that you won't do it (or anything like it) again.

And while OL regrets the aggravation he caused the angel and the Silver City, he doesn't regret actually taking the fruit or the results of that theft.
He's not going to lie to the angel (who would likely spot the lie anyway) and say he repents when he doesn't actually.
 
Yeah, no. If you fake repentance, it's not actually repentance. The angel is asking for more than simply a polite fiction: he is asking that Paul genuinely regret the action and strive to do it no more. Paul can do neither of these.

He could probably regret the theft, since he now has reason to believe it was unnecessary. But you have a point about how he's not really capable of guilt as it's typically thought of.
 
I dunno. Maybe he could try to apologize before going "screw you, let's fight!".

I mean, up to this point he has tried literally nothing to get along with them. Maybe an apology would work and maybe it wouldn't, but SOME attempt at diplomacy would be nice.
 