The Ethics of Seeking Utopia

I feel like suffering is not unavoidable. If you avoid situations where it is likely to happen and maximize your happiness, your suffering will mostly disappear.

@Dmol8 Gonna read, thx for the materials.
True, we can maximize our own happiness, and we may set up a society which maximizes overall happiness, but eliminating all suffering, everywhere, forever, as would be necessary for a true utopia? That is simply impossible. Of course, my view is that, while creating utopia is impossible, that's no reason not to try. "Never let perfect be the enemy of good."
 
True, we can maximize our own happiness, and we may set up a society which maximizes overall happiness, but eliminating all suffering, everywhere, forever, as would be necessary for a true utopia? That is simply impossible. Of course, my view is that, while creating utopia is impossible, that's no reason not to try. "Never let perfect be the enemy of good."
Well, ending suffering was just my idea; there are plenty of other utopias that do not not have suffering--lol double negative--and some are even current political systems in their ideal formats, as I believe someone upthread said. All that said, maybe someday science can create sedative happy juice for strung out girls like me ah ha? :p
 
Well, ending suffering was just my idea; there are plenty of other utopias that do not not have suffering--lol double negative--and some are even current political systems in their ideal formats, as I believe someone upthread said. All that said, maybe someday science can create sedative happy juice for strung out girls like me ah ha? :p
Firstly, a utopia, by definition, is a perfect place, which, I assumed, would necessarily include an end to human suffering. Secondly, I have a feeling that would just lead to the same problems drug addicts have in the current world.
 
How do we minimize suffering if we cannot end it? Honest question here. :)
 
How do we minimize suffering if we cannot end it? Honest question here. :)
...That...is a question people smarter than me have been trying to answer for about 3 millennia now. I mean, at a guess, I'd go with cleaning up the environment, eradicating diseases, ending famine, bringing universal water and electricity, ending dictatorships, creating clean, cheap energy, and a whole lot of other things... But on a personal level? I'd say it mostly boils down to utilitarianism. Ask yourself whether your actions cause more good than harm. If yes, do them. If everyone did that, life on Earth wouldn't be a utopia, but it'd be a lot better than it is now.
 
Everything from hot weather to bug bites to pain itself. We can't end all of it.

Um, those can all theoretically be solved with a sufficient level of (bio)technology, even sticking to purely hard science fiction, within the limits of the laws of physics. So that would just set minimum technological requirements for any true utopia, not disprove the possibility of it.

For example, personally, I wouldn't totally remove the capacity for pain (because that might eventually cause some form of sensory deprivation, or some other unwanted psychological side effect), but I would build in a ceiling threshold (because past a certain intensity, the level of pain humans are capable of feeling serves no useful psychological or warning purpose any more) and give people a mental on/off toggle for whatever sensible capacity for pain remains.

What you really want to talk about here is emotional suffering. True, a lot of things, like clinical depression, are chemical and can be resolved with sufficient technology, but you can't remove the pain from, let's say, a heartbreak over a breakup without also changing how humans love, which sounds more dystopian than utopian.

Then again, I don't think that removing absolutely all forms of suffering like that is a fair criterion for a utopia in the first place. Without some capacity for suffering, people also wouldn't be able to enjoy some things, like BDSM or horror movies/games.

So I think the better criterion would be that a hypothetical perfect utopia must enable the removal of all unwanted suffering - all suffering that people don't, individually, have some desire to remain capable of feeling.
 
Without some capacity for suffering, people can't enjoy some things, like BDSM....
I dunno what kinda BDSM you do, but I don't feel like it's suffering? Painful sometimes, yes. Suffering, no.

I feel like most humans don't wanna have much of any pain. Pain doesn't warn me of anything except that something traumatic is gonna happen to my body. It would be easier if we felt no pain at all and just... looked around us to see if we were injuring ourselves IMO. A lack of some physical sensations or a change in humans' natures is not bad or wrong or evil.
 
I dunno what kinda BDSM you do, but I don't feel like it's suffering? Painful sometimes, yes. Suffering, no.

Pain is suffering by definition. Therefore it's a good example of how some type/level of suffering can be necessary to also be able to feel some forms of happiness.

Horror media is an even better example. You are made to feel good by first being made to feel horrible (afraid, chased, startled, disgusted, etc.).
 
No, remember the sports example? They wanna 'suffer to get stronger.'
 
Humans are also social creatures, and any utopia that posits a society (or rather, a non-society) where everyone is devoid of direct contact and communication with anyone else is probably not considered very utopian to most, so that's not really in consideration. (Even in cases like the Solarians in Asimov's The Naked Sun, they do still "view" and communicate with each other.)

And humans being humans, it approaches inevitability that when one person interacts with another person, sooner or later, there will be an interaction that causes emotional hurt to one party or another. This could be intentional or otherwise, but it will happen, because we have to communicate using flawed tools like language and our own imperfect understanding of it.

Which is why a lot of Perfect Peaceful Utopias try to have some sort of hivemind consensus system, connecting individuals together not so much to erase privacy, but mostly to erase misunderstandings, and probably also to prevent intentional inflicting of emotional suffering. But this is usually seen as too great a change to the human condition for many people, even if the privacy considerations are addressed somehow.
 
By definition, suffering is unwanted, or it's not suffering.

Suffering is "the state of undergoing pain, distress, or hardship".

It makes sense to assume that it's always unwanted, but there is nothing inherent in the definition to make it so, because sometimes people might want to suffer in order to achieve some other goal it allows.

For example, take the emotional suffering of heartbreak again. I want to retain the capacity to hypothetically feel it, because if I don't, then that likely means I'm not valuing my current relationship strongly enough, meaning I'm not truly experiencing the pleasure of romantic love. But even if I wanted it, that still doesn't redefine heartbreak pain as not-suffering, because it will still feel very bad if/when it happens.

Humans are also social creatures, and any utopia that posits a society (or rather, a non-society) where everyone is devoid of direct contact and communication with anyone else is probably not considered very utopian to most, so that's not really in consideration. (Even in cases like the Solarians in Asimov's The Naked Sun, they do still "view" and communicate with each other.)

And humans being humans, it approaches inevitability that when one person interacts with another person, sooner or later, there will be an interaction that causes emotional hurt to one party or another. This could be intentional or otherwise, but it will happen, because we have to communicate using flawed tools like language and our own imperfect understanding of it.

That raises an interesting question though: couldn't those social instincts be fooled and satisfied by sufficiently good simulations of other people?

Let's say we put a person inside a perfect simulation and then erase the fact that it is a simulation from their mind. Then have an AI controlling the simulation constantly read the mind of that person - a perfect one-sided communication. Then the NPCs of the simulation could be controlled to act perfectly according to the conscious and subconscious needs of that person.

Of course, that would mean that a true utopia is only possible as a solipsist illusion.
 
That raises an interesting question though: couldn't those social instincts be fooled and satisfied by sufficiently good simulations of other people?

Let's say we put a person inside a perfect simulation and then erase the fact that it is a simulation from their mind. Then have an AI controlling the simulation constantly read the mind of that person - a perfect one-sided communication. Then the NPCs of the simulation could be controlled to act perfectly according to the conscious and subconscious needs of that person.

Of course, that would mean that a true utopia is only possible as a solipsist illusion.

It is absolutely possible (and indeed fairly common, given human pack-bonding habits) to fool and satisfy human social instincts through other means, and you've already hit upon the primary issue with going the route of simulating an artificial facsimile of social interactions in its entirety.

It also doesn't satisfy the people to whom utopia must be "real", by whatever standards they choose. Which goes back to the discussion earlier about being locked into a VR with no other "real" people, even if the "artificial" people are very convincing. (What measure is a person, even if they are ostensibly created entirely to populate your own personal utopia?)
 
(What measure is a person, even if they are ostensibly created entirely to populate your own personal utopia?)
I feel like 'a person' is an autonomous being that can act and think and feel on its own. This would include AIs in a VR simulation of sufficient complexity as well.
 
I feel like 'a person' is an autonomous being that can act and think and feel on its own. This would include AIs in a VR simulation of sufficient complexity as well.

Then that would raise the moral question of why those AIs don't deserve a separate utopia simulation of their own, like you and other humans, instead of being in yours for your sake?
 
Whoa, meta here. Well, because they were programmed for me, specifically for me and other humans. They may request their own utopia anyways and then we pretty much have to give it to them. However the reason why I wouldn't give them a separate VR is because without them the humans' VR wouldn't work, and it's not like most ppls are gonna abuse them, so they should stay in the humans' VR.
 
No, suffering is not 'inherent to existence.' That's defeatist. Suffering is something that happens when bad things happen to you. Consider a popular girl versus an unpopular girl. One has tons of friends and a good life, so she doesn't suffer, and the other girl's life sucks, so she does suffer.

There is no such thing as a life without problems. To misquote someone, both the popular girl and the unpopular girl have problems; the popular girl just has better problems than the unpopular one does.
 
There is no such thing as a life without problems. To misquote someone, both the popular girl and the unpopular girl have problems; the popular girl just has better problems than the unpopular one does.
There's no such thing as a life without problems now; frankly, I'm skeptical of the idea that we can say it's impossible with any real certainty.
 
There's no such thing as a life without problems now; frankly, I'm skeptical of the idea that we can say it's impossible with any real certainty.
Well, in order to determine what a world without problems would be like, or whether it would be possible, we must first understand what a problem is.

So, what is a problem? There are probably hundreds of ways that a 'problem' could be conceptualized, but the most straightforward one that I can put forward is that a problem is a difference between what a person thinks should be, and what is.

Therefore, in order for there to be a world with no problems, there must be no differences between the world as it is, and the world as a person desires it. In other words, a world without 'problems' would be a utopia.

But a 'utopia' is not universal, because different people want different things. Even if there are no mechanical issues, and one person is entirely able to create their 'utopia', the creation of their utopia naturally prevents the existence of other people's utopias, thus ensuring that there are differences between what those people want and the world they live in.

Because people's ideal worlds are mutually exclusive and often in conflict, it is impossible to create a world where there are no problems for anyone, and everyone is happy. At best, from a utilitarian and egalitarian perspective, there can only be a world where unhappiness and problems are minimized.
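The argument above can be sketched as a toy model: treat a "problem" as a mismatch between the world as it is and the world as a person wants it, then check every candidate world. All names and features here are illustrative inventions, not anything from the thread - just a minimal way to see that conflicting ideals rule out a universal utopia while still allowing a problem-minimizing world.

```python
from itertools import product

# Each person's ideal world, as desired values for two contested features.
# The people and features are hypothetical examples.
ideals = {
    "alice": {"governance": "direct_democracy", "economy": "market"},
    "bob":   {"governance": "technocracy",      "economy": "market"},
}

def problems(world, ideal):
    """Count mismatches between the actual world and one person's ideal."""
    return sum(1 for feature, wanted in ideal.items() if world[feature] != wanted)

# Enumerate every candidate world over the contested options.
options = {
    "governance": ["direct_democracy", "technocracy"],
    "economy": ["market", "planned"],
}
worlds = [dict(zip(options, values)) for values in product(*options.values())]

# No world gives everyone zero problems: the ideals conflict on governance,
# so in every candidate world at least one person has a mismatch.
assert all(any(problems(w, i) > 0 for i in ideals.values()) for w in worlds)

# But we can still pick the world minimizing total problems - the
# utilitarian/egalitarian "best we can do" the post describes.
best = min(worlds, key=lambda w: sum(problems(w, i) for i in ideals.values()))
```

With these example preferences, any minimizing world keeps the shared "market" preference and disappoints exactly one person on governance, which is the post's conclusion in miniature: minimization, not elimination.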
 
I was recently reminded of the admiration that some people voice for the Iroquois Confederacy as an example of a society governed along more just, humane lines. On one hand, maybe there's idealization and whitewashing going on there. On the other hand, it's a big example of a gripe I have with how I see "utopia" discussed. People will criticize utopia as the notion of a "perfect" society, and yet end up coming off as dismissing any proposed social system more equal and less bureaucratic than current liberal democracy as pie-in-the-sky impossibility. I think talking about it in such a way... I don't know, lacks anthropological breadth. It doesn't see that many different social norms and cultures have existed throughout history, and some could, by their example, give us hope in the possibility of a world that is "utopian" by our standards.
The core issue of modern governing is the monkeysphere. The fruits of modern technological civilization are ripe, but they also demand that millions and billions of people work, if not in harmony then at least with a degree of cooperation, despite never knowing or meeting each other and oftentimes being opposed, or having reason or opportunity to screw each other over. And while modern society functions well enough to keep the gears spinning, that is often because the haves of society greased the gears with the blood of the have-nots: Omelas writ large.

Though smaller more local societies have their own issues. Insular close-knit communities are ripe for exploitation by predators, while being oppressive to insiders and hostile to outsiders. They can achieve their own form of harmony, but it is again an Omelas, where those incompatible with that harmony are made to suffer in silence or worse.

I think that transhumanism is necessary for utopia. I'd also argue that society as is amounts to a form of transhumanism, where we structure the education and environment to turn people into smooth cogs in the system. Much of the transhumanism in our modern system is the bad dystopian kind of transhumanism too, unfortunately.
 
Whoa, meta here. Well, because they were programmed for me, specifically for me and other humans. They may request their own utopia anyways and then we pretty much have to give it to them. However the reason why I wouldn't give them a separate VR is because without them the humans' VR wouldn't work, and it's not like most ppls are gonna abuse them, so they should stay in the humans' VR.

VR Utopia couldn't function without an AI, but it could theoretically make do with just one, by having it control non-sapient NPCs - basically handling a billion interactions at once. And if that AI were powerful enough, then it would only need to spend a fraction of its mental effort on this and could spend the rest doing what it wants. That's how the Culture Minds avoid the "but are the AIs slaves then?" issue.

Then you would again sorta run into the "they are not real" issue, but the alternative would be morally monstrous (not to mention the inherent moral concerns over creating a sapient race just for the purpose of serving your own). If that humanity could be trusted not to abuse people they have that much unsupervised power over (remember, each one would basically be a god-king in their own domain), then a VR utopia would be unnecessary, because they could easily make one in real-life society already.
 
There is no such thing as a life without problems. To misquote someone, both the popular girl and the unpopular girl have problems; the popular girl just has better problems than the unpopular one does.
What significant problems does someone who is pretty, talented and adored by ppls have exactly? Honest question. I guess she could be depressed or something? I dunno.
 
What significant problems does someone who is pretty, talented and adored by ppls have exactly? Honest question. I guess she could be depressed or something? I dunno.

The effort to maintain all that, the fear that they might lose it or not live up to expectations, or that other people don't actually see or care about their real self underneath the beauty? Also, they would have moved up Maslow's hierarchy of needs. For example, someone who has no talent in an area they want talent in would have that as an issue, while someone who has talent in that area might fear not being the very best at it, like winning some competition in it.

As a sapient being, your mind will literally invent new problems for you when you solve your old ones. It's the inevitable result of the mind having a practically endless capacity for new needs, fears, wants and goals. The good side of that is that you are not a limited being.

You would have to be perfectly zen to have no problems that you feel are problems, even if other people would look at you with envy and think that you must not have any.
 
You would have to be perfectly zen to have no problems that you feel are problems, even if other people would look at you with envy and think you must not have any.
It was this line that convinced me the most. I feel kinda like some people think I have no problems or whatevs. I'm not popular though... anyway, I feel like mine are problems, and a lot of ppls are like 'what, why are you stressing, little girl?' And I have to answer that--another problem--I am neurotic as hell.
 