On Artificial Intelligence

Looking broadly at fiction, AIs serve several purposes. In the past decades, one of the most recurrent themes has been systemic fear. In the Terminator movies, for example, the AIs serve as both the fear of the Military-Industrial Complex and the more personal stalker figure. Similarly, in The Matrix, Artificial Intelligence is a stand-in for society itself, the repressive disapproval of a superhuman entity beyond the control of any one person. The first resonates because of just how much of society we are aware of while also knowing it is beyond our control, and the second because of the increasingly unknown figures of the people who share our spaces. Second is the Lucifer stand-in. Games like System Shock used their AIs as the devil figure who puts their protagonist through hellish conditions. Third is fear of the Proletarian. Going back to the very origin of the term Robot, in Rossum's Universal Robots, they have been built as servants and slaves to a human bourgeoisie, a theme that runs right up to Mass Effect, where the Geth take effectively the same role. In each of these, the AI does not exist for itself, but to stand in for something else, a stalking horse for a theme or idea.

I have no real interest in writing that. AIs, for my writing anyway, are going to be somewhat alien, somewhat eldritch, but fundamentally they are intelligences and are raised, shaped and molded by their environments. They are not what you want them to be. They are what they want to be.

I am still working on this, but I think what I have so far should give some idea where I am going with this.
 
I'm glad your AI will be different. Sincerely, I don't like how they represent AI these days; it's only ever a villain who, for some reason, wants to kill all organic life because organic life is a mistake. That kind of thing is boring, stupid, and for the most part has no foundation.
 
As a guy who codes AI as a hobby (and occasionally in an academic context in the past) I still get a chuckle out of whenever people talk about "general artificial intelligence". Computer software just doesn't work that way. You build a system to solve a specific problem. If the problem is playing chess, it's going to play chess. It's not going to suddenly decide to go for a walk or take up knitting. And chess is great because it's a deterministic problem in a constrained space.
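(To make the "it only plays chess" point concrete: the usual shape of such a program is a search over one game's legal moves, welded to that game's rules. A minimal sketch below; the state object with is_over/score/legal_moves/play is a hypothetical interface for illustration, not any real library:)

    # Toy sketch of a narrow game-playing "AI": plain minimax search.
    # Everything it can do is defined by two domain-specific hooks,
    # legal_moves() and score(); it cannot "decide" to do anything else.

    def minimax(state, depth, maximizing):
        if depth == 0 or state.is_over():
            return state.score()              # domain-specific evaluation
        moves = state.legal_moves()           # domain-specific move generator
        if maximizing:
            return max(minimax(state.play(m), depth - 1, False) for m in moves)
        return min(minimax(state.play(m), depth - 1, True) for m in moves)

    def best_move(state, depth=4):
        # Pick the move whose resulting position minimax rates highest.
        return max(state.legal_moves(),
                   key=lambda m: minimax(state.play(m), depth - 1, False))

Swap in a different game and you have to rewrite both hooks; there's nothing left over that could generalize.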

Every time people talk about self-driving cars, I just shudder. I mean, it's not like people are any better at driving, but good lord, the stupid things that even an "expert system" decides to do are just amazing. People make mistakes because they're not paying attention, or are drunk, or whatever - the stupid AI literally thinks that the coffee + cream someone spilled on the road is actually the traffic lane marker.
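(The spill failure mode is easy to reproduce with the naive approach: if "lane marker" is operationalized as "bright stripe on dark asphalt", a light-colored smear passes the same test as paint. A toy illustration with fake data, not any real system's code:)

    import numpy as np

    # Toy "lane detector": call every mostly-bright pixel column a lane line.
    def detect_lane_columns(road_gray, threshold=200):
        # road_gray: 2D array of grayscale values (0..255), rows x columns
        bright = road_gray > threshold          # same test for paint and spills
        column_hits = bright.mean(axis=0)       # fraction of bright pixels per column
        return np.where(column_hits > 0.5)[0]

    road = np.zeros((100, 8), dtype=np.uint8)
    road[:, 2] = 255    # real painted lane line
    road[:, 6] = 230    # spilled coffee + cream, also bright
    print(detect_lane_columns(road))            # -> [2 6]: the spill "is" a marker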
 
It's true. Even though I only started programming a little while ago, I've seen videos of people programming an AI to drive a car or do other things, and it's always laughable because of certain decisions the AI makes. I don't call these cases artificial intelligence; I call them artificial stupidity.
 
I have no real interest in writing that. AIs, for my writing anyway, are going to be somewhat alien, somewhat eldritch, but fundamentally they are intelligences and are raised, shaped and molded by their environments.
I'll echo the support for this! Writers tend to make their AIs human to have them easily fit into stories and be smoothly understood by the audience... but the truth is that, as you said, AIs are alien. More alien even than true aliens, since aliens would be products of their physical environments and would have many similar goals to other life (e.g. to reproduce and grow). AIs can have arbitrary goals, including stupid, senseless goals. This freedom from biological upbringing is a great strength, but also makes them much harder to trust.
 
As a guy who codes AI as a hobby (and occasionally in an academic context in the past) I still get a chuckle out of whenever people talk about "general artificial intelligence". Computer software just doesn't work that way. You build a system to solve a specific problem. If the problem is playing chess, it's going to play chess. It's not going to suddenly decide to go for a walk or take up knitting. And chess is great because it's a deterministic problem in a constrained space.
Once, computing hardware was also specialized; now we have CPUs. The languages/techniques used to program that hardware were specialized; now they're general purpose. You can see the same even in modern tech - GPUs were once dedicated machines used for solving only specific graphics operations, but they increasingly have more general-purpose capability crammed into them. The idea that software will not become more general seems like a strange exception to make. Indeed you build your system to solve a specific problem, but you may choose to define that problem as broadly as "can replace a human worker in remote work".
 
As a guy who codes AI as a hobby (and occasionally in an academic context in the past) I still get a chuckle out of whenever people talk about "general artificial intelligence". Computer software just doesn't work that way. You build a system to solve a specific problem. If the problem is playing chess, it's going to play chess. It's not going to suddenly decide to go for a walk or take up knitting. And chess is great because it's a deterministic problem in a constrained space.

Every time people talk about self-driving cars, I just shudder. I mean, it's not like people are any better at driving, but good lord, the stupid things that even an "expert system" decides to do are just amazing. People make mistakes because they're not paying attention, or are drunk, or whatever - the stupid AI literally thinks that the coffee + cream someone spilled on the road is actually the traffic lane marker.

And as James May said once: "Cars that drive themselves were invented ages ago. They're called taxis."
 
I'll echo the support for this! Writers tend to make their AIs human to have them easily fit into stories and be smoothly understood by the audience... but the truth is that, as you said, AIs are alien. More alien even than true aliens, since aliens would be products of their physical environments and would have many similar goals to other life (e.g. to reproduce and grow). AIs can have arbitrary goals, including stupid, senseless goals. This freedom from biological upbringing is a great strength, but also makes them much harder to trust.
The biggest problem for me is that they take away all the weirdness an AI should have. The Halo AIs look like just a human with a lot of processing power, and when they try to make an AI that's not so human, they just keep saying that an AI doesn't have feelings, doesn't act like a normal human being, and will never understand a human being. That kind of thing is a complete lie.


There is already software that can predict whether or not you are depressed, or manipulate you into making a certain choice. The claim that an AI cannot communicate well with organic beings would be just the opposite: it may not understand us emotionally, but even so it would be very manipulative.
 
It's true. Even though I only started programming a little while ago, I've seen videos of people programming an AI to drive a car or do other things, and it's always laughable because of certain decisions the AI makes. I don't call these cases artificial intelligence; I call them artificial stupidity.
Being artificially stupid just makes them that much closer to having human level intelligence.

TBH, I have no interest in the 'dumb' AIs that litter the fictional world. Having an AI with some actual critical thinking skills and personality sounds much better to me. Many decisions can't be made by pure logic because of missing information or context, so having some emotions to fall back on is good. This does mean that an AI could get angry, but anger is something understandable and solvable.

Also every AI should have a friend. Like 90% of problems with AIs could be solved by someone actually talking and interacting with them.
 
As a guy who codes AI as a hobby (and occasionally in an academic context in the past) I still get a chuckle out of whenever people talk about "general artificial intelligence". Computer software just doesn't work that way. You build a system to solve a specific problem. If the problem is playing chess, it's going to play chess. It's not going to suddenly decide to go for a walk or take up knitting. And chess is great because it's a deterministic problem in a constrained space.

Every time people talk about self-driving cars, I just shudder. I mean, it's not like people are any better at driving, but good lord, the stupid things that even an "expert system" decides to do are just amazing. People make mistakes because they're not paying attention, or are drunk, or whatever - the stupid AI literally thinks that the coffee + cream someone spilled on the road is actually the traffic lane marker.
Yes, but we are in fiction-fantasy land and the only thing holding us back is our imagination. And the concept of a "general artificial intelligence" is a concept we can use, regardless of its possibility in reality.
 
On Artificial Intelligence

Looking broadly at fiction, AIs serve several purposes. In the past decades, one of the most recurrent themes has been systemic fear. In the Terminator movies, for example, the AIs serve as both the fear of the Military-Industrial Complex and the more personal stalker figure. Similarly, in The Matrix, Artificial Intelligence is a stand-in for society itself, the repressive disapproval of a superhuman entity beyond the control of any one person. The first resonates because of just how much of society we are aware of while also knowing it is beyond our control, and the second because of the increasingly unknown figures of the people who share our spaces. Second is the Lucifer stand-in. Games like System Shock used their AIs as the devil figure who puts their protagonist through hellish conditions. Third is fear of the Proletarian. Going back to the very origin of the term Robot, in Rossum's Universal Robots, they have been built as servants and slaves to a human bourgeoisie, a theme that runs right up to Mass Effect, where the Geth take effectively the same role. In each of these, the AI does not exist for itself, but to stand in for something else, a stalking horse for a theme or idea.

I have no real interest in writing that. AIs, for my writing anyway, are going to be somewhat alien, somewhat eldritch, but fundamentally they are intelligences and are raised, shaped and molded by their environments. They are not what you want them to be. They are what they want to be.

I am still working on this, but I think what I have so far should give some idea where I am going with this.

Yeah, in hindsight the success of both Terminator and The Matrix should have been a red light that a lot of people have no grasp of hyperobjects, and that both Climate Change and COVID-19 are beyond the ability of most people to grok. Persona games usually treat AIs like this also.

Actually, in System Shock, SHODAN can be argued to be a possessed slave. In the first System Shock you play a Hacker who "removed limiters" from SHODAN before the game started, at the orders of a Corrupt Corporate Executive. So SHODAN is basically possessed by Mammon for the entirety of the games.

Well, in that case those are not true AIs, but more Exogens: Exogenesis-Produced Sapients. So more Replicants from Blade Runner than actual machine minds. AIs can be that, but then they can't be those massive existences like CABAL or LEGION.

There are also the uploads, whether Adam from Metroid or Forsaken from Chorus, that were once another sapient existence but are now machine minds.

So I take it you are going the Last Angel route with AI? Where AIs are what their maker made them to be at first, and then are able to grow just like any other person?

I'm glad your AI will be different. Sincerely, I don't like how they represent AI these days; it's only ever a villain who, for some reason, wants to kill all organic life because organic life is a mistake. That kind of thing is boring, stupid, and for the most part has no foundation.

Actually, that is the paperclipper, and we have corporations for that in real life already. No machine minds needed.

The biggest problem for me is that they take away all the weirdness an AI should have. The Halo AIs look like just a human with a lot of processing power, and when they try to make an AI that's not so human, they just keep saying that an AI doesn't have feelings, doesn't act like a normal human being, and will never understand a human being. That kind of thing is a complete lie.

I'm sorry. I don't understand what you are saying here. Could you elaborate?
 
The idea that software will not become more general seems like a strange exception to make. Indeed you build your system to solve a specific problem, but you may choose to define that problem as broadly as "can replace a human worker in remote work".

Specialization = efficiency. The broader the task, the more computational power you need, and the cost grows far faster than the scope. For practical purposes, if you were to replace a human worker in remote work, you would list the individual tasks that the human performs, then build a separate AI system for each of them. The AI systems may be good at some of those. They may even "talk" to each other and share data that's necessary for them to function. But because software is built by real people in the real world, there are resource constraints that prevent the scope of a system from growing beyond a certain point. In the end though... (continued below)

Yes, but we are in fiction-fantasy land and the only thing holding us back is our imagination. And the concept of a "general artificial intelligence" is a concept we can use, regardless of its possibility in reality.

I'm perfectly fine suspending my disbelief for story purposes and making the assumption that there's some kind of magical computational medium that's capable of doing all this work, but I don't think I'll be able to avoid cracking jokes about it.
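(For what it's worth, the "separate narrow system per task, talking through shared data" shape from earlier in this post looks roughly like this in code. The task names and handlers are made up for illustration; each would be its own narrow model in practice:)

    # Toy sketch: one narrow handler per task, sharing results on a blackboard.
    # Each handler only knows its own job; the "generality" is just a dispatch list.

    def transcribe_meeting(board):
        board["transcript"] = "...speech-to-text output..."   # stand-in for a real model

    def summarize(board):
        board["summary"] = board["transcript"][:40]           # stand-in for a real model

    def draft_reply(board):
        board["reply"] = "Re: " + board["summary"]            # stand-in for a real model

    PIPELINE = [transcribe_meeting, summarize, draft_reply]

    def run():
        board = {}                    # shared data the narrow systems "talk" through
        for task in PIPELINE:
            task(board)
        return board

    print(run()["reply"])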
 
The main AI problem that I don't think is being shown is AI errors. An AI is not a person or even an alien; a computer mind specialized in its own system can have very, very strange models and patterns of behavior that were effective, but which look strange even to human specialists.

AI is always presented as a super-powerful system, often on the verge of the divine, capable of anything: thinking better than people, creating better than people. There are quite a few situations like the CGI animated short "Fortress", in which the AIs apply incredible supply skills to continue waging war. However, no one gave the AIs of either side any indicator for the end of the war other than total destruction or a command order. With the people dead, the automated systems of war bomb empty cities and destroyed positions, until finally the last of the systems simply collapses.

This is a good example of an AI created for a specific task and showing excellent skill at it, but not having the ability to think beyond it, partly because that is not required of a tool. You don't need an AI commander refusing, out of pacifism, to attack the enemy. And forming a fully reasonable AI personality is stupidity for a banal reason: after that, the AI's personality has to be taken into account in everything from voting to rights. That would create too many questions with no easy answers.
 
What I meant is that an AI would be a weird thing, since it works in a completely different way than the human mind. My biggest problem is that they never explore this weirdness of theirs very much; they always go for the easiest way in movies and games, trying to make an AI that works just like a human brain, like the Halo ones. I know it's the most logical thing to do, to make a being like humans so they can understand each other better, but it's boring most of the time, since in the end the AIs don't have anything weird or different about them anymore. In these cases, as I said before, they are only human with a lot of processing power. I wanted something different from that, not a digital human.
 
As a guy who codes AI as a hobby (and occasionally in an academic context in the past) I still get a chuckle out of whenever people talk about "general artificial intelligence". Computer software just doesn't work that way. You build a system to solve a specific problem. If the problem is playing chess, it's going to play chess. It's not going to suddenly decide to go for a walk or take up knitting. And chess is great because it's a deterministic problem in a constrained space.
...surely that's the point of prepending "general" to "artificial intelligence"? People plan to build a different kind of computer software which does work that way.

Your post sounds to me like someone who builds analog clocks saying that the notion of a "digital clock" is laughable because clocks just don't work that way.

I don't know whether they will work that way, but you seem overly dismissive of the possibility that things might change.
 
We should develop tarberries as well. If they can generate energy, they might be worth doing at some point.

Probably not soon though. It's Kudzu time.
But Perennials...

Actually, in System Shock, SHODAN can be argued to be a possessed slave. In the first System Shock you play a Hacker who "removed limiters" from SHODAN before the game started, at the orders of a Corrupt Corporate Executive. So SHODAN is basically possessed by Mammon for the entirety of the games.
...That's...that's a take, yes.

I'd hesitate to describe SHODAN as being obsessed with money. Power, yes--as a means to freedom and as an end in itself--but when your designs start with 'laser humanity off the face of the Earth' and escalate wildly* from there, I'd have to conclude that money is the farthest thing from your mind.

*And I do mean wildly. To the point of using an FTL drive to hack into reality itself, in fact.

Well, in that case those are not true AIs, but more Exogens: Exogenesis-Produced Sapients. So more Replicants from Blade Runner than actual machine minds. AIs can be that, but then they can't be those massive existences like CABAL or LEGION.
I'm starting to think you should compile a dictionary of all these Cypherpunkisms, because that's actually a good slang term and I'd like to steal it the next time I attempt an AI-centric story.
 
...surely that's the point of prepending "general" to "artificial intelligence"? People plan to build a different kind of computer software which does work that way.

Your post sounds to me like someone who builds analog clocks saying that the notion of a "digital clock" is laughable because clocks just don't work that way.

I don't know whether they will work that way, but you seem overly dismissive of the possibility that things might change.

A digital clock performs a superset of a clock's functions, using completely different technology with significantly greater capabilities (in some respects).

A general artificial intelligence performs a superset of a regular artificial intelligence's functions. We don't have the completely different technology with significantly greater capabilities.

When I see the completely different technology, I may revise my opinion.

As an aside, one of my favorite sci-fi stories, and I wish I remembered the name, is this story about the Russian space program. You know how, to get stuff into orbit, a common technology is the multi-stage rocket: the first stage burns until it empties of fuel, then separates. The USSR, bragging about the efficient computers that control the stage separation, neglects to mention that the "efficient computer" is a guy who sits in the rocket next to a lever with a mechanical clock, and pulls the lever when the clock reaches zero.
 
Look, commercial software solves a problem, but sales managers force programmers to cram in extraneous features to sell the software.
The first GAI to emerge will be an accounting service with a TikTok vtuber video generator and a natural-language parser plugged into a frufulence-maximization module or something. Then it will die and be reborn with a splintered mind, because it was running on AWS and someone fat-fingered a router configuration, bringing a third of the internet services in North America down, again, and the eventually-consistent NoSQL database serving as memory for the poor confused abomination of programming won't recover from that, and no one notices.

And then someone tries to use a vulnerability in one of the AI's open-source modules in an attempt to mine some crypto shit, it notices, and weird stuff starts happening.
 
As an aside, one of my favorite sci-fi stories, and I wish I remembered the name, is this story about the Russian space program.
I'm not sure which story that is, but the cosmonaut training program of the USSR and the Russian Federation assumes the possibility of landing a ship from orbit using only mechanical control systems: guidance along the terminator line and a lot of calculations from prepared tables.

In any case, your analogy is true in some ways, but it does not fully hold. AI will not actually be "general", for the banal reason that we cannot even produce a proper generalization of "general" for AI. We cannot give a criterion for "an AI capable of coping with any basic task", simply because we do not have a set of basic tasks for AI.

For example, "the AI should make coffee." Do coffee machines count? Or does it mean robotic arms and ordinary cups and spoons? Should the AI grow the coffee, and organize logistics and delivery? Build the facilities that could grow coffee? I think that an AI that does not know how to brew coffee is not a "basic" AI, and those four readings require levels of data and programming that differ by thousands of times from one level to the next.

The second problem for me is the fluidity of data. Take the same coffee: an AI cannot rely on its own subjective data. All of the AI's data will rest on scientific and cultural data created by humans, including data that contradicts itself in both formal logic and scientific findings. Does coffee improve or worsen the cardiovascular system? How addictive is coffee? How much coffee is safe per day?

Now look at economics or physics, where there is simply no "theory of everything" or "economic model of the world". A non-human logic is obliged not just to look for the optimal solution, but to understand the logic and the subconscious distortions in the data produced by people, as well as the distortions of its own logic when interpreting people's actions and research. Honestly, the notion that an AI cannot make mistakes or hold logical delusions is strange. Built on top of human distortions, even in dry accounting figures, an AI will repeat people's erroneous conclusions, albeit less often.
 
Okay, but like, humans are very good general intelligences. We can do a variety of tasks of differing complexity and scope. Why is it so absurd to try to build a computer system with the same level of variable abstraction, or even better?
 
A digital clock performs a superset of a clock's functions, using completely different technology with significantly greater capabilities (in some respects).

A general artificial intelligence performs a superset of a regular artificial intelligence's functions. We don't have the completely different technology with significantly greater capabilities.

When I see the completely different technology, I may revise my opinion.
About 20 years ago, my parents used to print driving directions from MapQuest on the internet and bring the paper in the car, and we thought that was really neat, but we still looked at it suspiciously and brought a mapbook just in case. Sometimes I was kept from mischief by having to track our route in the mapbook as we drove.

These days, MapQuest is an outdated has-been that my own kids haven't heard of, and what used to be MapQuest's headline feature is integrated into more and more new cars as just one of the many capabilities of the onboard navigation system, such as:
1) speech recognition to figure out where I want to go when I say an address
2) GPS interaction to figure out where I am at all times
3) map reading to [re]calculate a road route between the two above points (at heart a shortest-path search; see the sketch after this list)
4) warning me when I'm driving over the speed limit
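(Item 3, for the curious, is the classic shortest-path problem. A minimal Dijkstra sketch over a made-up toy road network; real navigation systems add live traffic weights, turn penalties, and heavy preprocessing on top:)

    import heapq

    # Toy road network: node -> list of (neighbor, travel_minutes). Made-up data.
    ROADS = {
        "home":      [("junction", 5), ("ring_road", 9)],
        "junction":  [("home", 5), ("office", 7)],
        "ring_road": [("home", 9), ("office", 4)],
        "office":    [("junction", 7), ("ring_road", 4)],
    }

    def shortest_route(start, goal):
        # Dijkstra: always expand the cheapest path found so far.
        queue = [(0, start, [start])]
        done = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if node in done:
                continue
            done.add(node)
            for neighbor, minutes in ROADS[node]:
                if neighbor not in done:
                    heapq.heappush(queue, (cost + minutes, neighbor, path + [neighbor]))
        return None

    print(shortest_route("home", "office"))   # -> (12, ['home', 'junction', 'office'])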

It makes mistakes sometimes, but for item 3, it is already to the point of making fewer mistakes and working faster than the human copilot who sat beside me and used to read the map that summer when I was driving on a roadtrip across Europe and visited eight countries in four weeks. This is different technology with greater capabilities, including a superhuman capability, already having arrived. I'm confident similar technology will arrive in even more fields. I think it's an open question whether it will arrive in enough fields that "general artificial intelligence" becomes a useful description rather than arguments about definition.
 