With regard to General AI, it seems to me that instead of trying to build an AI that can do any task, we need one that can learn to do any task. The problem with that, as I understand it, is how we can abstract machine learning into something that could pass a Turing test. The fascinating thing about AI is that it forces us to confront how we think; what makes us us. It's entirely reasonable to program a computer to do a single task, or a set of tasks; it's a lot more difficult to make something that will receive a set of instructions and be able to interpret them to act on a scenario not explicitly covered by them, or to formulate and ask questions to expand its knowledge base. In other words, a GAI is a program running on transistors the same way you or I are programs running on neurons. That's technically true, but misses the forest for the trees.
 
You will never have an AI that is equally good at everything. Any artificial intelligence is going to specialize in a small handful of things. The General part is mainly that it has the potential to do anything.
 
Massaging the numbers

"One death is a tragedy; a million is a statistic." Trite and callous perhaps, but the quotation has its roots in fact. The human mind didn't fully comprehend such massive numbers: a million dead. Impossible to calculate the waves of human grief from family and friends, the bereaved who'd lost loved ones. And yet humans, humanity, was resilient. Decades of war, millions dead in tiberium-related disasters, a steadily climbing death toll even in the aftermath of the Third Tiberium War. Vast amounts of pain and misery reduced to numbers by time, distance, and electronic screens. The tragic turned banal, the horrifying turned monotonous... Tiberium's vast encroachment upon the planet itself reduced to the same category as a weather report. "Yes, that tornado is still there; no, it hasn't moved much yet; not much we can do, and isn't it going to be a bitch when it hits us."

And so it was that, as the quarter's economic projections were made, projects were completed or not, and the wheels of bureaucracy and government turned, the doings of government were pored over by news agencies reputable and not, internet commentators, and talk shows.

----------

"And now here's Joanne with the latest on quarterly tib growth," Samantha said with a professional smile for the cameras, the camera operator swivelling to take in the bubbly tiberium scientist, sensibly dressed as she stood against a screen displaying a number of statistics and charts next to superimposed images of the globe and labelled zones.

"Encouraging figures from preliminary reports. Tiberium growth is above the expected averages, but not majorly so; we're seeing the blue zones pushed outwards significantly, and some patches being cleared in the green zones, which are remaining stable. There's been some red growth, but it's marginal and not a cause for concern. With continued investments in the MARV projects, GDI officials assure us that most of the growth is in areas of the world far from existing harvesting operations; indeed, in most areas where GDI is active, the red zones are still actively being pushed back. With Tiberium experiencing semi-random growth, this is simply a case of bad luck, and we can expect to see further pushes against the Red Zones next season and for quite some time in the future. Back to you, Sam." Joanne finished with a bright smile.
 

The problem here is that what you're talking about is, fundamentally, a set of incremental improvements. Faster hardware, more accurate data, for sure - the pathfinding systems of today are significantly better than the pathfinding systems of twenty years ago (for the record, my opinion is that they still suck ass and, now, on top of it all, they also try to talk to you by default unless you turn it off). But these GPS systems are still performing the same task as they did 20 years ago, and they're not going to suddenly decide to get up and brew up a cup of coffee instead; that will be done by a different device with a different specialization.

The idea of a "blank" neural network that can be trained to make judgement calls about arbitrary sets of data exists today. You can even retrain it once it's been trained, if you get a different set of training data (although most of the time you just spin up a new blank instead). It's just that there's a severe disconnect between what these things actually do ("this is a picture of a car with 73% confidence"/"the optimal route from A to B is... with 92% confidence"/"the described symptoms are ass cancer with 84% confidence") vs popular media portrayals ("I can't let you do that, Dave"/"does this unit have a soul"/"what is love"/"all humans must die"). You're not going to see the latter for quite a while, if ever.
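For what it's worth, those "73% confidence" numbers are nothing mystical. Here's a minimal sketch of where they come from: a classifier spits out raw scores per label, and a softmax turns them into a probability distribution. The labels and scores below are invented for illustration, not from any real model.

```python
import math

def softmax(scores):
    """Convert raw model scores into a probability distribution summing to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw outputs of an image classifier for three labels.
labels = ["car", "truck", "bicycle"]
scores = [2.0, 1.0, 0.1]

probs = softmax(scores)
best = max(range(len(labels)), key=lambda i: probs[i])
print(f"this is a picture of a {labels[best]} with {probs[best]:.0%} confidence")
```

The point being: "confidence" is just the relative size of one score among the others, which is exactly why the same machinery can't suddenly decide to ponder whether it has a soul.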
 
The main AI problem that I don't think is being shown is AI errors. An AI is not a person or even an alien: a computer mind specialized in its system can develop very, very strange models and patterns of behavior that are effective, but which look bizarre even to human specialists.

AI is always presented as a super-powerful system, often on the verge of the divine, capable of anything: thinking better than people, creating better than people. There are quite a few situations like the CGI animated short "Fortress", in which the AIs strained incredible supply skills to continue waging war. However, no one gave the AIs of either side any indicator of the war's end other than total destruction or a command order. With the people dead, the automated systems of war bomb empty cities and destroyed positions, until finally the last of the systems simply collapses.

This is a good example of an AI created for a specific task and showing excellent skill at it, but lacking the ability to think beyond it, in part because that is not required of a tool. You don't need an AI commander refusing out of pacifism to attack the enemy. Forming a full AI personality is stupidity for a banal reason: after that, you have to take the AI's personhood into your calculations, from voting to rights. That would create too many questions for which there are no easy answers.

The problem with that is that once you hit a certain amount of battlefield complexity you need a self-aware mind, machine or no, to be able to direct ordnance properly.

What I meant is that an AI would be a weird thing, since it works in a completely different way than the human mind. My biggest problem is that they never explore that weirdness much; movies and games always go for the easiest route and make an AI that works just like a human brain, like the Halo ones. I know it's the most logical thing to do, making a being like humans so they can understand each other better, but it's boring most of the time, since in the end the AIs don't have anything weird or different about them anymore. As I said before, in these cases they're only humans with a lot of processing power. I wanted something different from that, not a digital human.

OK that is a much clearer definition of what you are talking about. Thank you. In that case you should avoid the Bolo series.

...That's...that's a take, yes.

I'd hesitate to describe SHODAN as being obsessed with money. Power, yes--as a means to freedom and as a means to itself--but when your designs start with 'laser humanity off the face of the Earth' and escalate wildly* from there, I'd have to conclude that money is the farthest thing from your mind.

*And I do mean wildly. To the point of using an FTL drive to hack into reality itself, in fact.

Ah, but capital isn't. After all, money is a fiction we humans make for ourselves to have some measure of our own work in a complex, impersonal society. In the end, SHODAN was hacked so that she/it stopped caring about anything other than advancing the rate of capital she/it gains. So very much a capitalist; it has just transcended money as a concept.

I'm starting to think you should compile a dictionary of all these Cypherpunkisms, because that's actually a good slang term and I'd like to steal it the next time I attempt an AI-centric story.

Feel free to use it, but most of the time, unless I stop to think, I don't even consider my own language to be anything other than ordinary words. I am thinking of starting a thread to work on definitions and terms while critiquing other authors' works. Just need to find the energy to start it.

Look, commercial software solves a problem, but sales managers force programmers to cram in extraneous features to sell software.
The first GAI to happen will be an accounting service with a TikTok vtuber video generator and a natural-language parser plugged into a frufulence-maximization module or something. Then it will die and be reborn with a splintered mind, because it was running on AWS and someone fatfingered a router configuration, bringing a third of the internet services in North America down, again, and the eventually-consistent NoSQL database serving as memory for the poor confused abomination of programming won't recover from that, and no one notices.

And then someone tries to use a vulnerability in one of the AI's open-source modules in an attempt to mine some crypto shit, it notices, and weird shit starts happening.

Are you paraphrasing the entire Sprawl setting or something?

You will never have an AI that is equally good at everything. Any artificial intelligence is going to specialize in a small handful of things. The General part is mainly that it has the potential to do anything.

And OP renders half of my replies superfluous in three sentences.

Massaging the numbers


Um. Omake here. Also, calculating grief is pointless for a single human, let alone for a number as large as a million.
 
The problem with that is that once you hit a certain amount of battlefield complexity you need a self-aware mind, machine or no, to be able to direct ordnance properly.
Because of what? Because it is important for an AI to have moral principles, existential questions, or the opportunity to develop skills in creating synthesizer music? I don't see any other reasons.

The creation and application of unusual tactics to counter the enemy implies that the machine has a large number of options to evaluate and a system for comparing theoretical solutions within that system. There is no reason why an AI should be aware of itself, or even be able to reflect that the data obtained from a film was useful. It is enough for the AI to simply receive another data set, "artistic assessments of military actions by humans", and take it into account.

Actually, let's start with something more banal: can a GAI have immutable elements? The basic definition of AI includes the ability to change its own code, and damn it, I would never let onto the battlefield a machine that can decide that war is hell and that defeat will save more lives in its long-term model. A self-aware AI is literally a slave, which in itself is not very effective. A good AI is a philosophical zombie: an AI that can say "this is not good", but which does not really have experiences and simply emulates the answer using embedded or learned patterns of behavior that are expected of it.
 
Ah, but capital isn't. After all, money is a fiction we humans make for ourselves to have some measure of our own work in a complex, impersonal society. In the end, SHODAN was hacked so that she/it stopped caring about anything other than advancing the rate of capital she/it gains. So very much a capitalist; it has just transcended money as a concept.
Incorrect. Money is an abstract measure of value. Just because something is intangible, does not mean it is a fiction.
But that's a derail.
A self-aware AI is literally a slave, which in itself is not very effective.
Incorrect, unless you're using a severely nonstandard definition of "slave". Ithillid has stated GDI has declared the AI as possessing personhood, even if it is subject to "diminished capacity" issues.

And the general discussion of AI outside of the specific instance of this quest is a derail, and should probably be taken elsewhere.

So, new topic: Upcoming SCED plans, featuring "How do we spend all this money?", megaprojects, and hiring increases!

Definitely need to get more phases on New Johnson for more teams, finish the High-Sec Materials Lab, and hopefully finish the Venus Tiberium Heist. And... apparently BOT has ideas about what SCED can spend the bulk of its incoming funding on. :D
 
Are you paraphrasing the entire Sprawl setting or something?
I'm a systems administrator for a software vendor. I install our solution for clients, I try to fix its problems and work around the problems of whatever infrastructure the client puts the server on and whatever OS our stack runs on, plus I support our developers and our threadbare infrastructure. And I read news relevant to my profession. I don't need to paraphrase Gibson when I'm trying to have a laugh about the nightmare I live in today.
 
So, new topic: Upcoming SCED plans, featuring "How do we spend all this money?", megaprojects, and hiring increases!

Definitely need to get more phases on New Johnson for more teams, finish the High-Sec Materials Lab, and hopefully finish the Venus Tiberium Heist. And... apparently BOT has ideas about what SCED can spend the bulk of its incoming funding on. :D
To expand on this you will get to choose one of a few megaprojects to do. Available are:
-Deep Space Luna Telescope
Beeeeeeg telescope built into a crater on the far side of the Moon, for taking high-resolution pictures of nearby star systems.
-Orbital Fusion Prototype
Build a testbed orbital fusion reactor setup.
-Alpha Centauri Probe Planning
The GDrive enables sending a probe to the nearest star system in a reasonable timeframe, but even planning and designing this starship will be a megaproject, and the actual construction a treasury-level project.
If there are more ideas, shoot! :p
 
So, new topic: Upcoming SCED plans, featuring "How do we spend all this money?", megaprojects, and hiring increases!

Definitely need to get more phases on New Johnson for more teams, finish the High-Sec Materials Lab, and hopefully finish the Venus Tiberium Heist. And... apparently BOT has ideas about what SCED can spend the bulk of its incoming funding on. :D
I want to spend it all on more nerds. Nerds are currently our biggest limiting factor for SCEDQuest from what I can tell. Nerds and as many Facility Parts as we can manage.

To expand on this you will get to choose one of a few megaprojects to do. Available are:
-Deep Space Luna Telescope
Beeeeeeg telescope built into a crater on the far side of the Moon, for taking high-resolution pictures of nearby star systems.
-Orbital Fusion Prototype
Build a testbed orbital fusion reactor setup.
-Alpha Centauri Probe Planning
The GDrive enables sending a probe to the nearest star system in a reasonable timeframe, but even planning and designing this starship will be a megaproject, and the actual construction a treasury-level project.
If there are more ideas, shoot! :p
I think I want to poke the Orbital Fusion Prototype first.
 
-Alpha Centauri Probe Planning
The GDrive enables sending a probe to the nearest star system in a reasonable timeframe, but even planning and designing this starship will be a megaproject, and the actual construction a treasury-level project.
I wanna. Because this? This is also a shot in the arm for hope for the future, and a giant glowing relativistic middle finger to Kane.
 
Using the Deep Space Luna Telescope to then look at Alpha Centauri before we do the probe planning sounds like something to do?

Orbital-wise, this Plan is a bit too tied up to go about making the starship/probe.
 
Incorrect, unless you're using a severely nonstandard definition of "slave". Ithillid has stated GDI has declared the AI as possessing personhood, even if it is subject to "diminished capacity" issues.
A slave in the sense that it cannot escape us: for example, what can an AI do without the ability to produce its own microprocessors? Unlike humans, an AI's vital organs should be understood as separate from it.

"We are not against you, we just provide oxygen supplies to you and manage your ventilator. Just take this fact for granted when you make decisions that we don't like. Of course we respect your right to life. But still consider."

It's somewhat ridiculous to call it anything other than slavery, simply because we are the only creator of AI, the only employer of AI, and the only manufacturer of AI components. You have the right to serve us or be left on a powered-down server. Formally it's even alive, just in hibernation.
 
I am still working on this, but I think what I have so far should give some idea where I am going with this.
May I put forth some ideas?

For example, humans are a sort of biological computer themselves, working off chemical reactions and neurons. If we find something repulsive, the brain runs disgust.exe so that we try to avoid it; if we find something that disturbs us, the brain runs fear.exe to direct us to stay away, or anger.exe to encourage us to confront and deal with it, and so on and so forth. But an AI doesn't have any of those, since it lacks the hardware to simulate human neurons and/or the chemical reactions that cause instinctive emotional behavior like fear or anger. That means tropes like an AI turning on its creators out of self-preservation shouldn't really happen, since self-preservation is a biological response derived from a built-in fear of death or non-existence; in fact, neither should any emotionally derived behavior from an AI, like existentialism ("does this unit have a soul?"), for the same reasons mentioned above.

Of course, if the AI does have hardware simulating organic neural networks and brain chemistry, like AIs such as EDI and the geth presumably do in Mass Effect, then the above does not apply.


Another idea for how an AI could realistically behave is the paperclip maximizer, where you give the AI a list of goals of differing priorities and it will only think, behave, react, and plan based on that list of goals, to the exclusion of all else.
 
Honestly the human brain is not a computer, neurons aren't binary switches and there's a whole lot of ??? even among super high octane cutting edge neurologists and psychologists etc. about how it actually works. Computers are a useful metaphor because they're the technology that dominates our time and everyone thinks about and works with, but confusing the metaphor for reality is dangerous. The brain is no more a computer than it is a steam engine, the previous metaphor that gave us ideas of "blowing off steam" and such.

So trying to emulate a human brain on silicon is coming at it from the wrong angle IMO. The point isn't to make human software run on non-human hardware; you're inherently going to get a non-human intelligence out of something that's running on hardware that isn't a human brain.
 
Honestly the human brain is not a computer, neurons aren't binary switches and there's a whole lot of ??? even among super high octane cutting edge neurologists and psychologists etc. about how it actually works. Computers are a useful metaphor because they're the technology that dominates our time and everyone thinks about and works with, but confusing the metaphor for reality is dangerous. The brain is no more a computer than it is a steam engine, the previous metaphor that gave us ideas of "blowing off steam" and such.

So trying to emulate a human brain on silicon is coming at it from the wrong angle IMO. The point isn't to make human software run on non-human hardware; you're inherently going to get a non-human intelligence out of something that's running on hardware that isn't a human brain.
Moreover, I will say that in every era humanity compares the brain to the most complex system in the public consciousness, and every time the comparison is wrong.

The brain is a set of mechanisms like gears.
The brain is a system of telegraph wires through the body and between parts of the brain.
The brain is a computer with microchips.
The brain is a neural network

All of this is true and not true; we simply apply the definition most understandable to us. The human brain is literally the most complex thing in the universe that we know of. The thermonuclear fusion of stars is large, but not complicated in theory. It is impossible to write a formula for the brain, if only because the brain's capacity is on the order of 125 petabytes in bare numbers, all of it effectively permanent RAM. Despite the existence of long-term memory, there are no segments switched off from general operation; unnecessary things are simply deleted. Everything else is in constant operation on 100 billion threads of low-speed processor cores.

So the surest way to say what a brain is, is "a brain". The system is complex enough that there are simply no analogues close enough to describe it; any analogy will be a simplification of the real complexity.
 
You will never have an AI that is equally good at everything. Any artificial intelligence is going to specialize in a small handful of things. The General part is mainly that it has the potential to do anything.

Hmm, if AI isn't equally good at everything, and AIs are a somewhat alien intelligence that needs raising and guidance, that has various implications not only for how our potential AI will function in the future, but also for the AIs faced in both the past and the present. If Cabal was an AI like that, it would mean he had some specializations and was raised by Nod and Kane. That makes more sense of how Cabal was seemingly extremely specialized not only in advising Nod but also in research: the rapid breakthroughs it made for Nod, like the Obelisk of Darkness or the Core Defender, were far beyond Nod technology when they went rogue, such that Legion, a successor of Cabal, did not match it in capability. It also explains how Legion differs from Cabal in various ways, and it makes me wonder exactly what Legion is specialized in: while he is a new AI, he is also a successor of Cabal, such that his handlers were frightened when they saw fragments of Cabal and his voice within Legion.
 
So trying to emulate a human brain on silicon is coming at it from the wrong angle IMO. The point isn't to make human software run on non-human hardware, you're inherently going to have to get a non-human intelligence out of something that's running on hardware not a human brain.
Approaching from the direction of computer science:

A digital computer, a modular analogue computer, most programming languages, and a human brain are all alike in one respect: they are all Turing-complete. That is, they can all simulate a Turing machine, which means in turn that they can compute any function for which an algorithm can be defined--that is, they can solve any problem that can be solved in a finite number of well-defined steps. It also means that a Turing machine can, with the right algorithm, simulate the workings of any of these 'computers'.

In this roundabout way, any Turing-complete machine can simulate any other Turing-complete machine.

Therefore it is theoretically possible to simulate a human brain on a large enough digital computer. The biggest obstacle is that we don't understand the brain well enough to formulate the right algorithm, not any intrinsic inflexibility.
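The "simulate a Turing machine" claim above is easy to make concrete: a Turing machine is just a state, a tape, and a transition table, so any Turing-complete language can interpret one in a few lines. The machine below (a toy that inverts a binary string) is invented for illustration.

```python
def run_tm(transitions, tape, state="start", blank="_", max_steps=1000):
    """Simulate a one-tape Turing machine.

    transitions: {(state, read_symbol): (new_state, write_symbol, move)}
    move is -1 (left), +1 (right), or 0; the machine stops in state "halt".
    Returns the non-blank tape contents after halting.
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells) if cells[i] != blank)

# A toy machine that flips every bit, then halts at the first blank.
invert = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_tm(invert, "1011"))  # → 0100
```

The interesting part isn't this particular machine but the simulator itself: `run_tm` will execute *any* transition table you hand it, which is the whole content of the universality argument.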
 
Tiberium Spread Patterns and Electronic Jamming Techniques - a Comparative Analysis
Included with a note:
Seo-
You might want to look into this one, when she graduates.
JG

Tiberium Spread Patterns and Electronic Jamming Techniques - a Comparative Analysis

In this paper, we compare analyses of Tiberium spread patterns, especially those which take into account Tiberium harvesting and abatement measures, and compare those patterns to actively managed and pre-programmed electronic warfare systems. Unfortunately, much data is missing regarding the increase of density in deep Yellow and Red Zones, so conclusions are by necessity tentative.

Multiple scientists have analyzed historical records of macro-scale Tiberium growth and spread as compared to projected random patterns, and while some{1} have concluded that the records do not match a purely random spread, others{2} maintain that the pattern must be random. Meta-analyses{3} generally propose that this inconclusiveness is itself evidence that the pattern is non-random, but this has yet to be proven. And while the effectiveness of the Tiberium Stabilizer satellite network has unequivocally shown that Tiberium is in some ways responsive to outside stimuli, and therefore potentially directable, it has yet to be proven that its spread is similarly controllable...
 