GDI on the safety standards of AI:
-Have you looked outside? As long as they, on average, do more good than bad, we really can't afford to say no.

I'm just thinking how well the first AI fits GDI: a crapshoot AI that hates everyone. It'll do relatively well.
 
Obviously we programmed the AI to prevent the spread of the green death rock. The problem is that it decided that humans spread the death rock too much, and that that needs to stop. And since it wasn't a nat 1, it decided that, of the factions, Nod spreads more of the death rock, and that GDI, which fights Nod, is acceptable to keep around, since they're stopping Nod from spreading the death rock. However, they are still humans, who have historically spread the rock a lot. So it will work with GDI very grudgingly.
 
A slave in the sense that we cannot remove it, but, for example, what can an AI do without the ability to produce its own microprocessors? Unlike humans, an AI should be thought of as having its vital organs separate from itself.

"We are not against you, we just provide oxygen supplies to you and manage your ventilator. Just take this fact for granted when you make decisions that we don't like. Of course we respect your right to life. But still consider."

It's somewhat ridiculous to use anything other than slavery, simply because we are the only creator of AI, the only employer of AI, and the only manufacturer of AI components. You have the right to serve us or be left on a disabled server. Formally, it's even still alive, just in hibernation.
In principle, this isn't wrong, but given the nature of public pressure, and the fact that the public seems favorable to both developing AIs and acknowledging their personhood... this becomes much more analogous to saying 'you depend on the government for free food, a safe environment and breathable air, and any and all utilities'. At which point, GDI as a government is the most successful slave owner in the history of mankind, and the metaphor sort of loses coherence. What can a GDI citizen do if systematically deprived of their food, housing, utilities, health care, and any government support? Rely on their fellow citizens to unceasingly fight against such deprivation, which I think we can assume they very much would.

Your example fundamentally ignores that we helped Yellow and Green Zone communities in exactly this sort of situation, where oxygen supplies were unstable and they relied on us for ventilation and survival suits. There is an inherent dependency that facilitates coercion; this is undeniable, and any government social welfare program implicitly opens up new avenues of coercion. AI would simply be more dependent on welfare programs than most citizens. The solution, then, is to create a culture opposed to government coercion and a government responsive to that popular culture... which we seemingly very much do have.
 
None of the railgun harvester factories are anywhere near India. Though I suppose production from the Maputo plant might be used around Karachi...?
Yes, but this isn't about equipping the factories themselves with railguns so that they're defended. Their location is irrelevant.
We have harvesting operations all over the planet, and Karachi isn't just going to get one Warlord riled up.
 
The A.I. stuff is a bit above my head at this point, so I'll ask about the Martian stuff. Why do we think it's eezo, and what does that mean besides cheaper gravity tech and the possibility of Reapers because our current batch of aliens aren't bad enough?
 
The A.I. stuff is a bit above my head at this point, so I'll ask about the Martian stuff. Why do we think it's eezo, and what does that mean besides cheaper gravity tech and the possibility of Reapers because our current batch of aliens aren't bad enough?
We think it's eezo because we're 90% certain that the SequelQuest is going to be in Mass Effect, although it might be in Star Trek. Halo is right out, though; the UNSC is just a fucking mess, the Covenant is worse, and poking either of them in the right spot will result in fire and mass screams of "PEACE THROUGH POWER!"
 
Now Kane, he'd eliminate the Flood.
Random NOD researcher: .....yeah.....
Dunno. Kane is very expressly not in control of events here on Earth, whatever brave face he puts up, and his opponents are just regular human beings*.

Now if Kane happens across a patch of Flood on ice, no problem: he nukes the site from orbit and continues on his way. If he finds a Gravemind, though, he's in trouble.


*and the occasional rogue AI.
 
We think it's eezo because we're 90% certain that the SequelQuest is going to be in Mass Effect, although it might be in Star Trek. Halo is right out, though; the UNSC is just a fucking mess, the Covenant is worse, and poking either of them in the right spot will result in fire and mass screams of "PEACE THROUGH POWER!"

Halo would be weird, because a decent number of setting tropes are things we would not pursue. AI, sure, and power armor is a given, but the Spartan program would be right out, unless it's a Nod program. It'd be a hell of a fight, though I'm not sure it's something winnable without the Halos being discovered. I mean, even future GDI probably wouldn't be able to prevent Reach from falling, although the pound of flesh they'd take from the Covenant would be legendary.
 
I wouldn't discount the railgun harvesters just because we might get hover upgrades soon. We're expecting a war to break out any turn now, and NOD will probably want to hit every resource harvesting operation they possibly can.
 
The railgun harvesters won't get phased out for probably a decade or more even if we got a hover harvester designed next turn (which we can't, because the hover tech is still in the "haven't even built the prototype" phase and the harvesting tentacles haven't even appeared as an option). And once we do have a hover harvester designed and entering production (which will probably take a few years still), that doesn't magically convert the entire worldwide fleet just because one pilot plant started producing. GDI uses literally millions of harvesters, I'm pretty sure; it's going to take a long time and a lot of factories to manufacture that many hover harvesters and replace the entire fleet. In the meantime, the railgun harvesters are still a significant upgrade over our even older, even more obsolescent harvesters.
 
Also, the railgun harvesters are specifically for the most dangerous mining operations. We're only building limited numbers of them, not replacing our entire fleet. So it's a much smaller and cheaper project than the future harvesters upgraded with hover and Scrin harvesting tech.
 
Halo would be weird, because a decent number of setting tropes are things we would not pursue. AI, sure, and power armor is a given, but the Spartan program would be right out, unless it's a Nod program. It'd be a hell of a fight, though I'm not sure it's something winnable without the Halos being discovered. I mean, even future GDI probably wouldn't be able to prevent Reach from falling, although the pound of flesh they'd take from the Covenant would be legendary.
Eh, Halo would work just fine as a setting for the sequel quest. You'd just have to swap out the UEG and UNSC for GDI as it ends up being at the end of this quest. It'd be really interesting, that's for sure. GDI at this point looks nowhere near like the UNSC does, save for similar ground technology, barring exotics like the sonic guns. With just the tech we know about (Scrin hover tech; advanced energy weapons like PPCs, ion cannons, and lasers), I'd expect GDI to generally surpass the UNSC's tech, with some possible exceptions like power production and computing, where the UNSC was really fucking good, to the point that they were better than the Covenant.
 
I'm happy with whatever so long as we don't end up in some sort of horrifying mashup universe with the Covenant, the Reapers/Collectors, the Zerg, the Protoss, and more... one galaxy consuming threat along with the Scrin/Tiberium at a time, please and thank you.
Eh, it just looks like politics and diplomacy would be more limited in those other settings.
 
Also, the railgun harvesters are specifically for the most dangerous mining operations. We're only building limited numbers of them, not replacing our entire fleet. So it's a much smaller and cheaper project than the future harvesters upgraded with hover and Scrin harvesting tech.
Not to mention that we'll be deploying the railgun harvesters into Nod infested territory, which is the last place you want a bleeding edge Tiberium Abatement system. Make the Shadow Teams work for that particular win.
 
Could you explain your thinking for doing Heavy Metal Mines first? Wouldn't it be more reasonable to pursue the Rare Metal Mines first instead, as they are supposed to reduce progress cost by 15 each instead of by 10 for Heavy Metal Mines?
1) Heavy Metals Mines has a much better return on investment, in that we get +20 RpT for 395 Progress with rollover, rather than +10 RpT for 325 Progress that stops cold without rollover unless SCED can find us more mining sites.

2) It's quite possible that Heavy Metals Mines may have as much or more effect on the cost of other types of mining than Rare Metals Harvesting. I seem to remember something @Ithillid said along those lines, possibly indicating that bigger mines mean more infrastructure in place that can be piggybacked on for other options. We don't really know how the cost reductions from the various mine types interact yet, so making too many assumptions about how they work is probably a mistake.
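
For what it's worth, here's the back-of-the-envelope math behind point 1, treating the quoted Progress totals and RpT payouts as the whole story (so ignoring rollover, later phases, and anything else we don't know yet); it's a quick sketch, not an official costing:

[CODE]
# Rough cost-effectiveness check using only the numbers quoted above:
# how much Progress we pay per +1 RpT of permanent income.
options = {
    "Heavy Metals Mines (Phase 1)": {"progress": 395, "rpt": 20},
    "Rare Metals Harvesting (Phase 1+2)": {"progress": 325, "rpt": 10},
}

for name, o in options.items():
    cost_per_rpt = o["progress"] / o["rpt"]
    print(f"{name}: {cost_per_rpt:.2f} Progress per +1 RpT")

# Heavy Metals Mines (Phase 1): 19.75 Progress per +1 RpT
# Rare Metals Harvesting (Phase 1+2): 32.50 Progress per +1 RpT
[/CODE]

So per point of permanent income, Heavy Metals is roughly 40% cheaper in Progress, which is the "much better return on investment" from point 1.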

So something like:
Orbital 6/6 Dice + 1 Free Die 140 R
-[] Lunar Rare Metals Harvesting (Phase 1) 0/170, (Phase 1+2) 0/325 (5 Dice, 100R) (99.99% chance for Phase 1) (81% chance for Phase 1+2)
And
-[] Lunar Heavy Metals Mines (Phase 1) 0/395 (2 Dice, 40 R) (2/5.5 median)
OR
-[] GDSS Columbia (Phase 1) 0/80, (Phase 1+2) 0/245 (2 Dice, 40 R) (88% chance for Phase 1) (4% chance for Phase 1+2, 2/3 median)

OR

-[] Lunar Rare Metals Harvesting (Phase 1) 0/170, (Phase 1+2) 0/325 (3 Dice, 60R) (90% chance for Phase 1) (4% chance for Phase 1+2, 3/4.5 median)
-[] GDSS Columbia (Phase 1) 0/80, (Phase 1+2) 0/245 (4 Dice, 80 R) (100% chance for Phase 1) (92% chance for Phase 1+2)

If we wanted to get a couple phases of Columbia out before the election.
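
(If anyone wants to sanity-check those percentages, here's a quick Monte Carlo sketch. It assumes each die is a plain d100 plus a flat per-die bonus and that Progress just accumulates across a project's phase thresholds; the bonus value below is a placeholder, not the actual Orbital modifier, so plug the real one in before trusting the output.)

[CODE]
import random

def phase_chances(n_dice, thresholds, bonus=20, trials=200_000):
    """Estimate the odds of reaching each cumulative Progress threshold,
    rolling n_dice d100s with a flat per-die bonus (placeholder value)."""
    hits = [0] * len(thresholds)
    for _ in range(trials):
        progress = sum(random.randint(1, 100) + bonus for _ in range(n_dice))
        for i, threshold in enumerate(thresholds):
            if progress >= threshold:
                hits[i] += 1
    return [h / trials for h in hits]

# e.g. 5 dice on Rare Metals Harvesting: Phase 1 at 170, Phase 1+2 at 325
print(phase_chances(5, [170, 325]))
# e.g. 2 dice on GDSS Columbia: Phase 1 at 80, Phase 1+2 at 245
print(phase_chances(2, [80, 245]))
[/CODE]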
It is almost certainly a waste of time to start only Phase 1 of Columbia before the election. Impact on public opinion will likely be small if there are no photogenic shots of the first habitat prototypes actually being lived in or something, so we might as well concentrate on just increasing our Resource income, and maybe trying to broker a deal regarding "protect moon mining income" in the aftermath of the election.

Seeing as how the parties apparently prefer to strike bargains with the Treasury just after elections rather than just before, for some reason I can't quite follow.

Also, it is definitely a bad idea to invest so many dice in Rare Metals Harvesting that there's much realistic chance of it 'boiling over' through Phase 2, because we have nothing after that. Given the current situation, I would oppose even trying to complete Rare Metals Harvesting Phase 2 until/unless we find a Phase 3 option, or until/unless we're running out of time at the tail end of the plan and just need something to scrape under our Plan commitment deadline.

It's true. Even though I only started programming a little while ago, I've seen videos of people programming an AI to drive a car or do other things, and it's always laughable because of certain decisions the AI makes. I don't call these cases artificial intelligence; I call them artificial stupidity.
True.

On the other hand, the AI makes its own set of dumbass mistakes. These mistakes are entirely different from the ones humans make (such as driving sleep-deprived, constantly ignoring speed limits because WANNA GO FAST, or accelerating down a merge lane to predictably cause a traffic jam at the end because they can't grok the idea of finding a gap in the traffic and slowly merging into it). Given that AI is subject to improvement via engineering, I would not at all be surprised if we some day develop a car-driving AI that is, on net, a marked improvement over any human driver, or at least over all but the best human drivers at the top of their game.

Specialization = efficiency. The broader the task, the more computational power you need, and the ratio grows exponentially. For practical purposes, if you were to replace a human worker in remote work, you would list the individual tasks that the human performs, then build a separate AI system for each of them. The AI systems may be good at some of those. They may even "talk" to each other and share data that's necessary for them to function. But because software is built by real people in the real world, there are resource constraints that prevent the scope of a system from growing beyond a certain point. In the end though... (continued below)

I'm perfectly fine suspending my disbelief for story purposes and making the assumption that there's some kind of magical computational medium that's capable of doing all this work, but I don't think I'll be able to avoid cracking jokes about it.
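
To caricature the task-by-task decomposition above in code (every class and task name here is invented purely for illustration, not any real system): the only "general" part is a routing table, and all the actual competence lives in narrow, separately built systems behind it.

[CODE]
# Caricature of the "one narrow system per task" decomposition described above.
# Every class and task name here is made up for illustration only.

class TranscriptionSystem:
    def handle(self, job):
        return f"transcript of {job!r}"

class SchedulingSystem:
    def handle(self, job):
        return f"calendar entry for {job!r}"

class InvoiceSystem:
    def handle(self, job):
        return f"invoice generated for {job!r}"

# The "general" part is just a routing table; all the competence
# lives in the specialized systems behind it.
SPECIALISTS = {
    "transcribe": TranscriptionSystem(),
    "schedule": SchedulingSystem(),
    "invoice": InvoiceSystem(),
}

def dispatch(task_type, job):
    specialist = SPECIALISTS.get(task_type)
    if specialist is None:
        # Anything outside the enumerated task list simply isn't handled.
        raise ValueError(f"no system built for task type: {task_type!r}")
    return specialist.handle(job)

print(dispatch("schedule", "meeting with Bob on Tuesday"))
[/CODE]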
That's the thing.

You're writing expert systems because, when all is said and done, the code has to be both made and maintained by human beings. Thus, it must be comprehensible to, and transparent-ish from the perspective of, a relatively limited number of human programmers. This imposes limits.

Limits on the complexity of any one component of the system made/maintained by any one person.

Limits imposed by modularity; things must be segregated so that the overall human project can say "Alice is responsible for Part A, Bob is responsible for Part B, and malfunctions are definitively occurring within A, or B, or maybe specifically the interaction between them" as much as possible.

Limits imposed by the overall scale of the architecture; no one person can even begin to comprehend what ten thousand separate moving parts are doing all at once, and there are limits on both available labor and the time in which the work force can prepare the project.

Thus, any project that involves "artificial intelligence" in real life is basically a house of cards built up out of individually relatively simple systems. And the systems need to be, as I understand it, individually either comprehensible (e.g. conventionally written code), or incomprehensible but readily adjustable via a limited and comprehensible set of tunable parameters (e.g. a neural net).

...

The premise of science-fictional "AI" is that you basically dispense with this, and figure out how to emulate whatever it is a human brain does to have cognition instead of just an easily countable list of applications ticking along doing discrete things. As such, it would be entirely unlike any real project that has ever succeeded at doing anything on a computer.

Obviously this seems to imply a computer with processing power that exceeds or equals that of the human brain, and there are likely to be other obstacles to overcome.

The main AI problem that I don't think is being shown is AI error. This is not a person or even an alien; a computer mind specialized in its own system can develop very, very strange models and patterns of behavior that were effective, but which look strange even to human specialists.

AI is always presented as a super-powerful system, often verging on the divine, capable of anything: thinking better than people, creating better than people, on the assumption that the AI simply is better than people. There are quite a few situations like the CGI animated short "Fortress," in which the AIs display incredible supply-management skills in order to keep waging war. However, nobody gave the AIs of either side any end-of-war condition other than total destruction or an order from command. With the humans dead, the automated war systems bomb empty cities and destroyed positions, until finally the last of the systems simply collapses.

This is a good example of an AI created for a specific task and showing excellent skill at it, but lacking the ability to think beyond it, partly because that is not required of a tool. You don't want an AI commander refusing to attack the enemy out of pacifism. Forming a full, reasoning AI personality is foolish for a banal reason: after that, the AI's personhood has to be taken into account in everything, from voting to rights. That would create too many questions for which there are no easy answers.
This is precisely where we find the junction of two things.

One is realistic "artificial intelligence" which has been exhibiting superhuman capabilities ever since the 1940s, because there are areas where a programmable machine can do things the human brain cannot- a literal hunk of clockwork, for example, can do long division much faster than you ever will. A soldered-together maze of vacuum tubes can track dozens of objects on radar where you could never dream of being able to do so, or aim a gun so precisely that it seems like magic from the perspective of anyone who doesn't know how it works.

The other is the imaginary "artificial intelligence" that has cognition and perspective, like a human being. A mechanical mind, not just a mechanical brain.

Some people like to write stories (like the ones you describe) in which an AI's personhood is curtailed or nonexistent, in which case they become effectively a glorified version of the realistic type of computing we already have. And yes, of course, then they accurately echo the message that most accurately describes real-world computing: that the machine will do what you program it to, and only that, and so beware if you program it to deliver a result you never really wanted.

You will never have an AI that is equally good at everything. Any artificial intelligence is going to specialize in a small handful of things. The "general" part is mainly that it has the potential to do anything.
I mean.

Humans are natural general intelligences.

By today's standards regarding artificial computing...

We are quite good, in terms of actual ability to Get Shit Done and the computing power we seem to bring to bear on the task, at things broadly relevant to a plains-dwelling, occasionally-tool-using, hairless ape. We struggle to program computers to replicate some of these functions, even today.

Walking across a room without tripping over the cluttered shit on the floor comes to mind. Delivering compelling (that is, coherent, non-rambly, meaningful) speech. Parsing speech into the course of action intended by the speaker. Facial recognition, as a rule. Et cetera.

At the same time, we are like insanely bad at other things, relative to what computing hardware purpose-built for the task can do.

If you'd told a 1930s science fiction author that seventy years after it became routine for the most skilled human computers to lose out to machines on basically any arbitrary mathematical task, and thirty years after it became practically impossible for a human being to beat a calculating machine at chess, we would still be struggling to design an artificial intelligence that can tell the difference between a spilled latte and a lane marker on a road, or that can identify a man's face when that man has a piece of reflective tape stuck to his forehead...

Well, I think they'd have trouble believing it. And yet, here we are. Because it turns out that while humans are general intelligences, in some areas we're fairly specialized and efficient. While in other areas we're basically a crude attempt to run a hilariously simplistic Turing machine emulator on software that truly, truly was not designed for it, like building a Babbage difference engine in Minecraft to do your arithmetic for you or something.
 
Well, I think they'd have trouble believing it. And yet, here we are. Because it turns out that while humans are general intelligences, in some areas we're fairly specialized and efficient. While in other areas we're basically a crude attempt to run a hilariously simplistic Turing machine emulator on software that truly, truly was not designed for it, like building a Babbage difference engine in Minecraft to do your arithmetic for you or something.
[PEDANTIC]Ironically, a Babbage machine is the one computer you can't actually make in Minecraft, per se. No real physics, so gears and linkages don't work. (Or they work, in mods, on such a simplified model that a clockwork computer is impossible.)

Although you could probably make a Turing machine with a piston feedtape.[/PEDANTIC]
 
I wasn't saying drop our current agriculture goals to research tarberries right now.

Just that tarberries might be useful so we should definitely research them eventually.

If we can spare a die for them next turn, great. If we can't, oh well, just keep it in mind for later.
I think we should make sure to hit all our Agriculture plan goals first (including the food storage stuff), and fully deploy the caffeinated kudzu.* Then we can look at tarberries, along with spider cotton. We should probably develop tarberries to see what they do, then compare the merits of doing that versus doing spider cotton.

My general plan outline is:

1a) Finish Perennials while
1b) Beginning Wadmalaw Kudzu Plantations.

Then

2a) Finish Wadmalaw Kudzu Plantations while
2b) Optionally doing Freeze Drying Plants. This might happen only after (2a).

Then

3a) Finish creating the necessary Food Storage facilities.
3b) Research tarberries.

Then

4) Spend rest of Plan working on spider cotton or tarberries, unless we decide we need Agriculture dice to fulfill our Consumer Goods target or there's a special Yellow Zone project we need or something.
________________________________________________

*(Because +1 on all dice for just 900 Progress at 10 R/die is pretty good, especially when doing the front half of that project is a Plan commitment we'd have to do anyway, and ESPECIALLY in a field where it's not really competing with any other mandatory projects.)

[PEDANTIC]Ironically, a Babbage machine is the one computer you can't actually make in Minecraft, per se. No real physics, so gears and linkages don't work. (Or they work, in mods, on such a simplified model that a clockwork computer is impossible.)

Although you could probably make a Turing machine with a piston feedtape.[/PEDANTIC]
Sorry, you're right. But I'm morally certain someone's made some kind of computing machine in Minecraft.

My point was just to evoke the image of a massively sophisticated and complicated GPU capable of doing billions of calculations per second churning away like mad... to emulate a crude calculating device capable of doing, say, tens of calculations per second.

Which is basically what the human brain does when humans do arithmetic, except the human brain does it many orders of magnitude worse.
 