Could you explain your thinking for doing Heavy Metal Mines first? Wouldn't it be more reasonable to pursue the Rare Metal Mines first instead, as they are supposed to reduce progress cost by 15 each instead of by 10 for Heavy Metal Mines?
1) Heavy Metals Mines has a much better return on investment: we get +20 RpT for 395 Progress with rollover, rather than +10 RpT for 325 Progress that stops cold without rollover unless SCED can find us more mining sites (back-of-the-envelope math below).
2) It's quite possible that Heavy Metals Mines has as much effect on the cost of other types of mining as Rare Metals Harvesting does, or more. I seem to remember something @Ithillid said along those lines, possibly indicating that bigger mines mean more infrastructure in place that can be piggybacked on for other options. We don't really know yet how the cost reductions from the various mine types interact, so making too many assumptions about how they work is probably a mistake.
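For point 1, the back-of-the-envelope math, using the numbers straight from the option text:

```python
# RpT gained per point of Progress invested, using the listed costs.
heavy_metals = 20 / 395   # Heavy Metals Mines:     ~0.051 RpT per Progress
rare_metals = 10 / 325    # Rare Metals Harvesting: ~0.031 RpT per Progress
print(heavy_metals / rare_metals)  # ~1.65x better return for Heavy Metals
```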
So something like:
Orbital 6/6 Dice + 1 Free Die 140 R
-[] Lunar Rare Metals Harvesting (Phase 1) 0/170, (Phase 1+2) 0/325 (5 Dice, 100 R) (99.99% chance for Phase 1) (81% chance for Phase 1+2)
And
-[] Lunar Heavy Metals Mines (Phase 1) 0/395 (2 Dice, 40 R) (2/5.5 median)
OR
-[] GDSS Columbia (Phase 1) 0/80, (Phase 1+2) 0/245 (2 Dice, 40 R) (88% chance for Phase 1) (4% chance for Phase 1+2, 2/3 median)
OR
-[] Lunar Rare Metals Harvesting (Phase 1) 0/170, (Phase 1+2) 0/325 (3 Dice, 60 R) (90% chance for Phase 1) (4% chance for Phase 1+2, 3/4.5 median)
-[] GDSS Columbia (Phase 1) 0/80, (Phase 1+2) 0/245 (4 Dice, 40 R) (100% chance for Phase 1) (92% chance for Phase 1+2)
If we wanted to get a couple phases of Columbia out before the election.
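If anyone wants to sanity-check those percentages, here's a quick Monte Carlo sketch. I'm assuming the usual d100 per die; the +25 bonus in the example is just a placeholder for whatever our actual Orbital modifier is, so adjust it before trusting the output:

```python
import random

def phase_chance(dice, bonus, target, trials=100_000):
    """Estimate the odds that `dice` d100s (each + `bonus`) total at least `target` Progress."""
    hits = sum(
        sum(random.randint(1, 100) + bonus for _ in range(dice)) >= target
        for _ in range(trials)
    )
    return hits / trials

# e.g. 3 dice against Rare Metals Phase 1 (170) and Phase 1+2 (325):
print(phase_chance(3, 25, 170))  # ~0.87 with the placeholder +25 bonus
print(phase_chance(3, 25, 325))  # ~0.02 with the placeholder bonus
```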
It is almost certainly a waste of time to start only Phase 1 of Columbia before the election. The impact on public opinion will likely be small if there are no photogenic shots of the first habitat prototypes actually being lived in or something, so we might as well concentrate on just increasing our Resource income, and maybe try to broker a deal regarding "protect moon mining income" in the aftermath of the election, seeing as how the parties apparently prefer to strike bargains with the Treasury just after elections rather than just before, for some reason I can't quite follow.
Also, it is definitely a bad idea to invest so many dice in Rare Metals Harvesting that there's any realistic chance of it 'boiling over' through Phase 2, because we have nothing after that. Given the current situation, I would oppose even trying to complete Rare Metals Harvesting Phase 2 until/unless we find a Phase 3 option, or until/unless we're running out of time at the tail end of the plan and just need something to scrape under our Plan commitment deadline.
It's true. Even though I only started programming a little while ago, I've seen videos of people programming an AI to drive a car or do other things, and it's always laughable because of certain decisions the AI makes. I don't call these cases artificial intelligence; I call them artificial stupidity.
True.
On the other hand, the AI makes its own set of dumbass mistakes. These mistakes are entirely different from the ones humans make (such as driving sleep-deprived, constantly ignoring speed limits because WANNA GO FAST, or accelerating down a merge lane to predictably cause a traffic jam at the end because they can't grok the idea of finding a gap in the traffic and slowly merging into it). Given that AI is subject to improvement via engineering, I would not at all be surprised if we someday develop a car-driving AI that is, on net, a marked improvement over any human driver, or at least over any but the best human drivers at the top of their game.
Specialization = efficiency. The broader the task, the more computational power you need, and the ratio grows exponentially. For practical purposes, if you were to replace a human worker in remote work, you would list the individual tasks that the human performs, then build a separate AI system for each of them. The AI systems may be good at some of those. They may even "talk" to each other and share data that's necessary for them to function. But because software is built by real people in the real world, there are resource constraints that prevent the scope of a system from growing beyond a certain point. In the end though... (continued below)
I'm perfectly fine suspending my disbelief for story purposes and making the assumption that there's some kind of magical computational medium that's capable of doing all this work, but I don't think I'll be able to avoid cracking jokes about it.
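To make that "separate system per task" picture concrete, here's a toy sketch; every class name is invented, and each stand-in body would be a real, separately built and maintained model in practice:

```python
# Toy sketch of the 'one narrow AI system per task' pattern described above.
# All names are invented for illustration; each run() body stands in for a
# real model that a team would have to build and maintain.

class Transcriber:
    def run(self, audio):
        return "transcript of the meeting ..."  # stand-in for speech-to-text

class Summarizer:
    def run(self, text):
        return text[:30] + "..."                # stand-in for summarisation

class ActionExtractor:
    def run(self, summary):
        return ["schedule follow-up call"]      # stand-in for item extraction

# The systems 'talk' only by handing data down the line; each one's scope is
# capped by what its builders could afford to develop and maintain.
data = b"raw audio bytes"
for stage in (Transcriber(), Summarizer(), ActionExtractor()):
    data = stage.run(data)
print(data)  # ['schedule follow-up call']
```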
That's the thing.
You're writing expert systems because, when all is said and done, the code has to be both made and maintained by human beings. Thus, it must be comprehensible, and at least somewhat transparent, to a relatively limited number of human programmers. This imposes limits.
Limits on the complexity of any one component of the system made/maintained by any one person.
Limits imposed by modularity; things must be segregated so that the overall human project can say "Alice is responsible for Part A, Bob is responsible for Part B, and malfunctions are definitively occurring within A, or B, or specifically within the interaction between them" as much as possible.
Limits imposed by the overall scale of the architecture; no one person can even begin to comprehend what ten thousand separate moving parts are doing all at once, and there are limits on both available labor and the time in which the work force can prepare the project.
Thus, any project that involves "artificial intelligence" in real life is basically a house of cards built up out of individually relatively simple systems. And each of those systems needs to be, as I understand it, either comprehensible (e.g. conventionally written code), or incomprehensible but readily adjustable via a limited and comprehensible set of tunable parameters (e.g. a neural net).
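As a toy illustration of that modularity point (all names and numbers invented), the payoff is a seam you can instrument, so a malfunction can be pinned on Alice's part, Bob's part, or the contract between them:

```python
from dataclasses import dataclass

@dataclass
class LaneEstimate:
    offset_m: float    # lateral offset from lane centre, in metres
    confidence: float  # 0.0 - 1.0

def part_a(sensor_reading: float) -> LaneEstimate:
    """Alice's module: turn a raw sensor reading into a lane estimate."""
    return LaneEstimate(offset_m=sensor_reading * 0.01, confidence=0.9)

def part_b(estimate: LaneEstimate) -> float:
    """Bob's module: turn a lane estimate into a steering command (radians)."""
    if estimate.confidence < 0.5:
        return 0.0  # fail safe: hold course when the estimate is untrustworthy
    return -0.2 * estimate.offset_m

# If the car steers wrong, log the LaneEstimate at the seam and you can tell
# whether the bug lives in part A, in part B, or in their shared contract.
print(part_b(part_a(42.0)))  # ~ -0.084
```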
...
The premise of science-fictional "AI" is that you basically dispense with all of this and figure out how to emulate whatever it is a human brain does to have cognition, instead of just running an easily countable list of applications ticking along doing discrete things. As such, it would be entirely unlike any real project that has ever succeeded at doing anything on a computer.
Obviously this seems to imply a computer with processing power that exceeds or equals that of the human brain, and there are likely to be other obstacles to overcome.
The main AI problem that I don't think is being shown is AI errors. An AI is not a person, or even an alien; a computer mind specialized in its own system can have very, very strange models and patterns of behavior, ones that proved effective but that look bizarre even to human specialists.
AI is always presented as a super-powerful system, often verging on the divine, capable of anything: thinking better than people, creating better than people. There are quite a few scenarios like the CGI animated short "Fortress," in which the AIs display incredible logistical skill in order to keep waging war. However, no one gave the AIs on either side any end-of-war condition other than total destruction or a direct order. With the people dead, the automated war systems bomb empty cities and ruined positions until, finally, the last of the systems simply breaks down.
This is a good example of an AI created for a specific task, showing excellent skill at it, but without the capacity to think beyond it, partly because that isn't required of a tool. You don't need an AI commander refusing to attack the enemy out of pacifism. Giving an AI a genuine personality is foolish for a banal reason: afterwards you have to take that personality into account in everything from voting to rights, and that creates too many questions with no easy answers.
This is precisely where we find the junction of two things.
One is realistic "artificial intelligence," which has been exhibiting superhuman capabilities ever since the 1940s, because there are areas where a programmable machine can do things the human brain cannot. A literal hunk of clockwork, for example, can do long division much faster than you ever will. A soldered-together maze of vacuum tubes can track dozens of objects on radar where you could never dream of being able to do so, or aim a gun so precisely that it seems like magic from the perspective of anyone who doesn't know how it works.
The other is the imaginary "artificial intelligence" that has cognition and perspective, like a human being. A mechanical mind, not just a mechanical brain.
Some people like to write stories (like the ones you describe) in which an AI's personhood is curtailed or nonexistent, in which case it becomes effectively a glorified version of the realistic type of computing we already have. And yes, such stories accurately echo the message that best describes real-world computing: the machine will do what you program it to do, and only that, so beware if you program it to deliver a result you never really wanted.
You will never have an AI that is equally good at everything. Any artificial intelligence is going to specialize in a small handful of things. The "general" part is mainly that it has the potential to do anything.
I mean.
Humans are natural general intelligences.
By today's standards regarding artificial computing...
We are quite good, in terms of both actual ability to Get Shit Done and the computing power we seem to bring to bear on the task, at things broadly relevant to a plains-dwelling, occasionally-tool-using, hairless ape. We struggle to program computers to replicate some of these functions, even today. Walking across a room without tripping over the cluttered shit on the floor comes to mind. Delivering compelling (that is, coherent, non-rambly, meaningful) speech. Parsing speech into the course of action intended by the speaker. Facial recognition, as a rule. Et cetera.
At the same time, we are, like, insanely bad at other things, relative to what computing hardware purpose-built for the task can do.
If you'd told a 1930s science fiction author that seventy years after it became routine for the most skilled human computers to lose out to machines on basically any arbitrary mathematical task, and thirty years after it became practically impossible for a human being to beat a calculating machine at chess, we would still be struggling to design an artificial intelligence that can tell the difference between a spilled latte and a lane marker on a road, or that can identify a man's face when that man has a piece of reflective tape stuck to his forehead...
Well, I think they'd have trouble believing it. And yet, here we are. Because it turns out that while humans are general intelligences, in some areas we're fairly specialized and efficient, while in other areas we're basically a crude attempt to run a hilariously simplistic Turing machine emulator on software that truly, truly was not designed for it, like building a Babbage difference engine in Minecraft to do your arithmetic for you or something.