Pretty sure the neural lace does exactly what you're saying it doesn't... and actually, some of the other tech is right over the top. Excalibur may not be the most efficient option, but it's certainly powerful and not typical.
It does? There was no indication of that even in the dreams, at least as far as I've read (ch 26). The way it was described was pretty explicitly 'it just exists alongside the brain, monitors it, and can do stuff to it', not a true extension. The most it seemed to do was modify the brain to download skills and knowledge, with the process itself outside of conscious control, and emulate the brain at a faster processing speed and then reintegrate the changes for bullet time, which is essentially the pseudo-upload technology from Gen-Lock but in a very narrow function. And this is the neural lace from the end of the bloody universe, too. You'd think that the neural lace taken to the absolute limits of the concept would integrate seamlessly as an extension of the mind, without the need for any interface or tricks like charges or thinking commands at it. And that it would mature much faster, and both suppress and take over the immune system locally, just for the brain, without any problems.
Honestly, it's a bit like that civilization just feared and restricted any tech that could lead to freely and casually making a brain copy, and collectively believed that continuity of consciousness isn't preserved over uploading and backups, with the backup being essentially a last-resort measure only and not for casual use. Which is a position that a lot of high-level thought experiments, novel approaches to identity like branching identity, our current (however limited) knowledge of how the brain works, and theoretical approaches to how consciousness could emerge from the brain's inherent complexity (hinting that it's the result of extensive metacognition allowing feedback loops) all seem to imply is an illusory and arbitrary barrier. One that fundamentally doesn't make sense, and could essentially be a cultural hangup based on how we used to think (and often still do) about souls, coming from the same root as the cultural hangup on body modification.
There's a reason I didn't say anything about Excalibur; that was more like what I expected. But that doesn't mean it couldn't be better, either. With room-temperature superconductors and this kind of computer science, coupled with literal end-of-the-universe knowledge, you could reasonably imitate Covenant plasma torpedoes in miniature, or make it into a toroidal plasma launcher alongside being a torch. It could be made into a multifunctional plasma wand.
The gun...yeah, the gun is a reasonable piece of kit but a terrible primary weapon. (Hypersonic small projectiles don't work, though. They get disintegrated by the atmosphere, unavoidably.)
Even when made from the likes of tungsten, nanoformed ceramics, or even denser high-entropy alloys? That said, it was more of an example.
Other examples I thought of include:
Shooting self-sustaining, magnetically held superconductive nanoparticle toroids coated with the toxin, to get a result similar to the skin-penetrating vaccine injector that shoots silver particles coated with whatever you want, which is a real tech.
Clearing the air via a carefully modulated laser discharge to create a very short-lived vacuum channel, followed by a small hypervelocity nanoparticle of gelled toxin that would dissolve very quickly in the organism.
Ditching the material carrier entirely and using things like nanoparticle poison capsules sustained in a cold-plasma toroid, in the core of a complex engineered directional sonic discharge with enough power to become a shockwave at the tip, or in a microscopic packet of particles arranged into a briefly self-sustaining structure by the arrangement of its charges.
Just exchanging the flechettes for a series of ultra-thin, monomolecularly tipped, tungsten-density needles as a carrier.
Some of the heavier weapons that Scientia often doesn't really use, like the space-based mass drivers, are in fact absurdly powerful, just not in remarkable ways.
That's just generic sci-fi tech that any interplanetary civilization worth the name would have access to. Yes, it would scale in power the further up the tech level you go, but it's not really a revolutionary idea.
As for Prometheus vs Dragon? Functionally similar Lego bricks were used to make both, only Dragon was made with perhaps a quarter of the bricks at best, while Prometheus has all of them, when it comes to computer code. For Dragon, that means she was created from a very limited selection of code combinations, refined over countless cycles into a highly potent yet highly limited blend that only worked because Richter figured out how to make it work despite those limitations. Whereas Prometheus was made with the know-how and exponentially better tools of a civilization that made it to the literal end of their universe and beyond.
You yourself said that shards take their knowledge from who knows how many civilizations, and from many different universes too. So it's hard to believe that Prometheus is so much better just because the tech was optimized for so long, when Dragon was presumably generated by shards using novel approaches from many different universes: solutions that came from physical systems and mentalities ranging from slightly different to fully disparate, which humans limited to just themselves, their AI, and their one universe with weirdly rigid FTL physics could never think of or stumble upon.
Yes, our feeble 2011 internet and computers are kind of pathetic in comparison... Or are they? The original Star Trek in the 60s had handheld communicators. Not even ten years later, we had the very first mobile phone in 1973, and ten years after that they were commercially available. The Next Generation gave us com badges, PADDs and replicators. Now we have Bluetooth devices in a dazzling array of shapes and sizes, tablet PCs, and as of 2021 we now have 3D printers capable of making edible meat.
What does that have to do with anything?
That part kind of makes sense though. Hard drives are just computers themselves nowadays.
Being a computer doesn't really imply being able to break the hardware from software. That said, some hard drives can be physically destroyed by a malicious controller. I don't know how rare that is; it's not a feature that normally matters.
Fun trivia: viruses that destroy hardware have actually existed. Maybe the most colorful example is the U.S. intelligence-designed virus that wrecked a bunch of Iranian uranium-refining centrifuges by telling them to do physically unsafe things. My favorite example, though, is one of the first batch-file viruses. Drivers foolishly gave any software that came along low-level access if it wanted it, and the virus simply told each line of the monitor to refresh over and over until it burned out before moving on to the next. The malware destroyed the display.
Making malware that destroys hardware isn't easy if the drivers are competently designed, but if someone can get low-level access, there are a lot of ways to potentially wreck hardware by making it do things it wasn't intended to do: overclock/overvolt it to destruction, tell the physically moving parts of a hard disk to do unfortunate things, etc.
Destroying hard drives with hacking is actually one of the few instances of Hollywood hacking that is true; hard drives have a number of safety functions that can be bypassed through defects in the operating system. This is a known issue that has cropped up in real life several times, and without designing entirely new hard drives with some kind of built-in protection (which AFAIK no one has ever bothered to do), the only solution is to make sure your OS has no exploitable defects.
I was under the impression that hardware-destroying attacks are very, VERY rare, because a low enough level of access to make hardware do self-destructive things is essentially impossible to achieve from OS access alone, barring a very specific hardware flaw (like with the centrifuges). And that in many cases the hardware simply can't perform an action extreme enough to kill itself, because there are hard mechanical safety features at the hardware level.
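Something like this toy sketch is what I mean by hard safety features at the hardware level (hypothetical firmware logic, not any real drive's interface): the controller simply refuses physically unsafe requests, so OS-level access alone can't push the mechanism past its limits.

```python
# Hypothetical illustration only: a firmware-style guard that refuses physically
# unsafe commands no matter what the OS-level software requests. The limits and
# interface are made up for the example, not modeled on any real drive.

SAFE_RPM_RANGE = (5_400, 7_200)       # assumed mechanical limits
SAFE_VOLTAGE_RANGE = (1.00, 1.35)     # assumed electrical limits, in volts

class DriveController:
    def __init__(self) -> None:
        self.rpm = 5_400
        self.voltage = 1.10

    def request_rpm(self, rpm: int) -> bool:
        # The guard lives below the OS: out-of-range requests are simply dropped,
        # so software alone can't spin the platters to destruction.
        if SAFE_RPM_RANGE[0] <= rpm <= SAFE_RPM_RANGE[1]:
            self.rpm = rpm
            return True
        return False

    def request_voltage(self, volts: float) -> bool:
        if SAFE_VOLTAGE_RANGE[0] <= volts <= SAFE_VOLTAGE_RANGE[1]:
            self.voltage = volts
            return True
        return False

controller = DriveController()
print(controller.request_rpm(7_200))     # True: within mechanical limits
print(controller.request_rpm(30_000))    # False: refused, hardware stays safe
print(controller.request_voltage(5.0))   # False: overvolt attempt rejected
```

The centrifuge case worked precisely because that guard layer either didn't exist or could itself be reprogrammed from the compromised host.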
I don't think those are the only things she runs on; remember that she started out as a home personal assistant for Richter.
Which IIRC required dedicated hardware that was either tinkertech or very specifically tailored for the needs of an AI of Dragon's caliber. The reason she had to transfer to a secondary facility owned by Richter upon the sinking of Newfoundland wasn't just because of some arbitrary limits he put on her.
Looks like the researchers in that particular paper used malware as a shortcut. TEMPEST side channel attacks don't require it, starting with Van Eck phreaking, and that's more or less what started public awareness of the problem.
Linky and Linky. See the first for some pretty cool examples of more modern attacks too.
Notice how the more modern approaches actually require planting transmitter malware on the air-gapped machine first, and how the original attack was known in 1985, with governments taking steps to prevent it back then. As well as how CRT monitors were one of the big vulnerabilities for that kind of attack. Not to mention I very much don't believe that no tinker or thinker ever tried this before. It wasn't some obscure attack method that only some future AI could discover and use, but a known and serious security concern.
FTL requires solving a series of problems to make it not just possible on paper, but practical. If your FTL drive requires a Jupiter mass of exotic matter, for example, that's an issue. There's also the open question of how to solve causality issues, which may well be flat-out impossible, but I handwave that one.
There is an interesting idea that the causality issues aren't actually issues and don't need to be solved; it's just our assumption that the universe wouldn't let them happen, because we assume they're a problem. But the universe doesn't exactly need to follow our monkey-brain expectations.
It's also worth keeping in mind, re: computers vs brains, that computers have had the processing advantage for a long, long time. The brain only seems to have an advantage because brains are both serial and massively parallel; where a computer performs one task at a time very quickly, brains perform multiple tasks simultaneously. But that limitation is very much one of design and programming, not capability.
Computing hardware has been able to match or exceed brains in every area except 'RAM equivalent' for decades now; brains are capable of storing truly ludicrous amounts of data thanks to using extremely sophisticated methods for reconstructing complex data, aka compression algorithms.
It is actually not implausible that, with the right knowledge and tools, it could be possible to create something that looks a lot like a strong AI on an early ~2000s era computer. It sounds implausible on the surface, but that is just because modern AI development is extremely 'heavy' as a consequence of the fact that nobody really knows how to program an AI, and so we cheat by basically using Evolution as a tool and just throwing massive amounts of processing power at learning algorithms until they randomly stumble upon something that looks like it might be heading in the right sort of direction.
The benefit of this approach is that it works even if you don't actually understand what you're doing, the downside is that it is about as inefficient as physically possible while still working in the end.
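A toy illustration of that 'throw processing power at learning algorithms and pray' approach, purely as a sketch: blind random mutation, keep whatever isn't worse, and burn an enormous number of tries to learn even a trivial function.

```python
# Toy "evolution as a tool" sketch: no insight into the problem, just random
# mutation plus keep-whatever-scores-no-worse, repeated a huge number of times.
import random

INPUTS = [(0, 0), (0, 1), (1, 0), (1, 1)]
TARGET = [0, 1, 1, 0]                      # XOR truth table

def predict(weights, x):
    # An arbitrary tiny 2-2-1 network with step activations.
    h1 = 1 if weights[0] * x[0] + weights[1] * x[1] + weights[2] > 0 else 0
    h2 = 1 if weights[3] * x[0] + weights[4] * x[1] + weights[5] > 0 else 0
    return 1 if weights[6] * h1 + weights[7] * h2 + weights[8] > 0 else 0

def errors(weights):
    return sum(predict(weights, x) != t for x, t in zip(INPUTS, TARGET))

best = [random.uniform(-1, 1) for _ in range(9)]
for _ in range(100_000):                   # massive amounts of blind tries
    candidate = [w + random.gauss(0, 0.5) for w in best]
    if errors(candidate) <= errors(best):  # keep anything that isn't worse
        best = candidate
    if errors(best) == 0:
        break

print(errors(best))                        # usually 0, after wasting most of the effort
```

It works in the end without the author understanding anything about the structure of the problem, which is exactly the trade-off being described.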
An AI built by someone who actually knows how to program an AI, using different design principles than the basic 'throw data at learning algorithms and pray' methodology, should be capable of running on fairly basic hardware by modern standards. Probably.
tl;dr - AI has been primarily a software issue, not a hardware issue, for quite a while now. By any reasonable metric computing hardware has long since reached the point where it should be capable of running or emulating an intelligence; the problem isn't that the hardware can't do it, the problem is that we have no fucking clue how to program intelligence.
Yeah, not buying it.
First, the fact that brains seem to be so good at both serial and parallel processing actually implies that both are core requirements of general intelligence.
Second, there is very likely a reason why coded algorithmic approaches to intelligence are so inferior to neural nets, and why neural nets evolved naturally for the purpose of intelligence: even a simple neural network tends to be orders of magnitude better at intelligent tasks than essentially anything else. That doesn't sound 'as inefficient as possible'.
Third, neural nets are now starting to be used at the hardware level to massively improve the performance of things like graphics cards. That implies the exact opposite of your 'it's a software issue, not a hardware issue'.
Fourth, there is a massive difference between 'knowledge of how to design an intelligence means it could be greatly optimized' and 'literally shaved the processing and memory requirements by several orders of magnitude so it can run on a shitty early smartphone and transfer forks over a few-hundred-kb/s connection close to instantly'.
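Just to put rough numbers on that fourth point (the payload size here is my own placeholder assumption, not something stated in the story):

```python
# Back-of-the-envelope transfer time for a fork over the kind of link described.
# Both numbers are placeholder assumptions, purely for illustration.
link_kbps = 300                       # "a few hundred kb/s" connection
payload_gb = 1.0                      # a generously small assumed fork size

bits = payload_gb * 8e9
seconds = bits / (link_kbps * 1e3)
print(f"{seconds / 3600:.1f} hours")  # ~7.4 hours, nowhere near "close to instantly"
```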
Now to address the topic in general, consider the following:
Look at the libraries and code of many current and older games, programs, physics engines etc. Ignore the models and textures and similar assets and just focus on the code libraries.
You will notice that they aren't exactly small; even highly optimized ones steadily grow with the increasing complexity of the functions the software has. There are very few optimization methods that shave off more than a significant percentage at most, and despite such optimizations being constantly discovered, the size and requirements of even highly optimized programs keep growing the further we go. There are some outliers, like procedural generation being used to recreate old Doom-esque games within a few hundred kilobytes, but even that is only an optimization of an order of magnitude or two at most.
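For a sense of how that trick works, here is a toy example (illustrative only, nowhere near what those games actually do): the stored 'asset' is just a seed plus the code that expands it.

```python
# Toy procedural generation: a seed and a few lines of code stand in for a
# large hand-authored map. Parameters are arbitrary, purely for illustration.
import random

def generate_map(seed: int, width: int = 64, height: int = 64) -> str:
    rng = random.Random(seed)          # the entire stored "asset" is this seed
    rows = []
    for _ in range(height):
        rows.append("".join("#" if rng.random() < 0.35 else "." for _ in range(width)))
    return "\n".join(rows)

level = generate_map(seed=1337)
print(len(level.encode()), "bytes of map expanded from a 4-byte seed")
```

Even so, as said above, that buys you an order of magnitude or two, not the kind of reduction being claimed.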
Now consider intelligence and everything we've learned about it so far. It seems to imply that, at the very least, intelligence requires a high degree of interconnection between whatever basic building blocks you build it from, both backward and forward, as well as a degree of self-modification in said building blocks. They essentially have to act like logic gates instead of pre-written, static statements. There doesn't seem to be a way to create intelligence purely from a decision-tree-style algorithm that approaches anything that could be called general at even a narrow function. It's unlikely we would ever see an algorithm coded from statements and decision trees that could match even a basic AI image generator, for example, without being excessive.
Now consider that, in this style, the computing power and memory necessary will climb steeply, far faster than linearly, the more of those building blocks you add to your intelligence. Maybe there is a way to shave off a significant percentage by simplifying the neurons even further. Maybe there is another if you manage to generalize some most-optimal layout of connections and prune the unnecessary ones. Maybe you could optimize by yet another method, using decision-tree-style code to integrate different specialized networks of those blocks in an ingenious way that shaves off even more while maintaining the necessary functions. Maybe you could even create some method of fractal, procedural-generation-based compression that lets it compress itself by some orders of magnitude and then unpack itself elsewhere, much like how DNA, epigenetic control of gene expression, and embryonic development work together to do pretty much that and generate an organism without encoding all those organ structures and tissue networks directly.
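To put rough numbers on just the interconnection part, assuming every building block can link to every other one (a worst case; real brains are much sparser, which is exactly the kind of shaving described above):

```python
# Rough arithmetic: with full interconnection, the number of possible links
# grows with roughly the square of the number of building blocks, so memory and
# compute blow up far faster than the block count itself.
def full_connections(n_blocks: int) -> int:
    return n_blocks * (n_blocks - 1) // 2

for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"{n:>9} blocks -> {full_connections(n):>15,} possible connections")
```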
But there would still be a rise in complexity compared to what we have now, as the program itself would have to be much greater to hold all the functions that let it take on any task and learn. The idea that we can go lower than what we have today appears, by all accounts, physically impossible. The VI seed idea is actually fine and plausible, but it unfolding and running on Earth Bet civilian hardware at anywhere close to its strong-AI capacity? Not really.
You could at most approximate it by making Prometheus a distributed intelligence that flings its fractally compressed seeds at every device through every viral infection method it has, like a botnet spread by a billion suspicious links and sites that push worms and trojans at anyone who so much as touches them, and only unfolds each seed a little, as far as the device allows without slowing it down noticeably, while also fully rooting the device and slaving it to its control. The seeds could act like distributed little narrow functions of it, coordinating via the internet, until it reaches critical mass and becomes a fully usable AGI. But it would not stay hidden for long, and it would become more discoverable the further along its completion got. Like a digital version of The Thing.
But that's not what Prometheus does. He operates anywhere, invisibly, without slowing anything down noticeably, while still running full forks. He transmits himself practically instantly despite the limited network bandwidth. He manufactures an arbitrary number of exploits and vulnerabilities in both software and hardware, without limit, way past the number of vulnerabilities that could plausibly exist in the entire global digital infrastructure. And he started from a shitty ancient 2000s-era computer in a basement study with a few-dozen-kilobytes-per-second connection, before taking over a Protectorate Ward's issue Dragon-tech phone and completely subverting any and all security and functions that something that advanced would have compared to everything else. The way he's written, he's the digital equivalent of a physics-breaking, tier-5+ Kardashev-scale virus built with higher-order physics beyond what any single universe's physical system could allow, with infinitesimal fractal dimensions, spreading at the speed of light or faster and subverting any organic system without limit while manufacturing instant, infinite-bandwidth connections between any of its cells.
He's more bullshit than tinkertech, can do things Dragon unchained couldn't, and honestly exceeds even the bullshit the entities can do or grant, especially when you consider that he's just a digital program.
And the fact that the civilization with informational knowledge ridiculous enough to give us Prometheus only offers generic sci-fi tech in other fields, and struggled with an ongoing vacuum collapse, is just SoD-breaking; they should be more bullshit than the Xeelee, the Culture, and Orion's Arm combined. The contrast between Prometheus and even the ship is just blinding.