Scientia Weaponizes The Future

Did Prometheus actually run from the Heberts' desktop in full form? That part seems a big stretch. The rest, less so - I don't buy the theory that parahumans have supercharged Earth Bet computer security, and while many attacks are based on getting a user to expose their system by accident, ones that don't require that are real.
 
Did Prometheus actually run from the Heberts' desktop in full form? That part seems a big stretch. The rest, less so - I don't buy the theory that parahumans have supercharged Earth Bet computer security, and while many attacks are based on getting a user to expose their system by accident, ones that don't require that are real.
No, there were brief mentions by Scientia at a few points that he's clearly continued to develop over time. The desktop version was more of a seed.
 
I hear you. My argument here, for whatever you feel it's worth, is that computer science as a field is so new it's barely out of diapers, and has an unknown potential for further development before it's anywhere close to tapped out. It took 50 or so years of serious AI research barking up the wrong trees before someone hit on the deep learning idea that gave us the current wave of GPT style AIs, for example, and before that innovation a lot of people in the field were having doubts about whether something like GPT would ever be possible given how long AI researchers had been banging their heads against a wall with little to show for it.

In other words, Prometheus and company aren't deep learning AIs, they're something else, something far more efficient and effective. Something that might need a million years of computer science to figure out, but that is nevertheless plausible when talking about anything approaching those sorts of timelines. Something vaguely analogous to a quicksort vs. a bubble sort, a collection of really great ideas that's just plain better.
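To make the analogy concrete, here's the kind of gap I mean - same job, same hardware, one is just a smarter pile of ideas (a toy sketch):

```python
# Same job, same hardware: bubble sort does O(n^2) comparisons while
# quicksort averages O(n log n). The speedup is purely a better idea.

def bubble_sort(xs):
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def quicksort(xs):
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot]) + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

print(bubble_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]
print(quicksort([5, 2, 9, 1]))    # [1, 2, 5, 9]
```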

I'm not as overwhelmingly optimistic about future technological progress in many fields, but in computer science in particular I think we have a whole lot of room to grow.

I hope that helps. If that argument doesn't quite get you there, the fallback argument I can offer is that this is a setting where strong AI with reasonable hardware requirements is definitely possible, because Dragon exists, and some alien civilization still young enough to be on one planet came up with Dragon. Prometheus could be seen as an evolution of the same tech path, with a great deal more development. For whatever that's worth to you.

Edit: As to one minor specific point, my understanding of TEMPEST is that it enables inferring what's stored in a machine's memory by the electromagnetic noise it gives off as it operates. There's no need to install malware on the airgapped machine - which would be quite impossible anyway.

And the magically easy hacking is mostly a result of Scientia giving Prometheus a large database of tools to apply to hacking problems. Imagine something like general purpose exploit-finding algorithms in a variety of styles, combined with some weaknesses known to someone with Scientia's techbase that are foundationally present in many or all Earth systems, like a general purpose solution to factoring large numbers quickly.
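To show why fast factoring in particular is such a skeleton key, here's a sketch with toy numbers - trial division stands in for the hypothetical fast algorithm, and real RSA moduli are hundreds of digits long, which is the entire point:

```python
# Toy illustration: RSA's security rests on factoring n = p*q being
# infeasible. Given a fast factoring method, recovering the private
# key is trivial arithmetic. Toy numbers only; real moduli are
# hundreds of digits long.

def factor(n):
    # Trial division stands in for the hypothetical fast algorithm.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    raise ValueError("n is prime")

n, e = 3233, 17                # public key (p=61, q=53)
p, q = factor(n)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)
print(pow(ciphertext, d, n))   # 65 - plaintext recovered from public info
```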

I believe true A.I. to be impossible on a binary system. I believe that for true sentience a "maybe" state is needed, and that can't be produced on a binary system. If you look at the brain, a single neuron has half a dozen different responses to surrounding neurons based on chemical, temperature and electrical stimuli - my understanding is so basic that I don't know if some of these are just side effects or input commands - but it's always been my belief that the future of computing is biochemical, or taking the ternary computer to its extreme with dozens of base states for computing. There would be a minimum size restriction for all the extra equipment needed to measure and organise those states, but you could have exponentially more data than a similarly sized binary computer. For true A.I. you'd either need quantum computing or a degree of chaos inherent in the system, just for the potential for growth.

Quick disclaimer: I know jack shit about the practicalities of any of this beyond the fact that binary computing has limits that we are running into. I discovered the ternary computer after reading a fic that starts back in the 1800s and gives Charles Babbage all the resources he could want to develop his difference engine to its max potential before attempting to use electricity, producing a ternary computer because he was too stubborn to just take the easy binary - and it's fiction, so any actual problems could be ignored.
 
I believe true A.I. to be impossible on a binary system. I believe that for true sentience a "maybe" state is needed, and that can't be produced on a binary system. If you look at the brain, a single neuron has half a dozen different responses to surrounding neurons based on chemical, temperature and electrical stimuli - my understanding is so basic that I don't know if some of these are just side effects or input commands - but it's always been my belief that the future of computing is biochemical, or taking the ternary computer to its extreme with dozens of base states for computing. There would be a minimum size restriction for all the extra equipment needed to measure and organise those states, but you could have exponentially more data than a similarly sized binary computer. For true A.I. you'd either need quantum computing or a degree of chaos inherent in the system, just for the potential for growth.

Quick disclaimer: I know jack shit about the practicalities of any of this beyond the fact that binary computing has limits that we are running into. I discovered the ternary computer after reading a fic that starts back in the 1800s and gives Charles Babbage all the resources he could want to develop his difference engine to its max potential before attempting to use electricity, producing a ternary computer because he was too stubborn to just take the easy binary - and it's fiction, so any actual problems could be ignored.
Yeah, no, this doesn't make any sense. There's literally nothing ternary can do that binary can't do - you can just substitute two bits for each trit. You can do fuzzy logic and probability calculations and all that on a binary computer just as well as you can do it anywhere.

This is a somewhat popular trope, or has been at least, but it's completely bullshit.
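For the concrete version of the two-bits-per-trit substitution (toy sketch):

```python
# Two bits hold one trit (0, 1, 2), so any ternary computation maps
# onto binary hardware with at most a constant-factor overhead.

def trits_to_bits(trits):
    return [b for t in trits for b in divmod(t, 2)]

def bits_to_trits(bits):
    return [hi * 2 + lo for hi, lo in zip(bits[0::2], bits[1::2])]

maybe_ish = [2, 0, 1]   # a "maybe" state is just a third symbol
print(bits_to_trits(trits_to_bits(maybe_ish)))  # [2, 0, 1] - nothing lost
```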
 
I hear you. My argument here, for whatever you feel it's worth, is that computer science as a field is so new it's barely out of diapers, and has an unknown potential for further development before it's anywhere close to tapped out. It took 50 or so years of serious AI research barking up the wrong trees before someone hit on the deep learning idea that gave us the current wave of GPT style AIs, for example, and before that innovation a lot of people in the field were having doubts about whether something like GPT would ever be possible given how long AI researchers had been banging their heads against a wall with little to show for it.

In other words, Prometheus and company aren't deep learning AIs, they're something else, something far more efficient and effective. Something that might need a million years of computer science to figure out, but that is nevertheless plausible when talking about anything approaching those sorts of timelines. Something vaguely analogous to a quicksort vs. a bubble sort, a collection of really great ideas that's just plain better.
That argument can only go up to a point. I can easily buy that future approaches would optimize a lot of the code needed for a functional AI, but a strong AI that takes less space and processing power than a neural network of a few hundred neurons isn't in any way plausible; it's physically impossible.

Besides, even with 2011 internet speeds, a few hundred megabytes would take minutes at a minimum to transfer, and yet Prometheus takes over everything instantly.

There's also the issue of Earth Bet still being at a tech level where basic cell phones would be rather prevalent, but he seems to have no problems with those either.

And I read even further and found an instance of him destroying physical hardware (hard drives) just from hacking, which is just all kinds of wrong.

I'm not as overwhelmingly optimistic about future technological progress in many fields, but in computer science in particular I think we have a whole lot of room to grow.
I've noticed; most future tech introduced seems very... limited. Very much not what I would expect from a civilization that partakes in megastructures. Honestly it feels more like ticking off a list of 'generic sci-fi tech'.

Like how the neural lace is really just a fancy phone, but without the apps any phone should have. It doesn't actually seem to integrate itself into the brain and become a part of the user's mind - something you could seamlessly use to think in parallel, increase your working memory, or gain complete overview and control over your own brain and subconscious - it just exists alongside the brain as something you can instruct to do stuff.

Or how the gun is a subsonic poisoned flechette launcher instead of a smart weapon that can adjust its power for optimal penetration and shoot grain-sized poisoned shavings from an alloy block that could be accelerated up to hypersonic velocities for the same amount of recoil as the subsonic flechette, granting more ammunition and greater anti-brute capacity.

And when it actually is exotic, it's arbitrarily limited in possible approaches, like the FTL drive being limited to some kludge that somehow is the only way that works. If you can make wormholes and Alcubierre-style warp drives workable, then you have all the means and physics needed for a whole slew of other methods, like Krasnikov tubes and such. They all work on the same set of rules.

This makes it even more jarring when you also have an impossible magical AI alongside those. Honestly, the story seems to use Prometheus almost like a crutch.

I hope that helps. If that argument doesn't quite get you there, the fallback argument I can offer is that this is a setting where strong AI with reasonable hardware requirements is definitely possible, because Dragon exists, and some alien civilization still young enough to be on one planet came up with Dragon. Prometheus could be seen as an evolution of the same tech path, with a great deal more development. For whatever that's worth to you.
From what I remember, Dragon uses custom biocomputer cores and dedicated satellites for transmission; her hardware requirements are anything but reasonable.

Edit: As to one minor specific point, my understanding of TEMPEST is that it enables inferring what's stored in a machine's memory by the electromagnetic noise it gives off as it operates. There's no need to install malware on the airgapped machine - which would be quite impossible anyway.
ieeexplore.ieee.org - "LCD TEMPEST Air-Gap Attack Reloaded": "In 1998, researcher showed how attackers can transmit data from computers through electromagnetic radio waves generated by the computer video card. 20 years later, we examine this type of threat in a context of modern cyber-attacks. In this type of threat, attackers can covertly leak sensitive..."
The important part:
"We implement a transmitter malware that can modulate binary data and transmit it over electromagnetic waves emitted from the video cable."
So no, it requires malware on the machine. Inferring correct data from random electromagnetic noise without modulating it first for the purpose of transmitting information would require a nearly perfect understanding of the architecture of the target machine (both hardware and the OS), seriously powerful dedicated sensors on-site and a massive supercomputer cluster to crunch the resultant data.
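To be clear about what 'modulate' means there, the transmitter side is doing something as deliberate as this toy on-off keying sketch (illustrative only, not the paper's actual code) - a '1' is a burst of activity, a '0' is silence, and the receiver just watches the noise level:

```python
import time

# Toy on-off keying: a '1' bit is a burst of CPU activity, a '0' is
# idle time. Real air-gap malware shapes emissions in the same basic
# spirit; the point is the *sender* must deliberately shape the signal.

def transmit(bits, bit_time=0.1):
    for bit in bits:
        deadline = time.monotonic() + bit_time
        if bit:
            while time.monotonic() < deadline:
                pass                  # busy loop -> stronger emissions
        else:
            time.sleep(bit_time)      # idle -> weaker emissions

transmit([1, 0, 1, 1, 0])
```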

And the magically easy hacking is mostly a result of Scientia giving Prometheus a large database of tools to apply to hacking problems. Imagine something like general purpose exploit-finding algorithms in a variety of styles, combined with some weaknesses known to someone with Scientia's techbase that are foundationally present in many or all Earth systems, like a general purpose solution to factoring large numbers quickly.
I can imagine it, and that imaginary someone would still be unable to arbitrarily manufacture vulnerabilities to get into any system it wants, and it would definitely not be able to break encryption that would take the age of the universe or a quantum computer to solve because of 'future math'. For someone who is so conservative on the topic of future advances, you really go way, WAY too far past any believability on the topic of computer science.

The rest, less so - I don't buy the theory that parahumans have supercharged Earth Bet computer security, and while many attacks are based on getting a user to expose their system by accident, ones that don't require that are real.
I didn't mean that it would be supercharged, only that compared to our Earth, flaws and exploits would be somewhat less frequent and get caught quicker, due to both white and black hat tinker and thinker hackers.

And I know those attacks are real, but that doesn't mean that an AI can arbitrarily take over everything whenever it wants because 'AI'. Vulnerabilities that serious are rather rare, and digital systems can differ greatly between themselves. Is there a significant portion of devices that Prometheus could subvert? Yes. Could he conjure up flaws left and right to do whatever he wants? No.

I believe true A.I. to be impossible on a binary system. I believe that for true sentience a "maybe" state is needed, and that can't be produced on a binary system. If you look at the brain, a single neuron has half a dozen different responses to surrounding neurons based on chemical, temperature and electrical stimuli - my understanding is so basic that I don't know if some of these are just side effects or input commands - but it's always been my belief that the future of computing is biochemical, or taking the ternary computer to its extreme with dozens of base states for computing. There would be a minimum size restriction for all the extra equipment needed to measure and organise those states, but you could have exponentially more data than a similarly sized binary computer. For true A.I. you'd either need quantum computing or a degree of chaos inherent in the system, just for the potential for growth.
Any binary system is Turing complete and could simulate ternary logic, fuzzy logic, and even a brain with all its neurochemistry without problems, given it's powerful enough. The logic of the underlying hardware isn't a factor in whether a true AI is possible or not.
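For instance, here's a leaky integrate-and-fire neuron - a deliberately crude toy, nowhere near real neurochemistry, but it shows that graded 'maybe'-ish states are just numbers on binary hardware:

```python
# A leaky integrate-and-fire neuron simulated in ordinary binary
# floating point. Graded internal states are just numbers; no ternary
# hardware required. (Toy model, not real neurochemistry.)

def simulate(input_currents, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = potential * leak + current  # decay, then integrate
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0                     # reset after firing
        else:
            spikes.append(0)
    return spikes

print(simulate([0.3, 0.4, 0.5, 0.1, 0.9]))  # [0, 0, 1, 0, 0]
```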
 
I've noticed, most future tech introduced seems very... limited. Very much not what i would expect from a civilization that partakes in mega structures. Honestly feels more like ticking off a list of 'generic sci-fi tech'.

Like how the neural lace is really just a fancy phone but without the apps any phone should have and doesn't actually seem to integrate itself into the brain and become a part of the user's mind that you can seamlessly use to think in parallel or increase your working memory or gain complete overview and control over your own brain and subconscious, it just exists alongside it as something you can instruct to do stuff.

Or how the gun is a subsonic poisoned flechette launcher instead of a smart weapon that can adjust its power for optimal penetration and shoots grain-sized poisoned shavings from an alloy block that could be accelerated up to hypersonic velocities for the same amount of recoil as the subsonic flechette, granting more ammunition and greater anti-brute capacity
Pretty sure the neural lace does exactly what you're saying it doesn't... and actually, some of the other tech is right over the top. Excalibur may not be the most efficient option, but it's certainly powerful and not typical.

The gun...yeah, the gun is a reasonable piece of kit but a terrible primary weapon. (Hypersonic small projectiles don't work, though. They get disintegrated by the atmosphere, unavoidably.)

Some of the heavier weapons that Scientia often doesn't really use, like the space-based mass drivers, are in fact absurdly powerful, although not absurdly powerful in remarkable ways.
 

Honestly? The way I see this is as such: Shard-based tinkers and thinkers, as a general rule, most definitely are not super geniuses. Hell, most of them probably aren't even as smart as they think they are or pretend to be, with precious few canonical exceptions.

It's the Shard attached to them that does the lion's share of the work. For a given tinker, their Shard presents them with a selected and carefully limited set of tools. I liken it to being given a very specific set of Lego bricks, then being told 'the camera is rolling, go nuts.' This limited selection of bricks means that there's some things that will always be completely beyond their ability; however, human imagination and ingenuity being what it is, that tinker still might be able to come up with something impressive enough that their Shard will go 'Oh yeah, that's definitely going in the scrapbook to show off later.'

The way I see it, these metaphorical Lego blocks are always second, third, or even seventh-hand gifts for a Tinker, and that's probably seriously understating it; the Entities have gone through a mind-boggling number of cycles with other alien races and civilizations, each cycle refining our Lego blocks more and more, adding new options and eliminating others based on what past users have done. Maybe early on, you could've built that 1:18 scale model of Unicorn or whatever that's capable of destroying entire countries in merely half a day, but now it'll only be a robot half your size, and the only ability you'll be able to give it is to transform inorganic matter from one form to another, and only while it's touching solid ground.

For a sufficiently creative person, they could still use this pint-sized marvel to destroy a country. They just have to be patient enough and clever enough, and if they can manage it, then that trick just might go in the scrapbook for later cycles, though the next person to get those Legos might not be able to pull it off unless the Shard chooses to give them that particular key Lego brick or combination of bricks that makes it possible.

From my understanding of canon, Dragon wasn't supposed to be possible to create, but Richter was very clever with the tools his Shard gave him despite all of the hiccups said Shard threw in his way. Meanwhile our girl Scientia is working not with a very limited set of Lego blocks, but instead the entire goddamn factory. She has tools at her disposal that many capes capable of manipulating computer code might literally murder their own mothers to obtain.

Tinkers always have to make the tools to make the tools to make the tools to make the tools, just to create the things that their Shards dangle within their subconscious minds. At the same time their Shards cripple them by doing some of the work, just enough so that most Tinkers can't actually comprehend or explain how and why the things that they build work.

Scientia had no such handicap.

Also, as a reminder, in the 50s and 60s humans were putting objects and people in space with the help of computers that can't even match the processing capability of the Heberts' shitty home PC. We are really, really good at doing more with less.

As for Prometheus vs Dragon? Functionally similar Lego bricks were used to make both. Only, Dragon was made with perhaps at best just a quarter of the bricks, while Prometheus has all the bricks, when it comes to computer code. For Dragon, that means that she was created with a very limited selection of lines-of-code combinations, refined by countless cycles into a highly potent yet highly limited blend that only worked because Richter figured out how to make it work despite said limitations. Whereas Prometheus was made with the knowhow and exponentially superior tools of a civilization that made it to the literal end of their universe and beyond.

Yes, our feeble 2011 internet and computers are kind of pathetic in comparison... Or are they? The original Star Trek in the 60s had handheld communicators. Not even ten years later, we had the very first mobile phone in 1973, and ten years after that they were commercially available. The Next Generation gave us com badges, PADDs and replicators. Now we have Bluetooth devices in a dazzling array of shapes and sizes, tablet PCs, and as of 2021 we now have 3D printers capable of making edible meat.

Yeah, you all read that correctly.

My point is that in many respects, the tools that we have at our disposal are often far more capable than most of us ever realize. Sometimes, what we think is a limitation actually isn't. To some, Prometheus seems way too complicated. To me, I think that he just has a hell of a lot more tools and options at his disposal than anyone else.
 
Yeah, no, this doesn't make any sense. There's literally nothing ternary can do that binary can't do - you can just substitute two bits for each trit. You can do fuzzy logic and probability calculations and all that on a binary computer just as well as you can do it anywhere.

This is a somewhat popular trope, or has been at least, but it's completely bullshit.

Any binary system is Turing complete and could simulate ternary logic, fuzzy logic, and even a brain with all its neurochemistry without problems, given it's powerful enough. The logic of the underlying hardware isn't a factor in whether a true AI is possible or not.

Okay, yeah, that… I'm hitting my head here over how obvious that is and that I still forgot about it. But my belief that biochemical or ternary+ computing is a viable option for the future depends on how easy quantum computing would be to mass manufacture; there'd be compatibility issues between different computing types no matter which we choose to progress down.

My opinion is that quantum computing would probably be the best for high-spec requirements but is too expensive to scale manufacturing up much beyond one-off systems, biochemical would be the hardest to develop for, and ternary+ might have issues with maintenance. I'm mainly bringing them up because I almost never see anything beyond "quantum" computers in sci-fi. Star Trek I think has a biochemical computing chip that I've seen mentioned in Voyager fics, but ternary+ computing I've only ever seen mentioned once, where it seemed like a good idea to attempt again with better technology.

P.S. I added the + to ternary+ to describe any computer that uses more states than bits.
 
Okay, yeah, that… I'm hitting my head here over how obvious that is and that I still forgot about it. But my belief that biochemical or ternary+ computing is a viable option for the future depends on how easy quantum computing would be to mass manufacture; there'd be compatibility issues between different computing types no matter which we choose to progress down.

My opinion is that quantum computing would probably be the best for high-spec requirements but is too expensive to scale manufacturing up much beyond one-off systems, biochemical would be the hardest to develop for, and ternary+ might have issues with maintenance. I'm mainly bringing them up because I almost never see anything beyond "quantum" computers in sci-fi. Star Trek I think has a biochemical computing chip that I've seen mentioned in Voyager fics, but ternary+ computing I've only ever seen mentioned once, where it seemed like a good idea to attempt again with better technology.

P.S. I added the + to ternary+ to describe any computer that uses more states than bits.
Quantum computing is an almost-real thing (people work on it seriously, but I'm not sure if they've actually been able to run any of the quantum algorithms yet) that does some really cool stuff. I don't know a lot about it. I do know it can do some things much faster than your Turing-machine types, in highly non-linear ways. I'm not sure it can do anything that a Turing machine wouldn't eventually be able to.


Biochemical computing is probably just a bad idea, compared to drier nanotechnology, but maybe it has a window. It doesn't seem likely to change what you can compute as opposed to how small you can make computational elements though.

Ternary+ has the same problem as plain ternary: from a computer science perspective, it is literally no different from binary. It's possible it has some engineering merit and would let you increase computational density (although it's also possible that it does not), but it would definitely not do anything more exciting than that.
 
It took 50 or so years of serious AI research barking up the wrong trees before someone hit on the deep learning idea that gave us the current wave of GPT style AIs, for example, and before that innovation a lot of people in the field were having doubts about whether something like GPT would ever be possible given how long AI researchers had been banging their heads against a wall with little to show for it.
Eh, deep learning is built on an old approach (multi-layer perceptrons) that worked just fine for simple tasks, but hit a wall at a certain level of complexity (six layers or so). The main improvement was finding practical if computationally expensive ways to extend the learning across more layers than a single pass of the older algorithms could tolerate. That hardly means the foundational research was "barking up the wrong trees".
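For anyone following along, the structure itself is almost embarrassingly simple - here's a toy forward pass (arbitrary sizes, illustrative only); the historical bottleneck was training through many layers, not defining them:

```python
import numpy as np

# Forward pass of a multi-layer perceptron: each layer is just a
# matrix multiply plus a nonlinearity. Deep learning's breakthrough
# was making *training* work through many such layers, not the
# structure itself. Sizes here are arbitrary toy numbers.

rng = np.random.default_rng(0)
layer_sizes = [4, 8, 8, 2]
weights = [rng.standard_normal((m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    for w in weights:
        x = np.tanh(x @ w)   # nonlinearity between layers
    return x

print(forward(rng.standard_normal(4)))  # a 2-element output vector
```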

That argument can only go up to a point. I can easily buy that future approaches would optimize a lot of the code needed for a functional AI, but a strong AI that takes less space and processing power than a neural network of a few hundred neurons isn't in any way plausible; it's physically impossible.
On the other hand, it's very difficult to say extremely tightly optimized code can't do something with a given amount of resources. There are some information-theoretic limits - for example, computers with fewer resources can't directly emulate computers that have more resources - but avoid those, and it's totally plausible that a few hundred instructions could weave some incomprehensible ball of chaotic interactions that just happens to very efficiently lead to whatever result you're looking for every time. We just can't deliberately produce those, because we stick to writing programs we can understand.
 
And I read even further and found an instance of him destroying physical hardware (hard drives) just from hacking, which is just all kinds of wrong.
Fun trivia: viruses that destroy hardware have actually existed. Maybe the most colorful example is the U.S. intelligence-designed virus that wrecked a bunch of Iranian centrifuges enriching uranium, by telling the centrifuges to do physically unsafe things. However, my favorite example is one of the first batch file viruses. Drivers foolishly gave any software that came along low-level access if it wanted it, and the virus simply told each line of the monitor to refresh repeatedly, many times, to burn it out before progressing to the next. The malware destroyed the display.

Making malware that destroys hardware isn't easy if the drivers are competently designed, but if someone can get low-level access, there are a lot of ways to potentially mess hardware up by making it do stuff it wasn't intended to do: overclock/overvolt it to destruction, tell the physically moving parts of a hard disk to do unfortunate things, etc.

Eh, deep learning is built on an old approach (multi-layer perceptrons) that worked just fine for simple tasks, but hit a wall at a certain level of complexity (six layers or so). The main improvement was finding practical if computationally expensive ways to extend the learning across more layers than a single pass of the older algorithms could tolerate. That hardly means the foundational research was "barking up the wrong trees".
Fair point, I had in mind some of the other approaches that were tried and appear to have hit dead ends. I encountered a few in my AI courses in college.

The gun...yeah, the gun is a reasonable piece of kit but a terrible primary weapon. (Hypersonic small projectiles don't work, though. They get disintegrated by the atmosphere, unavoidably.)
It's more in the category of "quick to design and make and it gets the job done", the job being disabling or killing humans with maybe some body armor or a low to moderate brute rating. You don't always need a high tech solution, especially when the user has virtually perfect aim.

From what I remember, Dragon uses custom biocomputer cores and dedicated satellites for transmission; her hardware requirements are anything but reasonable.
I don't think those are the only things she runs on; remember that she started out as a home personal assistant for Richter.

The important part:
"We implement a transmitter malware that can modulate binary data and transmit it over electromagnetic waves emitted from the video cable."
So no, it requires malware on the machine. Inferring correct data from random electromagnetic noise without modulating it first for the purpose of transmitting information would require a nearly perfect understanding of the architecture of the target machine (both hardware and the OS), seriously powerful dedicated sensors on-site and a massive supercomputer cluster to crunch the resultant data.
Looks like the researchers in that particular paper used malware as a shortcut. TEMPEST side channel attacks don't require it, starting with Van Eck phreaking, and that's more or less what started public awareness of the problem. Linky and Linky. See the first for some pretty cool examples of more modern attacks too.

And when it actually is exotic, it's arbitrarily limited in possible approaches, like the FTL drive being limited to some kludge that somehow is the only way that works. If you can make wormholes and Alcubierre-style warp drives workable, then you have all the means and physics needed for a whole slew of other methods, like Krasnikov tubes and such. They all work on the same set of rules.
FTL requires solving a series of problems to make it not just possible on paper, but practical. If your FTL drive requires a Jupiter mass of exotic matter, for example, that's an issue. There's also the open question of how to solve causality issues, which may well be flat-out impossible, but I handwave that one.
 
Destroying hard drives with hacking is actually one of the few instances of Hollywood hacking that is true; hard drives have a number of safety functions that can be bypassed through defects in the operating system. This is a known issue that has cropped up in real life several times, and without designing entirely new hard drives that have some kind of built-in protection (which AFAIK no one has ever bothered to do), the only solution is to just make sure your OS has no exploitable defects.


It's also worth keeping in mind re; computers vs brains that computers have had the processing advantage for a long, long time. The brain only seems to have an advantage because brains are both serial and massively parallel; where a computer performs one task at a time very quickly, brains perform multiple tasks simultaneously. But that limitation is very much one of design and programming, not capability.

Computing hardware has been able to match or exceed brains in every area except 'RAM equivalent' for decades now; brains are capable of storing truly ludicrous amounts of data thanks to using extremely sophisticated methods for reconstructing complex data, aka compression algorithms.


It is actually not implausible that, with the right knowledge and tools, it could be possible to create something that looks a lot like a strong AI on an early ~2000s era computer. It sounds implausible on the surface, but that is just because modern AI development is extremely 'heavy' as a consequence of the fact that nobody really knows how to program an AI, and so we cheat by basically using Evolution as a tool and just throwing massive amounts of processing power at learning algorithms until they randomly stumble upon something that looks like it might be heading in the right sort of direction.

The benefit of this approach is that it works even if you don't actually understand what you're doing, the downside is that it is about as inefficient as physically possible while still working in the end.

An AI built by someone who actually knows how to program an AI, using different design principles than the basic 'throw data at learning algorithms and pray' methodology, should be capable of running on fairly basic hardware by modern standards. Probably.


tl;dr - AI has been primarily a software issue, not a hardware issue, for quite awhile now. By any reasonable metric computing hardware has long since reached the point where it should be capable of running or emulating an intelligence, the problem isn't that the hardware can't do it, the problem is that we have no fucking clue how to program intelligence.

e:
FTL requires solving a series of problems to make it not just possible on paper, but practical. If your FTL drive requires a Jupiter mass of exotic matter, for example, that's an issue. There's also the open question of how to solve causality issues, which may well be flat-out impossible, but I handwave that one.
The primary issue with FTL is that all 'plausible' methods that have been invented so far require negative mass/energy.

And the thing about negative mass/energy is that there is no actual reason to believe it exists. The math works, but that doesn't mean that it is actually possible, it just means the math works.

So far we are yet to discover any evidence of negative mass/energy being a real thing that actually exists, and until we do there is no good reason to believe that it does, especially as it existing would open up the potential for all kinds of entirely new problems.
 
It's more in the category of "quick to design and make and it gets the job done", the job being disabling or killing humans with maybe some body armor or a low to moderate brute rating. You don't always need a high tech solution, especially when the user has virtually perfect aim.
And then she brought it out as her only ranged weapon on the Dragonslayer op and made a completely avoidable disaster of the affair because it's useless on hard targets.

Very pointedly demonstrating the deficiency of carrying the pellet gun as a primary weapon.
 
Biochemical computing is probably just a bad idea, compared to drier nanotechnology, but maybe it has a window. It doesn't seem likely to change what you can compute as opposed to how small you can make computational elements though.

By biochemical I basically mean artificial brains used to compute - not 100% sure that you were talking about the same thing. I can understand that it could have ethics concerns, but they wouldn't be any worse than the argument that will be kicked off when the first bottom-up A.I. is made: how smart must something be before human rights apply to it? And what restrictions are realistic without destroying their freedom? How many of them will actually be made? Because if they can fork infinitely, do we need more than a handful?

It's an issue we'll have when something is made. I currently have a toaster with more computing power than some of the early space launches - at what point is it sentient? Sapient?

The same question has been bugging biologists in the bacteria vs. virus debate: at what point can it be called "alive" vs. an "ongoing chemical reaction"?


On a completely different note, aka what everybody else is talking about: I believe that a standard A.I. could be done, but to get the data compression and compatibility needed you'd need to code directly in binary. I watched a review of an old videogame that can be run on anything, because the crazy programmer designed the thing directly, without using a language, to get around the hardware issues of the day and slight differences between manufacturers, before file translation for the internet was a thing.
 
And then she brought it out as her only ranged weapon on the Dragonslayer op and made a completely avoidable disaster of the affair because it's useless on hard targets.

Very pointedly demonstrating the deficiency of carrying the pellet gun as a primary weapon.
Who could possibly have expected the teenager to make bad decisions.
 
By biochemical I basically mean artificial brains used to compute
Oh. Why bother? Neurons don't seem to be particularly exciting computational material.
On a completely different note, aka what everybody else is talking about: I believe that a standard A.I. could be done, but to get the data compression and compatibility needed you'd need to code directly in binary. I watched a review of an old videogame that can be run on anything, because the crazy programmer designed the thing directly, without using a language, to get around the hardware issues of the day and slight differences between manufacturers, before file translation for the internet was a thing.
This is again nonsense. In multiple ways, but the biggest one is the idea that coding in binary somehow improves compatibility. Very much the opposite! A program in high level language can be compiled for or interpreted on a variety of platforms. A binary has to be in the local processor's native opcodes.

Whether writing in binary will give a more compact executable may vary but is wildly unlikely to matter because executable sizes are pretty well never a problem.

Also, an AI doesn't need either data compression or compatibility anyway. Why would it? It only needs to run on one platform, and additional memory just isn't hard to get.
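If you want to see the in-between layer that makes this work, Python exposes it directly (an illustrative aside - any interpreted language behaves the same way):

```python
import dis

# High-level source compiles to portable bytecode that any CPython
# interpreter can execute, whatever the CPU underneath. A hand-written
# native binary is the opposite: locked to one processor's opcodes.

def add(a, b):
    return a + b

dis.dis(add)  # same bytecode on x86, ARM, anything with an interpreter
```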
 
Oh. Why bother? Neurons don't seem to be particularly exciting computational material.

This is again nonsense. In multiple ways, but the biggest one is the idea that coding in binary somehow improves compatibility. Very much the opposite! A program in high level language can be compiled for or interpreted on a variety of platforms. A binary has to be in the local processor's native opcodes.

Whether writing in binary will give a more compact executable may vary but is wildly unlikely to matter because executable sizes are pretty well never a problem.

Also, an AI doesn't need either data compression or compatibility anyway. Why would it? It only needs to run on one platform, and additional memory just isn't hard to get.

I'm talking about it in the context of the story, but I just repeated what I saw about the game in the review; my actual knowledge of the feasibility of anything starts and ends with an interest in nerd culture. My understanding of code was that it's similar to languages, in that it's mostly compatible between each language, but you lose a little something in the translation and have to compensate by adding more context, thus increasing file size.

But this may also no longer be an issue. It's a lot of little pieces of information I've retained that drew me to this conclusion: the issue of file format conversion before the internet solved all of that, something jiggling in the back of my head that in the early days of computing each computer ran on incompatible software/hardware and standardising was pushed after the initial models, the mention of that game, googling why we have so many types of USB, an article on Y2K that I can't actually remember beyond what people were scared of, and possibly other things I can't name right now. Most of this might not be true, but I'm good at fudging details into a conversation from random facts that I remember, which are probably not true.
 
Pretty sure the neural lace does exactly what you're saying it doesn't... and actually, some of the other tech is right over the top. Excalibur may not be the most efficient option, but it's certainly powerful and not typical.
It does? There was no indication of that, even in the dreams, as far as I read (ch 26), and the way it was described was rather explicitly 'it just exists alongside the brain, monitors it, and can do stuff to it', not a true extension. The most it seemed to do was modify the brain to download skills and knowledge (with the process itself outside of conscious control), and emulate the brain with faster processing speed and then reintegrate the changes for bullet time - which is essentially the pseudo-upload technology done in Gen-Lock, but in a very narrow function. And this is the neural lace from the end of the bloody universe, too. You'd think that the neural lace brought to the absolute limits of the concept would seamlessly integrate as an extension of the mind, without the need for any interface or tricks like charges or thinking commands at it to use. And that it would mature much faster, and both suppress and take over the immune system locally, just for the brain, without any problems.

Honestly, it's a bit like that civilization just feared and restricted any tech that could lead to freely and casually making a brain copy, and collectively believed that continuity of consciousness isn't preserved over uploading and backups - with the backup being essentially a last-resort measure only, not for casual use. A lot of high-level thought experiments, novel approaches to identity like branching identity, our current knowledge (however limited it still is) of how the brain works, and theoretical approaches to how consciousness could emerge from the brain's inherent complexity (which hint at it being the result of extensive metacognition allowing feedback loops) all seem to imply that's an illusory, arbitrary barrier that fundamentally doesn't make sense. It could essentially be a cultural hangup based on how we used to think (and in a lot of cases still do) about souls, coming from the same root as the cultural hangup on body modification.

There's a reason I didn't say anything about Excalibur; that was more like what I expected. But that doesn't mean it couldn't be better either: with room-temperature superconductors and this kind of computer science, coupled with literal end-of-the-universe knowledge, you could reasonably imitate Covenant plasma torpedoes in miniature, or make it into a toroid plasma launcher alongside being a torch. It could be made into a multifunctional plasma wand.

The gun...yeah, the gun is a reasonable piece of kit but a terrible primary weapon. (Hypersonic small projectiles don't work, though. They get disintegrated by the atmosphere, unavoidably.)
Even when made from the likes of tungsten, nanoformed ceramics, or even denser high-entropy alloys? That said, it was more of an example.

Other examples I thought of include:
Shooting self-sustaining, magnetically held superconductive nanoparticle toroids coated with the toxin, to get a result similar to the skin-penetrating vaccine injector that shoots silver particles coated with whatever you want, which is a real tech.
Clearing air via a carefully modulated laser discharge to create a very short-lived channel of vacuum, followed by a small hypervelocity nanoparticle of gellified toxin that would dissolve very quickly in the organism.
Ditching the material carrier entirely and using things like nanoparticle poison capsules sustained in a cold-plasma toroid, in the core of a complex engineered directional sonic discharge with enough power to become a shockwave at the tip, or in a microscopic packet of particles arranged into a briefly self-sustaining structure due to the arrangement of charges.
Just exchanging the flechettes for a series of ultra-thin, monomolecularly tipped, tungsten-density needles as a carrier.

Some of the heavier weapons that Scientia often doesn't really use, like the space-based mass drivers, are in fact absurdly powerful, although not absurdly powerful in remarkable ways.
That's just generic sci-fi tech any interplanetary civilization worth the name would have access to. It would scale in power the further up in tech level you are, yes, but it's not really a revolutionary idea.

As for Prometheus vs Dragon? Functionally similar Lego bricks were used to make both. Only, Dragon was made with perhaps at best just a quarter of the bricks, while Prometheus has all the bricks, when it comes to computer code. For Dragon, that means that she was created with a very limited selection of lines-of-code combinations, refined by countless cycles into a highly potent yet highly limited blend that only worked because Richter figured out how to make it work despite said limitations. Whereas Prometheus was made with the knowhow and exponentially superior tools of a civilization that made it to the literal end of their universe and beyond.
You yourself said that shards take their knowledge from who knows how many civilizations, and from many different universes too. So it's hard to believe that Prometheus is so much better just because the tech was optimized for so long, when Dragon was presumably generated by shards using novel approaches from many different universes - universes full of solutions that came from anywhere between slightly different and fully disparate physical systems and mentalities, solutions that no humans limited to just themselves, their AI, and their one universe with weirdly rigid FTL physics could ever think of or stumble upon.

Yes, our feeble 2011 internet and computers are kind of pathetic in comparison... Or are they? The original Star Trek in the 60s had handheld communicators. Not even ten years later, we had the very first mobile phone in 1973, and ten years after that they were commercially available. The Next Generation gave us com badges, PADDs and replicators. Now we have Bluetooth devices in a dazzling array of shapes and sizes, tablet PCs, and as of 2021 we now have 3D printers capable of making edible meat.
What does that have to do with anything?

That part kind of makes sense though. Hard drives are just computers themselves nowadays.
Being a computer doesn't really imply being able to break the hardware from software. That said, some hard drives have been possible to physically destroy by a malicious controller. I don't know how rare that is - it's not a feature that normally matters.
Fun trivia: viruses that destroy hardware have actually existed. Maybe the most colorful example is the U.S. intelligence-designed virus that wrecked a bunch of Iranian centrifuges enriching uranium, by telling the centrifuges to do physically unsafe things. However, my favorite example is one of the first batch file viruses. Drivers foolishly gave any software that came along low-level access if it wanted it, and the virus simply told each line of the monitor to refresh repeatedly, many times, to burn it out before progressing to the next. The malware destroyed the display.

Making malware that destroys hardware isn't easy if the drivers are competently designed, but if someone can get low-level access, there are a lot of ways to potentially mess hardware up by making it do stuff it wasn't intended to do: overclock/overvolt it to destruction, tell the physically moving parts of a hard disk to do unfortunate things, etc.
Destroying hard drives with hacking is actually one of the few instances of Hollywood hacking that is true; hard drives have a number of safety functions that can be bypassed through defects in the operating system. This is a known issue that has cropped up in real life several times, and without designing entirely new hard drives that have some kind of built-in protection (which AFAIK no one has ever bothered to do), the only solution is to just make sure your OS has no exploitable defects.
I was under the impression that hardware-destroying attacks are very, VERY rare, due to the fact that a low enough level of access to make hardware do self-destructive things is essentially impossible to achieve from access to the OS alone, barring a very specific hardware flaw (like with the centrifuges). And that in many cases the hardware simply can't perform an action so extreme as to kill itself, as there are hard mechanical security features at the hardware level itself.

I don't think those are the only things she runs on; remember that she started out as a home personal assistant for Richter.
Which IIRC required dedicated hardware that was either tinkertech or very specifically tailored to the needs of an AI of Dragon's caliber. The reason she had to transfer to a secondary facility owned by Richter upon the sinking of Newfoundland wasn't just because of some arbitrary limits he put upon her.

Looks like the researchers in that particular paper used malware as a shortcut. TEMPEST side channel attacks don't require it, starting with Van Eck phreaking, and that's more or less what started public awareness of the problem. Linky and Linky. See the first for some pretty cool examples of more modern attacks too.
Notice how the more modern approaches actually utilize transmission malware on the air-gapped machine first, as well as how the original attack was known in 1985 and governments took steps to prevent it back then. As well as how CRT monitors were one of the big vulnerabilities in regards to the attack. Not to mention I very much don't believe that a tinker or thinker didn't actually try this before. It wasn't some obscure attack method that only some future AI could discover and use, but a known and serious security concern.

FTL requires solving a series of problems to make it not just possible on paper, but practical. If your FTL drive requires a Jupiter mass of exotic matter, for example, that's an issue. There's also the open question of how to solve causality issues, which may well be flat-out impossible, but I handwave that one.
There is an interesting idea that the causality issues aren't actually issues and don't need to be solved; it's just our assumption that the universe wouldn't let them happen because we assume it's a problem. But the universe doesn't exactly need to follow our monkey-brain expectations.

It's also worth keeping in mind re; computers vs brains that computers have had the processing advantage for a long, long time. The brain only seems to have an advantage because brains are both serial and massively parallel; where a computer performs one task at a time very quickly, brains perform multiple tasks simultaneously. But that limitation is very much one of design and programming, not capability.

Computing hardware has been able to match or exceed brains in every area except 'RAM equivalent' for decades now; brains are capable of storing truly ludicrous amounts of data thanks to using extremely sophisticated methods for reconstructing complex data, aka compression algorithms.


It is actually not implausible that, with the right knowledge and tools, it could be possible to create something that looks a lot like a strong AI on an early ~2000s era computer. It sounds implausible on the surface, but that is just because modern AI development is extremely 'heavy' as a consequence of the fact that nobody really knows how to program an AI, and so we cheat by basically using Evolution as a tool and just throwing massive amounts of processing power at learning algorithms until they randomly stumble upon something that looks like it might be heading in the right sort of direction.

The benefit of this approach is that it works even if you don't actually understand what you're doing, the downside is that it is about as inefficient as physically possible while still working in the end.

An AI built by someone who actually knows how to program an AI, using different design principles than the basic 'throw data at learning algorithms and pray' methodology, should be capable of running on fairly basic hardware by modern standards. Probably.


tl;dr - AI has been primarily a software issue, not a hardware issue, for quite awhile now. By any reasonable metric computing hardware has long since reached the point where it should be capable of running or emulating an intelligence, the problem isn't that the hardware can't do it, the problem is that we have no fucking clue how to program intelligence.
Yeah, not buying it.

First, the fact that brains seem to be so good at both serial and parallel processing actually implies that both are a core requirement of general intelligence.

Second, there is very likely a reason why coded algorithmic approaches to intelligence are so inferior to neural nets, as well as why neural nets evolved naturally for the purpose of intelligence; even a simple neural network tends to be orders of magnitude better at intelligent tasks than essentially anything else. That doesn't sound 'as inefficient as possible'.

Third, neural nets are now starting to be used at the hardware level to massively improve the performance of things like graphics cards. That implies the exact opposite of your 'it's a software issue, not a hardware issue'.

Fourth, there is a very massive difference between 'knowledge of how to design an intelligence means that it could be greatly optimized' and 'literally shaved the processing and memory requirements by several orders of magnitude, so it can run on a shitty early smartphone and transfer forks over a few-hundred-kb/s connection close to instantly'.





Now to address the topic in general, consider the following:

Look at the libraries and code of many current and older games, programs, physics engines etc. Ignore the models and textures and similar assets and just focus on the code libraries.

You will notice that they aren't exactly small; even highly optimized ones seem to grow steadily with the increased complexity of the functions the software has. There are very few optimization methods that shave more than a significant percentage at most, and despite those optimizations being constantly discovered, the size and requirements grow even for highly optimized programs the further we go. There are some outliers, like procedural generation being used to create old Doom-esque games within a few hundred kilobytes, but that is still an optimization of one or two orders of magnitude at most.

Now consider intelligence, and all we've learned about it so far. It seems to imply that, at the very least, intelligence requires a high degree of interconnection between whatever basic building blocks you build it from, both backward and forward, as well as a degree of self-modification for said building blocks. They essentially have to act like logical gates instead of pre-written, static statements. There doesn't seem to be a way to create intelligence purely from a decision-tree-style algorithm that approaches anything that could be called general in even a narrow function. It's unlikely we would see an algorithm coded from statements and decision trees that could match some basic AI image generators, for example, without being excessive.

Now consider that in this style, the computing power and memory space necessary will rise exponentially the more of those building blocks you add to your intelligence. Maybe there is a way to shave off a significant percentage by simplifying neurons even further. Maybe there is another such way if you manage to generalize some layout of connections that is most optimal and lets you shave off the unnecessary ones. Maybe you could optimize by yet another method where you use decision-tree-style code to integrate different specialized networks of those blocks in an ingenious way that shaves off even more but still maintains the necessary functions. Maybe you could even create some method of fractal, procedural-generation-based compression that allows it to compress itself by some orders of magnitude and then unpack itself elsewhere - just like how DNA, epigenetic control of gene expression and embryonic development connect together to do pretty much exactly that, generating an organism without the need for encoding all those organ structures and tissue networks directly.

But there would still be a rise in complexity relative to what we have now, as the program itself would have to be much greater to have all those functions that allow it to take on any task and learn. The idea that we can go lower than what we have today appears to be, by all accounts, physically impossible. The VI seed idea is actually fine and plausible, but it unfolding and running on Earth Bet civilian hardware anywhere close to its strong AI capacity? Not really.

You could at most approximate it by Prometheus being a distributed intelligence that flings its fractally compressed seeds at every device with every viral infection method it has, like a botnet spread by a billion suspicious links and sites that spread worms and trojans everywhere someone so much as touches it, and only unfolds them a little bit, as much as each device allows without slowing it down noticeably, while also finally rooting the device completely and slaving it to its control. They could act like distributed little narrow functions of it, coordinating via the internet, until it reaches critical mass and becomes a fully usable AGI. But it would not stay hidden for long, and it would become much more discoverable the further along to completion it gets - like a digital version of The Thing.

But that's not what Prometheus does. He operates anywhere, invisibly, without slowing anything down noticeably, while still running full forks. He transmits himself near-instantly despite limited network bandwidth. He manufactures an arbitrary number of exploits and vulnerabilities in both software and hardware, without limit, far past any plausible count of vulnerabilities actually present in the entire global digital infrastructure. And he started from a shitty ancient 2000s-era computer in a basement study with a connection of a few dozen kilobytes per second, before taking over a Protectorate Ward's issue Dragon-tech phone and completely subverting whatever security and functions something that advanced would have over everything else. The way he's written, he's the digital equivalent of a physics-breaking, Kardashev 5+ tier virus built with higher-order physics beyond what any single universe's physical system could allow, with infinitesimal fractal dimensions, spreading at the speed of light or faster, subverting any organic system without limit while manufacturing instant, infinite-bandwidth connections between any of its cells.

He's more bullshit than tinkertech, can do things even Dragon unchained couldn't, and honestly exceeds even what the entities can do or allow one to do, considering he's just a digital program.

And the fact that the civilization with informational knowledge ridiculous enough to produce Prometheus offers only generic sci-fi tech in other fields and struggled with an ongoing vacuum collapse is just suspension-of-disbelief breaking; they should be more bullshit than the Xeelee, the Culture, and Orion's Arm combined. The contrast between Prometheus and even the ship is blinding.
 
That's just generic sci-fi tech that any interplanetary civilization worth the name would have access to. It would scale in power the further along in tech level you are, yes, but it's not really a revolutionary idea.
Almost nothing in any given work of SF is. I don't think Scientia is doing anything at all that I haven't seen in a prior work, but that is not a condemnation or even a criticism.
Now consider intelligence and all we have learned about it so far. It seems to imply that, at the very least, intelligence requires a high degree of interconnection between whatever basic building blocks you build it from, both backward and forward, as well as a degree of self-modification in those building blocks. They essentially have to act like adaptable logic gates instead of pre-written, static statements. There doesn't seem to be a way to create intelligence purely from a decision-tree-style algorithm that approaches anything that could be called general at even a narrow function. It's unlikely we would see an algorithm hand-coded from statements and decision trees that could match even a basic AI image generator, for example, without being absurdly large.
Statements and decision trees and calculations are what all software, necessarily including those image generators, is assembled out of.

And I'm 99% sure that nothing in that pattern is expanding exponentially, because exponential growth is catastrophic. People throw that word around, but it is rather specific and deadly serious. Something that actually scales exponentially is something that you are only going to do on a very limited scale.

I'm not up on exactly how deep learning works, honestly, but multilayer perceptrons have an aspect of quadratic scaling, where each 'neuron' on a layer factors in the outputs of all those on the preceding layer. Quadratic scaling is very very different from exponential scaling, and note that it's not even quadratic scaling in the overall size of the system we're looking at - it's quadratic in the width of the layers.
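To put a number on that, a quick sketch (assuming plain fully connected layers, biases ignored): the weight count between two layers is the product of their widths, so multiplying every layer's width by 10 multiplies the weights by roughly 100.

```python
# Weight count of a fully connected net: sum of products of adjacent widths,
# i.e. quadratic in layer width, not exponential in network size.

def dense_weight_count(layer_widths: list[int]) -> int:
    """Weights in a fully connected net with the given layer widths (biases ignored)."""
    return sum(a * b for a, b in zip(layer_widths, layer_widths[1:]))

for width in (10, 100, 1000):
    n = dense_weight_count([width, width, width])
    print(f"width {width:>5}: {n:>10,} weights")  # 10x the width -> ~100x the weights
```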
I'm talking about it in the context of the story, but I just repeated what I saw about the game in the review. My actual knowledge of the feasibility of any of this starts and ends with an interest in nerd culture. My understanding of code was that it's similar to languages: mostly compatible between each language, but you lose a little something in the translation and have to compensate by adding more context, thus increasing file size.

But this may also no longer be an issue. It's a lot of little retained scraps of information that drew me to this conclusion: the issue of file-format conversion before the internet solved it, something jiggling in the back of my head that in the early days of computing each computer ran on incompatible software/hardware and standardization was only pushed after the initial models, the mention of that game, googling why we have so many types of USB, an article on Y2K that I can't actually remember beyond what people were scared of, and possibly other things I can't name right now. Most of this might not be true; I'm good at fudging half-remembered details into a conversation about random facts that are probably wrong.
While this analogy isn't necessarily safe: code is similar to languages in that it isn't at all compatible between languages. If you try to read Japanese as English, you don't lose something in translation, you get nothing out of the reading at all. The things that read programs for effect aren't polyglot translators, they're monolingual strict grammarians. They're not there to figure out what you meant; they're there to do exactly what you said.
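A concrete, runnable version of that point, using Python's own compile() built-in as the strict grammarian: handed perfectly valid C, it doesn't translate or guess, it just refuses.

```python
# A language implementation doesn't guess at meaning across languages;
# it simply rejects anything outside its own grammar.
c_source = "int main(void) { return 0; }"  # perfectly valid C

try:
    compile(c_source, "<string>", "exec")  # ask Python to parse it
except SyntaxError as err:
    print(f"SyntaxError: {err.msg}")  # no partial translation, just refusal
```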

There are still multiple operating systems even in the desktop market. Standardizing towards a single processor instruction set doesn't seem to have been intentional but maybe sort of happened by accident in the desktop market - it hasn't happened at all beyond that.

I think the most common source of genuine compatibility issues, as opposed to 'this was never meant to be compatible', was (and at times still is) peripherals, and that bears some resemblance to what you're saying. Much of what you want to do with a computer involves controlling added-on components, even things like displays or input devices. Standardizing interfaces for those, so that a program can engage with them without needing to know the quirks of the specific device being used, took a while to reach the mostly-working state it's in today. And there, yes, adding bits to your program could potentially help, by making it able to recognize and act differently for different known devices; see the sketch below.
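As a toy sketch of that interface standardization (all class and method names here are invented for illustration): the application codes against an abstract display, and each device's quirks live behind that boundary.

```python
# Programs talk to an abstract device; per-device quirks hide behind it.
from abc import ABC, abstractmethod

class Display(ABC):
    @abstractmethod
    def draw_text(self, text: str) -> None: ...

class VT100Display(Display):
    def draw_text(self, text: str) -> None:
        print(f"\x1b[1m{text}\x1b[0m")  # device-specific escape codes

class DumbDisplay(Display):
    def draw_text(self, text: str) -> None:
        print(text)  # no frills, but the same interface

def run_app(display: Display) -> None:
    # The application never needs to know which device it's driving.
    display.draw_text("hello, peripheral world")

run_app(VT100Display())
run_app(DumbDisplay())
```

Adding support for a new device means writing one new subclass; run_app itself never changes, which is the whole payoff of the standardized interface.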

Avoiding those compatibility problems by sticking to core architecture and OS functionality rather than doing anything that requires directly touching peripherals is decidedly possible, but isn't particularly exciting nor is it likely to benefit from writing in binary.


Y2K, FYI, wasn't a compatibility problem, unless you stretch the concept quite oddly. It's just a lot of different programs and databases having stored dates with two-digit decimal years, so the year 2000 looked just like the year 1900 and would have been understood as such. That's literally the entire thing.
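A miniature of the bug, for the curious: with only two stored year digits, 1900 and 2000 are literally the same record, and any fix is a guess. The 'windowing' cutoff below is one common convention; the threshold of 70 is an arbitrary choice.

```python
# The Y2K problem in miniature: a record storing only two year digits
# cannot distinguish 1900 from 2000, so naive code misreads the date.
from datetime import date

stored = "00-01-01"  # two-digit year, as many old databases kept it

yy, mm, dd = (int(part) for part in stored.split("-"))
naive = date(1900 + yy, mm, dd)      # the buggy assumption: 19xx forever
windowed = date(2000 + yy if yy < 70 else 1900 + yy, mm, dd)  # a common fix

print(naive)     # 1900-01-01
print(windowed)  # 2000-01-01
```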
 
You yourself said that shards take their knowledge from who knows how many civilizations, and from many different universes too. That makes it hard to believe Prometheus is so much better just because the tech was optimized for so long, when Dragon was presumably generated by shards using novel approaches from many different universes, full of solutions from physical systems and mentalities ranging from slightly different to fully disparate, solutions no humanity limited to itself, its AI, and its one universe with weirdly rigid FTL physics could ever think of or stumble upon.


.... So what you're saying is that you genuinely believe the technological prowess of a single massive civilization, one that expanded not just beyond its home planet but beyond its home galaxy, created massive extrasolar gigastructures, and grew to dominate its entire universe, is somehow inferior to the harvested technological remnants of countless civilizations that never had the time and resources to truly explore the options 'tinkertech' gives? Civilizations subjected to the widespread societal, cultural, and technological destabilization the entities inflict on the species they visit, which likely never make it off their home worlds and as such cannot survive a visit from the Warrior-Thinker entity pair, given how their cycles operate.

...

I think I'll leave it at that; I'm officially done paying attention to this debate and will just say that if you don't like this aspect of this - in my humble opinion - excellent example of fanfiction, there are plenty of other good fanfics scattered about and I'd be happy to share with you my personal favorites.

Toodles, I'll be back when TaliesinSkye posts the next update (unless I inexplicably am summoned). 😀
 
Statements and decision trees and calculations are what all software, necessarily including those image generators, is assembled out of.
In the fundamental sense, yes. What I tried to describe was the difference between neural networks and the 'normal' coding used in most programs: the difference between manually coding what each function should do when/if, versus building a network and training it. Not sure if I succeeded.

And I'm 99% sure that nothing in that pattern is expanding exponentially, because exponential growth is catastrophic. People throw that word around, but it is rather specific and deadly serious. Something that actually scales exponentially is something that you are only going to do on a very limited scale.

I'm not up on exactly how deep learning works, honestly, but multilayer perceptrons have an aspect of quadratic scaling, where each 'neuron' on a layer factors in the outputs of all those on the preceding layer. Quadratic scaling is very very different from exponential scaling, and note that it's not even quadratic scaling in the overall size of the system we're looking at - it's quadratic in the width of the layers.
Yeah, I used the colloquial definition here (I also tend to forget that quadratic scaling is a thing that exists and often confuse the two).

About the layers, one thing I keep wondering is whether using defined layers in our neural networks is actually the correct approach. Organic neural networks are certainly much messier and don't seem to have defined layers.
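For what it's worth, the 'no defined layers' idea is easy to toy with: wire units by an arbitrary adjacency matrix and iterate, rather than stacking layers. A purely illustrative sketch (random wiring, nothing trained):

```python
# A non-layered "network": units connected by a sparse random adjacency
# matrix, with activations propagating recurrently for a few steps.
import numpy as np

rng = np.random.default_rng(0)
n = 16
adjacency = (rng.random((n, n)) < 0.2) * rng.standard_normal((n, n))
np.fill_diagonal(adjacency, 0.0)  # no self-loops, but cycles are allowed

state = rng.standard_normal(n)
for _ in range(5):  # recurrent update instead of a feed-forward pass
    state = np.tanh(adjacency @ state)

print(state.round(2))
```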

.... So what you're saying is that you genuinely believe the technological prowess of a single massive civilization, one that expanded not just beyond its home planet but beyond its home galaxy, created massive extrasolar gigastructures, and grew to dominate its entire universe, is somehow inferior to the harvested technological remnants of countless civilizations that never had the time and resources to truly explore the options 'tinkertech' gives? Civilizations subjected to the widespread societal, cultural, and technological destabilization the entities inflict on the species they visit, which likely never make it off their home worlds and as such cannot survive a visit from the Warrior-Thinker entity pair, given how their cycles operate.
Your write-up falls apart the moment you consider:

1. Frequent data exchanges with other entities that don't run their cycles the same way and could very well carry information from interplanetary, interstellar, or even greater civilizations. Some rare ones might even have taken to peacefully trading with Kardashev scale 4 civilizations, multiversal empires, and other incredibly powerful beings.
2. The greatly disparate physical systems, mentalities, environments, and even instances of exotic biology, leading to solutions, approaches, and knowledge that this humanity would simply never come up with.
3. The simple matter of scale. Millions of cycles would constitute a volume of data that would dwarf even this end-state single-universe humanity, even if in breadth more than in depth.
4. The fact that, barring Prometheus and the FTL drive, the demonstrated tech is actually nowhere close to the bullshit that is most tinkertech, and the former is essentially an outlier that should not work as well as it does. Things like String Theory's G-Driver, Bakuda's all-or-nothing bombs, Lab Rat's potions, Armsmaster's Clockblocker-derived tech, Toybox and Haywire's dimensional bullshit, and literally everything Hero could do are far above and beyond what that civilization was shown to have in the twist-physics-into-pretzels department.

The end-state humans of that universe were advanced, yes. But they also plateaued rather badly, in a way that is simply depressing to read about. They stagnated at roughly the theoretical technology we can already tell is possible from what we know now (FTL excepted) and went no further, even after reaching the scale of a Kardashev 2+ civilization. A civilization like that represents a cynical view by the author of the future of technological development, which is rather ironic for a fic about the power and potential of science and knowledge.

The other reason I argue this at all is that I'm just tired of AI being portrayed as unstoppable digital gods that get to ignore the limitations of computer infrastructure because 'AI'. I've learned enough that I no longer see that as realistic, just as a stupid Hollywood trope, like exploding cars.
 
A lot of the technological 'naysaying' in this thread seems somewhat analogous to a late-1500s or early-1600s scientist denying the possibility of a horseless carriage because it isn't possible to get usable work out of a windmill attached to a cart, completely ignorant of the upcoming revolution of the first steam engine, then internal combustion, and ~500 years of progress in materials science and physics research leading to the creation of a viable windmill-powered car.

The society Taylor is drawing from is so old that 500 years of advancement is almost immeasurably small in comparison, and as far as we are aware they were actively progressing their understanding of the universe that entire time. The real-life progress of technology has been advancing faster with every passing year, and to my knowledge every prediction of technological progress slowing down has been wrong, save for some individual fields falling by the wayside in favor of newer, more promising ones.
 
4. The fact that, barring Prometheus and the FTL drive, the demonstrated tech is actually nowhere close to the bullshit that is most tinkertech, and the former is essentially an outlier that should not work as well as it does. Things like String Theory's G-Driver, Bakuda's all-or-nothing bombs, Lab Rat's potions, Armsmaster's Clockblocker-derived tech, Toybox and Haywire's dimensional bullshit, and literally everything Hero could do are far above and beyond what that civilization was shown to have in the twist-physics-into-pretzels department.

The end-state humans of that universe were advanced, yes. But they also plateaued rather badly, in a way that is simply depressing to read about. They stagnated at roughly the theoretical technology we can already tell is possible from what we know now (FTL excepted) and went no further, even after reaching the scale of a Kardashev 2+ civilization. A civilization like that represents a cynical view by the author of the future of technological development, which is rather ironic for a fic about the power and potential of science and knowledge.
The origin civilization doesn't appear to have anything like the Entities' dimensional nonsense.

I'm not inclined to blame them for that, considering how thoroughly the Entities' dimensional nonsense is nonsense, though from within a setting where we know it works, it is presumably something they could have gotten.
 