Hybrid Hive: Eat Shard? (Worm/MGLN) (Complete)

She does, but as you point out, the Book of Darkness is a seriously complicated unison device. Personally, I'd count it as three devices... not *separate* ones, mind you, and that's also ignoring the Knights and their own devices, which are still part of the system.

Technically, it is one device. The actual "book" part is the Database component, while Reinforce is the Administration Program that manages the Database. The device-like staff that Hayate is seen carrying is merely something created to act as a focus; if I recall correctly, that staff has little to none of the functionality that usual devices are equipped with.
Personally, I'd say the Wolkenritter are probably better categorized as part of the Defense Program that was (luckily) untouched.

Still, yeah. Whoever created the original Tome made something so complex that trying to tamper with it caused the Defense Program to go berserk, which is how it became the BoD. Who knows what happened when Reinforce was cut away from the Defense Program at the climax of A's.

(I still don't think Hayate has a different biology from the others as you claim, though.)

Anyway. Force, chapter sixteen, page... nine, IIRC.

The Magic Dictionary for Force states that the abilities she mentions in Force are spells she had deployed beforehand as a precaution, not a feature due to the nature of her body.

Though, yes, the sheer number of spells that the Tome would have collected over the centuries (or even millennia, depending on when the Tome was created) would give Hayate a versatility that others wouldn't have, so she would live up to the name of Living Lost Logia anyway.
 
Last edited:
I wouldn't complain if he posted it sooner.
To be fair though, this is perhaps the most regular fic in my watch list. Less wondering if the story is dead.
 
While this is true, Missy also didn't save hundreds of capes as Vista during Endbringer fights like Amy has as Panacea.
:Citation Needed:
Pretty sure she has; she definitely participated in Canberra, at the very least. And she's had her powers for a while now, over a year at minimum, so she's almost certainly participated in more besides that one. Otherwise they wouldn't have accepted her word that Taylor isn't a parahuman way back in the beginning; governments and their agencies don't just accept facts from new sources without vetting them extensively, which takes a LOT of time. Going to Endbringer fights helps her credibility a bunch.
I will point out that Panacea wasn't the only data point for "not a parahuman" - MRI-like checks also show no sign of being a parahuman, for example.
<- Eagerly awaiting today's update
Wait, I need a chapter ready for today? Shit! I've only got six chapters and an interlude in my buffer! :p

Regarding distances...
I'm guessing that UA-97 is also on the other side of the cordon from the opening that Taylor and Hive have found?
I mean, Hive didn't take long to prepare for her potentially months-long trip to Earth, with the potential of never coming back. How much prep can it possibly take? :V

Considering that (according to CmptrWz himself) UA-97 is very far away on any axis the TSAB knows about (meaning the ones they can use to travel)? It's likely in roughly that direction, with no more than a few degrees of variance... unless the point Hive was originally launched from is an obscenely long distance from UA-97, at least. After all, to be on the opposite side from the hole, UA-97 would have to be further from Hive's origin point than Earth Bet... and the TSAB has found the place Hive came from, but not Earth Bet.
Hive came from a battle on the far side of what the TSAB now considers to be their jurisdiction, and traveled along a path that crossed both three-dimensional space and the dimensional sea. Distances, though, are complicated.

From the TSAB's point of view, there are several potential paths they could take.
  1. Go to UA-97 and travel only through the dimensional sea to reach Bet. This is an incredibly long path for them, as Bet is very far off from UA-97. Total travel time: 3 years.
  2. Start at a wasteland planet just outside of their jurisdiction and travel through three-dimensional space and the dimensional sea to reach Bet. This follows Hive's original path and is shorter. Total travel time: 2 years.
  3. Start at Midchilda and travel through three-dimensional space and the dimensional sea to reach Bet. This is shorter still. Total travel time: 1 year.
  4. Start at one of seven other planets and "spiral" through the dimensional sea to reach Bet. This is the shortest and fastest path available to the TSAB, though not the most direct, taking advantage of known 'twists' of the dimensional sea. Total travel time: 3-6 months, depending on how accurately they can follow the spiral.
From Hive's POV, once she knows where UA-97 is and that it's reasonably close to Belka (as far as these things go)? There's a dimensional axis that leads almost straight there! Total travel time to UA-97: 9-12 days, then however long it would take to get elsewhere in TSAB space.

Note that this last path in particular looks like going the entirely wrong direction based on Hive's original approach vector, as though traveling 90 degrees off of the path you want to take.
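Purely for comparison's sake, the travel times above can be dumped into a quick throwaway script (route names and numbers copied straight from the list; the ranges are collapsed to their midpoints):

```python
# Rough travel-time comparison for the routes described above, in days.
routes = {
    "1. UA-97, dimensional sea only": 3 * 365,
    "2. Wasteland planet, Hive's original path": 2 * 365,
    "3. Midchilda, mixed path": 365,
    "4. Seven-planet spiral": (90 + 180) / 2,   # 3-6 months, midpoint
    "Hive: axis straight to UA-97": (9 + 12) / 2,  # 9-12 days, midpoint
}

# Print fastest-first, to make the gap between Hive and the TSAB obvious.
for name, days in sorted(routes.items(), key=lambda kv: kv[1]):
    print(f"{days:7.1f} days  {name}")
```

Nothing fancy, but it makes the scale of the difference obvious: Hive's shortcut is roughly two orders of magnitude faster than the TSAB's slowest option.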
 

So, while there may be brief snippets from the TSAB point of view during Interlude chapters, Nanoha and friends won't be showing up anytime soon in this story. Probably arriving as BIG DAMN HEROES during whatever version of Gold Morning this story does.
 

I know nothing about what this story is crossed over with, and there is so much tech talk that it's hard to follow sometimes. I just nod my head and say "yes, that" when reading.
 
MOARRR!!
Also, do any canon shards use the powers that the Entities rely on for intergalactic travel, other than spatial manipulation?
The spatial manipulation is enough on its own, though?

I could imagine some sort of propulsion would also be useful, but they've got hundreds of methods for that. It's really Vista's power that's the most crucial.
 

Eh, I don't think Vista's shard is one of the 'galactic transit' shards. For one thing, using it that way would require the Entities to be smart enough to realize it can be used to travel.
 
The Entities are, by all accounts, far superhuman in intelligence when they're fully intact. In the case of Zion and Eden, we're looking at a sexually dimorphic pair where Eden was doing all the thinking -- and Eden very likely fell victim to a PtV'd plot to destroy her, while Zion has lost even whatever thinker shards he had to begin with.

Coupled with that, Vista use(d) her shard to fast-travel absolutely all the time. There is really no chance the entities have missed that application.

Tattletale's shard on its own would suffice to hack up a post-human (though non-gaussian) AGI with just a little bit of extra coding.
 
Last edited:

Actually, no, they aren't superhuman in intelligence at any point. In fact, as a species they're such idiots that it's legendary. Think about it: the Entities are supposedly looking for the 'solution' to entropy and a way to avoid the heat death of the universe... except they actually aren't doing that. What they're looking for is a way to have infinite food and infinite space on one single planet, in order to support their exponential population growth. That is impossible to achieve.

In search of this impossible goal, they do the exact same thing over and over again, destroying habitable worlds in every part of the multiverse they can access in order to fling themselves to the next world and run the exact same experiment: "Can pitting a species against itself tell me how to generate unlimited food and living space? Let's find out." What made the Thinker/Eden so excited about this cycle that it literally faceplanted was the idea of running the exact same experiment, only using politics to guide the conflict rather than random violence. And what happens when they've destroyed the last world capable of supporting life in the entire universe without finding an answer? That question doesn't occur to them, because it can't occur to them.

In fact, as a race they went into a massive feeding/merging frenzy, combining into one single Entity that blew up their homeworld in a birth-by-suicide gambit to throw the next generation out into the wider universe "in all directions" (according to Wildbow), all because one genius among their species, after thousands of cycles, realized they were destroying their world's ability to support life and yelled "Wait! This isn't working! We need a new way!" An individual shard might be a genius compared to Entities as a whole, but when they all combine to form an Entity, the total is not greater than the sum of its parts; it's less. Zion's line, for example, stole all the scientific and technological knowledge of several highly advanced races. And they learned nothing from it.

EDIT: By all rights, entities should be superhuman intellects. But thanks to the way Wildbow described and portrayed them, they are superhuman idiots.
 
Last edited:
Okay. How? [Scratches head confusedly]
As a general rule, a sufficiently powerful solution to any one problem of intelligence suffices to hackjob general intelligence, although not efficiently.

(Potential applications to analysis of human intelligence: Pending.)

In the case of Tattly's shard in particular, it:
- Interacts with the attention network of the human brain, to decide what to focus on.
- Provides a wealth of information on that focus, until told not to.
- Can't directly predict what effect an action will have, but can provide most of the information needed for that prediction. (Probably this means most of its job is working for precog shards, and Tattletale's usage is a sidenote.)
- Feeds back power usage.

This is not sufficient to build AI on its own, but the current biggest problem of AI is building a model of the world, and it entirely solves that problem. Besides that, we still need the attention network, but that's already partially solved. We also need a goal, but that can just be "maximize paperclips". Lastly, we need a way to choose between possible actions and predict their consequences...

To the degree a "precog" shard really is just simulation, the human brain does the exact same thing; the only real difference is, shards are better at it.

Anyway. Ignoring power constraints (the biggest problem with this plan), what I'd do is hook Tattletale's shard up to a simple physical simulator; perhaps a game engine like Unity.

This... will not produce a humanlike AI, to say the least, but the sheer overpoweredness of the shard is enough that I still expect it would demonstrate surprisingly capable behavior, especially if we spend a bit more effort and give it a way to learn from mistakes. Tattletale's shard is also likely able to tell us what we're doing wrong to cause those mistakes, though without a full mind hooked up to it, I'm not sure how we'd use that.

The result is non-gaussian AI, as in "Superhuman by some metrics, subhuman by others".
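The hackjob loop described above can be sketched out, if the shard is replaced by a stub "oracle" and the Unity-style simulator by a toy function. Every name here is hypothetical, and the point is only the shape of the architecture: attention asks the oracle for world-model facts, a simulator predicts each candidate action's outcome, and the agent greedily maximizes its (deliberately silly) paperclip goal:

```python
import random

def shard_oracle(state):
    """Stand-in for Tattletale's shard: given a focus, it returns facts
    about the world. Here it simply reveals the true hidden value."""
    return {"hidden_value": state["hidden_value"]}

def simulate(state, action):
    """Stand-in for the physics simulator (the 'Unity' role): predict
    the next state for a candidate action. Guessing right earns a clip."""
    new = dict(state)
    new["paperclips"] = state["paperclips"] + (1 if action == state["hidden_value"] else 0)
    return new

def agent_step(state, actions):
    # 1. Attention: ask the oracle for information about the current focus.
    facts = shard_oracle(state)
    # 2. Pick whichever action the simulator says maximizes the goal.
    return max(actions, key=lambda a: simulate({**state, **facts}, a)["paperclips"])

random.seed(0)
state = {"paperclips": 0, "hidden_value": random.choice("abc")}
for _ in range(5):
    act = agent_step(state, "abc")
    state = simulate(state, act)
print(state["paperclips"])  # → 5: with a perfect world model, every step is optimal
```

The toy makes the structural point: the "intelligence" here lives almost entirely in the oracle. Swap the omniscient stub for a noisy or absent world model and the same loop flails, which is exactly why the world-model shard is the load-bearing piece.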

If you actually hook it up to a precog shard, however, then I expect it to be just plain superhuman. Still not efficient; you'd want several other types of thinker shards hooked in to achieve that, for example Queen Administrator for the superior attention network and fine control. But when you consider that a complete Entity is made up of thousands of these, all of them slightly different...

There is just no way in hell they're in any way subhuman.
 
Unless they combine in a way that more resembles a screaming pile of people than a combined intelligence. There is no reason to assume that the shards, when combined, would result in something smarter than they would be separately. "A person is smart. People are dumb, panicky, dangerous animals and you know it."
 
On the face of it, you're right. That's the scaling hypothesis -- that adding more computing power and scale to an AI makes it smarter, in the absence of other improvements, and it's something most AI researchers up until recently thought was false. The only ones who placed their bets differently were OpenAI, who are in a financial position where it's the only bet they can make. If it's correct, they might win; if it's false, even if they act as if it's false, they'll definitely lose to Google et al.

Unfortunately for basically the entire universe, it looks like the scaling hypothesis is correct. (This is unfortunate, because while such AIs may indeed become arbitrarily smart -- it seems -- there's no known way to control them. Yudkowsky et al. were searching under the street lights, also assuming the scaling hypothesis is false, because if it's true then... we may shortly find out.)
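As a toy illustration of why "just add more of the same" can work at all, here's the ensemble version of the idea: averaging many independently-noisy estimators shrinks the error roughly as 1/√n, even though no individual estimator gets any smarter. (Purely illustrative; real scaling laws concern parameters and data, not ensembles.)

```python
import random

def noisy_estimate(truth, rng):
    """One 'shard': an unbiased but very noisy guess at the truth."""
    return truth + rng.gauss(0, 1.0)

def ensemble_error(truth, n, rng, trials=2000):
    """Average absolute error of an n-member averaged ensemble."""
    total = 0.0
    for _ in range(trials):
        avg = sum(noisy_estimate(truth, rng) for _ in range(n)) / n
        total += abs(avg - truth)
    return total / trials

rng = random.Random(42)
small = ensemble_error(10.0, 1, rng)    # one dumb estimator
large = ensemble_error(10.0, 100, rng)  # a hundred of the same dumb estimator
print(small, large)  # the 100-member ensemble is roughly 10x more accurate
```

The caveat from the post above still applies: this only works because the estimators' errors are independent and their outputs are combinable. Shards running on mutually incompatible principles would be more like averaging meters with feet.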



Actually, no they aren't superhuman in intelligence at any point. In fact they are as a species such idiots that it's legendary. Think about it, Entities are looking for the 'solution' to entropy and avoiding the heat death of the universe... Except no they actually aren't doing that. What they are looking for is a way to have infinite food and infinite space on one single planet in order to support their exponential population growth. This is something that is impossible to achieve.
Well, let's see.

First off, conservation of energy isn't actually a valid law even in real life. It holds on small scales, but not on cosmological ones: it's dual to time-translation symmetry, which doesn't hold; the universe expands, and the amount of dark energy keeps increasing. Whatever the reason for that, it's far too early to say the Entities can't achieve their goals; that's a rather parochial view that assumes things we have no strong reason to believe.
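A concrete textbook instance of that non-conservation, in case it helps: a freely propagating photon's wavelength stretches with the scale factor a(t) of the expanding universe, so its energy simply decays:

```latex
% Photon redshift in an expanding (FRW) universe:
\lambda(t) \propto a(t)
\qquad\Longrightarrow\qquad
E_\gamma(t) = \frac{hc}{\lambda(t)} \propto \frac{1}{a(t)}
% Between emission and observation the photon's energy drops by
% a factor 1/(1+z), where 1 + z = a(t_{\mathrm{obs}})/a(t_{\mathrm{emit}}).
```

That lost energy doesn't "go" anywhere: in an expanding spacetime there is no global time-translation symmetry for Noether's theorem to hang a conservation law on.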

Second, I invite you to read the orthogonality thesis.
 
Last edited:
It's not just the general process that matters, the specific implementation also matters. Even if the scaling hypothesis is correct, it doesn't mean that the many different shards function in a way that supports combination. And with how all of the shards seem to function in different ways, I'm assuming that plugging two different shards together might mean that they're running off of entirely different principles, and actively interfere with each other, thus making the final result dumber instead of smarter.

And with how the Entities' entire plan seems to be "let's hope someone else figures it out", I am assuming that it is indeed the case.
 
Last edited:
I concede that it's possible, but do we have any reason to believe so?

It isn't how real-world AI seems to work. Hooking together disparate networks doesn't necessarily help, but given sufficient size or fine-tuning it tends to do so.
 
Based on the Entities' approach to their plan (which isn't even really "solving entropy"; it's more "we want infinite food and infinite space to breed"), I'm assuming they're either bad at that level of thinking, or at the very least their decision-making is extremely subpar. Their plan involves using other civilizations to do their thinking for them, in a way that is very detrimental to the process of thinking. And yes, the cycle is at least partially working as intended: the Thinker prepared the basics of it and got it going before faceplanting; the Warrior only added his own shards to the mix. It's a case of "even if you have all the processing power in the world and can think at faster-than-light speeds, it doesn't matter if your default solution to everything is 'punch it'."
 
Last edited:
They want, on one single planet, an infinite amount of food and an infinite amount of living space. They want this so they can breed exponentially without any limit or having to fight over resources. They have access to a finite amount of the multiverse, as evidenced by the fact that they kept running out of living space across every reality they can reach. In other words, they want infinite space and resources in a finite area while infinitely expanding their population. This is not actually possible.
 
On the face of it, you're right. That's the scaling hypothesis -- that adding more computing power and scale to an AI makes it smarter, in the absence of other improvements, and it's something most AI researchers up until recently thought was false.
Right here is the false premise of your theory: that the Entities act like a computer system and not a biological one. Since biological systems are fundamentally different from computer systems, and since the Entities are provably different from either, no model based in computer science or biology can accurately describe them; instead we must rely solely upon direct observation.

Which tells us that they are massive idiots.
 
Based on the Entities' approach to their plan (which isn't even really "solving entropy", it's more "we want infinite food and infinite space to breed"), which involves using other civilizations to do their thinking for them, I am assuming that they're either bad at that level of thinking, or at the very least their decision making is extremely subpar in a "even if you have all the processing power in the world and can think at faster than light speeds, it doesn't matter if your default solution to everything is 'punch it" way.
To be honest, I push most of that on "Wildbow didn't think it through", and ignore it for my analysis of the entities. You're correct that it indicates they're not very smart, but that also directly contradicts what we're shown in-story of their components.
 
Apologies, I keep adding to the point while you're already busy answering. Either way, I still think it's possible to have all the processing power in the world and massively suck at using it. After all, there's a reason so many people separate "intelligence" and "wisdom" into entirely different things. There's no reason a super-intelligent entity wouldn't keep making utterly dumb decisions, because that's all it knows how to do. Recognizing that there might be a better solution requires you to accept that your current one isn't working, and the Entities as shown fall into exactly that mental trap: they never once doubt that what they're doing is correct. They seem to be as stuck in their cycle as everybody they subject to it.
 
I don't dispute that. It's why I linked to the orthogonality thesis, in fact: As it points out, you can combine any level of intelligence with any set of goals, except perhaps a level of intelligence too low to comprehend the goals.

At one extreme, you have a rock with some Buddhist mantras carved into it. At the other, you have a galaxy-sized brain with the sole goal of converting all of existence into paperclips, and as much of a cliche as that is, we'd be no less dead if it happened.

I believe the Entities are closer to the latter than the former.
 