Vespa

I feel like an idiot for not knowing but what's a LLM? I'm always so excited when you start a new fic. Also your PHO chapters are always awesome.
Large Language Model - a self-modifying statistical model of language use. Commonly referred to these days as "AI" despite having zero understanding of meaning. The same mathematical technique has also been applied to large databases of (mostly stolen) images to end up with those image-generator "AIs" like DALL-E and so on.
 
An LLM is an ASE.

Not an AI.

Applied Statistical Engine, in other words. It runs probabilities over a massive database of 'facts' to end up with something that's statistically, based on the model in question, most likely to be what you wanted. In very limited cases, with carefully curated datasets and good models, they can produce some incredibly useful results. On the other hand, assuming they're capable of actually understanding what you want is the sure path to disappointment and potentially really bad outcomes...

Several legal companies have found that out the hard way :)
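
Just to make the 'statistical engine' point concrete, here's a deliberately silly toy in Python. It has nothing to do with how a real model is actually built - it just shows the principle of "answer with whatever is statistically likely, with zero idea what any of it means":

```python
from collections import defaultdict, Counter

# Toy "applied statistical engine": learn which word most often follows a
# two-word context, then always answer with the statistically likely one.
# Real LLMs use neural networks over tokens, not a lookup table, but the
# principle - pick the probable continuation, with no notion of meaning -
# is the same.

corpus = (
    "the cat sat on the mat . the cat sat on the sofa . "
    "the cat ate the fish . the dog ate the bone ."
).split()

follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def complete(a, b):
    """Return the most frequent continuation of the two-word context (a, b)."""
    options = follows.get((a, b))
    return options.most_common(1)[0][0] if options else "<no idea>"

print(complete("the", "cat"))   # 'sat' - seen twice, vs 'ate' once
print(complete("sat", "on"))    # 'the'
print(complete("the", "moon"))  # '<no idea>' - nothing like it in the 'training data'
```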

In a very real sense this describes Coil's power remarkably well. It's all simulations all the way down, and capable of remarkable things but sometimes those things are remarkably bad. He assumes it's infallible, so... 🤷‍♂️

An actual AI, ie an emulation of a real mind, human or otherwise, is an entirely different thing and I'd be unsurprised to find out one day you can't get there from here via LLMs. Although there may well be aspects of those systems which are used, if we ever manage it.
 
I think the main difference between AIs (or better, AGIs) and LLMs is that LLMs have no impulses, instincts, desires or similar. They are just machines that sit there doing nothing until an input requires them to create an output. And even that requires carefully curated training data.

They are decent for the specific purpose they were designed for. Like, generating text. For everything else they are a liability. An LLM cannot count or calculate; it will instead present the most popular answer to your question. Depending on its training data, this could be hilariously wrong. You can also push it to disregard a result and it will present you the next one on the list, insisting each time that this is 100% correct.

I quit my last job because my boss constantly asked ChatGPT for advice while disregarding mine. Those answers resulted in me spending weeks with lawyers, court officials, tax advisors and accountants trying to fix his mistakes. For example, he wanted to employ a new intern who was supposed to work 40h/week but only wanted to pay 2000€ a month. When I told him that this would be completely illegal since the intern would be below minimum wage, he told me "ChatGPT told me it's okay!" ... and went ahead with the contract like that. And that's one of the milder mistakes. Fun times.
 
I quit my last job because my boss constantly asked ChatGPT for advice while disregarding mine. Those answers resulted in me spending weeks with lawyers, court officials, tax advisors and accountants trying to fix his mistakes. For example, he wanted to employ a new intern who was supposed to work 40h/week but only wanted to pay 2000€ a month. When I told him that this would be completely illegal since the intern would be below minimum wage, he told me "ChatGPT told me it's okay!" ... and went ahead with the contract like that. And that's one of the milder mistakes. Fun times.
There's been at least one lawyer who got in big trouble when they let ChatGPT write their legal brief. Because it inserted cases into the brief that didn't exist.
 
An actual AI, ie an emulation of a real mind, human or otherwise, is an entirely different thing and I'd be unsurprised to find out one day you can't get there from here via LLMs. Although there may well be aspects of those systems which are used, if we ever manage it.
Incorporating the kind of pattern-matching tool that LLMs are would be a useful component of a mind. But yes: there is little evidence to support the idea that it, by itself, leads to consciousness.
 
ChatGPT is not designed to provide accurate, correct, or true answers. It is designed to provide things that are shaped like answers. Nothing more. It analyses the shape of your question, and provides cruft scraped from the bowels of the internet that is shaped like previous answers to previous questions that were shaped like yours. It does not actually understand anything you say.

Want to test it? Pick a big word and count the number of a particular vowel that is in it (make sure it's more than one) and then ask it how many of that vowel are in the word. More than 50% of the time, it will be wrong. Very insistently wrong. If you can't trust it to count how many e's are in the word 'elephant', how can you trust it to give you advice on employment law or wages?
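
If you want to script that test, the ground truth takes one line of Python. (The ask_model bit at the end is a hypothetical placeholder, not any particular vendor's API - plug in whatever chatbot you want to embarrass and compare.)

```python
# Ground truth for the letter-counting test. Trivial for a program,
# because a program actually counts instead of pattern-matching.

def count_letter(word: str, letter: str) -> int:
    """Count occurrences of a letter in a word, case-insensitively."""
    return word.lower().count(letter.lower())

print(count_letter("elephant", "e"))      # 2
print(count_letter("onomatopoeia", "o"))  # 4
print(count_letter("Mississippi", "s"))   # 4

# ask_model() is a hypothetical stand-in for whatever chatbot you're testing;
# swap in a real API call and compare its answer against count_letter().
# model_answer = ask_model("How many o's are in 'onomatopoeia'?")
# assert model_answer == count_letter("onomatopoeia", "o"), "told you it can't count"
```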
 
LLMs are... 'shallow'. A big problem is that they work on a 'you gave me x+y+z, and the database says those go with v+w, so that's what you get' basis. They are 'unanchored' in reality, and the only context they have for making sense of your query is the query itself (maybe plus your recent previous queries). They are an 'impersonal not-an-AI'.

If you want something more like a real AI (AGI) then you'd need to provide more context. A lot more. Possibly your entire personal info. Giving this to the Big Tech people? Really not a smart idea. Identity theft, your life as their product sold to advertisers, barely the tip of the downsides. The obvious issues of who pays for the processing+storage+comms you want to use.

What are shards up to? Canon is that there was a massive simulation of all the relevant Earths and their peoples, and what would happen over the next three hundred years, before the Entities arrived. Fun questions about where they got the info to do that, and our understanding of the required amount of processing... But. Triggering messes with that prediction, because it has to, to source new [DATA]. So, there needs to be a feedback loop, somehow updating things.

LLMs lack a feedback loop, because they go strange if you let them train, unsupervised, on their own queries. And LLMs will do the equivalent of 'text completion', or 'auto-corrupt', so you need to carefully inspect anything they produce for flaws. Do a 'sanity check'.
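
The 'sanity check' part is worth automating wherever you can. A rough sketch of the pattern - llm_generate() here is a hypothetical stand-in for whatever model you're using - the key being that the checker is boring deterministic code with a trusted data source, not a second model grading the first one's homework:

```python
# Sketch of a deterministic sanity check wrapped around a model's output.
# llm_generate() is hypothetical - substitute your actual model call.
# The validator is ordinary code with a known-good data source, so it
# cannot hallucinate right along with the generator.

KNOWN_CASES = {"Smith v. Jones (1998)", "Doe v. Acme Corp (2011)"}  # trusted database

def llm_generate(prompt: str) -> list[str]:
    """Hypothetical model call; pretend it returned some cited cases."""
    return ["Smith v. Jones (1998)", "Totally Real v. Made Up (2024)"]

def sanity_check(citations: list[str]) -> list[str]:
    """Keep only citations that actually exist in the trusted database."""
    return [c for c in citations if c in KNOWN_CASES]

draft = llm_generate("Write me a brief about widget liability.")
verified = sanity_check(draft)
rejected = [c for c in draft if c not in verified]
if rejected:
    print("Hallucinated, do not file:", rejected)
print("Safe to cite:", verified)
```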

Do you think there's anything doing a 'sanity check' on the actions of shards? There would seem to be some (crude) syntax checking, but...
(Some might claim it was likely the Thinker's job to make this whole mess work...)
 
I think the main difference between AIs (or better, AGIs) and LLMs is that LLMs have no impulses, instincts, desires or similar. They are just machines that sit there doing nothing until an input requires them to create an output. And even that requires carefully curated training data.

They are decent for the specific purpose they were designed for. Like, generating text. For everything else they are a liability. An LLM cannot count or calculate; it will instead present the most popular answer to your question. Depending on its training data, this could be hilariously wrong. You can also push it to disregard a result and it will present you the next one on the list, insisting each time that this is 100% correct.

I quit my last job because my boss constantly asked ChatGPT for advice while disregarding mine. Those answers resulted in me spending weeks with lawyers, court officials, tax advisors and accountants trying to fix his mistakes. For example, he wanted to employ a new intern who was supposed to work 40h/week but only wanted to pay 2000€ a month. When I told him that this would be completely illegal since the intern would be below minimum wage, he told me "ChatGPT told me it's okay!" ... and went ahead with the contract like that. And that's one of the milder mistakes. Fun times.

One of the largest drawbacks to them, at least all the current ones, is that they're basically stateless. They don't remember what the last operation was, or self-learn from queries. Each one is taken in isolation other than commands that change persistent values. So they can't extrapolate past the data they were initially given. Neither can they error-check their database and prune out contradictions.
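
And that statelessness is very literal: the 'memory' in a chatbot is just the client re-sending the whole conversation with every single request. Rough sketch, with generate() as a hypothetical stand-in for a real completion call:

```python
# The model itself keeps no state between calls; any appearance of memory
# comes from the caller gluing the whole conversation back together and
# sending it again every single time.
# generate() is a hypothetical stand-in for a real completion API.

def generate(full_prompt: str) -> str:
    """Hypothetical one-shot model call: text in, text out, nothing retained."""
    return "...some statistically plausible reply..."

history: list[str] = []

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    reply = generate("\n".join(history))   # resend EVERYTHING so far
    history.append(f"Assistant: {reply}")
    return reply

chat("My name is Bob.")
chat("What's my name?")  # only answerable because 'My name is Bob.' was resent
```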

Honestly, some of the Shards are more like this than I realized, the more I think about it :)

QA is one of the few exceptions, I think.

Which is useful here ;)
 
Yeah, I love how badly Generative AI is named.

The first big misnomer is that it's AI. It's not. At all. Current "AI" is hyped-up algorithms, riding on everyone thinking AI is the way of the future - similar to "Smart" appliances that really aren't. To me the defining feature of AI is that it understands the data instead of just processing it. GenAI may be able to tell you that blue is a calming color, but only because it found that reported in other places, not because it actually found it calming.

The other half is the "generative" part. It doesn't generate anything, but rather just makes gestalt deductions based on the data you feed it. As an oversimplified example, if you created a "GenAI" to do addition, it could determine every possible addition problem and solve them... but it would never invent multiplication. It simply isn't capable of generating new content.
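
A deliberately over-simplified sketch of that addition example: a 'model' that memorized every sum it was shown can parrot them back, but there's no rule it could apply outside its table, and certainly no path from there to inventing multiplication.

```python
# Over-simplified "GenAI does addition": memorize every example seen in
# training, answer by lookup. No general rule is ever extracted, so nothing
# outside the table can be answered, and a new operation like multiplication
# can never emerge from it.

training_data = {(a, b): a + b for a in range(10) for b in range(10)}

def toy_model(a: int, b: int):
    """Answer only if this exact problem appeared in the training data."""
    return training_data.get((a, b), "no statistically likely answer")

print(toy_model(3, 4))    # 7 - this exact problem was in the training data
print(toy_model(12, 30))  # no answer - outside the table, and no rule to fall back on
# And nowhere in this setup is there any mechanism that could produce "a * b".
```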

A lot of this would apply to the shards and why their plan will never work: They depend on other lifeforms to be creative, but they're terrified of anything they don't understand, to the point they'd destroy it before it could become a threat. The blackboxing of tinkertech is a great example of this.
 
A lot of this would apply to the shards and why their plan will never work: They depend on other lifeforms to be creative, but they're terrified of anything they don't understand, to the point they'd destroy it before it could become a threat. The blackboxing of tinkertech is a great example of this.
Is "terrified" even the word? More like "a prior instance had problems with result X ∴ cull result X from future instances" – to be terrified of something implies that it can be thought about and flinched away from. The entities, I think, probably just set a "subconscious" filter on outputs. They don't think of a solution because they made themselves unable to think of the problem.
 
One of the largest drawbacks to them, at least all the current ones, is that they're basically stateless. They don't remember what the last operation was, or self-learn from queries. Each one is taken in isolation other than commands that change persistent values. So they can't extrapolate past the data they were initially given. Neither can they error-check their database and prune out contradictions.

Honestly, some of the Shards are more like this than I realized, the more I think about it :)

QA is one of the few exceptions, I think.

Which is useful here ;)
Problem or feature? Cost issues aside, I would worry a lot about a service like that self-modifying in response to its input, even before accounting for the vast volume of malicious input that would prompt.

It's a problem if what you want is for ChatGPT to be AGI, of course.
 
Heh. Imagine if all shard future-modelling is actually an LLM type model of particle interactions or something?
They poured in everything they knew about physics and left it running.
 
Problem or feature? Cost issues aside, I would worry a lot about a service like that self-modifying in response to its input, even before accounting for the vast volume of malicious input that would prompt.

It's a problem if what you want is for ChatGPT to be AGI, of course.

It's both. There are times when having it recall what happened before and expand on that would be useful. In other cases, starting with a blank slate is more important.

The main point is that neither is intelligent. And calling it 'AI' is glossing over a whole pile of things that can and do cause all manner of problems.
 
The main point is that neither is intelligent. And calling it 'AI' is glossing over a whole pile of things that can and do cause all manner of problems.
Almost nothing in the history of the field of AI has been anything like intelligent in the way this argument demands. It doesn't seem an interesting note.

Though I suppose the fact that I am just not paying attention to the clown parade holding the counter position is important to that perspective.
 
Fun questions about where they got the info to do that, and our understanding of the required amount of processing...

According to WB, the higher tier precog shards look into the actual future. Straight up magic, not a simulation. Lesser shards like Coil's can only simulate.


Do a 'sanity check'.

You know what's hilarious? Several companies added a second "AI" to do the sanity checks. But if the first hallucinates "facts", then the second might just as easily hallucinate that those "facts" are correct.


ChatGPT has an entire office of people behind it, even with how advanced it is.

And it's not really all that advanced. It's mostly marketing noise and empty promises. Remember when the CEO claimed ChatGPT could pass the bar exam? Hah. Yeah, sure.


A lot of this would apply to the shards and why their plan will never work: They depend on other lifeforms to be creative, but they're terrified of anything they don't understand, to the point they'd destroy it before it could become a threat. The blackboxing of tinkertech is a great example of this.

It's self-sabotage. If you don't understand the basics, then you can never improve upon them. By withholding all of the Tinker knowledge, the shards make sure that they will never get any creative improvements. The best they can hope for is new applications of known methods, and then usually only in a narrow area, meaning combat.

Shards have the technology to create real AGIs. I don't understand why they don't seed a couple of star systems with nanites and give the AIs full control. Then just keep watch on how they iterate, improve and progress. Feed them all your theoretical knowledge and harvest the results.

That at least would be more productive than their stupid cycle experiment.
 
Please tell me you're joking. Because the idea of Murder Hornets that can remember you is fucking terrifying.
Oh no it's quite true. Even regular hornets can remember faces. In fact, if you don't just kill them out of hand you can teach the queen that you're cool and not a threat and they'll just ignore you or hang out.
 
In a very real sense this describes Coil's power remarkably well. It's all simulations all the way down, and capable of remarkable things but sometimes those things are remarkably bad. He assumes it's infallible, so... 🤷‍♂️
There was a Worm fanfic where Taylor (and Alec) was a scion of Amber, and after she had walked the Pattern, Taylor became a massive Out Of Context problem for the shards. To the point that when Taylor and Lisa confronted Coil in his lair, his shard autopiloted him through a timeline where he was torturing Lisa, right up until Lisa shot him in the head.
 