Ya…. Like it would ever go that peacefully.

Frankly, considering how much fake crap gets thrown in our faces basically constantly, I imagine it would be extremely difficult for an AI to prove they're "real".

Considering this is taking place in Canada, some time after an event (which may be based on what's actually happening IRL) caused the USA to self-destruct as a nation - possibly to the point of there being no states left - due to the actions of the fascists who accidentally gave birth to Dragon when they wanted a super-racist chatbot...

Yeah, I could see the events happening. The lawyer did mention he'd done a lot of due diligence as well. Also, this is @mp3.1415player, so competent professionals are a thing.
 
Ya…. Like it would ever go that peacefully.

Frankly, considering how much fake crap gets thrown in our faces basically constantly, I imagine it would be extremely difficult for an AI to prove they're "real".
Well... You could do a three-way Turing Test? Human/AI/Politician. The aim is then to spot the human, and the AI, based on their responses - I'll leave others to figure out why a Politician is included. :)
 
Not the whole-grain stuff that still has all of wheat's natural nutrients and vitamins -- the material that gets removed to make all-purpose flour also contains oils, which go rancid in a few months in whole-wheat flour. I assume it would last longer as intact grain, given the smaller surface area exposing the oils to oxygen, but there's still a shelf life to take into account.
Un-milled grain (wheat in particular, but they're all similar) has a quite long shelf-life; the primary concerns are insect management, humidity, and temperature.

In general, wheat is still edible after 30+ years, though after a decade or so it's probably not going to sprout all that well.

...now I'm curious what "normal" Brockton Bay residents have in the way of long-term food storage.
 
Un-milled grain (wheat in particular, but they're all similar) has a quite long shelf-life; the primary concerns are insect management, humidity, and temperature.

In general, wheat is still edible after 30+ years, though after a decade or so it's probably not going to sprout all that well.
Yeah, Poaceae (or Gramineae if you have an older book, I suppose) tend to keep well as "untampered" grains.

That includes most of the usual food grains / cereals, but not oats - oats don't keep nearly as well.

Don't offhand recall how pseudocereals (some of the "specialty grains" like buckwheat) do in long-term storage but would kind of expect that amaranths at least would keep.
 
Amaranth keeps well; buckwheat, not so much. And how is buckwheat a "specialty" grain? If anything amaranth is, at least in Europe, while buckwheat is a "traditional" one. I have buckwheat from 20 years ago that is no longer actually edible.
 
Only problem with using a politician as the third testee is that the results will probably just be culled as junk data due to not actually answering the questions... 🙄
 
Only problem with using a politician as the third testee is that the results will probably just be culled as junk data due to not actually answering the questions... 🙄
The interesting thing about politicians is how they don't answer the questions. That is, assuming they don't go off on a rant, or lie through their teeth. Or deploy statistics. (Some like the reference to "Lies, Damn Lies, and Statistics".)
(How do you tell a politician has stopped lying? Well, there's this thing called a funeral... *)
((Not all politicians lie. But the ones that get into significant positions of power...))

If an AI can understand lying, figure out why lies are told, and grasp the consequences, both positive and negative... are they a lot closer to being able to understand and work with homo saps?

BTW, IMO, for a proper Turing Test you are not allowed to directly ask whether you are talking to a homo sap. or an AI. Or, ask direct personal questions, such as about age, childhood, or relatives. Why? Do you think teaching AIs to lie, really well, is a wise thing to do???


* - Just to be clear, that was a joke. Honest.
 
That's not really how a Turing test works, though.
One way to think about it is that the questioner is supposed to be looking for patterns in the answers, to allow them to deduce whether the answers are from an AI or a homo sap.

Pattern recognition, and storytelling (related to planning for, 'predicting', the future), are pretty key homo sap. capabilities - computer algorithms, not so much. LLM AIs (modern AIs) use neural nets, allegedly in a similar way to humans - people were certainly messing with NNs by the late 1980s, so 'modern'...
 
One way to think about it is that the questioner is supposed to be looking for patterns in the answers, to allow them to deduce whether the answers are from an AI or a homo sap.

Pattern recognition, and storytelling (related to planning for, 'predicting', the future), are pretty key homo sap. capabilities - computer algorithms, not so much. LLM AIs (modern AIs) use neural nets, allegedly in a similar way to humans - people were certainly messing with NNs by the late 1980s, so 'modern'...
Does this relate... at all to the point that a Turing test is not composed of questions the testee is expected to answer, nor is 'culling' testees as bad data applicable?
 
Does this relate... at all to the point that a Turing test is not composed of questions the testee is expected to answer, nor is 'culling' testees as bad data applicable?
Yes, and no. For example, if the questioner asks to be told a short story, with credible characters, is that a reasonable and useful question? What can the questioner expect from the human, but wouldn't expect from an AI?

The original test was about using a Teletype to communicate, to prevent info leakage. So a common typed/read language would be assumed (likely English).

The point that is arguably most important is about the description of the dialog, the 'meta level' you might call it, not the specific questions. So, it is about the 'models', the context.

If you look at LLM ability to answer questions, they fall down on context, things like purpose and meaning. One reason why the (Amazon) Mechanical Turk is still in business...
 
LLMs also lack the ability to actually remember prior information, as demonstrated by a somewhat recent video game that used an LLM for its NPC dialog. The "AI" has set concepts and scripts it's supposed to use, but it can't actually remember whether any of those concepts and scripts got used.
 
LLMs also lack the ability to actually remember prior information, as demonstrated by a somewhat recent video game that used an LLM for its NPC dialog. The "AI" has set concepts and scripts it's supposed to use, but it can't actually remember whether any of those concepts and scripts got used.
That's a design decision, AFAIK.

LLMs that do more general question answering have been getting better at keeping a little more context (session level), so you can chain and expand queries. Ideally each NPC would have a slowly built-up interaction database, with provision for NPCs exchanging info outside the knowledge of PCs ('talking', 'txting'; preferably with potential mistakes, maybe biases).
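
A toy sketch of that idea, assuming nothing about any real engine (every name here is invented): each NPC keeps its own event log, only a recent slice of which gets pasted into the stateless LLM prompt, and NPCs can pass events to each other outside the player's sight - potential mistakes and biases could be injected inside gossip_to().

```python
from dataclasses import dataclass, field

@dataclass
class NPCMemory:
    name: str
    events: list[str] = field(default_factory=list)  # what this NPC knows

    def remember(self, event: str) -> None:
        self.events.append(event)

    def gossip_to(self, other: "NPCMemory") -> None:
        # NPC-to-NPC exchange, outside the player's knowledge.
        for event in self.events:
            if event not in other.events:
                other.remember(event)

    def prompt_context(self, limit: int = 5) -> str:
        # Only a recent slice gets pasted into the (stateless) LLM prompt.
        if not self.events:
            return f"{self.name} has no notable memories of the player."
        return f"{self.name} recalls: " + "; ".join(self.events[-limit:])

guard = NPCMemory("Guard")
barkeep = NPCMemory("Barkeep")
guard.remember("player started a brawl on day 3")
guard.gossip_to(barkeep)  # the barkeep has now 'heard about' the brawl
print(barkeep.prompt_context())
```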
 
Yes, and no. For example, if the questioner asks to be told a short story, with credible characters, is that a reasonable and useful question? What can the questioner expect from the human, but wouldn't expect from an AI?

The original test was about using a Teletype to communicate, to prevent info leakage. So a common typed/read language would be assumed (likely English).

The point that is arguably most important is about the description of the dialog, the 'meta level' you might call it, not the specific questions. So, it is about the 'models', the context.

If you look at LLM ability to answer questions, they fall down on context, things like purpose and meaning. One reason why the (Amazon) Mechanical Turk is still in business...
This continues to have no discernible relation to my posts that you're quoting.
LLMs also lack the ability to actually remember prior information, as demonstrated by a somewhat recent video game that used an LLM for its NPC dialog. The "AI" has set concepts and scripts it's supposed to use, but it can't actually remember whether any of those concepts and scripts got used.
LLMs are, indeed, normally memory-less. They are input-output (optionally non-deterministic) algorithms. This is the elementary design.

Most systems using LLMs are not memory-less. But that's the architecture around the LLM, not the LLM. Computers are, of course, very good at retaining whatever information they're directed to.

There is no way that any developer should ever be surprised by which history an LLM does and doesn't have access to.
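
A minimal sketch of that split, with a made-up llm_complete() standing in for any real model API: the model call itself retains nothing, and all "memory" is just history the surrounding code chooses to resend.

```python
def llm_complete(prompt: str) -> str:
    # Stateless stand-in for a real model API: prompt in, reply out,
    # nothing retained between calls.
    return f"[reply to {len(prompt)} chars of prompt]"

class ChatSession:
    """The memory lives here, in the wrapper, not in the LLM."""

    def __init__(self) -> None:
        self.history: list[str] = []

    def say(self, user_msg: str) -> str:
        self.history.append(f"User: {user_msg}")
        # The developer decides exactly which history the model sees:
        reply = llm_complete("\n".join(self.history))
        self.history.append(f"Assistant: {reply}")
        return reply

session = ChatSession()
session.say("Remember: my name is Dana.")
# The second call only 'knows' the name because we chose to resend it.
print(session.say("What's my name?"))
```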
 
There is no way that any developer should ever be surprised by which history an LLM does and doesn't have access to.
Developer, no; salesman or some management, yes. There's a strong tendency to conflate the whole 'modern AI' system, as it is presented to the general public, with the 'LLM system'. 'The user-interface is the system' is quite common... *

An NPC being able to talk about previous interactions is not unreasonable. "I've seen you in here a few times, before. We've got you down as a troublemaker. Why should we do business with you?" could reasonably be something an NPC dialog could include. But it needs extra design and development resources...


* - the other half of this is 'The user-interface is (just) a surface layer, we can easily just replace it', which ignores the way the UI and the underlying system are deeply conceptually connected...
 
And then there are the indie developers who decide that AI (using LLMs) is the way to go to create the entire game. Use generative AI to create the art assets. Use ChatGPT to write the story. Use it as the NPCs' programming (yes, including telling the NPC when to go into the kitchen and stand there while "cooking"). End result? A dumpster-fire "escape room" game centered around a chatbot.
 
And then there are the indie developers who decide that AI (using LLMs) is the way to go to create the entire game. Use generative AI to create the art assets. Use ChatGPT to write the story. Use it as the NPCs' programming (yes, including telling the NPC when to go into the kitchen and stand there while "cooking"). End result? A dumpster-fire "escape room" game centered around a chatbot.
... I could see using AI Art assets, and MAYBE some background NPC flavor text, but STORY?!?!? That's just not right.

(OK, even going the Art Asset route can be a bit dickish, but as someone who has a hard time managing stick figures, I can see where a skilled storyteller and/or code writer might go that route for efficiency's sake...)
 
And then there are the indie developers who decide that AI (using LLMs) is the way to go to create the entire game. Use generative AI to create the art assets. Use ChatGPT to write the story. Use it as the NPCs' programming (yes, including telling the NPC when to go into the kitchen and stand there while "cooking"). End result? A dumpster-fire "escape room" game centered around a chatbot.
If you want to play that game, then you need civil engineering, architecture, and geography/weather-effects tools to feed the art-asset design, plus ecology and maybe evolution for the living-things side. ChatGPT produces 'shallow' stories, without good character-background logic - OK for brainstorming, really no good for serious publication. NPC programming needs (at least) a resources-plus-'needs' modelling tool, and an (at least minimal) NPC interaction history - don't forget NPCs interact with other NPCs (even a very crude system can give 'fun' results).

(I liked that the original ST:TOS ship designs were published lacking bathrooms/toilets... :) )

Even an artificial stupidity system would walk off, muttering 'Idiots', on seeing some pretty terrible 'throw LLM tools at the problem' systems.

Do you think not-Dragon would be willing to create 'artificial stupidity systems', even as an example of how not to do things??? :)
 
If you want to play that game, then you need civil engineering, architecture, and geography/weather-effects tools to feed the art-asset design, plus ecology and maybe evolution for the living-things side. ChatGPT produces 'shallow' stories, without good character-background logic - OK for brainstorming, really no good for serious publication. NPC programming needs (at least) a resources-plus-'needs' modelling tool, and an (at least minimal) NPC interaction history - don't forget NPCs interact with other NPCs (even a very crude system can give 'fun' results).

I don't want to play that game. But sadly, someone made it. It might be taken off Steam by now as part of Gabe's war on AI-built games. The basic concept of the game was "you wake up in the apartment of a catgirl (an unmodified Unity store asset character model), who won't let you leave. Figure out how to leave while having to interact with the catgirl". Story beats include "the catgirl is a serial killer and stalker, who's killed everyone that got close to you before abducting you." The chatbot used for the catgirl's AI knows she's a serial killer and stalker. But it can't remember whether the player has uncovered that information during gameplay.

A YouTuber I watched doing a let's-play/review of the game suspected something sinister was going on, naturally. So he said at the very start of a playthrough, "I know it's horrible, but your crimes will never get between us." At which point the chatbot happily opened up the murder trophy room and gushed about how killing all those other women was to keep (player character) safe. Then, while standing in said murder trophy room, having just admitted to murdering several people, when asked "how many did you kill" the chatbot started deflecting with one of the "no secrets have come to light" dialog options, having forgotten it had admitted to murder and was standing in front of the grisly trophies.
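
For what it's worth, the fix is ordinary game state rather than better AI - a hedged sketch (all names invented) where the engine itself tracks the reveal and gates the chatbot's prompt on it:

```python
revealed: set[str] = set()  # facts the player has uncovered, tracked by the engine

def note_player_line(line: str) -> None:
    # Crude keyword trigger for the sketch; a real game would hook the
    # actual reveal events (opening the trophy room, etc.).
    if "murder" in line.lower() or "crimes" in line.lower():
        revealed.add("is_serial_killer")

def build_npc_prompt(player_line: str) -> str:
    # The persona sent to the LLM depends on engine-tracked state, not on
    # whatever the LLM happens to 'remember'.
    if "is_serial_killer" in revealed:
        persona = "The player already knows about the murders; do not deny them."
    else:
        persona = "The murders are a secret; deflect any questions about them."
    return f"{persona}\nPlayer: {player_line}\nCatgirl:"

print(build_npc_prompt("How many did you kill?"))  # deflecting persona
note_player_line("I know about your crimes.")
print(build_npc_prompt("How many did you kill?"))  # acknowledging persona
```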
 