Ted Chiang: ChatGPT Is a Blurry JPEG of the Web

Incidentally? It's been a couple years since LLMs were trained solely to generate the "most probable next token".
The field is advancing quite rapidly, so I'm unsurprised those not in the thick of it can't quite keep up with terminology or how the AI works. Doesn't help that stuff like ChatGPT or image generation are the most prominent public examples and have sparked... no small amount of backlash from misuse by random folk, alongside ethical concerns about how they were trained in the first place.

Definitely interested in what the state of affairs will be in another five years. Kinda wild it was only introduced a few years back on the global stage - outside of researchers, anyway.
 
I mean, even in the big 3 realm, shit has been insane. Gemini's research mode is absolutely fantastic.
 
I mean, even in the big 3 realm, shit has been insane. Gemini's research mode is absolutely fantastic.
Hah... yes... when it works. Its internal architecture is a bit perpendicular to everything else, so we had a couple outages.

Still, it's worth a try! IIRC the free tier gives you 5 uses per month, so for everyone else here, there's no real excuse not to try it. Just try not to do it while I'm oncall...

2.5 Pro also seems to be getting good reviews. You can try that on AI Studio right now, with the caveat that capacity is limited. If you get an error, try again after I've scrounged under the sofa for spare TPUs. There's no way to pay for access at the moment, sorry to say; those 50 free requests / day are all you get.
 
I mean, even in the big 3 realm, shit has been insane. Gemini's research mode is absolutely fantastic.
Big 3 realm? How's Gemini been helping out with research? Often you only see folk complaining about AI getting shoved into half the internet or the desktop programs people use, which... I can get why it'd bother folk! Rapid change can be scary!

2.5 Pro also seems to be getting good reviews. You can try that on AI Studio right now, with the caveat that capacity is limited. If you get an error, try again after I've scrounged under the sofa for spare TPUs. There's no way to pay for access at the moment, sorry to say; those 50 free requests / day are all you get.
Even Google is having issues sourcing computer parts for AI stuff? Guess the demand is sky high.
 
I think the complaints about it being forced are fair. I already have ChatGPT and Claude; I don't need Copilot, so I don't love that it's being pushed on me.
 
I think the complaints about it being forced are fair. I already have ChatGPT and Claude; I don't need Copilot, so I don't love that it's being pushed on me.
I have trouble seeing how one could disagree with those complaints.

I sometimes get use out of Google Search AI summaries, but that doesn't mean I appreciate having them stuck into my searches unasked for. Never mind whatever absurdity Microsoft is trying for with cramming crap right into my operating system.

Not sure there's anything really AI-specific about tech giants forcing unwanted 'upgrades' in users' faces, though. That was already business as usual before the LLM story arc took off.
 
My irritation is that I normally don't want to use AI-powered stuff at all. However, there are definitely times when I might have wanted to, but I don't trust large companies in general. But that's just me. I recognize that as much as I hate AI, it's definitely here.
 
I think the complaints about it being forced are fair. I already have ChatGPT and Claude; I don't need Copilot, so I don't love that it's being pushed on me.

Maybe, but maybe we're also at that point where things are transitioning, like from traditional menus to the ribbon or from a structured Start Menu to a search-based one. It's hard to tell; I know I've posted before about things that are just easier to deal with through Copilot.
 
My irritation is that I normally don't want to use AI-powered stuff at all. However, there are definitely times when I might have wanted to, but I don't trust large companies in general. But that's just me. I recognize that as much as I hate AI, it's definitely here.
I'm not sure what not 'trusting' large companies implies exactly for you, but there are open models you can run at home if you've got the equipment and make the effort.
 
I'm not sure what not 'trusting' large companies implies exactly for you, but there are open models you can run at home if you've got the equipment and make the effort.

I struggle to run Stable Diffusion XL - anything larger and I don't have the VRAM for it. And really, it's that Google and OpenAI don't give me faith in them. Microsoft is rapidly tending in that direction.
 
It seems to be the other way around. The only person seriously discussing tariffs in the past decade has been Peter Navarro (Trump's main advisor on trade), and as a result, all the LLM results are poisoned by his ideas.

Although we can't really rule out a full ouroboros, given how low-effort the whole thing was.
 
It seems to be the other way around. The only person seriously discussing tariffs in the past decade has been Peter Navarro (Trump's main advisor on trade), and as a result, all the LLM results are poisoned by his ideas.

Although we can't really rule out a full ouroboros, given how low-effort the whole thing was.

Well, the justification posted to the government website is very clearly LLM-written; it sets a variable to < 0, then later asserts that same variable is = 4. It also has an incoherent sentence fragment that is the kind of sentence fragment an LLM will create (because it has a word in the middle that could have been a verb that would have made it work, except that it's actually a noun).

It also, in an official announcement of a policy, says "Higher minimum rates might be necessary to limit heterogeneity in rates and reduce transshipment", which... well, they are higher! The minimum rate is 10%, not the zero it previously said! This is announcing a policy! There is no reason for it to say 'might'! This is LLM-speak!

(edit: Reciprocal Tariff Calculations to actually read the blatant LLM-speak)
 
People asked a bunch of AI to calculate the 'proper' tariffs on foreign nations based on the trade deficit. Guess what numbers they came up with?
And? Just because the AI gives the same answer doesn't mean it was used at all in the law. I have serious doubts about it, mainly because in the Twitter thread that started this, all of the AI examples caution against it (ChatGPT straight up calls it naive). And no, I'm not going to put the blame on AI when the blame should always fall on the users.

(edit: Reciprocal Tariff Calculations to actually read the blatant LLM-speak)
I'm also hesitant about this because the references are real. I wouldn't be surprised if the wording was run through an LLM for an editing pass, as that's getting more and more common.
 
One of the biggest AI devs in the world is actively ruining the country. But I'm sure his AI isn't used to help do that at all.
Let me put it this way: if the LLMs had said that this tariff policy was a no good, very bad idea, do you think the US government would have gone "Oh, fair enough, chuck it into the bins lads"?

I don't doubt that at least one person, statistically speaking, used an LLM during the initial brainstorming phase for this policy, but you might as well decry Linux for the subjugation of the North Korean people.
 
Well, the justification posted to the government website is very clearly LLM-written; it sets a variable to < 0, then later asserts that same variable is = 4. It also has an incoherent sentence fragment that is the kind of sentence fragment an LLM will create (because it has a word in the middle that could have been a verb that would have made it work, except that it's actually a noun).

It also, in an official announcement of a policy, says "Higher minimum rates might be necessary to limit heterogeneity in rates and reduce transshipment", which... well, they are higher! The minimum rate is 10%, not the zero it previously said! This is announcing a policy! There is no reason for it to say 'might'! This is LLM-speak!

(edit: Reciprocal Tariff Calculations to actually read the blatant LLM-speak)
We are in agreement that the entire thing is a poorly researched disaster that had less than 15 minutes' actual thought put into its presentation. And maybe someone really did say "Hey ChatGPT, how to tariff pls" during its conception. But to be honest, those just look like... typos. The writing style doesn't match any of the common models, the issues you're describing aren't actually common LLM failure modes, and this is exactly the kind of math Peter Navarro has been shoving into Trump's ear for a decade already.
 
I'm also hesitant about this because the references are real. I wouldn't be surprised if the wording was run through an LLM for an editing pass, as that's getting more and more common.

It says epsilon is < 0, and also = 4. No human makes that mistake, are you kidding? People can keep track of variables for a couple of lines at a time.
It has the sentence fragment "To calculate reciprocal tariffs, import and export data from the U.S. Census Bureau for 2024." where it kind of looks like it's treating import as a verb. That is LLM-plausible, and I can't even begin to comprehend how a human would end up producing it.
It has the weird and ambiguous "Higher minimum rates might be necessary to limit heterogeneity in rates and reduce transshipment." in a policy announcement - this is the kind of nonsense LLMs always produce; every single LLM output I've seen has mealy-mouthed qualifiers like this, and government announcements almost never do because, you know, they're actually announcing a decision.

The maximum waffle and inconsistency from line to line is the hallmark of LLM garbage, and this has it in spades.
 
It says epsilon is < 0, and also = 4. No human makes that mistake, are you kidding? People can keep track of variables for a couple of lines at a time.
It has the sentence fragment "To calculate reciprocal tariffs, import and export data from the U.S. Census Bureau for 2024." where it kind of looks like it's treating import as a verb. That is LLM-plausible, and I can't even begin to comprehend how a human would end up producing it.
It has the weird and ambiguous "Higher minimum rates might be necessary to limit heterogeneity in rates and reduce transshipment." in a policy announcement - this is the kind of nonsense LLMs always produce; every single LLM output I've seen has mealy-mouthed qualifiers like this, and government announcements almost never do because, you know, they're actually announcing a decision.

The maximum waffle and inconsistency from line to line is the hallmark of LLM garbage, and this has it in spades.
People typo the wrong direction of <> all the time. The sentence fragment was probably meant to end with "were used", and would have if it were an LLM doing it, because the things absolutely hate incorrect grammar.

Anyway, while I'm here, article I found a while back:
Article:
NewsGuard says that a Moscow-based disinformation network named "Pravda" (the Russian word for truth) is spreading falsehoods across the web. Rather than directly sway people, it aims to influence AI chatbot results. [...] Newsguard said it studied 10 major chatbots — including those from Microsoft, Google, OpenAI, You.com, xAI, Anthropic, Meta, Mistral and Perplexity — and found that a third of the time they recycled arguments made by the Pravda network.
 
Trump's current tariff bullshit looks likely to have been derived from AI-generated output about trade deficits.

So, congrats, AI helped cause a recession
Man, people will just say anything these days, huh?
I think this is a talking-past-each-other situation. Stratigo is saying that AI was used to calculate the "needed" tariff, but it's easy to misread it as something like "Trump asked AI what to do and the AI spontaneously suggested tariffing everyone"

Mostly it probably stems from "AI helped cause a recession", which in this situation is like saying that middle school math helped cause a recession because it was used to calculate the tariff percentage.
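And the math really is middle-school simple. For anyone who hasn't read the "Reciprocal Tariff Calculations" page, here's a minimal sketch of the formula as published there - note the trade figures for China below are rounded 2024 numbers, so treat the output as approximate:

```python
def reciprocal_tariff(imports, exports, epsilon=4.0, phi=0.25):
    """Published formula: delta_tau = (m - x) / (epsilon * phi * m).

    With epsilon = 4 and phi = 0.25 the denominator collapses to just
    imports, so delta_tau is simply the trade deficit divided by imports.
    The announced rate was half of that, floored at 10%.
    """
    delta_tau = (imports - exports) / (epsilon * phi * imports)
    return max(0.10, delta_tau / 2)

# China, 2024 goods trade (rounded): ~$439B in imports, ~$144B in exports.
# (439 - 144) / 439 ≈ 0.672, halved ≈ 0.34 - the announced 34% rate.
print(f"{reciprocal_tariff(439e9, 144e9):.0%}")
```

Which also shows why epsilon and phi are window dressing: their product is 1, so they cancel out entirely.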
 