Ted Chiang: ChatGPT Is a Blurry JPEG of the Web

Another LLM thing is that the tariffs are actually broken down by internet domain and not country. That is how you get uninhabited islands (.hm), a military base (.io), and various overseas territories (.re, .gi) having their own specific tariffs.
 
Yeah, no, I do think LLMs were used for this, but they were purely an excuse generator - I think some of the output was made by an LLM to support the plan people already had, which was the specific "tariffs to equalize the trade deficit!!1!" garbage that is absolutely a Trump bugbear.

Without LLMs they just would have had interns do the math and write up the policy documents, and the net result would be the same but with fewer waffle words.
 
Another LLM thing is that the tariffs are actually broken down by internet domain and not country. That is how you get uninhabited islands (.hm), a military base (.io), and various overseas territories (.re, .gi) having their own specific tariffs.
How, exactly, does this point to LLMs? I at least get what Sotek is going for, but I can't see how this connects at all. The regional breakdown for the tariff calculations would've come from whatever data source the government used for trade/deficits.
 
Yeah, no, I do think LLMs were used for this, but they were purely an excuse generator - I think some of the output was made by an LLM to support the plan people already had, which was the specific "tariffs to equalize the trade deficit!!1!" garbage that is absolutely a Trump bugbear.

Without LLMs they just would have had interns do the math and write up the policy documents, and the net result would be the same but with fewer waffle words.

The interns would have to actually put some logic into the work, instead of it coming from a mindless number generator that all the morons in the admin can just lazily accept.

Fuck, if there was a real human in the process, someone might have gone "these numbers make no fucking sense" and talked down the fucking ridiculous bullshit that these tariffs are.
 
The interns would have to actually put some logic into the work, instead of it coming from a mindless number generator that all the morons in the admin can just lazily accept.

Fuck, if there was a real human in the process, someone might have gone "these numbers make no fucking sense" and talked down the fucking ridiculous bullshit that these tariffs are.

The problem is, these numbers DO make sense, except for when territories are tariffed separately from the country they're part of and when uninhabited islands are even discussed - both mistakes that interns would absolutely make, since they'd pull the data list that has the breakouts and not dig into why there are zeros or whether a particular breakout is 'real'.

I mean. It's fucking stupid, the goal is dumb, but the numbers are coherent to the chosen idiotic goal. It all falls out of the equation: trade deficit divided by trade volume, minimum 10%. If you think trade deficits are us being ripped off (because you are an idiot who thinks being given stuff for paper you literally made up is bad! FUCKING!! IDIOT!! TRUMP!! OH MY GOD!!) then like ... this is going to do a fuckin' number on our trade deficits if it actually stays intact.

it's also going to do a number on our prosperity, our global influence, our wealth, our economy, and our military, but ...
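To make the arithmetic concrete, here's a minimal sketch of the formula as described above (trade deficit divided by trade volume, with a 10% floor). The function and the example figures are invented for illustration only, not taken from any actual policy document:

```python
# Sketch of the tariff formula as the comment describes it:
# rate = trade deficit / trade volume, clamped to a 10% minimum.
def tariff_rate(deficit: float, trade_volume: float, floor: float = 0.10) -> float:
    if trade_volume == 0:
        return floor  # zero-trade rows (e.g. uninhabited islands) just get the floor
    return max(floor, deficit / trade_volume)

# Hypothetical numbers: a $30B deficit on $100B of trade -> 30%; a surplus -> the 10% floor.
print(tariff_rate(30e9, 100e9))   # 0.3
print(tariff_rate(-5e9, 50e9))    # 0.1
```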
 
The problem is, these numbers DO make sense, except for when territories are tariffed separately from the country they're part of and when uninhabited islands are even discussed - both mistakes that interns would absolutely make, since they'd pull the data list that has the breakouts and not dig into why there are zeros or whether a particular breakout is 'real'.

I mean. It's fucking stupid, the goal is dumb, but the numbers are coherent to the chosen idiotic goal. It all falls out of the equation: trade deficit divided by trade volume, minimum 10%. If you think trade deficits are us being ripped off (because you are an idiot who thinks being given stuff for paper you literally made up is bad! FUCKING!! IDIOT!! TRUMP!! OH MY GOD!!) then like ... this is going to do a fuckin' number on our trade deficits if it actually stays intact.

it's also going to do a number on our prosperity, our global influence, our wealth, our economy, and our military, but ...

No, they don't. They just don't. Because some human in the chain of logic, when doing the numbers, would be able to go "Hmmm, this seems off". There's a reason Trump didn't do this shit in his first term.

AI lets the stupidest people get bad data and convince themselves it's good. Cause Trump isn't gonna do the fucking math.
 
No, they don't. They just don't. Because some human in the chain of logic, when doing the numbers, would be able to go "Hmmm, this seems off". There's a reason Trump didn't do this shit in his first term.

AI lets the stupidest people get bad data and convince themselves it's good. Cause Trump isn't gonna do the fucking math.
If you think stupid people couldn't get bad data and convince themselves it's good without the help of AI, I have to wonder whether you were born longer ago than ChatGPT.


It's pretty easy to believe that the Signal War Room administration is also having policy documents written by their LLM of choice. That'd definitely be on-brand lazy recklessness.

Proposing that they're doing what they do because they are misguided by AI, rather than because that's exactly what they set out to do all along, is bizarre.
 
No, they don't. They just don't. Because some human in the chain of logic, when doing the numbers, would be able to go "Hmmm, this seems off". There's a reason Trump didn't do this shit in his first term.

AI lets the stupidest people get bad data and convince themselves it's good. Cause Trump isn't gonna do the fucking math.
This is an interesting situation where you have too much faith in humans and too little faith in AI.

Quite frankly, a government that just blindly followed what LLMs said would probably be doing better. Not better than any human, just better than THESE humans, given how all LLMs say this current course of action is idiotic.
 
If applying a formula is all they did, they wouldn't need an LLM; Excel could do that. And that's probably what the human in the chain would do, even if they were fine with LLMs.
 
If applying a formula is all they did, they wouldn't need an LLM; Excel could do that. And that's probably what the human in the chain would do, even if they were fine with LLMs.
In fact, based on the formatting, it was definitely in Excel (or an equivalent) at the end. Given the simplicity of the formula, it was probably just "export trade amounts from the database, purge the 4 excluded countries, copy the formula down, format."

Would have taken maybe 30 minutes, including the quibbling over the formatting.
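For what it's worth, that 30-minute workflow is only a few lines outside Excel too. Here's a rough pandas sketch of the same steps; the file name, column names, and the excluded-country list are all placeholders, since the thread doesn't say what the actual data source or exclusions were:

```python
import pandas as pd

# Hypothetical export: one row per reporting territory with goods imports/exports in dollars.
df = pd.read_csv("trade.csv")  # columns: territory, imports, exports (names assumed)

# "Purge the 4 excluded countries" - placeholder names; the thread doesn't list them.
EXCLUDED = {"CountryA", "CountryB", "CountryC", "CountryD"}
df = df[~df["territory"].isin(EXCLUDED)]

# "Copy formula down": deficit / trade volume, 10% floor; zero-trade rows fall back to the floor.
deficit = df["imports"] - df["exports"]
volume = df["imports"] + df["exports"]
df["tariff"] = (deficit / volume).clip(lower=0.10).fillna(0.10)

# "Format."
df["tariff"] = (df["tariff"] * 100).round(0).astype(int).astype(str) + "%"
df.to_csv("tariff_table.csv", index=False)
```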
 
If you think stupid people couldn't get bad data and convince themselves it's good without the help of AI, I have to wonder whether you were born longer ago than ChatGPT.


It's pretty easy to believe that the Signal War Room administration is also having policy documents written by their LLM of choice. That'd definitely be on-brand lazy recklessness.

Proposing that they're doing what they do because they are misguided by AI, rather than because that's exactly what they set out to do all along, is bizarre.

I don't think the stupid people here could get this data, because they are too stupid to do the basic math. Like, they are so lazy and stupid that anything more than a prompt is beyond them.

Would we get stupid tariffs? Yes.

Would they be this bad? No.
 
I don't think the stupid people here could get this data, because they are too stupid to do the basic math. Like, they are so lazy and stupid that anything more than a prompt is beyond them.

Would we get stupid tariffs? Yes.

Would they be this bad? No.

Spoken like someone who has never worked with an intern or a junior engineer.

The thing is that some people can be both smart and stupid at the same time, on the same task. They will diligently do a very stupid thing to the best of their abilities because they think that is what they were told to do by someone who knows better, and they are in fact not stupid, but they're confused and not interrogating that confusion because of the position of social inferiority they hold.

If you say "here's the data, here's the formula, do it" plenty of people will correctly do the math and not go "hey, this result is insane" even though it obviously is, because they know they don't understand what's going on and just want to get it done without asking too many questions.
 
Spoken like someone who has never worked with an intern or a junior engineer.

The thing is that some people can be both smart and stupid at the same time, on the same task. They will diligently do a very stupid thing to the best of their abilities because they think that is what they were told to do by someone who knows better, and they are in fact not stupid, but they're confused and not interrogating that confusion because of the position of social inferiority they hold.

If you say "here's the data, here's the formula, do it" plenty of people will correctly do the math and not go "hey, this result is insane" even though it obviously is, because they know they don't understand what's going on and just want to get it done without asking too many questions.
And while in healthy work environments raising the question should be the preferred move (whether or not the junior's concern is correct), there are plenty of unhealthy work environments. Or, in the case at hand, unhealthy work environments where the fact that the result is insane doesn't mean it isn't what the boss wanted...


There are additional ways for people who are arguably smart to do very stupid things when in more empowered positions, but I'm not sure there's anyone involved in US policymaking at this point who needs those explanations.
 
And while in healthy work environments raising the question should be the preferred move (whether or not the junior's concern is correct), there are plenty of unhealthy work environments. Or, in the case at hand, unhealthy work environments where the fact that the result is insane doesn't mean it isn't what the boss wanted...

Indeed. And even in healthy work environments you get that dynamic out of fear of embarrassment, etc. This environment? Lol, lmao.
 
Indeed. And even in healthy work environments you get that dynamic out of fear of embarrassment, etc. This environment? Lol, lmao.
Yeah, often it can just be that the person in question thinks they understand, and only finds out later that they really didn't.

In this environment, if you don't do as you're told, you get vamoosed on the spot, so...
 
Possibly, but given how close HK and HM are on the keyboard, data entry errors are more likely.
Having used similar financial systems before, you reaaaally don't want to know how easy it is to do that and for absolutely no one to notice until it comes up in an audit 5 years later. Note: it was audited quarterly that entire time.
 
Article:
OpenAI is working on its own X-like social network, according to multiple sources familiar with the matter.

While the project is still in early stages, we're told there's an internal prototype focused on ChatGPT's image generation that has a social feed. CEO Sam Altman has been privately asking outsiders for feedback about the project, our sources say.


what

This is a pivot I was uh, not expecting as a reaction to Musk launching his own fake OpenAI and merging X!Twitter with it.
 
Article:
OpenAI is working on its own X-like social network, according to multiple sources familiar with the matter.

While the project is still in early stages, we're told there's an internal prototype focused on ChatGPT's image generation that has a social feed. CEO Sam Altman has been privately asking outsiders for feedback about the project, our sources say.


what

This is a pivot I was uh, not expecting as a reaction to Musk launching his own fake OpenAI and merging X!Twitter with it.

Dead Internet Theory happening too slowly for you? Just build your own!
 
Article:
OpenAI is working on its own X-like social network, according to multiple sources familiar with the matter.

While the project is still in early stages, we're told there's an internal prototype focused on ChatGPT's image generation that has a social feed. CEO Sam Altman has been privately asking outsiders for feedback about the project, our sources say.


what

This is a pivot I was uh, not expecting as a reaction to Musk launching his own fake OpenAI and merging X!Twitter with it.

I'm not surprised, given how viral the Studio Ghibli-style images went. It would also let OpenAI get their own user data to train their models with, and make better, more human-sounding models.
 