How, exactly, does this point to LLMs? I at least get what Sotek is going for but I can't see how this connects at all. The regional breakdown for the tariff calculations would've come from whatever data source the government used for trade/deficits.

Another LLM thing is that the tariffs are actually broken down by internet domain and not country. That is how you get uninhabited islands (.hm), a military base (.io), and various overseas territories (.re, .gi) having their own specific tariffs.
Confirmed here: United States Products Imports by country 2022 | WITS Data

According to export data from the World Bank, the US imported US$1.4m (A$2.23m) of products from Heard Island and McDonald Islands in 2022, nearly all of which was "machinery and electrical" imports. It was not immediately clear what those goods were.
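To make the domain point concrete, here's a minimal sketch of the ccTLDs called out above and the territories they belong to, based only on the public IANA assignments (the tariff list itself isn't reproduced here):

```python
# The ccTLDs mentioned above and the territories they belong to (public IANA assignments).
# A country list keyed this way naturally includes territories with no independent trade.
CCTLD_TERRITORIES = {
    ".hm": "Heard Island and McDonald Islands (uninhabited Australian external territory)",
    ".io": "British Indian Ocean Territory (home of the Diego Garcia military base)",
    ".re": "Réunion (overseas department of France)",
    ".gi": "Gibraltar (British Overseas Territory)",
}

for tld, territory in CCTLD_TERRITORIES.items():
    print(f"{tld}  ->  {territory}")
```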
Yeah, no, I do think LLMs were used for this, but they were purely an excuse generator - I think some of the output was made by LLM to support the plan people already had, which was the specific "tariffs to equalize the trade deficit!!1!" garbage that is absolutely a Trump bugbear.
Without LLMs they just would have had interns do the math and write up the policy documents, and the net result would be the same but with fewer waffle words.
The interns would have to actually put logic into the work, instead of it being a mindless number generator whose output all the morons in the admin can just lazily accept.
Fuck, if there was a real human in the process, someone might have gone "these numbers make no fucking sense" and talked down the fucking ridiculous bullshit that these tariffs are.
The problem is, these numbers DO make sense, except for when territories are tariffed separately from the country they're part of and when uninhabited islands are even discussed - both mistakes that interns would absolutely make, since they'd pull the data list that has the breakouts and not dig into why there are 0s or whether a particular breakout is 'real'.
I mean. It's fucking stupid, the goal is dumb, but the numbers are coherent to the chosen idiotic goal. It all falls out of the equation: trade deficit divided by trade volume, minimum 10%. If you think trade deficits are us being ripped off (because you are an idiot who thinks being given stuff for paper you literally made up is bad! FUCKING!! IDIOT!! TRUMP!! OH MY GOD!!) then like ... this is going to do a fuckin' number on our trade deficits if it actually stays intact.
it's also going to do a number on our prosperity, our global influence, our wealth, our economy, and our military, but ...
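Just to spell out how little is going on in that equation, here's a minimal sketch in Python of the formula as described above (trade deficit divided by trade volume, 10% floor); the example numbers are made up for illustration, not anyone's actual figures:

```python
def reciprocal_tariff(trade_deficit: float, trade_volume: float, floor: float = 0.10) -> float:
    """Tariff rate = trade deficit / trade volume, with a 10% minimum, per the description above."""
    if trade_volume <= 0:
        # A territory with essentially no recorded trade still lands on the floor rate.
        return floor
    return max(trade_deficit / trade_volume, floor)

# Hypothetical numbers, purely for illustration:
print(f"{reciprocal_tariff(10e9, 40e9):.0%}")  # $10bn deficit on $40bn of imports -> 25%
print(f"{reciprocal_tariff(0.0, 40e9):.0%}")   # balanced trade -> still the 10% minimum
```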
If you think stupid people couldn't get bad data and convince themselves it's good without the help of AI, I have to wonder whether you were born longer ago than ChatGPT.

No, they don't. They just don't. Because some human in the chain of logic when doing the numbers would be able to go "Hmmm, this seems off". There's a reason Trump didn't do this shit in his first term.
AI lets the stupidest people get bad data and convince themselves it's good. Cause Trump isn't gonna do the fucking math.
This is an interesting situation where you have too much faith in humans and too little faith in AI.
In fact, based on the formatting it was definitely in Excel (or an equivalent) at the end. Given the simplicity of the formula it was probably just "export trade amounts from database. Purge the 4 excluded countries. Copy formula down. Format."

If applying a formula is all they did, they wouldn't need an LLM; Excel could do that. And that's probably what the human in the chain would do if they're fine with LLMs.
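For what it's worth, here's a rough sketch of that spreadsheet-style pipeline in Python; the row data and the excluded-country codes are placeholders, not the actual export from whatever database they used:

```python
# Placeholder stand-in for "export trade amounts from database":
# country code, US imports from that country, US exports to it (USD).
# All figures here are made-up round numbers for illustration, not real trade data.
rows = [
    {"code": "AA", "imports": 400_000_000_000, "exports": 150_000_000_000},
    {"code": "BB", "imports": 60_000_000_000,  "exports": 80_000_000_000},
    {"code": "CC", "imports": 1_000_000,       "exports": 2_000_000},
]

EXCLUDED = {"X1", "X2", "X3", "X4"}  # stand-ins for the "4 excluded countries"
FLOOR = 0.10

# "Purge the 4 excluded countries. Copy formula down. Format.":
# deficit / imports, 10% minimum, printed as a percentage.
for row in rows:
    if row["code"] in EXCLUDED:
        continue
    deficit = row["imports"] - row["exports"]
    rate = max(deficit / row["imports"], FLOOR) if row["imports"] else FLOOR
    print(f"{row['code']}: {rate:.0%}")
```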
If you think stupid people couldn't get bad data and convince themselves it's good without the help of AI, I have to wonder whether you were born longer ago than ChatGPT.
It's pretty easy to believe that the Signal War Room administration is also having policy documents written by their LLM of choice. That'd definitely be on-brand lazy recklessness.
Proposing that they're doing what they do because they are misguided by AI, rather than because that's exactly what they set out to do all along, is bizarre.
I don't think the stupid people here could get this data, because they are too stupid to do the basic math. Like, they are so lazy and stupid that more than a prompt is beyond them.
Would we get stupid tariffs? Yes.
Would they be this bad? No.
Spoken like someone who has never worked with an intern or a junior engineer.
The thing is that some people can be both smart and stupid at the same time and on the same task. They will diligently do a very stupid thing to the best of their abilities because they think that is what they were told to do by someone who knows better; they are in fact not stupid, but they're operating under a confusion and not interrogating it because of the position of social inferiority they hold.
If you say "here's the data, here's the formula, do it" plenty of people will correctly do the math and not go "hey, this result is insane" even though it obviously is, because they know they don't understand what's going on and just want to get it done without asking too many questions.
And while in healthy work environments raising the question should be the preferred move (whether or not the junior's concern is correct), there are plenty of unhealthy work environments. Or, in the case at hand, unhealthy work environments where the fact that the result is insane doesn't mean it isn't what the boss wanted...
Yeah, often it can just be that the person in question thinks they understand, and only finds out later that they really didn't.

Indeed. And even in healthy work environments you get that dynamic out of fear of embarrassment, etc. This environment? Lol, lmao.
Having used similar financial systems before, you reaaaally don't want to know how easy it is to do that and for absolutely no one to notice until it comes up in an audit 5 years later. Note: It was audited quarterly that entire time.

Possibly, but given how close HK and HM are on the keyboard, data entry errors are more likely.
Article: OpenAI is working on its own X-like social network, according to multiple sources familiar with the matter.
While the project is still in early stages, we're told there's an internal prototype focused on ChatGPT's image generation that has a social feed. CEO Sam Altman has been privately asking outsiders for feedback about the project, our sources say.
what
This is a pivot I was uh, not expecting as a reaction to Musk launching his own fake OpenAI and merging X!Twitter with it.
How about, instead, we don't do that.

"Normalize sharing AI art" is definitely a mood I can get behind, though.