Ted Chiang: ChatGPT Is a Blurry JPEG of the Web

Article:
Is generative AI a tool for creative empowerment and efficiency—or a threat to creative professions? OpenAI CTO Mira Murati isn't worried about such potential negative impacts, suggesting during a talk this month that if AI does kill some creative jobs, those jobs were perhaps always replaceable.
...
"Some creative jobs maybe will go away, but maybe they shouldn't have been there in the first place," the CTO said of AI's role in the workplace.

who needs art, apparently
 
Article:
Is generative AI a tool for creative empowerment and efficiency—or a threat to creative professions? OpenAI CTO Mira Murati isn't worried about such potential negative impacts, suggesting during a talk this month that if AI does kill some creative jobs, those jobs were perhaps always replaceable.
...
"Some creative jobs maybe will go away, but maybe they shouldn't have been there in the first place," the CTO said of AI's role in the workplace.

who needs art, apparently
Is the purpose of commercial art the result, or the process?

I suspect even the commercial artists would not be unified behind the latter, though for them 'artists getting paid' is a key part of the result.
 
"artists being able to excercise their craft and proffessionally collaborate" is a much bigger part of the deal as well. Without art jobs, then you suddenly have 2000+ hours less in the year that you can hone your craft, without the network of collaborators and passion and guidance and skill and training that comes along with it. Every decade you're putting in 20,000 hours of practice and craft into your abilities and without art jobs to pay the bills that whole path suddenly becomes non-viable.

There are about 2.6 million artists in the US, collectively representing a minimum of 5.2 billion hours of art practice and execution per year. That is what you lose when art jobs go away.
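(Spelling out the arithmetic, a sketch using only the rough figures above, not census data:)

```python
# Both inputs are the rough figures from the post above.
artists = 2_600_000    # ~2.6 million artists in the US
hours_each = 2_000     # one full-time year of practice per artist
print(f"{artists * hours_each:,} hours per year")  # 5,200,000,000
```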
 
Article:
Is generative AI a tool for creative empowerment and efficiency—or a threat to creative professions? OpenAI CTO Mira Murati isn't worried about such potential negative impacts, suggesting during a talk this month that if AI does kill some creative jobs, those jobs were perhaps always replaceable.
...
"Some creative jobs maybe will go away, but maybe they shouldn't have been there in the first place," the CTO said of AI's role in the workplace.

who needs art, apparently

You would think that a member of the C-suite would at least sound somewhat tactful and not so... dismissive and flippant about making people redundant.

I don't think OpenAI needs to double down on creating controversy.
 
Observe as data poisoning is implemented with increasing harshness. Data poisoning against learning models is as new as the learning models themselves. It's really a matter of time before these countermeasures are developed with increasing sophistication and applied more broadly. Breaking is easier than making.
 
Observe as data poisoning is implemented with increasing harshness. Data poisoning against learning models is as new as the learning models themselves. It's really a matter of time before these countermeasures are developed with increasing sophistication and applied more broadly. Breaking is easier than making.

I'm skeptical. The attempts at "poisoning" images have produced interesting edge cases but have largely fallen flat. And you can't "poison" text without making it useless to humans.
 
I'm skeptical. The attempts at "poisoning" images have produced interesting edge cases but have largely fallen flat. And you can't "poison" text without making it useless to humans.
Granted on text (although the garlic-oil neurotoxin incident suggests they can self-poison perfectly well), but I think image poisoning is a relatively new field of technology that could advance quite rapidly. We're probably at the 'ILOVEYOU' stage of malware-in-images, in my opinion.
 
I'm skeptical. The attempts at "poisoning" images have produced interesting edge cases but have largely fallen flat. And you can't "poison" text without making it useless to humans.
Memes are sufficient to poison text, as are jokes, as can be seen in the Google AI's various "what the fuck?" moments.

It won't even stop it from working, but it will sometimes cause the output to go off in wild directions.
 
Memes are sufficient to poison text, as are jokes, as can be seen in the Google AI's various "what the fuck?" moments.

It won't even stop it from working, but it will sometimes cause the output to go off in wild directions.
You probably can't poison text without making it useless to humans, but that's a price we've been imposing on ourselves for decades anyway!

(Though more seriously, it's probably possible to train the models to do that less by using curated datasets, and then stacking that on top of what they learned from the uncurated data.)
 
Observe as data poisoning is implemented with increasing harshness. Data poisoning against learning models is as new as the learning models themselves. It's really a matter of time before these countermeasures are developed with increasing sophistication and applied more broadly. Breaking is easier than making.

And breaking data poisoning is easier than breaking a model, since human-usable data still has to be inside it. Just changing your VAE or moving to a different architecture is enough to break the poisoning, so it's going to be obsolete when the next ML black-swan event occurs. And that's not even getting into data processing as simple as scaling and cropping to remove the effect.
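To illustrate that last point, here's a minimal sketch of the kind of trivial preprocessing meant here, rescaling plus a crop plus re-encoding. Not a claim that this defeats any particular poisoning tool, and the filenames are hypothetical:

```python
# A minimal sketch, assuming pixel-level poisoning: downscale/upscale
# plus a crop, the sort of trivial processing that degrades
# high-frequency adversarial perturbations.
from PIL import Image

def launder(src: str, dst: str, scale: float = 0.75) -> None:
    img = Image.open(src).convert("RGB")
    w, h = img.size
    # Resampling through a smaller size smooths the fine-grained noise
    # that poisoning perturbations typically live in.
    img = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
    img = img.resize((w, h), Image.LANCZOS)
    # A small crop breaks any spatial alignment the perturbation relied on.
    img = img.crop((8, 8, w - 8, h - 8))
    img.save(dst, quality=90)  # JPEG re-encoding discards more detail

launder("poisoned.png", "laundered.jpg")  # hypothetical filenames
```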
 
Article:
Google's greenhouse gas emissions have surged 48 percent in the past five years due to the expansion of its data centers that underpin artificial intelligence systems, leaving its commitment to get to "net zero" by 2030 in doubt.

The Silicon Valley company's pollution amounted to 14.3 million tonnes of carbon equivalent in 2023, a 48 percent increase from its 2019 baseline and a 13 percent rise since last year, Google said in its annual environmental report on Tuesday.
...
Chief sustainability officer Kate Brandt said the company remained committed to the 2030 target but stressed the "extremely ambitious" nature of the goal.

"We do still expect our emissions to continue to rise before dropping towards our goal," said Brandt.

She added that Google was "working very hard" on reducing its emissions, including by signing deals for clean energy. There was also a "tremendous opportunity for climate solutions that are enabled by AI," said Brandt.

Bleak lol at their defense for going all in on fuck the environment for AI being "you can't prove AI won't help us develop space magic to get our way out of climate change."
 
Unless Skynet (or a similarly drastically powerful AI) literally happens, AI can't solve climate change. Climate change is driven by human behaviour, mostly corporate behaviour. The solutions to climate change already exist, several of them. It's an implementation problem.

Much like every other problem we might want to solve with AI. Solutions not executed might as well not exist.
 
Article:
Google's greenhouse gas emissions have surged 48 percent in the past five years due to the expansion of its data centers that underpin artificial intelligence systems, leaving its commitment to get to "net zero" by 2030 in doubt.

The Silicon Valley company's pollution amounted to 14.3 million tonnes of carbon equivalent in 2023, a 48 percent increase from its 2019 baseline and a 13 percent rise since last year, Google said in its annual environmental report on Tuesday.
...
Chief sustainability officer Kate Brandt said the company remained committed to the 2030 target but stressed the "extremely ambitious" nature of the goal.

"We do still expect our emissions to continue to rise before dropping towards our goal," said Brandt.

She added that Google was "working very hard" on reducing its emissions, including by signing deals for clean energy. There was also a "tremendous opportunity for climate solutions that are enabled by AI," said Brandt.

Bleak lol at their defense for going all in on fuck the environment for AI being "you can't prove AI won't help us develop space magic to get our way out of climate change."

So how much of this growth is because of AI, and how much is because they're in a cloud-monopolization contest with Amazon and Microsoft? They don't publish hard numbers on that, as far as I've seen, and trying to capture a third of the world's computing needs can't be cheap.
 
Normal data centers have gotten more energy-efficient over time. AI models have grown so fast that the energy consumption of training can outpace energy efficiency gains.

Remember that AI training has gotten so energy-intensive, people are thinking of building nuclear reactors to supply them.
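For a sense of scale, here's a crude back-of-envelope in the accounting style of Patterson et al. (2021). Every number below is an illustrative round figure I'm assuming, not any real training run:

```python
# Illustrative only: energy ≈ accelerators × per-device power
#                              × run time × overhead (PUE).
gpus = 10_000      # accelerators in the cluster (assumed)
watts = 400        # average draw per accelerator, in watts (assumed)
hours = 24 * 30    # a month-long training run (assumed)
pue = 1.1          # data-center overhead multiplier (assumed)

gwh = gpus * watts * hours * pue / 1e9
print(f"~{gwh:.1f} GWh for one training run")  # ~3.2 GWh
```

A few GWh per run is a sizeable power plant's output for several hours, which is why dedicated generation keeps coming up.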
 
Again, what are the hard numbers to back it up? Because Google is also, at the same time, trying to capture a third of all cloud computing in an expanding global economy.
 
Again, what are the hard numbers to back it up? Because Google is also, at the same time, trying to capture a third of all cloud computing in an expanding global economy.
Do we really need receipts to not make the obvious connection that integrating an unnecessary computation-hungry extra step into your main product with like a billion daily users will inflate your power bill?
Since it's Google themselves who aren't sharing more, it's fair to not be extra charitable when interpreting the facts we do have.
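For what it's worth, the connection is easy to put rough numbers on. Everything below is a third-party estimate or an outright assumption, not anything Google has disclosed:

```python
# Back-of-envelope only; both per-query figures are outside estimates.
SEARCH_WH = 0.3        # oft-cited figure for a classic web search
LLM_WH = 3.0           # de Vries (2023) estimate for an LLM-backed query
QUERIES_PER_DAY = 1e9  # "like a billion daily users", one query each

extra_twh_per_year = (LLM_WH - SEARCH_WH) * QUERIES_PER_DAY * 365 / 1e12
print(f"~{extra_twh_per_year:.1f} TWh/yr of added draw")  # ~1.0 TWh/yr
```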
 
Again, what are the hard numbers to back it up? Because Google is also, at the same time, trying to capture a third of all cloud computing in an expanding global economy.

What other cloud-computational service is expanding so rapidly? If you want to argue that this rapid expansion of available GPU-based horsepower is the product of another need, what other need would that be and why would it be using this particular equipment?
 
Do we really need receipts to not make the obvious connection that integrating an unnecessary computation-hungry extra step into your main product with like a billion daily users will inflate your power bill?
Since it's Google themselves who aren't sharing more, it's fair to not be extra charitable when interpreting the facts we do have.

Doesn't ChatGPT use something like 5 gallons (22.7 liters) of water every single time you prompt it? That cannot be sustainable. We already don't have enough water to go around as it is.

>"Remember that AI training has gotten so energy-intensive, people are thinking of building nuclear reactors to supply them." @Somebody

Exactly this. At current rates, this simply cannot be sustainable. How much bigger can these models get before we simply don't have enough resources to power them? And even if we do, should we, considering we're in the middle of a global climate crisis and need to reach net zero by 2040?
 
Do we really need receipts to not make the obvious connection that integrating an unnecessary computation-hungry extra step into your main product with like a billion daily users will inflate your power bill?
Yes. Obviously yes. That's how evidence-based reasoning works.
Article:
That chart shows worldwide data center energy usage growing at a remarkably steady pace from about 100 TWh in 2012 to around 350 TWh in 2024. The vast majority of that energy usage growth came before 2022, when the launch of tools like Dall-E and ChatGPT largely set off the industry's current mania for generative AI.

Energy consumption in data centers has been growing at the same rate since approximately 2010, a rate that roughly matches the numbers Google published for 2023–24. These companies are expanding, but if they weren't building out capacity for "AI," they'd be building it out for something else. The LLM craze is just another workload.
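As a sanity check on "the same rate since approximately 2010": the chart's endpoints imply roughly 11% compound growth per year, in the same ballpark as the 13% year-on-year Google reported. The endpoint values are read off the chart, so treat the result as rough:

```python
# Endpoints read off the chart: ~100 TWh in 2012, ~350 TWh in 2024.
rate = (350 / 100) ** (1 / 12) - 1
print(f"implied annual growth: {rate:.1%}")  # ~11.0%
```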

Is a lot of it wasted? Quite possibly, both the cycles spent on LLMs and the ones spent on other things. But it's not some unique and sudden crisis, yet. You can be critical of these trends without falling into hysteria. ("AI would help propel climate solutions" is still obvious garbage, of course.)

Doesn't ChatGPT use something like 5 gallons (22.7 liters) of water every single time you prompt it? That cannot be sustainable. We already don't have enough water to go around as it is.
500 ml for between 5 and 50 prompts in 2023, which includes an estimate of water consumed at the power plant providing the energy. Still an order of magnitude or two higher than a Google search. We don't have details on their 2024 offerings like GPT-4o, but it's suspected to be far more efficient (quantized, possibly MoE architecture) based on lessons learned from 3.5 and 4. Which makes sense - Microsoft is the one who has to pay for that usage.
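(Spelled out, that quoted range works out to 10–100 ml per prompt, which is where the order-of-magnitude comparison comes from. This uses only the quoted figures:)

```python
# Only the quoted 2023 estimate: 500 ml covers between 5 and 50 prompts.
for prompts in (5, 50):
    print(f"{500 / prompts:.0f} ml per prompt at {prompts} prompts per 500 ml")
# -> 100 ml per prompt and 10 ml per prompt
```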

(@Byzantine yes, they kinda do - evaporated for cooling or rendered undrinkable, in both the data center and the power plant that supplies energy.)
 
(@Byzantine yes, they kinda do - evaporated for cooling in both the data center and the power plant that supplies energy.)
I wasn't really counting the power plant, as you'd normally slot that in with the power production question, though that definitely does require expending water. Very little water should evaporate from the cooling in the data center - those are not open air systems.
 
We don't have details on their 2024 offerings like GPT-4o, but it's suspected to be far more efficient (quantized, possibly MoE architecture) based on lessons learned from 3.5 and 4. Which makes sense - Microsoft is the one who has to pay for that usage.

Ok, if it's more efficient, then that kinda lessens the resource worries a bit. But on the other hand, aren't we running into other walls too? Like bigger and bigger models needing more and more data. You could just make the models smaller, of course, but my understanding is that training on less data gives the model less to learn from, and the quality of the output suffers for it. That's how I see it anyway; could be wrong.
 
I wasn't really counting the power plant, as you'd normally slot that in with the power production question, though that definitely does require expending water. Very little water should evaporate from the cooling in the data center - those are not open air systems.
All of the numbers we're quoting are counting the power plant, therefore so was I. As for the data centers, some of them just discharge wastewater, but others definitely do use evaporative cooling.

The Dalles, Oregon – Google Data Center Location

 