Ted Chiang: ChatGPT Is a Blurry JPEG of the Web

Might be a doomed rearguard action, but I'm still going to fight it. Anti-AI Aktion and all that.

The issue is that there isn't really any Pro-AI vs Anti-AI movement.

There is only Anti-Adobe and Pro-Adobe. This is because Adobe and similar giant companies with unlimited money to spend can use that money to prop up the 'Anti-AI' position and 'coincidentally' dominate the market once they get 'unlicensed' AI banned.

Even if I were Anti-AI, I would still support AI, because the alternative is an uncontested corporate AI monopoly. And judging from similar corporate co-opting of issues, it's extremely unlikely the Anti-AI movement will ever free itself and oppose corporate interests.
 
Except that Adobe is extremely pro-AI. To the point they're trying to have any use of their products be scraped for training data.
You are blatantly supporting corporate dominance and data monopolization by supporting these pointless programs
 
Except that Adobe is extremely pro-AI. To the point they're trying to have any use of their products be scraped for training data.
And it is laughable to say the pro-AI crowd isn't supported by a massive number of huge companies.

Companies are just divided in a weird way on this one, and no matter which side you take you'll have strange bedfellows, who are already sizing you up for a knife.
 
Except that Adobe is extremely pro-AI. To the point they're trying to have any use of their products be scraped for training data.
You are blatantly supporting corporate dominance and data monopolization by supporting these pointless programs

No, Adobe is extremely Anti-AI and fully supports the Anti-AI position of requiring full IP authorization for all AI use.

Because Adobe HAS that authorization due to TOS of its products. Meaning a full legal implementation of Anti-AI policies gives Adobe monopoly power over almost all AI.
 
The pro-AI crowd is inundated with both the largest tech and data management companies, who are always trying to drain as much money as possible from consumers, and Silicon Valley venture capitalists, most of which are gigantic scams.
 
No, Adobe is extremely Anti-AI and fully supports the Anti-AI position of requiring full IP authorization for all AI use.

Because Adobe HAS that authorization due to TOS of its products. Meaning a full legal implementation of Anti-AI policies gives Adobe monopoly power over almost all AI.
That… doesn't match their public statements or behavior, Cloak.

They are trying to set themselves up as a massive source of training data, but that's pretty typical.
 
That… doesn't match their public statements or behavior, Cloak.

They are trying to set themselves up as a massive source of training data, but that's pretty typical.

Adobe is fully on board the maximal AI copyright restrictions train. They explicitly and deliberately prune and ban copyrighted material from their training. They also try to label AI images to make recording their source easier.

That's because Adobe gets to benefit from and bypass all Anti-AI laws if the Anti-AI policies succeed.

Adobe might not have the best AI right now, but if all their competitors get blown up by the law and they are left with unlimited license to use almost all data, then Adobe wins.

Adobe is so sure of its legal invulnerability it has literally offered to pay legal bills of any work derived from its AI.

It's pretty obvious that Adobe is salivating at the idea of AI legal restrictions.
 
Adobe is so sure of its legal invulnerability it has literally offered to pay legal bills of any work derived from its AI.
Do you actually buy that stunt? Companies love to add clauses like that; if you read the actual disclaimer on the product, you'd basically have to get struck by lightning six times while doing jumping jacks in the middle of the Pacific Ocean while a shark bites you to actually qualify for the payment.

My assumption is it includes an "unintentional" clause, where you can't have generated the infringing material intentionally. Naturally, Adobe claims it is impossible to do by mistake, so anything you generate that contains copyrighted material has to have been done intentionally.

But this is beside the point - Adobe is trying to position themselves as the big name in AI tools, and are doing a bunch of dumb shit in support of chasing all that venture capital they see on the horizon.

They don't want ai to get banned or restricted because that will restrict how many venture capitalists see their plan as a sound investment.
 
They don't want ai to get banned or restricted because that will restrict how many venture capitalists see their plan as a sound investment.

I would say there are two kinds of capitalists involved in AI stuff: the kind that are basically all-in on AI because they see it as the Next Stage of technological development, capitalism, what-have-you, and don't care about the people being run over because it'll help their bottom line. A lot of tech companies are here, or at least testing the waters of being here. And then the capitalists who think AI could help their bottom line but aren't convinced that the public at large can be brought to buy into it, so they're slowly and tentatively incorporating it into work because they believe it can more efficiently turn a profit once they get rid of those annoying "workers" who want "rights" and such. Exactly where anybody is on the map can fluctuate depending on what exactly you're talking about, of course.
 
They don't want ai to get banned or restricted because that will restrict how many venture capitalists see their plan as a sound investment.

Adobe's AI will never be banned or restricted, because they have ironclad IP control of every part of it via their products.

Hence Anti-AI policy is directly pro-Adobe since it removes almost all their competition.

And yes, I do believe Adobe would honor its offer of legal fees in all but the most egregious cases, because it's vital to Adobe's niche to protect the inviolability of its product.
 
I would say there are two kinds of capitalists involved in AI stuff: the kind that are basically all-in on AI because they see it as the Next Stage of technological development, capitalism, what-have-you, and don't care about the people being run over because it'll help their bottom line. A lot of tech companies are here, or at least testing the waters of being here. And then the capitalists who think AI could help their bottom line but aren't convinced that the public at large can be brought to buy into it, so they're slowly and tentatively incorporating it into work because they believe it can more efficiently turn a profit once they get rid of those annoying "workers" who want "rights" and such. Exactly where anybody is on the map can fluctuate depending on what exactly you're talking about, of course.
I would add a third group: the ones who think this whole thing is overblown and definitely going to blow up in someone's face, but believe they are savvy enough to make a bunch of money on it in the short term and get the hell out of dodge before it starts to explode.

They aren't truly pro-AI because they don't see it as a long-term idea, but they are going to act like they are because they want the gravy train to roll as long as they remain aboard.
 
It seems weird to treat 'will Adobe make out like bandits' as an important issue here, especially an important issue to align against. (Aligning for it would at least make sense if one had a pecuniary motive.)

I think it's framing the issues people actually care about as foregone and unworthy of discussion? But that's not a good approach.
 
It seems weird to treat 'will Adobe make out like bandits' as an important issue here, especially an important issue to align against. (Aligning for it would at least make sense if one had a pecuniary motive.)

I think it's framing the issues people actually care about as foregone and unworthy of discussion? But that's not a good approach.

Well, the issues people care about are largely foregone.

The two obvious futures, the Pro-AI and Anti-AI future, are almost identical. The difference is that the Pro-AI future has a wide spread of more advanced models in circulation, while the Anti-AI future still has AIs everywhere, just lower quality ones that are locked down to Adobe and similar megacorps.

It's just that the Anti-AI future has all the same unemployment, but worse quality and far fewer independent artists.

I don't see how the obvious Anti-AI positions, those using IP law to attack, are supposed to actually stop AI rather than just ensure Adobe dominance.

Why are you pro-Google and Elon Musk Cloakie???? EXPLAIN YOURSELF!

I see a separation between the serious AI companies like Adobe, Google, etc., and the investment-trend AI companies like OpenAI, whatever Musk is doing, etc.

The latter I'm not really worried about. The market will crash and those companies will go bankrupt sooner or later. Only the companies that have an inherent advantage like Adobe and such will have a long term presence.
 
All indie artists oppose AI when one takes the position that anyone who uses AI is not an artist, yes.

I will reiterate that my lived experience of the pro-AI crowd is "look at this cool thing, let's have fun, and make cool things" while my lived experience of the anti-AI crowd is "that's not art and you aren't an artist, we hate your artistry but will endlessly insist that it is you that hates art and artists, please go away, also you are a criminal or would be if the laws matched my ideological preferences". It is my lived experience in part because the people in threads like these chose to make it my experience. Then they tell me that my experience isn't what I experienced.

"Those ivory tower intellectuals" amiright?

There's already enough data to train on; the training has been done. Adobe Firefly was released 1.5 years ago.
You can rent it for 9.99/month.

All anyone expressing the "AI art is theft" line is doing, in practice, is saying that the real problem with AI art is that corporations can't buy up and monopolize training rights. And boy, would corporations love to help you solve that problem.

Copyright is a corporate tool; it's not going to defend individual artists. Certainly not in this situation, where the entire point of the AI is that individual pieces of training data are entirely interchangeable.

(Edit: With, like, maybe a handful of exceptions if you're a mega-famous artist or celebrity and they want to stick your name on the cover.)

Corporations are using AI to steal art too, yes, and often legally getting away with it. It is not okay that corporations use copyright law to thieve from people.

In fact they are doing it the most and then selling it to people too.

It does suck. It isn't an argument for AI.

Shockingly enough, when side A pisses on me and side B offers a hand, I'm going to be inclined toward side B. Side A has not only failed to conjure up convincing arguments, but repeatedly and relentlessly pushes slanderously bad ones no matter how many times they're debunked, or falls back on endless Gish gallops. Don't be surprised when open contempt and thinly veiled threats do not yield a warm reception.

Speaking of:

I've repeatedly discussed the cons of AI and my concerns about them. I'm talking about one thing, so you complain that I'm not addressing another. Even if I was discussing it earlier. Even if I was agreeing with you on it. How do you think I feel about that?

You simply ignore the reality of any argument you don't like.

Fossil, you made up your mind; no one is gonna convince you of anything. You were pro-AI first.
The issue is that there isn't really any Pro-AI vs Anti-AI movement.

There is only Anti-Adobe and Pro-Adobe. This is because Adobe and similar giant companies with unlimited money to spend can use that money to prop up the 'Anti-AI' position and 'coincidentally' dominate the market once they get 'unlicensed' AI banned.

Even if I were Anti-AI, I would still support AI, because the alternative is an uncontested corporate AI monopoly. And judging from similar corporate co-opting of issues, it's extremely unlikely the Anti-AI movement will ever free itself and oppose corporate interests.

They have no ability to monopolize AI in this way if you think AI will actually be a thing like firefossil does.
 
New research shows that AI, contrary to some myths, produces less CO2 than human writers and artists:

The carbon emissions of writing and illustrating are lower for AI than for humans - Scientific Reports

As AI systems proliferate, their greenhouse gas emissions are an increasingly important concern for human societies. In this article, we present a comparative analysis of the carbon emissions associated with AI systems (ChatGPT, BLOOM, DALL-E2, Midjourney) and human individuals performing...

Figure 1 compares several variations of authorship: BLOOM is 1400 times less impactful, per page of text produced, than a US resident writing, and 180 times less impactful than a resident of India writing. ChatGPT is 1100 times less impactful than a US resident writing, and 130 times less impactful than a resident of India writing.

Assuming that a person's emissions while writing are consistent with their overall annual impact, we estimate that the carbon footprint for a US resident producing a page of text (250 words) is approximately 1400 g CO2e. In contrast, a resident of India has an annual impact of 1.9 metric tons22, equating to around 180 g CO2e per page. In this analysis, we use the US and India as examples of countries with the highest and lowest per capita impact among large countries (over 300 M population).

In addition to the carbon footprint of the individual writing, the energy consumption and emissions of the computing devices used during the writing process are also considered. For the time it takes a human to write a page, approximately 0.8 h, the emissions produced by running a computer are significantly higher than those generated by AI systems while writing a page. Assuming an average power consumption of 75 W for a typical laptop computer23, the device produces 27 g of CO2e24 during the writing period. It is important to note that using green energy providers may reduce the amount of CO2e emissions resulting from computer usage, and that the EPA's Greenhouse Gas Equivalencies Calculator we used for this conversion simplifies a complex topic. However, for the purpose of comparison to humans, we assume that the EPA calculator captures the relationship adequately. In comparison, a desktop computer consumes 200 W, generating 72 g CO2e in the same amount of time.
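The excerpt's per-page figures can be checked with simple arithmetic. A minimal sketch, assuming a US per-capita footprint of roughly 15 t CO2e/year (not stated in the excerpt) and back-deriving the implied EPA conversion factor of about 450 g CO2e/kWh from the quoted laptop figure:

```python
HOURS_PER_YEAR = 365 * 24  # 8760 hours
PAGE_HOURS = 0.8           # the paper's assumed time to write one page

def per_page_g(annual_tonnes: float) -> float:
    """Pro-rate an annual footprint (t CO2e) over the 0.8 h spent on a page."""
    return annual_tonnes * 1e6 / HOURS_PER_YEAR * PAGE_HOURS

# ~15 t/yr US per-capita CO2e is an assumed figure; 1.9 t/yr for India
# is stated in the excerpt.
us_page = per_page_g(15.0)    # ~1370 g, consistent with the quoted ~1400 g
india_page = per_page_g(1.9)  # ~174 g, consistent with the quoted ~180 g

# Back-derived factor: 27 g / (75 W * 0.8 h / 1000) = 450 g CO2e per kWh
EPA_G_PER_KWH = 450

def device_g(watts: float) -> float:
    """Emissions from running a device for the 0.8 h writing period."""
    return watts * PAGE_HOURS / 1000 * EPA_G_PER_KWH

laptop = device_g(75)    # -> 27.0 g, matching the quoted laptop figure
desktop = device_g(200)  # -> 72.0 g, matching the quoted desktop figure
```

The rounded results line up with the quoted 1400 g and 180 g per page and the 27 g / 72 g device figures, so the excerpt's numbers are at least internally consistent.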
 
of course, that assumes that a chatgpt-written page is considered of similar value to something written by a human. Meanwhile the newest and best models use thousands of times as much compute as chatgpt... >_>
 
New research shows that AI, contrary to some myths, produces less CO2 than human writers and artists:

The carbon emissions of writing and illustrating are lower for AI than for humans - Scientific Reports

As AI systems proliferate, their greenhouse gas emissions are an increasingly important concern for human societies. In this article, we present a comparative analysis of the carbon emissions associated with AI systems (ChatGPT, BLOOM, DALL-E2, Midjourney) and human individuals performing...

It also produces shit.
 
To be fair, there is one issue that I can see with this research paper: it takes into account all emissions of a human writer, but a human writer would still cause CO2 emissions even when no longer writing, especially if (for example) playing a computer game instead.

An hour of human existence emits CO2 whether one is writing or not. To ensure lower emissions outside of work, the (former?) writer's main hobby would need to be something like walking.
 
To be fair, there is one issue that I can see with this research paper: it takes into account all emissions of a human writer, but a human writer would still cause CO2 emissions even when no longer writing, especially if (for example) playing a computer game instead.

An hour of human existence emits CO2 whether one is writing or not.
Yeah, I don't see the point of such an overly-literal-to-the-point-of-silliness response to environmental criticisms of tech companies—unless the idea was to kill all professional writers to save on carbon emissions, in which case you might as well add in some calculations of how many kilojoules you can extract from their biomass just to drive the point home.
 
AI will probably be better at making business and political decisions than CEOs or politicians, really. It's not an intellectually demanding task, after all. So if anything we should be using AI to replace them.
 
To be fair, there is one issue that I can see with this research paper: it takes into account all emissions of a human writer, but a human writer would still cause CO2 emissions even when no longer writing, especially if (for example) playing a computer game instead.

An hour of human existence emits CO2 whether one is writing or not. To ensure lower emissions outside of work, the (former?) writer's main hobby would need to be something like walking.

There are ways to reduce the "non-working" carbon emissions.

Hint: Any writer who flies overseas for holidays... erm "inspiration" every year is setting themselves up for horrible carbon emissions regardless of whether they are writing or not. Ditto for driving to work when public transport or work-from-home are available, eating meat for every meal and so on.

Also, the first draft generated from AI tends to look bad, or go on tangents you don't intend it to. What is the carbon output of the final text? The AI text needs proof-reading and editing by the writer, meanwhile the relatively slow pace of writing by humans already includes quite a bit of proof-reading, spell-checking, revising and editing. Never mind how energy consumption of laptops when writing is surprisingly low if you aren't running other tasks in the background. (If you aren't running Youtube, Spotify or a media player for background music, and the computer isn't really doing much else while you are writing, don't be surprised that the energy consumption is considerably lower than 75 watts...)
 
New research show that AI, contrary to some myths, produce less CO2 than human writers and artists:
not only is the research not remotely "new" (february 2024 per that article, but previously submitted in march 2023 also), it's also obviously meaningless nonsense with obviously flawed methodology (both nudging data in favour of the ai and making absurd assumptions about the human side) that does not actually measure what it claims to. hell, their claimed source for chatgpt's emissions is literally some dude making a guess on medium who himself admits it's impossible to estimate.

this "research" is vacuous bullshit and frankly it is a waste of time to even discuss it, and that's before even getting into all the reasons the very premise is absurd in itself.
 
Also, you know, o3 takes orders of magnitude more compute than the previous version, so a measurement of what ChatGPT was like in March 2023 is already obsolete.
 