Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture".
-
@pluralistic @simonzerafa @tante
"What is the incremental environmental damage created by running an existing LLM locally on your own laptop?"I dunno. But how about a couple of million people?
The person who coined the term 'enshittification' defends LLMs. Just...wow. We truly are fucked.
Let's all do what Cory does!
☠️
Meanwhile:
https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/?gad_source=1&gad_campaignid=20737314952&gbraid=0AAAAADgO_miNIDzn-BdCIXzZ6r87g94-L&gclid=Cj0KCQiA49XMBhDRARIsAOOKJHbvIzPACe0EdEyWK86TnS7rNlnUaePKc5y22qT0ZsfqUeGDe72zzc0aAhFFEALw_wcB
#doomed #ClimateChange
-
@pluralistic @tante @simonzerafa indeed, I guess the question is whether the scale of the *ahem* waste, fraud and abuse *ahem* of resources that LLMs seem to imply, even in benign use cases like yours, is out of line with historical precedent or not.
Am I an old man yelling at a cloud?
No, it's the children who are wrong!
Rockets were literally perfected in Nazi slave labor camps.
-
@clintruin @simonzerafa @tante
Which "couple million people" suffer harm when I run a model on my laptop?
@pluralistic @simonzerafa @tante
Missed the point, sir. When one person does it...no big deal.
When a couple of million people do it...well, see the MIT article above.
-
@tante Dunno where you got the idea that I have a "libertarian" background. I was raised by Trotskyists, am a member of the DSA, am advising and have endorsed Avi Lewis, and joined the UK Greens to back Polanski.
@pluralistic
Fair enough, but that's not the core of the argument
@tante made. He made the same complaint, for starters (your argument was heavily drenched in 'you people are purists'), but he also makes the valid argument that technology isn't neutral in itself. Open weights based on intellectual theft and forced labor are still a problem. Until we have a discussion about how the weights come to fruition, LLMs are objectively problematic from an ethical view. That has nothing to do with purism. -
Thanks for these corrections. Completely agree with everything, and thanks for tagging Cory.
One of the really unfortunate things the Silicon Valley scammers have achieved is to co-opt new technologies for their despicable pump-and-dump schemes and apply their disingenuous hype factory, which ends up tarring all uses with the same brush.
@mastodonmigration @shiri @pluralistic @tante The only ethical use of an LLM would be one where the training dataset was ethically acquired, the power was minimized to the level of other methods of providing the same benefits, and the 'benefits' were actually measurable and accurate.
None of those are true today, and so far as I know there is little to no path to them.
-
@pluralistic @simonzerafa @tante
Missed the point, sir. When one person does it...no big deal.
When a couple of million people do it...well, see the MIT article above.
@pluralistic @simonzerafa @tante
Subhead quote from the article:
"The emissions from individual AI text, image, and video queries seem small—until you add up what the industry isn’t tracking and consider where it’s heading next." -
@mastodonmigration @shiri @pluralistic @tante The only ethical use of an LLM would be one where the training dataset was ethically acquired, the power was minimized to the level of other methods of providing the same benefits, and the 'benefits' were actually measurable and accurate.
None of those are true today, and so far as I know there is little to no path to them.
@reflex @shiri @pluralistic @tante
Seems like Cory's local punctuation and grammar checker is such an example, no?
-
@pluralistic @simonzerafa @tante
Subhead quote from the article:
"The emissions from individual AI text, image, and video queries seem small—until you add up what the industry isn’t tracking and consider where it’s heading next."@clintruin @simonzerafa @tante
You are laboring under a misapprehension.
I will reiterate my question, with all caps for emphasis.
Which "couple million people" suffer harm when I run a model ON MY LAPTOP?
-
@clintruin @simonzerafa @tante
Well, you could "do what Cory does" by familiarizing yourself with the conduct that you are criticizing before engaging in ad hominem.
To be fair, that's not unique to me, but people who fail to rise to that standard are doing themselves and others no good.
-
@clintruin @simonzerafa @tante
You are laboring under a misapprehension.
I will reiterate my question, with all caps for emphasis.
Which "couple million people" suffer harm when I run a model ON MY LAPTOP?
@pluralistic @simonzerafa @tante
I'll reiterate my response. When you *alone* do it...no big deal.
When a couple of million do it ON THEIR OWN LAPTOPS...problem. -
Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, doesn't align with his own actions and delegitimizes important political actions we need to make in order to build a better cyberphysical world.
EDIT: Discussions under this are fine, but I do not want this to turn into an ad hominem attack on Cory. Be fucking respectful.
https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/
@tante It is difficult to be respectful to someone who had built a reputation and then deliberately alienated a large share of the following that came with that reputation. I will just say that when you're right about important issues as often as Doctorow has been, it is a very human thing to develop a certain arrogance. I hope he receives the critical feedback rather than digging in.
PS There are ample non-LLM technologies that do what he claims to use LLMs for. I would be interested to see his reasons for passing over those options.
-
@FediThing @pluralistic @tante i feel similarly: big tech has taken the notion of AI and LLMs as a cue/excuse to mount a global campaign of public manipulation and massive investment in a speculative project, pumping gazillions of dollars into it and convincing everyone it's inevitable tech that belongs in every bag of potato chips. the backlash is that anything bearing the name AI or LLM is treated as a poisonous plague, and people unfollow anyone who has touched it in any way or talks about it in any way other than "it's fascist tech, i'm putting a filter in my feed!" (while it IS fascist tech, because it's in the hands of fascists).
in my view the problem is not what LLMs are (what kind of tech), but how they are used and what they extract from the planet when big tech uses them in this monstrous, harmful way. of course there's a big blurred line and tech can't be separated from the political, but... AI is not intelligent (Big Tech wants you to believe that), and LLMs are not capable of intelligence and learning (Big Tech wants you to believe that).
so i feel like a big chunk of the anger and hate should really be directed at the techno-oligarchs, and only partially and much more critically at the actual algorithms in play. it's not LLMs that are harming the planet, but rather the extraction: these companies, which are absolute evil and do whatever the hell they want, unchecked and unregulated.
or as varoufakis said to tim nguyen: "we don't want to get rid of your tech or company (google). we want to socialize your company in order to use it more productively" and, if i may add, safely and beneficially for everyone, not just a few.
@prinlu @FediThing @pluralistic @tante I agree with most things said in this thread, but on a very practical level, I'm curious what training data was used for the model used by @pluralistic 's typo-checking ollama?
for me, that training data is key here. was it consensually allowed for use in training?
because as I understand, LLMs need vast amounts of training data, and I'm just not sure how you would get access to such data consensually. would love to be enlightened about this :)
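(A practical aside, offered as a sketch rather than an answer: a locally pulled model only reports whatever metadata its publisher chose to ship, so provenance is hard to audit after the fact. Assuming a stock Ollama install and a hypothetical model name, this is roughly how you could at least surface that self-reported license and model info.)
```python
# Sketch only, not Cory's actual setup. "llama3.2" is a hypothetical placeholder;
# the real model in question is unknown. `ollama show` prints publisher-supplied
# details (family, parameter count, license) but says nothing about the training
# data itself, which is exactly the gap being asked about here.
import subprocess

result = subprocess.run(
    ["ollama", "show", "llama3.2"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```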
-
@pluralistic @simonzerafa @tante
I'll reiterate my response. When you *alone* do it...no big deal.
When a couple of million do it ON THEIR OWN LAPTOPS...problem.
@clintruin @simonzerafa @tante
OK, sorry, I was under the impression that I was having a discussion with someone who understands this issue.
You are completely, empirically, technically wrong.
Checking the punctuation on a document on your laptop uses less electricity than watching a YouTube video.
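(For what it's worth, a rough back-of-envelope version of that comparison is below. Every figure is an assumption rather than a measurement, and real numbers vary widely by laptop, model size, and how streaming's network and datacenter share is counted.)
```python
# Back-of-envelope energy comparison. All constants are assumptions, not measurements.
LAPTOP_DRAW_W = 50            # assumed extra draw while a small local model runs
CHECK_SECONDS = 30            # assumed time to proofread one post locally
VIDEO_DEVICE_W = 15           # assumed device draw while streaming video
VIDEO_MINUTES = 10            # length of the hypothetical video
NETWORK_DC_WH_PER_HOUR = 80   # assumed network + datacenter share; widely debated

check_wh = LAPTOP_DRAW_W * CHECK_SECONDS / 3600
video_wh = (VIDEO_DEVICE_W + NETWORK_DC_WH_PER_HOUR) * VIDEO_MINUTES / 60

print(f"local punctuation check: ~{check_wh:.2f} Wh")
print(f"{VIDEO_MINUTES} min of streamed video: ~{video_wh:.1f} Wh")
```
Under these assumed figures the local check lands around 0.4 Wh against roughly 16 Wh for the video, which is the shape of the claim being made, though the exact ratio depends entirely on the inputs.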
-
Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, doesn't align with his own actions and delegitimizes important political actions we need to make in order to build a better cyberphysical world.
EDIT: Discussions under this are fine, but I do not want this to turn into an ad hominem attack on Cory. Be fucking respectful.
https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/
@tante
while we're pointing out logical inconsistencies... there is zero reason to stop masking in an ongoing pandemic - especially as someone who acknowledged the benefits previously.
nothing has changed to make this a rational choice, and it can't be said to be in solidarity with disabled people (or folks in general).
-
Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, doesn't align with his own actions and delegitimizes important political actions we need to make in order to build a better cyberphysical world.
EDIT: Discussions under this are fine, but I do not want this to turn into an ad hominem attack on Cory. Be fucking respectful.
https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/
@tante People like Cory who mock others for their disabilities are not worth paying attention to.
-
@prinlu @FediThing @pluralistic @tante I agree with most things said in this thread, but on a very practical level, I'm curious what training data was used for the model used by @pluralistic 's typo-checking ollama?
for me, that training data is key here. was it consensually allowed for use in training?
because as I understand, LLMs need vast amounts of training data, and I'm just not sure how you would get access to such data consensually. would love to be enlightened about this :)
@bazkie @prinlu @FediThing @tante
I do not accept the premise that scraping for training data is unethical (leaving aside questions of overloading others' servers).
This is how every search engine works. It's how computational linguistics works. It's how the Internet Archive works.
Making transient copies of other people's work to perform mathematical analysis on them isn't just acceptable, it's an unalloyed good and should be encouraged:
https://pluralistic.net/2023/09/17/how-to-think-about-scraping/
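(For readers who want the concrete version: a minimal sketch of the "transient copy plus mathematical analysis" pattern, assuming a placeholder URL; it illustrates the general idea behind search indexing and computational linguistics, not anyone's actual pipeline.)
```python
# Minimal sketch of "transient copy + mathematical analysis":
# fetch a page, compute word frequencies, keep only the statistics.
# The URL is a placeholder; robots.txt and rate limits still apply in practice.
import re
from collections import Counter
from urllib.request import urlopen

with urlopen("https://example.com/") as resp:
    text = resp.read().decode("utf-8", errors="replace")  # transient copy, in memory only

words = re.findall(r"[a-z']+", text.lower())
top_terms = Counter(words).most_common(10)                 # the "mathematical analysis"

print(top_terms)  # aggregate statistics survive; the copied text is discarded
```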
-
@reflex @shiri @pluralistic @tante
Seems like Cory's local punctuation and grammar checker is such an example, no?
@mastodonmigration
it's the "copyright" issue, the outlook that unless everyone who posted anything that was used receives a check for a hefty sum then it's unethical.Copyright is in quotes because it's not really a violation of copyright (the LLMs are not producing whole copies of copywritten materials without basically being forced) nor is it a violation of the intent of copyright (people are confused, copyright was never intended to give artists total control, it's just to ensure new art continues to be created).
-
Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, doesn't align with his own actions and delegitimizes important political actions we need to make in order to build a better cyberphysical world.
EDIT: Discussions under this are fine, but I do not want this to turn into an ad hominem attack on Cory. Be fucking respectful.
https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/
@tante
I partly agree with Cory and partly not.
Refusing to use resource-gobbling datacenter-hosted LLMs makes perfect sense. I'd just as soon heat my house by burning kittens. It is also a rational political statement.
Refusing to use an LLM hosted on my own iron is also a political statement, as well as a personal choice. I don't give a hoot about ideological purity; I just distrust clankers, and don't want to get into the habit of depending on them. (Besides, they offer me nothing I cannot as easily do for myself.)
-
@pluralistic I don't think criticism of mink fur or LLMs is comparable to criticizing the origins of the internet or transistors. It's the process that produced mink fur and LLMs that is destructive, not merely that they're made by bad people.
For example, LLM crawlers regularly take down independent websites like Codeberg, DDoSing, threatening the small web. You may say "but my LLM is frozen in time, it's not part of that scraping now", but it would not remain useful without updates.
@skyfaller@jawns.club @pluralistic@mamot.fr @FediThing@social.chinwag.org @tante@tldr.nettime.org This is precisely it; it's about the process, not their distance from Altman, Amodei, et al. (which the Ollama project and those like it achieve).
The models themselves are, per this analogy, still almost entirely of the mink-corpse variety, and I think it's a stretch to scream "purity!" at everyone giving you the stink eye for the coat you're wearing.
It's not impossible to have and use a model, locally hosted and energy-efficient, that wasn't directly birthed by mass theft and human abuse (or training directly off of models that were). And having models that aren't, that are genuinely open, is great! That's how the wickedness gets purged and the underlying tech gets liberated.
Maybe your coat is indeed synthetic, that much is still unclear, because so far all the arguing seems to be focused on the store you got it from and the monsters that operate the worst outlets. -
@clintruin @simonzerafa @tante
OK, sorry, I was under the impression that I was having a discussion with someone who understands this issue.
You are completely, empirically, technically wrong.
Checking the punctuation on a document on your laptop uses less electricity than watching a YouTube video.
@pluralistic @simonzerafa @tante
Fair enough, Cory. You're gonna do what you want regardless of my accuracy or inaccuracy anyway. And maybe I've misunderstood this, the same way many, many others will.
But visualize this:
"Hey...I just read Cory Doctrow uses an LLM to check his writing."
"Really?"
"Yeah, it's true."
"Cool, maybe what I've read about ChatGPT is wrong too..."