Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture".
-
Should we ban the OED?
There is literally no way to study language itself without acquiring vast corpora of existing language, and no one in the history of scholarship has ever obtained permission to construct such a corpus.
@pluralistic I gave it a good thought, and you know what, I'm gonna argue that yes, for me there is a degree of unethical-ness to that lack of permission!
the things that make me not mind that so much are a variety of differences in method and scale:
(*btw just explaining my personal reasons here, not arguing yours)
- every word in the OED was painstakingly researched by human experts to make as much sense of it as possible
- coming from a place of passion on the part of the linguists, no doubt
- the ownership of said data isn't "techno-feudal mega-corporations existing under a fascist regime"
- the OED didn't spell the end of human culture (heh) like LLMs very much might.
so yeah. I guess we do agree that, on some level, the OED and an LLM have something in common.
it's the differences in method and scale that make me draw the line somewhere in between them; in a different spot from where you may draw it.
and like @zenkat mentioned elsewhere, it's the whole thing around LLMs that makes me very wary of normalizing anything to do with it, and I concede I wouldn't mind your slightly unethical LLM spellchecker as much, if we didn't live in this horrible context. :)
I guess this has become a bit of a reconciliatory toot. agree to disagree on where we draw the line, to each their own, and all that.
-
Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, doesn't align with his own actions and delegitimizes important political actions we need to make in order to build a better cyberphysical world.
EDIT: Discussions under this are fine, but I do not want this to turn into an ad hominem attack on Cory. Be fucking respectful
https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/
@tante your writing is great. Thanks for articulating what I couldn't.
-
Beats me.
I thought Cory was supposed to be clever or something? I've blocked him for now. Not interested in banging my head against that particular lack of critical thinking.
Perhaps when the AI bubble bursts, he will become more rational.
> AI bubble bursts
For about $30k I can run DeepSeek on a rig in my apartment, forever. That's pennies for a startup. How do you envision this bubble bursting such that LLMs are gone?
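(A back-of-the-envelope check on that claim, assuming the full ~671B-parameter DeepSeek-V3/R1-class model quantized to 4 bits; the figures are rough illustrations, not the poster's actual rig:)

    # Napkin math: memory needed to hold DeepSeek-class weights locally.
    params = 671e9          # ~671B parameters (published size of DeepSeek-V3/R1)
    bytes_per_param = 0.5   # 4-bit quantization
    weights_gb = params * bytes_per_param / 1e9
    print(f"~{weights_gb:.0f} GB of (V)RAM just for the weights")  # ~336 GB
    # Plus KV cache and runtime overhead, hence a multi-GPU rig or a large
    # unified-memory box: hardware plausibly in the ~$30k range.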
-
What is the incremental environmental damage created by running an existing LLM locally on your own laptop?
As to "90% bullshit" - as I wrote, the false positive rate for punctuation errors and typos from Ollama/Llama2 is about 50%, which is substantially better than, say, Google Docs' grammar checker.
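(For concreteness, a minimal sketch of the kind of local proofreading pass described above, assuming an Ollama server running on its default port with a llama2 model already pulled; the prompt wording is illustrative, not the actual pipeline:)

    # Ask a local Ollama model to flag typos/punctuation errors in a text.
    import json
    import urllib.request

    def proofread(text: str, model: str = "llama2") -> str:
        payload = {
            "model": model,
            "prompt": ("List any typos or punctuation errors in the "
                       "following text, quoting each one verbatim:\n\n" + text),
            "stream": False,  # return one JSON object instead of a stream
        }
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",  # Ollama's default endpoint
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["response"]

    print(proofread("Teh quick brown fox jumps over the the lazy dog."))

Per the error rate quoted above, expect roughly half the flags to be false positives.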
@pluralistic @simonzerafa @tante I hate to dive into what is clearly a heated debate, but I want to add an answer to your question with a perspective that I think is missing: the power consumption for inference on your laptop is probably greater than in a datacenter. The latter is heavily incentivized to optimize power usage, since they charge by CPU usage or tokens, not watt-hours. (Power consumption != environmental damage exactly, but I have no idea how to estimate that part.)
-
@pluralistic @clintruin @simonzerafa @tante
Which "couple million people" suffer harm when I run a model ON MY LAPTOP?
Anyone who's hosting a website and is getting hammered by the bots that seek content to train the models on. We're the ones who continue getting hurt.
Whether you run it locally or not makes little difference. The models were trained, training very likely involved scraping, and that continues to be a problem to this day. Not because of ethical concerns, but technical ones: a constant 100 req/sec 24/7, with waves of over 2.5k req/sec, may not sound like much in this day and age, but at around 2.5k req/sec (sustained for about a week!), my cheap VPS's two vCPUs are bogged down trying to deal with all the TLS handshakes, let alone serving anything.
That is a cost many seem to forget. It costs bandwidth, CPU, and human effort to keep things online under the crawler DDoS, and often cold, hard cash too, to survive.
Ask Codeberg or LWN how they fare under crawler load, and imagine someone who just wants to have their stuff online having to deal with similar abuse.
That is the suffering you enable when using any LLM, even locally.
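(To put those figures in perspective, a quick worked calculation on the load described above; the average response size is an assumption for illustration:)

    # Napkin math on sustained crawler traffic against a small VPS.
    baseline_rps = 100      # constant background crawl, 24/7
    wave_rps = 2_500        # reported wave, sustained for about a week
    resp_bytes = 20_000     # assumed ~20 kB average response

    print(f"{baseline_rps * 86_400:,} requests/day at baseline")        # 8,640,000
    wave_total = wave_rps * 86_400 * 7
    print(f"{wave_total:,} requests over a week-long wave")             # 1,512,000,000
    print(f"~{wave_total * resp_bytes / 1e12:.0f} TB served that week") # ~30 TB
    # Each connection also costs a TLS handshake, which is what actually
    # saturates two vCPUs long before bandwidth runs out.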
@algernon @pluralistic @clintruin @simonzerafa @tante
Ok, sure. But you won't stop that train by purity testing other leftists. So what's the plan to stop OpenAI DDoSing all our blogs?
-
@tante The original strawman was somewhat acceptable ("if you refuse to use things touched by evil completely you can't use computers, so we all draw a line somewhere and this is where I draw mine" yadda yadda), but what follows is a pristine example of "man slips on a slippery slope, and slips a lot". :D
-
@tante It's why I'm (lately) preternaturally cautious around "very loud" voices. Part of their ability to formulate & disseminate their contrarian view is dismissing some views & focusing on core principles. If you don't keep counterbalancing voices close, it's extraordinarily easy for that methodology to abruptly send you far off the path.
See also: DHH
-
@tante These days, I generally follow people more inclined to selectively & thoughtfully retweet folks like that than to drink from those particular fire hoses directly.
-
@tante People like Cory who mock others for their disabilities are not worth paying attention to.
@mallory @tante Reposting as the thread got broken: While on general principle I would not make fun of someone for incontinence, disfigurement, or size, the lack of accountability leaves us little recourse beyond giving back what that individual has sent out. I don't find that immoral and I'll die on that hill. If someone spends their time and platform disparaging others for their appearance, disabilities or weight, they absolutely can have that back, especially when said speaker is so physically repugnant to begin with. trump had no business judging others on any of those qualities, being so grotesque himself. I think the criticism of Cory's stance on LLMs is valid; picking at him for describing a man as he is, is not.
-
@correl @skyfaller @FediThing @tante
> as you really haven't clarified that anyhow
I'm sorry, this is entirely wrong.
The fact that you didn't bother to read the source materials associated with this debate in no way obviates their existence.
I set out the specific use-case under discussion in a single paragraph in an open access document. There is no clearer way it could have been stated.
@pluralistic@mamot.fr @skyfaller@jawns.club @FediThing@social.chinwag.org @tante@tldr.nettime.org Again, this feels dismissive, and dodges the argument. The clarity I was referring to wasn't the use case you laid out (automated proofreading) or the platform (Ollama), but (as has been discussed at length through this thread of conversation) which models are being employed.
This entire conversation has been centered around how currently available models are evil not due to vague notions of who incepted the technology they're based upon, but due to the active harm employed in their creation.
To return to the discussion I'm attempting to have here, I find your fruits of the poisoned tree argument weak, particularly when you're invoking William Shockley (who most assuredly had no direct hand in the transistors installed in the hardware on my desk, nor their component materials) as a counterpoint to the stolen work and egregious costs that are intrinsic to even the toy models out there. It reads to me as employing hyperbole and false equivalence defensively rather than focusing on why what you're comfortable using is, well, comfortable.
-
@tante Thank you for writing this; I feel like it's a necessary counterpoint. I definitely agree more with you than I do with Cory on this one. But I also feel like you're talking past each other a bit. The difficulty with Internet Discourse like this is that in order to avoid harassment we are often arguing with large, nebulous collectives rather than calling out individuals. I think that Cory is correct that the *line of argument* he is criticizing is in fact fallacious.
-
@pluralistic @simonzerafa @tante Not a judgment on your usage. Just an answer to the question.
-
@correl @skyfaller @FediThing @tante
Scraping work is categorically not "stealing."
-
parsing a doc uses as much juice as streaming a YouTube video and less juice than performing a gnarly transform on a hi-res image in the Gimp.
I measured.
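(For anyone who wants to reproduce that kind of measurement, one rough approach on a Linux laptop running on battery; the sysfs path and its unit vary by machine, so treat this as a sketch:)

    # Sample average battery draw while a workload runs (Linux, on battery).
    # Many laptops expose instantaneous draw in microwatts at this path.
    import time

    BAT = "/sys/class/power_supply/BAT0/power_now"

    def average_watts(seconds: int = 30) -> float:
        readings = []
        for _ in range(seconds):
            with open(BAT) as f:
                readings.append(int(f.read()) / 1e6)  # microwatts -> watts
            time.sleep(1)
        return sum(readings) / len(readings)

    # Measure once idle, once while the LLM parses a doc; the difference
    # times the run time gives the incremental energy of the job.
    print(f"average draw: {average_watts():.1f} W")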
-
@tante The question about the strawman is: is anyone comprehensively, seriously making this argument? I don't think so. I think a lot of people are extremely fucking angry for justifiable reasons and directionlessly and not particularly coherently lashing out in ways that look "purity culture"-ish. You can see it all over the replies here, with Cory mixing it up, which sure seems to prove the point! *I* don't feel like this constitutes a serious argument, but it's still, like… there
-
@tante So, what I feel like is happening here, is:
1. Some people are making a comprehensive argument against the use of LLMs, well-supported on all sides.
2. This metastasizes into a yelling match about "AI", with fuzzy boundaries and highly emotionally charged language, with a lot of participants who aren't necessarily experts or do not have the time to make a careful argument.
3. Cory picks a fight with this *real* but not-particularly-robust position, skipping stronger counterarguments.
-
I see. Well, thanks for wagging your finger at me, and mansplaining about tulip mania as if it's not common knowledge. I hope it has brightened your day.
Now I must get back to see if Antigravity / Gemini 3.1 has finished the stuff I asked it to do, that I definitely could not, and would not be able to, do myself.
@hopeless @JeffGrigg @tante If you couldn't do it yourself, you cheated yourself of the opportunity of learning how to do it. I am also wondering: how do you know that it did the job correctly?
-
@komali_2 @algernon @pluralistic @simonzerafa @tante
In the big scheme of things I don’t care if Doctorow is using some local LLM to proof his writing. The fact he happens to be the same fellow who coined the term ‘enshittification’ smacks of a dark irony, but whatever. That’s merely how I view it. He’s convinced it’s copacetic, and I don’t really give a fuck. But when we consider that this tech is contributing enormously to the much larger problem of climate change, there’s a real issue. 1/
-
@komali_2 @algernon @pluralistic @simonzerafa @tante
2/ Doctorow has explained he does not believe his use of a local LLM is contributing to that overall problem. I don’t know if it is, or it isn’t. But that does not change his fame, or the fact that others may view his use as a tacit approval of this tech in general. Is it fair to lay this on him merely for using a local LLM to proof his work? Should he even be concerned about this?
-
@komali_2 @algernon @pluralistic @simonzerafa @tante
3/ I dunno. I’m not the writer who coined the term enshittification. Frankly, this other side of it...“purity testing leftists”...I don’t even know what that means, and I’m certainly not looking at the issue in this regard. Likewise the use of the term “neoliberal”...same thing. I’m not viewing it in this light. However, climate, and the well-documented contributions of LLMs to that problem, are most definitely my issue with this. /end