Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, doesn't align with his own actions and delegitimizes important political actions we need to make in order to build a better cyberphysical world.
EDIT: Discussions under this are fine, but I do not want this to turn into an ad hominem attack on Cory. Be fucking respectful
https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/
@tante The original strawman was somewhat acceptable ("if you refuse to use things touched by evil completely you can't use computers, so we all draw a line somewhere and this is where I draw mine" yadda yadda), but what follows is a pristine example of "man slips on a slippery slope, and slips a lot". :D
-
@tante It's why I'm (lately) preternaturally cautious around "very loud" voices. Part of their ability to formulate & disseminate their contrarian view is dismissing some views & focusing on core principles. If you don't keep counterbalancing voices close, it's extraordinarily easy for that methodology to abruptly send you far off the path.
See also: DHH
-
@tante These days, I generally follow people more inclined to selectively & thoughtfully retweet folks like that than to drink from those particular fire hoses directly.
-
@tante People like Cory who mock others for their disabilities are not worth paying attention to.
@mallory @tante Reposting as the thread got broken: While I would not, on general principle, make fun of someone for incontinence, disfigurement, or weight, the lack of accountability leaves us with little recourse beyond giving back, however inadequately, what that individual has sent out. I don't find that immoral, and I'll die on that hill. If someone spent their time and platform disparaging others for their appearance, disabilities or weight, they can absolutely have that back, especially when said speaker is so physically repugnant to begin with. trump had no business judging others on any of those qualities, being so grotesque himself. I think the criticism of Cory's stance on LLMs is valid; picking at him for describing a man as he is is not.
-
@correl @skyfaller @FediThing @tante
> as you really haven't clarified that anyhow
I'm sorry, this is entirely wrong.
The fact that you didn't bother to read the source materials associated with this debate in no way obviates their existence.
I set out the specific use-case under discussion in a single paragraph in an open access document. There is no clearer way it could have been stated.
@pluralistic@mamot.fr @skyfaller@jawns.club @FediThing@social.chinwag.org @tante@tldr.nettime.org Again, this feels dismissive, and dodges the argument. The clarity I was referring to wasn't the use case you laid out (automated proofreading) or the platform (Ollama), but (as has been discussed at length through this thread of conversation) which models are being employed.
This entire conversation has been centered on how currently available models are considered harmful not due to vague notions of who incepted the technology they're based upon, but due to the active harm employed in their creation.
To return to the discussion I'm attempting to have here, I find your fruits of the poisoned tree argument weak, particularly when you're invoking William Shockley (who most assuredly had no direct hand in the transistors installed in the hardware on my desk, nor in their component materials) as a counterpoint to the stolen work and egregious costs that are intrinsic to even the toy models out there. It reads to me as employing hyperbole and false equivalence defensively, rather than focusing on why what you're comfortable using is, well, comfortable.
-
@tante Thank you for writing this; I feel like it's a necessary counterpoint. I definitely agree more with you than I do with Cory on this one. But I also feel like you're talking past each other a bit. The difficulty with Internet Discourse like this is that in order to avoid harassment we are often arguing with large, nebulous collectives rather than calling out individuals. I think that Cory is correct that the *line of argument* he is criticizing is in fact fallacious.
-
@pluralistic @simonzerafa @tante I hate to dive into what is clearly a heated debate, but I want to add an answer to your question with a perspective that I think is missing: the power consumption for inference on your laptop is probably greater than in a datacenter. The latter is heavily incentivized to optimize power usage, since they charge by CPU usage or tokens, not watt-hours. (Power consumption != environmental damage exactly, but I have no idea how to estimate that part.)
@pluralistic @simonzerafa @tante Not a judgment on your usage. Just an answer to the question.
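For a sense of scale, here is a rough back-of-envelope sketch in Python; every number in it is an illustrative assumption, not a measurement:

    # Energy per request, laptop vs. datacenter (all figures are assumed).
    laptop_watts = 60        # assumed draw of a laptop generating locally
    laptop_seconds = 90      # assumed time for a small local model to finish

    server_watts = 700       # assumed draw of a datacenter accelerator
    server_seconds = 3       # assumed wall-clock time for the same request
    batch_size = 16          # assumed concurrent requests sharing the card

    laptop_wh = laptop_watts * laptop_seconds / 3600
    server_wh = server_watts * server_seconds / 3600 / batch_size
    print(f"laptop: {laptop_wh:.3f} Wh/request")  # ~1.5 Wh
    print(f"server: {server_wh:.3f} Wh/request")  # ~0.036 Wh

Swap in your own hardware numbers; the point is only that batching and amortization can push the per-request figure down by an order of magnitude or more.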
-
@correl @skyfaller @FediThing @tante
Scraping work is categorically not "stealing."
-
Parsing a doc uses as much juice as streaming a YouTube video, and less juice than performing a gnarly transform on a hi-rez in the GIMP.
I measured.
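One way to reproduce that kind of measurement on a Linux box is to diff the kernel's RAPL energy counter around the task. A minimal sketch, assuming an x86 CPU that exposes /sys/class/powercap (reading it may require root); the model name and prompt are placeholders:

    import subprocess, time

    RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package energy, microjoules

    def read_uj():
        with open(RAPL) as f:
            return int(f.read())

    start_uj, start_t = read_uj(), time.time()
    # The task under test; any command works here.
    subprocess.run(["ollama", "run", "some-local-model", "proofread this document"])
    joules = (read_uj() - start_uj) / 1e6  # ignores counter wraparound
    print(f"{joules:.1f} J over {time.time() - start_t:.1f} s (~{joules / 3600:.4f} Wh)")

The same diff-before-and-after approach works for the YouTube-stream or GIMP-transform baselines.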
-
@tante The question about the strawman is: is anyone comprehensively, seriously making this argument? I don't think so. I think a lot of people are extremely fucking angry for justifiable reasons, and are lashing out, directionlessly and not particularly coherently, in ways that look "purity culture"-ish. You can see it all over the replies here, with Cory mixing it up, which sure seems to prove the point! *I* don't feel like this constitutes a serious argument, but it's still, like… there.
-
@tante So, what I feel like is happening here is:
1. Some people are making a comprehensive argument against the use of LLMs, well-supported on all sides.
2. This metastasizes into a yelling match about "AI", with fuzzy boundaries and highly emotionally charged language, with a lot of participants who aren't necessarily experts or do not have the time to make a careful argument.
3. Cory picks a fight with this *real* but not-particularly-robust position, skipping stronger counterarguments.
-
I see. Well, thanks for wagging your finger at me, and mansplaining about tulip mania as if it's not common knowledge. I hope it has brightened your day.
Now I must get back to see if Antigravity / Gemini 3.1 has finished the stuff I asked it to do that I definitely could not, and would not be able to, do myself.
@hopeless @JeffGrigg @tante If you couldn't do it yourself, you cheated yourself out of the opportunity to learn how to do it. I am also wondering: how do you know that it did the job correctly?
-
@algernon @pluralistic @clintruin @simonzerafa @tante
Ok, sure. But you won't stop that train by purity-testing other leftists. So what's the plan to stop OpenAI from DDoSing all our blogs?
@komali_2 @algernon @pluralistic @simonzerafa @tante
In the big scheme of things I don't care if Doctorow is using some local LLM to proof his writing. The fact that he happens to be the same fellow who coined the term 'enshittification' smacks of a dark irony, but whatever. That's merely how I view it. He's convinced it's copacetic, and I don't really give a fuck. But when we consider that this tech is contributing enormously to the much larger problem of climate change, there's a real issue. 1/
-
@komali_2 @algernon @pluralistic @simonzerafa @tante
2/Doctorow has explained that he does not believe his use of a local LLM is contributing to that overall problem. I don't know if it is, or it isn't. But that does not diminish his fame, or the fact that others may view his use as tacit approval of this tech in general. Is it fair to lay this on him merely for using a local LLM to proof his work? Should he even be concerned about this?
-
@komali_2 @algernon @pluralistic @simonzerafa @tante
3/I dunno. I'm not the writer who coined the term enshittification. Frankly, this other side of it..."purity testing leftists"...I don't even know what that means, and I'm certainly not looking at the issue in this regard. Likewise the use of the term "neoliberal"...same thing. I'm not viewing it in this light. However, climate, and the well-documented contributions of LLMs to that problem, are most definitely my issue with this. /end
-
@tante Your abstract arguments are a good counter-position but are still missing something: a specific, situated critique of local models. A lot of the structural properties and material impacts that you describe are specific to the big, hosted models. You veer off into criticizing OpenAI pretty early, which doesn't apply to the open-weight, on-device models Cory is talking about.
-
@tante I think you successfully argue that artifacts *do* have politics, but you fall apart a bit on what the politics of local LLMs *are*. You argue that their politics "are violence", but that argument almost entirely applies to Google's regular "non-AI" (arguably not accurate, since neural nets were involved for years before LLMs) search index. Is a Google search the same kind of "violence" as a local LLM?
-
@tante Ugh, okay, this is too many replies. Thanks again for writing this, and sorry for getting into so much nit-picking. I just need to write my own "it's time to stop using LLMs" omnibus that collects my own perspective on the *different* reasons one should avoid hosted vs. local models.
-
> LLMs are based on extraction, exploitation and subjugation
So is torrenting. This is a very capitalist argument, coming from someone who self-identifies as a communist: that one deserves to reap the rewards of adding value to humanity through some form of gatekeeping, and is entitled to a reward from such gatekeeping. You're literally arguing on the side of Elsevier and JSTOR against Aaron Swartz.
What does it matter if human knowledge is available as a book or an LLM? The important part is that all of humanity has access to it.
> Omelas is an almost perfect city. Rich, democratic, pleasant. But it only works by having one small child in perpetual torment.
Walking away from Omelas doesn't stop that child's perpetual torment. Your choice is merely ignorance and cowardice in the face of injustice. Choosing to stay in Omelas and poisoning its democratic system to bring about its downfall is arguably the more moral option. Let's not even get into the argument about how Germany made the Eurozone its Omelas at the expense of deficit-prone southern Europe, and how, by your own argument, you should leave Germany.
> If everything is somehow “free and open” then we have won.
Your moral choice not to use LLMs is the same as abandoning Omelas and the eternally tormented child; it serves as nothing but intellectual onanism. Distilling GPT 5 and Opus 4.6, commoditising the petaflop (see George Hotz), and deploying efficient models on Huawei chips is the same as causing rot in Omelas from the inside: rendering the billions invested into AI worthless, and tearing down the system that is perpetually tormenting that child. It is the only way forward.
Cory was right to label this "neolib purity testing", because 1) it sides with capital (see the above point re: torrenting), 2) it tries to don the mantle of dialectical materialism while viewing this issue through a lens of "individualist action" and static morality, and 3) it endlessly criticises power instead of aiming to claim and wield it for good.
-
@komali_2@mastodon.social @tante@tldr.nettime.org @osma@mas.to What do you think you're doing, exactly?