Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture".
-
@tante questions for the leftists and liberals from a confused anarchist:
1. Do you think you can put the cat back in the bag with LLMs? How?
2. For those who believe that LLMs were trained on stolen data, what does it mean for data to be private, scarce property that can be "stolen"?
3. What about models that just steal from the big boys, like the PRC ones? Theft from capitalists, surely ethical?
4. Will your not using any LLMs cause Sam Altman and friends to lose control of your country?
@tante @komali_2 Dear anarchist: the confusion you feel comes from the fact that the only people you see are "leftists", "liberals", and "anarchists". None of those are real everyday people. Because the political parties in the U.S. don't recognize real everyday people as voters, real everyday people are fed up and disgusted with partisan loyalties, especially and including anarchists.
-
@skyfaller @pluralistic @tante @FediThing @correl Chiming in to state that it's routine to re-state and re-re-state principles that get lost in long reads and long threads such as this one, where any late-comer needs to skim because of the tl;dr factor. There's a long-standing principle based on this phenomenon: tell them briefly what you're going to say, say it, then tell them in summary what you said.
@claralistensprechen3rd @skyfaller @tante @FediThing @correl
I don't know what this has to do with someone stating "you haven't clarified" something, when you have.
Also, I have reposted the paragraph in question TWICE this morning.
-
Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, doesn't align with his own actions, and delegitimizes important political actions we need to take in order to build a better cyberphysical world.
EDIT: Discussions under this are fine, but I do not want this to turn into an ad hominem attack on Cory. Be fucking respectful
https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/
thanks for writing and sharing your mindful critique of this argument.
-
@Colman @FediThing @tante That's interesting. I've never wondered that about you.
@pluralistic capital murder on the timeline
-
@tante It is difficult to be respectful to someone who has built a reputation and then deliberately alienated a large share of the following that came with it. I will just say that when you're right about important issues as often as Doctorow has been, it is a very human thing to develop a certain arrogance. I hope he takes the critical feedback rather than digging in.
PS There are ample non-LLM technologies that do what he claims to use LLMs for. I would be interested to see his reasons for passing over those options.
@liquor_american @tante Agreed. The thing I don't get is embracing something so inefficient. Like, there are other, existing technologies that do the work he described *better*.
-
Should we ban the OED?
There is literally no way to study language itself without acquiring vast corpora of existing language, and no one in the history of scholarship has ever obtained permission to construct such a corpus.
@pluralistic I gave it a good thought, and you know what, I'm gonna argue that yes, for me there is a degree of unethical-ness to that lack of permission!
the things that make me not mind that so much are a variety of differences in method and scale:
(*btw just explaining my personal reasons here, not arguing yours)
- every word in the OED was painstakingly researched by human experts to make the most possible sense of it
- coming from a place of passion on the end of the linguists, no doubt
- the ownership of said data isn't "techno-feudal mega-corporations existing under a fascist regime"
- the OED didn't spell the end of human culture (heh) like LLMs very much might.
so yeah. I guess we do agree that, on some level, the OED and an LLM have something in common.
it's the differences in method and scale that make me draw the line somewhere in between them, in a different spot from where you may draw it.
and like @zenkat mentioned elsewhere, it's the whole thing around LLMs that makes me very wary of normalizing anything to do with it, and I concede I wouldn't mind your slightly unethical LLM spellchecker as much, if we didn't live in this horrible context. :)
I guess this has become a bit of a reconciliatory toot. agree to disagree on where we draw the line, to each their own, and all that.
-
@tante your writing is great. Thanks for articulating what I couldn't.
-
Beats me.
I thought Cory was supposed to be clever or something? I've blocked him for now. Not interested in banging my head against that particular lack of critical thinking.
Perhaps when the AI bubble bursts, he will become more rational.
> AI bubble bursts
For ~30k I can run DeepSeek on a rig in my apartment, forever. That's pennies for a startup. How do you envision this bubble bursting such that LLMs are gone?
-
What is the incremental environmental damage created by running an existing LLM locally on your own laptop?
As to "90% bullshit" - as I wrote, the false positive rate for punctuation errors and typos from Ollama/Llama2 is about 50%, which is substantially better than, say, Google Docs' grammar checker.
@pluralistic @simonzerafa @tante I hate to dive into what is clearly a heated debate, but I want to add an answer to your question with a perspective that I think is missing: the power consumption for inference on your laptop is probably greater than in a datacenter. The latter is heavily incentivized to optimize power usage, since they charge by CPU usage or tokens, not watt-hours. (Power consumption != environmental damage exactly, but I have no idea how to estimate that part.)
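(For a rough sense of why batched datacenter inference can come out ahead per token, a back-of-envelope sketch; every constant here is an illustrative assumption, not a measurement from this thread:)

```python
# Back-of-envelope joules-per-token comparison. All constants are
# illustrative assumptions.
LAPTOP_WATTS = 60.0            # assumed whole-laptop draw during inference
LAPTOP_TOK_PER_SEC = 10.0      # assumed local decode speed for a small model

DC_GPU_WATTS = 700.0           # assumed datacenter accelerator draw
DC_TOK_PER_SEC = 2000.0        # assumed aggregate throughput with batching

laptop = LAPTOP_WATTS / LAPTOP_TOK_PER_SEC  # J/token = W / (tokens/s)
dc = DC_GPU_WATTS / DC_TOK_PER_SEC

print(f"laptop:     {laptop:.2f} J/token")   # 6.00 J/token
print(f"datacenter: {dc:.2f} J/token")       # 0.35 J/token
```

Under these assumed numbers the laptop spends over 15x more energy per token; the real ratio depends entirely on the hardware and batch sizes involved.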
-
@pluralistic @clintruin @simonzerafa @tante
Which "couple million people" suffer harm when I run a model ON MY LAPTOP?
Anyone who's hosting a website and getting hammered by the bots that scrape content to train the models on. Those of us hosting are the ones who keep getting hurt.
Whether you run it locally or not makes little difference. The models were trained, training very likely involved scraping, and that continues to be a problem to this day. Not because of ethical concerns, but technical ones: a constant 100 req/sec 24/7, with waves of over 2.5k req/sec, may sound like little in this day and age, but at around 2.5k req/sec (sustained for about a week!), my cheap VPS's two vCPUs are bogged down just handling the TLS handshakes, let alone serving anything.
That is a cost many seem to forget. It takes bandwidth, CPU, and human effort to keep things online under the crawler DDoS - and often cold, hard cash too.
Ask Codeberg or LWN how they fare under crawler load, and imagine someone who just wants to have their stuff online having to deal with similar abuse.
That is the suffering you enable when using any LLM, even locally.
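(A rough sanity check on those numbers, assuming on the order of 1 ms of CPU time per full TLS handshake - a ballpark figure, not a benchmark from this thread:)

```python
# Why ~2.5k new TLS connections/sec can saturate two vCPUs. The
# per-handshake CPU cost is an assumed ballpark (RSA-2048 signing
# alone is on the order of 1 ms of CPU time), not a measurement.
HANDSHAKES_PER_SEC = 2500
CPU_MS_PER_HANDSHAKE = 1.0     # assumption
VCPUS = 2

cpu_seconds_per_second = HANDSHAKES_PER_SEC * CPU_MS_PER_HANDSHAKE / 1000.0
utilization = cpu_seconds_per_second / VCPUS
print(f"{utilization:.0%} of both vCPUs consumed by handshakes alone")  # 125%
```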
@algernon @pluralistic @clintruin @simonzerafa @tante
Ok, sure. But you won't stop that train by purity-testing other leftists. So what's the plan to stop OpenAI DDoSing all our blogs?
-
@tante The original strawman was somewhat acceptable ("if you refuse to use things touched by evil completely you can't use computers, so we all draw a line somewhere and this is where I draw mine" yadda yadda), but what follows is a pristine example of "man slips on a slippery slope, and slips a lot". :D
-
@tante It's why I'm (lately) preternaturally cautious around "very loud" voices. Part of their ability to formulate & disseminate their contrarian view is dismissing some views & focusing on core principles. If you don't keep counterbalancing voices close, it's extraordinarily easy for that methodology to abruptly send you far off the path.
See also: DHH
-
@tante These days, I generally follow people inclined to selectively & thoughtfully retweet folks like that rather than to drink from those particular fire hoses directly.
-
@tante People like Cory who mock others for their disabilities are not worth paying attention to.
@mallory @tante Reposting as the thread got broken: While I would not, on general principle, make fun of someone whose disability was incontinence, or who was disfigured or big, the lack of accountability leaves us with little besides giving back what that individual has sent out. I don't find that immoral, and I'll die on that hill. If someone spent their time and platform disparaging others for their appearance, disabilities, or weight, they absolutely can have that back, especially when said speaker is so physically repugnant to begin with. trump had no business judging others on any of those qualities, being so grotesque himself. I think the criticism of Cory's stance on LLMs is valid; picking at him for describing a man as he is is not.
-
@correl @skyfaller @FediThing @tante
> as you really haven't clarified that anyhow
I'm sorry, this is entirely wrong.
The fact that you didn't bother to read the source materials associated with this debate in no way obviates their existence.
I set out the specific use-case under discussion in a single paragraph in an open access document. There is no clearer way it could have been stated.
@pluralistic@mamot.fr @skyfaller@jawns.club @FediThing@social.chinwag.org @tante@tldr.nettime.org Again, this feels dismissive, and dodges the argument. The clarity I was referring to wasn't the use case you laid out (automated proofreading) or the platform (Ollama), but (as has been discussed at length through this thread of conversation) which models are being employed.
This entire conversation has been centered on how currently available models are not evil due to vague notions of who incepted the technology they're based upon, but due to the active harm employed in their creation.
To return to the discussion I'm attempting to have here, I find your fruits of the poisoned tree argument weak, particularly when you're invoking William Shockley (who most assuredly had no direct hand in the transistors installed in the hardware on my desk, nor in their component materials) as a counterpoint to the stolen work and egregious cost that are intrinsic to even the toy models out there. It reads to me as employing hyperbole and false equivalence defensively rather than focusing on why what you're comfortable using is, well, comfortable.
-
@tante Thank you for writing this; I feel like it's a necessary counterpoint. I definitely agree more with you than I do with Cory on this one. But I also feel like you're talking past each other a bit. The difficulty with Internet Discourse like this is that in order to avoid harassment we are often arguing with large, nebulous collectives rather than calling out individuals. I think that Cory is correct that the *line of argument* he is criticizing is in fact fallacious.
-
@pluralistic @simonzerafa @tante Not a judgment on your usage. Just an answer to the question.
-
@correl @skyfaller @FediThing @tante
Scraping work is categorically not "stealing."
-
parsing a doc uses as much juice as streaming a YouTube video and less juice than performing a gnarly transform on a hi-res image in the GIMP.
I measured.
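(One way a measurement like that can be made on Linux is via the kernel's RAPL energy counters under /sys/class/powercap; a minimal sketch, with a placeholder workload standing in for the actual document parse:)

```python
# Minimal sketch of measuring a task's energy cost on Linux via the
# Intel RAPL sysfs counters. Requires read access to powercap; the
# workload below is a placeholder, not the measurement described above.
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"          # package 0
RAPL_MAX = "/sys/class/powercap/intel-rapl:0/max_energy_range_uj"

def read_uj(path: str) -> int:
    with open(path) as f:
        return int(f.read())

def measure_joules(task) -> float:
    start = read_uj(RAPL)
    task()
    end = read_uj(RAPL)
    if end < start:                      # counter wrapped around
        end += read_uj(RAPL_MAX)
    return (end - start) / 1_000_000.0   # microjoules -> joules

if __name__ == "__main__":
    joules = measure_joules(lambda: time.sleep(5))  # placeholder workload
    print(f"{joules:.1f} J over 5 s")
```

Note this counts whole-package energy, so an idle baseline should be measured and subtracted for a fair comparison.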