Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture".
-
I'm not a liberal, I'm a leftist, so perhaps this is why I disagree with you.
The argument that "something is unethical because someone else used it in an unethical way" is so incoherent that it doesn't even rise to the level of debatability.
@pluralistic @FediThing @tante The argument that “The argument that “something is unethical because someone else used it in an unethical way” is so incoherent that it doesn’t even rise to the level of debatability.” doesn’t address what I’m saying here at all
again, pretty clear you don’t know what ethics are or how to be ethical in tech
-
@tante I like 'cyberphysical world'. It uses 'cyber' in a way that makes the term defensible again. Quite fun how words get mangled until they taste good again. 🙃
@wackJackle @tante
The term "cyber-physical" has been around for a while, independently of Cory:
https://de.wikipedia.org/wiki/Cyber-physisches_System
-
Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, doesn't align with his own actions and delegitimizes important political actions we need to make in order to build a better cyberphysical world.
EDIT: Discussions under this are fine, but I do not want this to turn into an ad hominem attack on Cory. Be fucking respectful
https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/
"Sometimes a belief or ethic you hold is so integral to you that you will not move. Sometimes they are held loosely enough to let go under certain conditions." 🔥🔥
-
@skyfaller @correl @FediThing @tante
That is completely backwards.
The entire point of measuring embodied emissions is to *make use of things that embody emissions*.
We improve old, energy inefficient buildings *because they represent embodied emissions* rather than building new, more efficient buildings because the *net* emissions of building a new, better building exceed the emissions associated with a remediated, older building.
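The embodied-versus-operational trade-off described above can be sketched with a toy calculation. Every figure below is a hypothetical round number for illustration, not real building data:

```python
# Toy break-even calculation: when does a new, efficient building "pay back"
# the extra embodied emissions of constructing it, versus retrofitting the
# old one? All values are illustrative assumptions in tonnes of CO2e.
NEW_BUILD_EMBODIED_T = 500   # assumed emissions to construct a new building
RETROFIT_EMBODIED_T = 50     # assumed emissions to remediate the old one
OLD_ANNUAL_T = 30            # assumed annual operational emissions, retrofitted old building
NEW_ANNUAL_T = 20            # assumed annual operational emissions, efficient new building

extra_upfront = NEW_BUILD_EMBODIED_T - RETROFIT_EMBODIED_T  # extra embodied cost of building new
annual_saving = OLD_ANNUAL_T - NEW_ANNUAL_T                 # operational savings per year
breakeven_years = extra_upfront / annual_saving
print(f"new build pays back its extra embodied emissions after {breakeven_years:.0f} years")
```

Under these made-up numbers the new building only breaks even after 45 years, which is the point being made: the embodied emissions of the existing stock usually dominate.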
@pluralistic You're missing my point. Old houses should be used, but if new houses are built using fossil fuels, then we can still cook ourselves by building them, even if the new buildings are fully electrified.
It feels like you're ignoring the context in which LLMs are still being created. It's ethically different to use something made by slaves when slavery is not yet in the past. If you golfed yesterday on a course maintained by prison labor, it matters that prisoners will clean it again tomorrow.
-
@tante questions for the leftists and liberals from a confused anarchist:
1. Do you think you can put the cat back in the bag with LLMs? How?
2. For those that believe that LLMs were trained on stolen data, what does it mean for data to be private, scarce property, that can be "stolen?"
3. What about models that just steal from the big boys, like the PRC ones? Theft from capitalists, surely ethical?
4. Will your not using any LLMs cause Sam Altman and friends to lose control of your country?
-
@pluralistic You're missing my point. Old houses should be used, but if new houses are built using fossil fuels, then we can still cook ourselves by building them, even if the new buildings are fully electrified.
It feels like you're ignoring the context in which LLMs are still being created. It's ethically different to use something made by slaves when slavery is not yet in the past. If you golfed yesterday on a course maintained by prison labor, it matters that prisoners will clean it again tomorrow.
I'm not ignoring that context, it is *entirely irrelevant*, because I am *not* using some prospective, as-yet-to-be-trained LLM to check punctuation on my laptop. I am using an *actual, existing* LLM.
So if your argument is, "If you did something that's not the thing you've done, that would be bad," my response is, "Perhaps that's true, but I have no idea why you would seek out a stranger to discuss that subject."
-
@osma@mas.to @tante@tldr.nettime.org It has debatable utility in some uses, but nowhere near enough to make the industry worth keeping around given the ethical concerns. The utility is effectively immaterial compared to the self-parody levels of evil on display from OpenAI and its ilk.
-
@elle @dhd6 @tante @simonzerafa
"You used the wrong open model because I don't like the company that made it" is the actual definition of nonsense purity culture.
-
@tante That’s a lot of big words! I just want to use tech without feeling guilty. But yeah, being respectful is always good—even online.
-
@correl @skyfaller @FediThing @tante
> as you really haven't clarified that anyhow
I'm sorry, this is entirely wrong.
The fact that you didn't bother to read the source materials associated with this debate in no way obviates their existence.
I set out the specific use-case under discussion in a single paragraph in an open access document. There is no clearer way it could have been stated.
@skyfaller @pluralistic @tante @FediThing @correl Chiming in to state that it's routine to re-state and re-re-state principles that get lost in long reads and long threads such as this one, where any late-comer needs to skim because of the tl;dr factor. There's a long-standing principle based on this phenomenon: tell them in short what you're saying, explain what you said, then tell them in summary what you said.
-
@tante @komali_2 Dear anarchist: the confusion you feel is due to the fact that the only people you see are "leftists" "liberals" and "anarchists". None of those are real everyday people. Because the political parties in the U.S. don't recognize real everyday people as voters, real everyday people are fed up and disgusted with partisan loyalties, especially and including anarchists.
-
@skyfaller @pluralistic @tante @FediThing @correl Chiming in to state that it's routine to re-state and re-re-state principles that get lost in long reads and long threads such as this one, where any late-comer needs to skim because of the tl;dr factor. There's a long standing principle based on this phenomenon: tell them in short what you're saying, explain what you said, then tell them in summary what you said.
@claralistensprechen3rd @skyfaller @tante @FediThing @correl
I don't know what this has to do with someone stating "you haven't clarified" something, when you have.
Also, I have reposted the paragraph in question TWICE this morning.
-
thanks for writing and sharing your mindful critique of this argument.
-
@Colman @FediThing @tante That's interesting. I've never wondered that about you.
@pluralistic capital murder on the timeline
-
@tante It is difficult to be respectful to someone who has built a reputation and then deliberately alienated a large share of the following that came with it. I will just say that when you're right about important issues as often as Doctorow has been, it is a very human thing to develop a certain arrogance. I hope he takes the critical feedback on board rather than digging in.
PS There are ample non-LLM technologies that do what he claims to use LLMs for. I would be interested to see his reasons for passing over those options.
@liquor_american @tante Agreed. The thing I don't get is embracing something so inefficient. Like, there are other, existing technologies that do the work he described *better*.
-
Should we ban the OED?
There is literally no way to study language itself without acquiring vast corpora of existing language, and no one in the history of scholarship has ever obtained permission to construct such a corpus.
@pluralistic I gave it a good thought, and you know what, I'm gonna argue that yes, for me there is a degree of unethical-ness to that lack of permission!
the things that make me not mind that so much are a variety of differences in method and scale:
(*btw just explaining my personal reasons here, not arguing yours)
- every word in the OED was painstakingly researched by human experts to make the most possible sense of it
- coming from a place of passion on the end of the linguists, no doubt
- the ownership of said data isn't "techno-feudal mega-corporations existing under a fascist regime"
- the OED didn't spell the end of human culture (heh) like LLMs very much might.
so yeah. I guess we do agree that, on some level, the OED and an LLM have something in common.
it's the differences in method and scale that make me draw the line somewhere in between them; in a different spot from where you may draw it.
and like @zenkat mentioned elsewhere, it's the whole thing around LLMs that makes me very wary of normalizing anything to do with it, and I concede I wouldn't mind your slightly unethical LLM spellchecker as much, if we didn't live in this horrible context. :)
I guess this has become a bit of a reconciliatory toot. agree to disagree on where we draw the line, to each their own, and all that.
-
@tante your writing is great. Thanks for articulating what I couldn't.
-
Beats me.
I thought Cory was supposed to be clever or something? I've blocked him for now. Not interested in banging my head against that particular lack of critical thinking.
Perhaps when the AI bubble bursts, he will become more rational.
> AI bubble bursts
For ~$30k I can run DeepSeek on a rig in my apartment, forever. That's pennies for a startup. How do you envision this bubble bursting such that LLMs are gone?
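As a rough sanity check on that claim, here is a back-of-the-envelope amortization of such a rig. Every figure (lifetime, tokens per second, utilization) is an illustrative assumption, not a benchmark of any real hardware or model:

```python
# Hypothetical amortized hardware cost per token for a local-inference rig.
# All figures are assumptions chosen for round-number illustration.
HARDWARE_COST_USD = 30_000  # one-time rig cost claimed in the thread
LIFETIME_YEARS = 5          # assumed useful life of the hardware
TOKENS_PER_SECOND = 20      # assumed generation speed for a large local model
UTILIZATION = 0.10          # assumed fraction of time actually generating

active_seconds = LIFETIME_YEARS * 365 * 24 * 3600 * UTILIZATION
total_tokens = active_seconds * TOKENS_PER_SECOND
usd_per_million_tokens = HARDWARE_COST_USD / (total_tokens / 1e6)
print(f"~${usd_per_million_tokens:.2f} per million tokens (hardware only)")
```

Electricity is excluded; the point of the sketch is only that a one-time purchase spreads to a small per-token cost, which is why local inference survives any pricing collapse among hosted providers.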
-
What is the incremental environmental damage created by running an existing LLM locally on your own laptop?
As to "90% bullshit" - as I wrote, the false positive rate for punctuation errors and typos from Ollama/Llama2 is about 50%, which is substantially better than, say, Google Docs' grammar checker.
@pluralistic @simonzerafa @tante I hate to dive into what is clearly a heated debate, but I want to add an answer to your question with a perspective that I think is missing: the power consumption for inference on your laptop is probably greater than in a datacenter. The latter is heavily incentivized to optimize power usage, since they charge by CPU usage or tokens, not watt-hours. (Power consumption != environmental damage exactly, but I have no idea how to estimate that part.)
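The laptop-versus-datacenter comparison above can be made concrete with a toy energy estimate. All power draws, timings, and the PUE overhead below are assumed round numbers, not measurements of any real deployment:

```python
# Illustrative energy-per-response comparison: local laptop inference vs a
# datacenter serving the same request. All values are hypothetical.
LAPTOP_WATTS = 60          # assumed laptop power draw during local inference
LAPTOP_SECONDS = 30        # assumed time to generate one response locally
DC_WATTS_PER_QUERY = 400   # assumed accelerator draw attributable to one query
DC_SECONDS = 3             # assumed time the datacenter spends on that query
DC_PUE = 1.2               # assumed power usage effectiveness (cooling etc.)

laptop_wh = LAPTOP_WATTS * LAPTOP_SECONDS / 3600
dc_wh = DC_WATTS_PER_QUERY * DC_SECONDS * DC_PUE / 3600
print(f"laptop: {laptop_wh:.2f} Wh, datacenter: {dc_wh:.2f} Wh")
```

Under these made-up numbers the two come out in the same ballpark: the datacenter hardware draws far more power but finishes far sooner, which is the optimization-incentive point being made. Real figures vary enormously with model size and hardware.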