Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture".
-
@Colman @FediThing @tante That's interesting. I've never wondered that about you.
-
Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, that it doesn't align with his own actions, and that it delegitimizes important political action we need to take in order to build a better cyberphysical world.
EDIT: Discussions under this are fine, but I do not want this to turn into an ad hominem attack on Cory. Be fucking respectful.
https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/
@tante cory is, at his heart, a conservative/liberal USian, putting him far to the right of mainstream European thought and politics.
He constantly refuses to apply his beliefs to underlying structures, arguing that AI or enshittification are aberrations of capitalism, and refusing to acknowledge (and blocking) anyone who argues that it's just capitalism acting as intended.
It doesn't surprise me at all that he's acting hypocritically here.
-
This is a "fruit of the poisoned tree" argument.
Suppose you use a computer to post to Mastodon, despite the fact that silicon transistors were invented by the eugenicist William Shockley, who spent his Nobel money offering bribes to women of color to be sterilized?
Suppose you sent that Mastodon post on a packet-switched network, despite the fact that this technology was invented by the war criminals at the RAND Corporation?
@pluralistic I don't think mink fur or LLMs are comparable to criticizing the origins of the internet or transistors. It's the process that produced mink fur and LLMs that is destructive, not merely that it's made by bad people.
For example, LLM crawlers regularly take down independent websites like Codeberg, DDoSing them and threatening the small web. You may say "but my LLM is frozen in time, it's not part of that scraping now", but it would not remain useful without updates.
-
Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, that it doesn't align with his own actions, and that it delegitimizes important political action we need to take in order to build a better cyberphysical world.
EDIT: Discussions under this are fine, but I do not want this to turn into an ad hominem attack on Cory. Be fucking respectful.
https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/
@tante Frankly, I don't think there are any ethical concerns with how he's using it.
The reason AI is a violation when it trains on openly available data and then outputs similar stuff is that it's creating derivative works. Something that reads everything produced by man and then uses that information to score similar output does NOT. It's completely fair use and it's a GOOD application of AI.
If it weren't for all the evil crap the people who made it are up to, there'd be no concern here at all.
-
I am astonished that I have to explain this,
but very simply in words even a small child could understand:
using these products *creates further demand*
- surely you know this?
Well, either you know this and are being facetious, or you are a lot stupider than I ever thought possible for someone with your privilege and resources.
I am absolutely floored at this reveal, just wow, "where's Cory and what have you done with him?" 🤷
Massive loss of respect!
@kel it sounds like your respect is rooted only in someone agreeing with you. If you respected them you'd maybe take a minute to listen to their arguments and ask yourself more about why they might disagree with you.
Namely, you might notice that "using these products creates further demand" doesn't relate to their arguments at all.
-
@pluralistic I don't think mink fur or LLMs are comparable to criticizing the origins of the internet or transistors. It's the process that produced mink fur and LLMs that is destructive, not merely that it's made by bad people.
For example, LLM crawlers regularly take down independent websites like Codeberg, DDoSing them and threatening the small web. You may say "but my LLM is frozen in time, it's not part of that scraping now", but it would not remain useful without updates.
No. Literally the same LLM that currently finds punctuation errors will continue to do so. I'm not inventing novel forms of punctuation error that I need an updated LLM to discover.
-
@FediThing @tante This is the use-case that is under discussion.
-
Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, that it doesn't align with his own actions, and that it delegitimizes important political action we need to take in order to build a better cyberphysical world.
EDIT: Discussions under this are fine, but I do not want this to turn into an ad hominem attack on Cory. Be fucking respectful.
https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/
@tante Well, I mean, he's wrong, so there's that.
-
Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, that it doesn't align with his own actions, and that it delegitimizes important political action we need to take in order to build a better cyberphysical world.
EDIT: Discussions under this are fine, but I do not want this to turn into an ad hominem attack on Cory. Be fucking respectful.
https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/
@tante I will point out that I don't think Cory is erecting a strawman; I think he's making a focused argument.
LLMs are a _big_ topic, and there are so many different ways folks are using them. Some folks _are_ opposed to any use of an LLM for the reasons he describes; I have heard these arguments. I think Cory is pushing back on this specific argument, and I think he's trying to point out that we can still try to find what is useful amidst what is problematic, and then use it on our own terms.
I disagree with how you seem to have read his position here.
-
Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, that it doesn't align with his own actions, and that it delegitimizes important political action we need to take in order to build a better cyberphysical world.
EDIT: Discussions under this are fine, but I do not want this to turn into an ad hominem attack on Cory. Be fucking respectful.
https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/
@tante thank you.
-
@pluralistic I'd be disappointed if I didn't see myself in the pattern of engaging with people on a post like this who are worlds away from having a fair discussion...
They literally can't see the reality of AI beyond their arguments; they've decided it's inherently evil and wrong and have locked themselves into that viewpoint.
So their "Russian roulette every day for hours" is because, despite you saying what you use it for, they can't comprehend how it can be used outside of the worst possible use cases.
Same reason they're accusing you of being a libertarian, but that's already the purity culture you were originally calling out.
And this is one of the reasons I've struggled with staying on Mastodon/Fedi, and come and go often.
There's this super hardcore fanaticism, not just about LLMs/AI but about other topics as well, and if a person puts one toe over the line, they are eviscerated.
At some point it becomes hard to really engage with people when you have to be careful not to go against the grain. I don't have a thick enough skin to handle people berating me for not thinking exactly like them.
-
@pluralistic I don't think mink fur or LLMs are comparable to criticizing the origins of the internet or transistors. It's the process that produced mink fur and LLMs that is destructive, not merely that it's made by bad people.
For example, LLM crawlers regularly take down independent websites like Codeberg, DDoSing them and threatening the small web. You may say "but my LLM is frozen in time, it's not part of that scraping now", but it would not remain useful without updates.
@skyfaller Funny thing there... a frozen-in-time LLM doesn't really lose that much functionality. Most good uses of LLMs don't rely on timely knowledge.
For instance, @pluralistic 's use case is checking punctuation and grammar. So an LLM only loses functionality there at the rate grammar fundamentally changes... which is glacial.
Also, not all local LLMs are crawler-based. Wikipedia, for instance, offers a BitTorrent download of its entire contents so that models can train on recent, accurate data without crawling.
The ones creating problems with crawlers are the ones I'm certain Cory will agree are a problem: the big companies constantly throwing more and more data at their models, chasing increasingly small improvements, because that's the only way they have to compete for investors.
-
@skyfaller I think you should be able to answer these questions yourself, but clearly are struggling...
On your mink fur argument: the one ethical way to wear something like that is to only purchase used and old. The harm is done regardless of whether you purchase; you don't increase demand, because your refusal to buy new or recent means there's no profit in it. (This argument is also flawed in that it assumes local LLMs are made for profit, when no profit is made on them.)
And on your Luddite argument: When someone is using a machine to further oppress workers, the issue is not the machine but the person using it. You attack the machine to deprive them of it. But when an individual is using a completely separate instance of the machine, contributing nothing to those who are using the machine to abuse people... attacking them is simply attacking the worker.
@shiri A used mink coat may not give money directly to mink farmers/killers, but wearing mink fur sends a message about the acceptability of mink. The average passerby can't tell if the mink was bought new. If you walk down the street and there are 10 new-mink wearers, the 11th "ethical" mink wearer adds to the message that mink farming is fine, unless they are constantly screaming "this is used mink!", which is strange and obnoxious.
-
Which parts of running a model on your own laptop are implicated in "destroying the planet?" How is checking punctuation "stealing labor?" Or, for that matter "giving power over knowledge to LLM owners?"
@pluralistic I'd start with the part that the model probably came pre-trained. Or was it trained by you on your laptop...? @FediThing @tante
-
@pluralistic I'd start with the part that the model probably came pre-trained. Or was it trained by you on your laptop...? @FediThing @tante
This is a purity culture argument about the "fruit of the poisoned tree." The silicon in your laptop was invented by a eugenicist. The network your packets transit was invented by war criminals. The satellite the signal travels on was launched on a rocket descended from Nazi designs that were built by death-camp slaves.
-
This is a purity culture argument about the "fruit of the poisoned tree." The silicon in your laptop was invented by a eugenicist. The network your packets transit was invented by war criminals. The satellite the signal travels on was launched on a rocket descended from Nazi designs that were built by death-camp slaves.
To be clear, I completely reject this argument as a form of special pleading. Everyone has a reason why *their* fruit of the poisoned tree is OK, but other peoples' fruit of the poisoned tree is immoral.
-
No. Literally the same LLM that currently finds punctuation errors will continue to do so. I'm not inventing novel forms of punctuation error that I need an updated LLM to discover.
@pluralistic Ok, fair enough, if spell checking is literally the only thing you use LLMs for.
I still think you wouldn't rely on a 1950s dictionary for checking modern language, and language moves faster on the internet, but I'm willing to concede that point.
I still think a deterministic spell checker could have done the job and not put you in this weird position of defending a technology with wide-reaching negative effects. But I guess your post was for just that purpose.
-
This is a purity culture argument about the "fruit of the poisoned tree." The silicon in your laptop was invented by a eugenicist. The network your packets transit was invented by war criminals. The satellite the signal travels on was launched on a rocket descended from Nazi designs that were built by death-camp slaves.
@pluralistic I guess this misses the point: the particular chip in my laptop wasn't made by war criminals (I hope...), but the model you do use was trained at the cost of vast amounts of energy and water. I'm not sure this is completely comparable, tbh.
-
Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, that it doesn't align with his own actions, and that it delegitimizes important political action we need to take in order to build a better cyberphysical world.
EDIT: Discussions under this are fine, but I do not want this to turn into an ad hominem attack on Cory. Be fucking respectful.
https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/
@tante I am pursuing what I am calling my "AI Tea Party". Its origin was in putting family birth certificates into a self-hosted tool and realizing I didn't know whether that meant they would end up in a training set somewhere. That began a process of purging my direct links to any LLM. The next step is switching to tools that do not use any LLM in their operation. After that comes switching away from suppliers of anything who use LLMs to operate.
This is a Sisyphean task since these round to **fucking everything** but I'm motivated to pursue it anyway.
-
@shiri A used mink coat may not give money directly to mink farmers/killers, but wearing mink fur sends a message about the acceptability of mink. The average passerby can't tell if the mink was bought new. If you walk down the street and there are 10 new-mink wearers, the 11th "ethical" mink wearer adds to the message that mink farming is fine, unless they are constantly screaming "this is used mink!", which is strange and obnoxious.
@skyfaller that is a better argument and I'll definitely accept that.
I think for many of us, myself included, the big thing with AI is the investment bubble. Users aren't making that much difference to the bubble; the people propping it up are the same people creating the problems.
I know I harp on people about anti-AI rage myself, but I specifically harp on people who are overbroad in that rage. So many people dismiss the idea that there are valid use cases for AI in the first place, and they demonize people who are using it to improve their lives... people who can be encouraged to move to more ethical platforms now, and who will move anyway when the bubble bursts.
We honestly don't need public pressure to end the biggest abuses of AI, because it's not public interest that's fueling them... it's investors believing AI techbros. Eventually they're going to wise up and realize there's literally zero return on their investment, and we're going to have a truly terrifying economic crash.
It's a lot like the dot-com bubble... but drastically worse.