Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture".
-
@tante I think the strawman indeed IS the issue: comparing (even if only through context) an LLM used for spell/grammar checking, where it is really insignificant whether it performs well or not, to general usability, framed as liberation and including critical tasks.
I don't detest AI because of the fascists who created most of it, but because they intentionally design and sell "tools" that are good at fascism and not much else of significance. A screwdriver with a grip that cuts the user.
@tante A screwdriver that only works on a small percentage of the screws it was designed for, thus "tools".
-
I'm not using it for spell checking.
Did you read the article that is under discussion?
@pluralistic I apologize; I did in fact read the relevant section of your post. I was using spell checking as shorthand for all typo checking, because deterministic grammar checkers have also existed for some time, although not as long as spell checkers and perhaps not as reliably. I understand that LLMs can catch some typos that deterministic solutions may not.
I just think we should put more effort into improving deterministic tools instead of giving up.
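For what it's worth, the deterministic tooling being defended here can be remarkably small. A minimal sketch in the spirit of Norvig's classic corrector, with a toy corpus standing in for real training text (all names here are illustrative):

```python
import re
from collections import Counter

# Toy corpus; a real checker would count words from a large text dump.
CORPUS = "the quick brown fox jumps over the lazy dog the dog barks"
WORDS = Counter(re.findall(r"[a-z]+", CORPUS.lower()))

LETTERS = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    """All strings one edit (delete, transpose, replace, insert) away."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in LETTERS]
    inserts = [l + c + r for l, r in splits for c in LETTERS]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    """Return the most frequent known word within one edit, else the input."""
    if word in WORDS:
        return word
    candidates = [w for w in edits1(word) if w in WORDS]
    return max(candidates, key=WORDS.get) if candidates else word

print(correct("teh"))  # -> "the"
```

Fully deterministic: the same input always yields the same correction, and the failure modes are inspectable.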
-
@tante It seems to me Doctorow is obviously correct about this. But I don't think it matters too much if you don't agree... the trajectory of LLMs is going to be whatever it is going to be.
If you don't like it and have buddies who don't like it either, that's not a bad thing, especially if you are experiencing real negative effects from it.
It's just if you stray from reality (whatever that will be) too far for too long, you will end up with a big shock when forced to rejoin it.
Don't mistake a hugely popular fad or bubble for "reality." And if you don't believe that "[nearly] everybody believes" can be quite detached from punishingly harsh reality, then you need to read about the "Tulip Mania" craze and bubble:
-
Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, doesn't align with his own actions and delegitimizes important political actions we need to make in order to build a better cyberphysical world.
EDIT: Discussions under this are fine, but I do not want this to turn into ad hominem attacks on Cory. Be fucking respectful
https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/
I completely agree with your view on us being messy, imperfect beings. And while many take such a realization as a free ticket to shrug themselves into deep cynicism, I deeply appreciate people who tend to try a little harder than most to do the right thing, and own every compromise they decide to make as what it is.
Once we start warping our analysis and critical thinking to match our actions instead of trying our best to make our actions fit the former, we'll quickly start losing any ability to act with accountability.
-
And likewise, don't mistake "mainstream thinking" or what "most of the industry is doing" for "reality" or even "best practice." Agile, Lean, Total Quality Management, and practically every other significant improvement began as a break from "the usual way of doing things." Improvement is a change from the mediocre.
"Appeal to Popularity" (as a signal of truth) is literally a well-documented logical fallacy:
-
Hmmmm... How about this perspective?
An LLM is just a programming technique. The ethics of using LLMs depend on the type of use and on the source of the data the model was trained on.
Using LLMs to search the universe for dark matter in survey telescope data, or to identify drug efficacy in anonymized public health records, is simply using the latest technology for a good purpose. Cory's use seems like this.
LLMs trained on stolen data, creating derivative work? That's just theft.
-
@tante Dunno where you got the idea that I have a "libertarian" background. I was raised by Trotskyists, am a member of the DSA, am advising and have endorsed Avi Lewis, and joined the UK Greens to back Polanski.
@pluralistic@mamot.fr
Well, we are not only influenced by our legacy: however strong we are, we can't avoid some fundamental influence from the hegemonic culture we live in.
Yet I see how the ethical misalignment here may not be about libertarian values but about utilitarian ones.
Even more subtly, it might be a misalignment between their respective utility functions, with both #pluralistic and @tante@tldr.nettime.org adopting a utilitarian framework instead of a normative one.
For example, Pluralistic's use of a local LLM might be explained by a slightly higher evaluation of the benefit his own writing brings to society, and thus (indirectly) of the value the LLM brings, despite its issues.
On the other hand, Tante might put far more weight on the political harm Cory's words did by dismissing a political choice as irrational when it is entirely rational: in a way, by justifying the use of a #LLM, #Doctorow justified (even if just a little bit) the industry that built it.
And since Pluralistic's strawman is centered on a normative "purity culture" dismissed as irrational, Tante framed his response around rationality.
What if a normative behaviour were in fact totally rational in the presence of irreducible complexity and informational asymmetry?
I don't use LLMs, for so many technical and political reasons that it would take hours to list them. And you would both almost certainly nod along to most of them as strictly rational arguments.
Yet the choice itself, bound to the society I want to build for my daughters and children, is normative: based on the values of truth, freedom and communion.
None of these could ever come from the LLM we are talking about: they are weapons designed to fool people (Turing test included!), so there's no way to wield them to benefit people.
As for "purity culture", I'm a catholic #christian, not a puritan: we brag about the #Church being a casta meretrix (Latin for something like "a pure bitch" 🤣), and we preach a man who hanged with the worst sinners and sometimes even hacking the law to save their lifes, so... 🤷♂️
-
@tante Since I assume all the #Epstein documents have been scraped into all the LLM models by now, I'd love to see an example of LLM tech being used for good.
Show me the list of Epstein co-conspirators.
Show me names of who helped them escape accountability, and how they did it.
Show me who raped children. Their names, addresses, passport photos.
Then I will believe LLMs and "AI" have delivered a benefit.
-
No, this is just more "fruit of the poisoned tree," and your argument that your particular fruit of the poisoned tree doesn't count is the normal special pleading that this argument always decays into.
@pluralistic Sorry, I'm just not good at making a point. To me, the "forbidden fruit" is not "LLMs" as such, but "using an LLM for certain purposes". I think there are actually use cases for stochastic inference machines (like folding proteins or structuring references), but, as @tante wrote (or rather: as I understand him), there are use cases that one can very much reject in their entirety. And that should be okay.
-
@FediThing I think the problem in the discourse is the overwhelming number of people experiencing anti-AI rage.
On the topic of LLMs, the two loudest groups by a wide margin are:
1. People who refuse to see any nuance or detail in the topic, and who cannot be appeased by anything other than the complete and total end of all machine learning technologies
2. AI tech bros who think they're only moments away from awakening their own personal machine god
I like to think I'm in the same camp as @pluralistic: that there's plenty of valid use for the technology, and that the problems aren't intrinsic to the technology but lie purely in how it's abused.
But when those two groups dominate the discussions, it means that people can't even conceive that we might be talking about something slightly different than what they're thinking.
Cory explicitly said at the outset that he was using a local offline LLM to check his punctuation... and all of this hate you see right here erupted. If you read through the other comment threads, people are barely even reading his responses before piling more hate on him.
And if someone as great with language as Cory can't put it in a way that won't get this response... I think that says a lot.
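For what it's worth, the kind of local, offline check Cory describes needs no cloud service at all. Here's a minimal sketch, not his actual setup: it assumes an Ollama server on localhost exposing its OpenAI-compatible API, and the model name and prompt are illustrative.

```python
# Sketch: offline punctuation/typo check against a locally hosted model.
# Assumes `ollama serve` is running and a small model has been pulled;
# the model name below is an illustrative assumption, not Cory's setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

draft = "Its a truth universally acknowledged, that typos sneak in"

resp = client.chat.completions.create(
    model="llama3.2",  # whichever model is installed locally
    messages=[
        {"role": "system",
         "content": "Report any punctuation or typo problems in the text. "
                    "Do not rewrite it; only list the issues."},
        {"role": "user", "content": draft},
    ],
)
print(resp.choices[0].message.content)
```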
-
I never denied the existence of "use-cases that...one can reject in its entirety."
-
enshittification of pluralistic
-
@mastodonmigration tagging @pluralistic because this is a good line of discussion and he might need the breath of fresh air you're bringing.
My own two cents: you're missing one of the big complaints in the form of "how they were trained", namely the environmental impact angle. Not that it isn't addressed by Cory's use case, just a missing point in the conversation that's helpful to include.
The "stolen data" rabbit hole is sadly a neverending one that digs into deep issues that predate LLMs, like the ethics of copyright (which is an actual discussion, just one so old that it's forgotten in a time when copyright is taken for granted). Using it to create "art", and especially using it to replace artists' jobs, is however a much clearer argument.
Nitpick: LLMs can't be used for checking drug efficacy or surveying telescope data; I think in this line you're confusing LLMs with the technology they're based on, which is machine learning.
-
@tante Cory is, at his heart, a conservative/liberal USian, which puts him far to the right of mainstream European thought and politics.
He constantly refuses to apply his beliefs to underlying structures, arguing that AI or enshittification are aberrations in capitalism, and refusing to acknowledge (and blocking) anyone who argues that it's just capitalism acting as intended.
It doesn't surprise me at all that he's acting hypocritically here.
@dgold @tante I'd like to ask your opinion on the policies of the candidate that Doctorow endorsed in the NDP (Canada's most progressive federal party) leadership election: https://lewisforleader.ca/ideas
This is a genuine question. I'm not very familiar with European politics, but Lewis aligns strongly with my (again, North American) perception of what a progressive party should be like. I think Doctorow's endorsement of Lewis rejects the idea that he's far right, even in the context of European politics.
-
I think the big issue is the combination of GenAI and LLMs.
GenAI by itself was a fun toy which would generate entertaining nonsense.
LLMs by themselves are effectively just a data classification technique for text. This can be used in a lot of ways. For some reason, the way everyone in any kind of power is pushing is "generate a bunch of plausible-sounding text", but LLMs can also be used as a basis for semantic search or, as mentioned elsewhere, grammar and spell checking.
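As a rough illustration of the semantic-search direction, here is a sketch assuming the sentence-transformers package and its small all-MiniLM-L6-v2 embedding model (the documents and query are made up):

```python
# Semantic search sketch: rank documents by cosine similarity between
# transformer embeddings rather than by keyword overlap.
# Assumes `pip install sentence-transformers`.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

docs = [
    "How to fix a leaking kitchen tap",
    "A recipe for sourdough bread",
    "Replacing washers in bathroom faucets",
]
query = "repair a dripping faucet"

# Encode into fixed-size vectors, normalized so dot product == cosine similarity.
doc_vecs = model.encode(docs, normalize_embeddings=True)
query_vec = model.encode(query, normalize_embeddings=True)

scores = doc_vecs @ query_vec
for score, doc in sorted(zip(scores, docs), reverse=True):
    print(f"{score:.3f}  {doc}")  # faucet/tap docs rank above the recipe
```

Note that nothing here generates text: the model is used purely as a classifier/ranker, which is the point being made above.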
-
@tante If you link to an academic paper as support for your argument, I will download that academic paper. This is simply nature taking its course.
-
"Artifacts and technologies have certain logics built into their structure that do require certain arrangements around them or that bring forward certain arrangements… Understanding this you cannot take any technology and 'make it good.'"
-
@skyfaller that is a better argument and I'll definitely accept that.
I think for many of us, myself included, the big thing with AI is the investment bubble. Users aren't making that much difference to the bubble; the people propping it up are the same people creating the problems.
I know I harp on people about anti-AI rage myself, but I specifically harp on people who are overbroad in that rage. So many people dismiss the idea that there are valid use cases for AI in the first place and demonize people who are using it to improve their lives... people who can be encouraged to move to more ethical platforms now, and who will move anyway when the bubble bursts.
We honestly don't need public pressure to end the biggest abuses of AI, because it's not public interest that's fueling them... it's investors believing AI techbros. Eventually they're going to wise up and realize there's literally zero return on their investment, and we're going to have a truly terrifying economic crash.
It's a lot like the dot-com bubble... but drastically worse.
@skyfaller Added detail: much of the perceived popularity of AI is propped up and manufactured.
We're all aware of how we're being force-fed AI tools left and right... and the presence of those tools is where much of the perceived popularity comes from.
Like Google force-feeding AI results in its search, then touting how people are actively using and engaging with its AI.
There's a great post I saw, that sadly I can't easily find, that highlights the cycle where business leaders tout that they'll integrate AI to make things look good to the shareholders. They then roll out AI, and when people don't use it they start forcing people to use it. They then turn around and report to the shareholders that people are using the AI and they're going to integrate even more AI!
Once the bubble pops, we'll stop getting force-fed AI, and it will scale back to the places where people actually want to use it and where it actually works.