Piero Bosio Social Web – Personal Fediverso Site

A social forum federated with the rest of the world. Instances don't matter, people do.

Shiri Bailem

@shiri@foggyminds.com
About
Posts: 11 · Topics: 0 · Shares: 0 · Groups: 0 · Followers: 0 · Following: 0


Posts


  • Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture".
    shiri@foggyminds.com

    @mastodonmigration
    it's the "copyright" issue, the outlook that unless everyone who posted anything that was used receives a check for a hefty sum then it's unethical.

    Copyright is in quotes because it's not really a violation of copyright (the LLMs are not producing whole copies of copyrighted materials without basically being forced), nor is it a violation of the intent of copyright (people are confused; copyright was never intended to give artists total control, it's just to ensure new art continues to be created).

    @pluralistic @reflex @tante

    Uncategorized

  • Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture".
    shiri@foggyminds.com

    @FediThing The link in question is where he talked about it and did explicitly say it; though he didn't use the "offline" label specifically, he basically described it as such. (The label itself isn't purely self-explanatory, so it wouldn't have helped much.)

    Here's the article link: pluralistic.net/2026/02/19/now…

    On Friendica, the thumbnail of the page is what I've attached here; incidentally, it's the key paragraph in question.

    @tante

    Uncategorized

  • Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture".
    shiri@foggyminds.com

    @skyfaller Added detail: much of the perceived popularity of AI is propped up and manufactured.

    We're all aware of how we're being force-fed AI tools left and right... and the presence of those tools is much of where the perceived popularity comes from.

    Like Google force-feeding AI results in its search and then touting people actively using and engaging with its AI.

    There's a great post I saw, which sadly I can't easily find, that highlights the cycle: business leaders tout that they'll integrate AI to look good to shareholders. They then roll out AI, and when people don't use it, they start forcing people to use it. Then they turn around and report to the shareholders that people are using the AI and that they're going to integrate even more of it!

    Once the bubble pops, we stop getting force-fed AI and it starts scaling back to the places where people actually want to use it and it actually works.

    Uncategorized

  • Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture".
    shiri@foggyminds.com

    @mastodonmigration tagging @pluralistic because this is a good line of discussion and he might need the breath of fresh air you're bringing.

    My own two cents: you're missing one of the big complaints in the form of "how they were trained," which is the environmental impact angle. Not that it isn't addressed by Cory's use case; it's just a missing point in the conversation that's helpful to include.

    The "stolen data" rabbit hole is sadly a neverending one that digs into deep issues that predate LLMs. Like the ethics of copyright (which is an actual discussion, just so old that it's forgotten in a time when copyright is taken for granted). Using it to create "art" and especially using it to replace artist jobs is however a much much more clear argument.

    Nitpick: LLMs can't be used for checking drug efficacy or surveying telescope data; I think in this line you're confusing LLMs with the technology they're based on, which is machine learning.

    @tante

    Uncategorized

  • Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture".
    shiri@foggyminds.com

    @FediThing I think the problem in the discourse is the overwhelming number of people experiencing anti-AI rage.

    On the topic of LLMs, the two loudest groups by a wide margin are:
    1. People who refuse to see any nuance or detail in the topic, who cannot be appeased by anything other than the complete and total end of all machine learning technologies
    2. AI tech bros who think they're only moments away from awakening their own personal machine god

    I like to think I'm in the same camp as @pluralistic , that there are plenty of valid uses for the technology and the problems aren't intrinsic to it but purely in how it's abused.

    But when those two groups dominate the discussions, it means that people can't even conceive that we might be talking about something slightly different than what they're thinking.

    Cory in the beginning explicitly said they were using a local offline LLM to check their punctuation... and all of this hate you see right here erupted. If you read through the other comment threads, people are barely even reading his responses before lumping more hate on him.

    And if someone as great with language as Cory can't put it in a way that won't get this response... I think that says a lot.

    @tante

    Uncategorized

  • Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture".
    shiri@foggyminds.com

    @skyfaller that is a better argument and I'll definitely accept that.

    I think for many of us, myself included, the big thing with AI there is the investment bubble. Users aren't making that much difference to the bubble; the people propping it up are the same people creating the problems.

    I know I harp on people about anti-AI rage myself, but I specifically harp on people who are overbroad in that rage. So many people dismiss that there are valid use cases for AI in the first place; they demonize people who are using it to improve their lives... people who can be encouraged now to move on to more ethical platforms, and who will move anyway when the bubble bursts.

    We honestly don't need public pressure to end the biggest abuses of AI, because it's not public interest that's fueling them... it's investors believing AI techbros. Eventually they're going to wise up and realize there's literally zero return on their investment, and we're going to have a truly terrifying economic crash.

    It's a lot like the dot-com bubble... but drastically worse.

    Uncategorized

  • Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture".
    shiri@foggyminds.com

    @skyfaller Funny thing there... a frozen in time LLM doesn't really lose that much functionality. Most good uses of LLMs don't rely on timely knowledge.

    For instance, @pluralistic 's use case is checking punctuation and grammar, so an LLM only loses functionality there at the rate grammar fundamentally changes... which is glacial.

    Also, not all local LLMs are crawler-based. For instance, for training on Wikipedia data to have more recent and accurate knowledge, Wikipedia offers a BitTorrent download of the whole site's contents.

    The ones creating problems with crawlers are the ones I'm certain Cory will agree are a problem: the big companies that are constantly throwing more and more data at their models in the drive for increasingly small improvements, because that's the only way they have to compete for investors.

    @tante @FediThing

    Uncategorized

  • Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture".
    shiri@foggyminds.com

    @kel It sounds like your respect is rooted only in someone agreeing with you. If you respected them, you'd maybe take a minute to listen to their arguments and ask yourself why they might disagree with you.

    Namely, the fact that you don't understand how "using these products creates further demand" doesn't relate to their arguments at all.

    @pluralistic @simonzerafa @tante

    Uncategorized

  • Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture".
    shiri@foggyminds.com
    @pluralistic @Colman @FediThing @tante
    Uncategorized

  • Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture".
    shiri@foggyminds.com

    @skyfaller I think you should be able to answer these questions yourself, but you're clearly struggling...

    On your mink fur argument: the one ethical way to wear something like that is to only purchase used and old. The harm is done regardless of whether you purchase it; you don't increase demand, because your refusal to purchase anything new or recent means there's no profit in it. (This argument is also flawed because it assumes local LLMs are made for profit, when no profit is made on them.)

    And on your Luddite argument: When someone is using a machine to further oppress workers, the issue is not the machine but the person using it. You attack the machine to deprive them of it. But when an individual is using a completely separate instance of the machine, contributing nothing to those who are using the machine to abuse people... attacking them is simply attacking the worker.

    @tante @FediThing @pluralistic

    Uncategorized

  • Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture".
    shiri@foggyminds.com

    @pluralistic I'd be disappointed if I didn't see myself in the pattern of engaging with people on a post like this who are worlds away from having a fair discussion...

    They literally can't see the reality of AI beyond their arguments; they've decided it's inherently evil and wrong and locked themselves into their viewpoint.

    So their "russian roulette every day for hours" is because, despite you saying what you use it for, they can't comprehend how it can be used outside of the worst possible use cases.

    Same reason they're accusing you of being a libertarian, but that's already the purity culture you were originally calling out.

    @simonzerafa @raymaccarthy @tante

    Uncategorized