I offer Cassandra's Complete Class Theorem¹.
All "good use cases for AI" break down into one of four categories:
I. Bad use case.
II. Good use case, but not something AI can do.
III. Good use case, but it's already been done without AI.
IV. Not AI in the LLMs and GANs sense, but in the sense of machine learning, statistical inference, or other similarly valid things AI boosters use to shield from critique.
https://wandering.shop/@xgranade/115766140296237983
___
¹Not actually a theorem.
-
@xgranade the drum I'm banging is that there is no use case that could be valid enough, even if you were wrong, to justify its harms, so this conversation is ultimately irrelevant
-
This is a very good point, and something I need to keep at the front of my mind. Even if there were somehow a magic use case for LLMs and other AI bullshit machines, that would not justify:
• Making fascists rich.
• Displacing labor rights.
• Fucking over the environment.
• Enclosing culture behind corporate ownership.
• Giving fascists a giant disinfo machine.
-
So like, yeah, I'm gonna shitpost about how laughably bad the pro-AI position is on the technical merits, but the inhumanity of the pro-AI position on a moral basis is no laughing matter.
-
@codinghorror @zkat That's a fine instinct when approaching people acting in good faith, but it's very badly misplaced when it comes to AI. I can't speak for @zkat, but as for myself, I am perfectly content to draw a pretty bright damned line between "people advocating or making excuses for anti-human AI bullshit" and "people who I'm willing to extend the assumption of good faith to."
-