RE: https://wandering.shop/@xgranade/115772870672213549
I could offer a similar "theorem" for replies to this toot:
• Bad-faith replies from AI boosters trying to pull a fast one.
• "Moderates" who believe at least some of the outlandish claims made by AI boosters, and who like to pretend that they are not also boosters.
• Reasonable people who are legitimately uninformed or confused by bullshit put out by AI boosters.
• Reasonable people angrily agreeing.
-
The broad synthesis, though, is that once you stop believing extraordinary claims made with underwhelming evidence or even no evidence at all, the whole tower of AI bullshit falls apart.
"But how does AI do $x?"
It doesn't. Even if you don't personally have the knowledge to conclude that LLMs and GANs can't do $x, from a simple burden of proof perspective, you don't have to assume that AIs *can* do $x just because someone really wants you to think that they can.
-
From there, "OK, so I haven't seen any evidence sufficient to conclude that AIs can do $x" immediately calls into question any product or service which claims to do $x with LLMs and GANs.
Are they just pushing spam and/or slop on you? Category I!
Is the booster in question lying and not actually doing $x? Category II!
Are they doing $x, but with conventional techniques that can do $x, and just calling it AI? Category III or IV!
-
Example: "But AIs can do math!"
First, no, they can't. But computers are great at doing math, just as long as you don't force it through an LLM funnel first.
So when you see ChatGPT boosters saying that it's "good at math," that's Category III, easy. There's no need to use LLMs for doing something that has been done better, cheaper, faster, and more ethically without AI.
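(A concrete illustration of that point, a hypothetical snippet rather than anything from the thread: ordinary code does arithmetic exactly and deterministically, with no model anywhere in the loop.)

```python
# Plain Python: exact, deterministic arithmetic. No LLM involved.
from decimal import Decimal

# Arbitrary-precision integers are built in; the answer is computed, not predicted.
print(2**521 - 1)  # a 157-digit integer, exact every time

# Exact decimal arithmetic, the kind of thing token prediction routinely fumbles.
print(Decimal("0.1") + Decimal("0.2"))  # prints 0.3, exactly
```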
-
Example: "But AIs can do math!"
First, no they can't. But computers are great at doing math, just as long as you don't force it through an LLM funnel first.
So when you see ChatGPT boosters saying that it's "good at math," that category III, easy. There's no need to use LLMs for doing something that has been done better, cheaper, faster, and more ethically without AI.
Example: "But AIs can have conversations with your books!"
First, no, they can't. Even if they could (they can't), that's firmly a Category I — a bad use case, whether or not it works.
-
Example: "But AIs can have conversations with your books!"
First, no, they can't. Even if they could (they can't), that's firmly a Category I — a bad use case, whether or not it works.
Example: "But AIs can tell me what changed between two legal documents!"
First, no, they can't. That's something that, at least as of yet, only humans can do. So at *best* that's a Category II, if you set aside the massive ethical problems with AI.
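(One hedged aside on the mechanical half of that claim: computing a textual diff between two documents has been a solved, deterministic problem for decades, so even that part needs no LLM; judging what a change *means* legally is the part only humans do. A minimal sketch, with sample clauses invented for illustration:)

```python
# Deterministic line-by-line comparison, in Python's standard library since 2.1.
import difflib

# Hypothetical sample clauses, invented for illustration.
old = ["The term of this agreement is one (1) year.",
       "Payment is due within 30 days of invoice."]
new = ["The term of this agreement is two (2) years.",
       "Payment is due within 30 days of invoice."]

# unified_diff yields exactly the changed lines plus context; no model, no guessing.
for line in difflib.unified_diff(old, new, fromfile="v1", tofile="v2", lineterm=""):
    print(line)
```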
-
"stop believing extraordinary claims made with underwhelming evidence" is fantastic advice in any context ❤️