falsehoods youtubers believe about "AI"
-
hearing gullible 20-somethings say "this technology DOES have good use-cases, like in medicine for example…"
is going to turn me into the fucking Joker
-
"But SnoopJ have you considered just not watching trash"
I mean, I *have* considered it
-
@SnoopJ I listened to the episode of This Machine Kills with Bruce Schneier and whoa at some of the claims Bruce made there, same sentiment.
-
@huronbikes no kidding? That's a shame, I really like his writing
-
@SnoopJ Should I ask which one this is?
-
@cthos nobody I'd ever heard of before so probably not coming for your fave
-
@SnoopJ Oh I have no favorites, but I do occasionally watch to see who's gushing about what these days.
Like, one (semifamous) dude gets early access to models and was recently gushing about how it "one shot" a cryptography capture-the-flag puzzle and I'm just waiting to see how quickly that claim falls apart.
-
@cthos ah no this was decidedly from the 'skeptic' corner of commentary.
Which are the ones that actually make me more annoyed because I don't like it when I agree with someone's fundamental premise but find their reasoning/claims to be undercooked.
The claim that "AI" could be great for medicine makes me want to tear my hair out and run screaming into the street, though.
-
@SnoopJ Like, are they conflating all kinds of Machine Learning or do they really think an LLM is gonna get the prescription right all the time?
-
@cthos most people in this bucket of annoyance are talking about "AI" writ large, so including models with vision capabilities, whether or not they have a language model bolted to the side.
It kinda makes no difference: the world pre-LLMs was rife with examples of people training vision models on *domain-specific* datasets, and suffering from the Clever Hans problem (e.g. learning that a photo of a mole on someone's skin is cancerous if a ruler is present in the photo, because the training data sucked)
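That ruler failure mode is easy to reproduce in miniature. Here's a toy sketch (every feature name and number below is fabricated for illustration): a training set where a spurious artifact tracks the label perfectly because of how the data was collected, and a lazy learner that latches onto it and then falls apart on unbiased data.

```python
# Toy sketch of the "Clever Hans" / shortcut-learning failure described above.
# Everything here is fabricated for illustration: the feature names
# ("irregularity", "ruler") and all the numbers are made up.
import random

random.seed(0)

def make_dataset(n, ruler_correlates):
    """Each sample: a feature dict plus a boolean 'malignant' label."""
    data = []
    for _ in range(n):
        malignant = random.random() < 0.5
        # Real (noisy but informative) signal.
        irregularity = (0.7 if malignant else 0.3) + random.uniform(-0.25, 0.25)
        # Spurious artifact: in the biased training set, a ruler appears
        # in the photo exactly when the lesion is malignant.
        ruler = malignant if ruler_correlates else random.random() < 0.5
        data.append(({"irregularity": irregularity,
                      "ruler": 1.0 if ruler else 0.0}, malignant))
    return data

def fit_single_feature(train):
    """Deliberately lazy learner: keep whichever single feature,
    thresholded at its mean, best separates the training labels."""
    best = None
    for feat in ("irregularity", "ruler"):
        vals = [x[feat] for x, _ in train]
        thresh = sum(vals) / len(vals)
        acc = sum((x[feat] > thresh) == y for x, y in train) / len(train)
        if best is None or acc > best[2]:
            best = (feat, thresh, acc)
    return best

train = make_dataset(500, ruler_correlates=True)   # biased collection process
test = make_dataset(500, ruler_correlates=False)   # deployment: no such bias
feat, thresh, train_acc = fit_single_feature(train)
test_acc = sum((x[feat] > thresh) == y for x, y in test) / len(test)
# The learner latches onto "ruler": perfect on the biased training set,
# roughly a coin flip once the artifact no longer tracks the disease.
print(feat, train_acc, round(test_acc, 2))
```

Nothing here is specific to skin-lesion photos; swap "ruler" for "patient was lying down" or "older X-ray machine" and it's the same failure.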
-
@SnoopJ they weren't the worst I've heard, and he did bring up the big security problem that data and instructions cannot be separated in a prompt, but eh, kind of some out-there claims.
-
@cthos there was a whole spate of these back when people cared about SARS-nCoV-2, a whole little cottage industry of bad guessing machines that would look at an x-ray and try to diagnose something
As I recall many of them suffered from effectively learning to tell whether or not the radiograph indicated that the patient was lying down, because that is a decent correlate…
I get where it comes from when it comes to Money Bastards. Radiography is expensive, every other part of medicine is expensive. They'd love to eliminate the humans and increase their profit margin.
But the indignity of Joe Sixpack carrying water for this position and not even getting paid for spreading lies? OOF.
-
@cthos it is a useful indicator of just how staggeringly ignorant the general public is about the risks, though.
A lot of people are eating up claims of "doubling the human life-span" and other such obvious lies.
-
@SnoopJ Does the protein folding one count as not a good use case?
-
@OliviaVespera I am not really "into" protein folding enough to hold opinions about that space, or tools that aim for it (e.g. AlphaFold)
I'm open to tools that accelerate the search over protein structures, especially since "it either conforms or it doesn't" and other clear criteria for goodness apply.
All the ones I've ever heard anybody say good things about predate the current craze for "AI" and I think it would be generous to assume this is what people mean when they talk about applications in medicine, but maybe some of them do think of this.
-
@OliviaVespera perhaps more importantly: the potential harms are much smaller than the "machine that diagnoses you" stuff, which is basically *made* out of harm.
-
@SnoopJ There's another one that I can't find lately but it was basically "machine predicts TB more often when the X-Ray machine is older because that is correlated with places with higher incidence of TB".
I'm pretty firmly in the camp of "machine learning not great for healthcare" in general, except for specific kinds of data crunching to expand or narrow what you might want to target for a therapeutic. But then you have the problem that all datasets are biased.
-
@cthos yea, the great thing about specialist models is that they SOLVE a specific problem
the bad thing about specialist models is that they solve a SPECIFIC problem (therefore you cannot get infinite growth out of just one)
-
@SnoopJ Ayup.
We could still probably have some utility on specialized models for, say, disease detection, if the output were strictly used as a "please check this again" mechanism rather than "check more faster go go go go go, also we fired all the techs". Not really the world we live in though.