@lokeloski Interesting I've also observed a dynamic where if someone - most commonly programmers - sees an LLM producing output that passes initial inspection or does something that they would consider a mark of human-level competence, there's a chance that they're completely beguiled by it and conclude from that point on that LLMs are now basically operating at approximately that competence level across *all fields*. The psychodynamics of it are really alarming and, clearly, socially corrosive.
-
This is why CEOs assume it can do everything, because they don't know how to do anything.
-
@lokeloski and CEOs think it can replace everything. I wonder why...
-
@lokeloski
I recently went to an opera where the composer was not only present but also performing as one of the soloists, among five other vocalists, along with a men's choir, accompanied by a full orchestra. The backdrop to this rich contribution to human musical art was AI visuals projected onto a screen.
-
@lokeloski
alt-text screenshot of post by magicmooshka from Jan 7:
recently my friend's comics professor told her that it's acceptable to use gen AI for script-writing but not for art, since a machine can't generate meaningful artistic work. meanwhile, my sister's screenwriting professor said that they can use gen AI for concept art and visualization, but that it won't be able to generate script that's any good. and at my job, 1/2
it seems like each department says that AI can be useful in every field except the one that they know best.
it's only ever the jobs we're unfamiliar with that we assume can be replaced with automation.
The more attuned we are to certain processes, crafts and occupations, the more we realize that gen AI will never be able to provide a suitable replacement. The case for its existence relies on our ignorance of the work and skill required to be everything we don't. 2/2
-
strange conclusions by those professors,
in my mind it works differently:
when I see #salami output in a field where I'm an expert, judge it to be inferior, and conclude that so-marketed gen AI will not be competition, I would conclude this is valid for a l l other fields as well, and that I, as a dilettante in all other fields, can be tricked into accepting generated output as valid.
-
the reason the mentioned professors and departments are willing to accept #salami output in fields other than their own may be the years of propaganda, on almost every channel, claiming that we live in an AI era and that this is t h e new technology to use; we see huge investments and think no one would just burn money on such a flawed, dysfunctional, slop-generating "invention", so it must work.
-
@lokeloski Gell-Mann Amnesia.
-
It's the Gell-Mann amnesia effect all over again.
-----------
The Gell-Mann amnesia effect is a claimed cognitive bias describing the tendency of individuals to critically assess media reports in a domain they are knowledgeable about, yet continue to trust reporting in other areas despite recognizing similar potential inaccuracies.
-
@AdrianRiskin @lokeloski Or, more generally, the egocentric bias. Veritasium has a nice video on this: https://youtu.be/3LopI4YeC4I?si=ZV6CuklywzLekwHd
-
@AdrianRiskin @lokeloski It's one of the reasons that, when I point out that I do not like generative LLMs for the work they output, I emphasize that it's not just *my* field of programming expertise where I feel this.
Like, I feel the same way about books: if you wrote it with an LLM, and we can see that because a prompt made it into the printed version, that tells me you did not read what you claim to have "written" with an LLM. Why should I read it, then, when I know it does the same thing to prose that it does to math, or coding, or images?
-
@lokeloski Very well put. To me, this is similar to the Gell-Mann amnesia effect, where for subjects we have deep knowledge about, we see all the flaws in media reports, but tend to assume that for all other subjects, the media reports are basically fine. @davidgerard
https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect?wprov=sfla1
-
@lokeloski devs seem to be the only ones who think they can replace their own job with AI and everything will be fine
@gkrnours @lokeloski
Some mathematicians are also on this "let's automate our own job" path…
-
@lokeloski Gen AI can replace incompetent people. Well, it will be incompetent, too, but often somewhat better.
Same with self-driving cars. Self-driving cars replacing incompetent drivers and driving somewhat better than them is good enough to improve overall traffic safety.
We like to compare AI with the best people out there. We made the same mistake with chess and Go players, only accepting AI superiority once it could beat the world champion, even though it had been playing better than the average player a decade earlier.
Current Gen AI is certainly worse than the best. But we don't have that many best people out there. We have a lot of stupid, uneducated people. And we have them in positions of power where they never should have been promoted to, and they do spectacularly wrong things there.
We are constantly overestimating human intelligence, too. Not just Gen AI intelligence.
-
@lokeloski@mastodon.social I just shared this at work.
With some of the people pushing for AI integration everywhere.
-
@lokeloski rediscovering https://en.wikipedia.org/wiki/Michael_Crichton#Gell-Mann_amnesia_effect all over, are we
-
@lokeloski I call that Mount Stupid
-
@geeeero @lokeloski important to note the Gell-Mann effect is made up trash. It's literally something Crichton said once. So imagine how cognitive psychologists feel about it.
-
@lokeloski Fortunately for AI pushers, most people are ignorant about most things. Optimistically, the Inverse 80/20 rule applies.