@glyph Did you quote post something?
-
@smacintyre@mastodon.social @glyph
You forgot to start with "Well, actually..."
-
@FediThing @glyph LLMs are trained on human-created material in much the same way a person learns by reading books and then acting on what they've learned. They don't directly reproduce that material.
As I mentioned, I strongly believe that broad sharing of knowledge is a net benefit to humanity. Questions of credit and attribution are a separate issue, and to discuss them meaningfully you first have to be clear about what you consider reasonable attribution.
You can take, for instance, the tankgame, and then tell me which part should be attributed but is not, and what you would be attributing it to.
On the "against the will": I want you to use the code I wrote, it's definitely not against my will that LLMs are trained on the code I wrote over the years.
@mitsuhiko @FediThing @glyph On "against the will" and open source and such - You do realise not everything that's public (by some definition of public, because these DDoSing jerks at Perplexity etc. think anything with an open HTTP port is "public") is free to use, right?
That *you in particular* are happy to offer *your* works to train LLMs is, well, that's nice, but it's also pretty irrelevant in the big scope of the rest of the internet.
-
@phl @FediThing @glyph the reason I stated this is to be explicit about my stance towards IP in isolation from AI. It's my strong belief that we should share more and that copyrights are too strong. To my mind, AI has the nice side effect of potentially pushing us closer to that.
-
It really reminds me of tarot card readers. It's not about the cards, it's about how the person reading the cards interacts with the other person. They could be actually helpful. Or performing a grift. The latter is far more common.
Just like snake oil was useful, but there were so many traveling con men selling counterfeit stuff that the bad experiences were applied to the real thing too.
@cainmark @nyrath @glyph Yes, and another interesting one is astrology columns.
The ELIZA effect was surprising to its naive computer programmers, because they thought it should be obvious ELIZA was just a "parlor trick" because anyone could see how it worked.
But astrology columns were equally obvious. When a mass printed column is read by thousands of readers, it's obviously impossible for it to be a personal message to you specifically. And yet ...
-
@mitsuhiko @FediThing @glyph It's as much a nice side-effect as the redistribution of personal items acquired from unlocked sheds and yards on the aftermarket is.
Keep in mind there's a huge swath of grey between open source and large corporate lawyer-backed death+90yr IP copyrights
-
@cainmark @nyrath @glyph I mean - if tarot card reading or crystal ball reading was a 1-on-1 person to person experience, the astrology column was a mechanized mass produced machine version.
Or those gumball machines with astrology scrolls. Or fortune cookies. It should have been obvious they were just mechanistic devices, just like ELIZA.
And yet ...
-
@bjorndown @_L1vY_ @glyph @xgranade I go to work by bus, and I've made a habit of always carrying a book with me, so I can read on my commute. *Very* highly recommended.
(This morning it felt like the woman sitting next to me and I were a couple of cozy time travellers. I was reading my book and she was doing some knitting; almost everyone else was doomscrolling on their phones.)
@bjorndown @_L1vY_ @glyph @xgranade A semi-related observation: I'm often the only bus passenger reading a book - but when I'm not, the other book-readers are almost always young. I almost never see my fellow middle-ageds reading books, but it's not *that* unusual to see a teenager or two reading something.
-
@mttaggart my suspicion is that when this happens we are going to find out that there is a huge predisposition component. there will be people who say "ah well. guess I'm a little rusty writing unit tests now, but time to get back at it" and we will have people who will go to sleep crying tears of frustration for the rest of their lives as they struggle to reclaim the feeling of being predictably productive again
@glyph @mttaggart this is why I don't play MMOs or gatchas and I don't touch LLMs and I mask in public.
I. Am. Vulnerable. To. These. Things. They. Are. Dangerous.
-
@phl @FediThing @glyph digital goods cannot be stolen. They can only be duplicated.
-
May I suggest doing the best you can, in collaboration with anyone else you work with who agrees with you/us, to put your own guardrails around your LLM use.
Build a set of effective prompt contexts and quality prompts for frequent tasks.
Where it provides real speed advantages, such as finding relevant examples of something you need, producing a first draft, or perhaps a copy edit (no idea if it can actually do that), use it. Otherwise, don't.
Etc.
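One way the "set of effective prompt contexts and quality prompts for frequent tasks" idea above could be sketched in Python. Everything here is hypothetical (the template names, wording, and the `build_prompt` helper are illustrative, not a real tool); the point is just that vetted, reusable templates act as a guardrail against ad-hoc prompting:

```python
# Hypothetical store of reviewed prompt templates for frequent tasks.
# Using only vetted templates is the "guardrail": no template, no prompt.
TEMPLATES = {
    "first_draft": (
        "You are drafting, not publishing. Write a first draft of {what}. "
        "Mark any claim you are unsure of with [CHECK]."
    ),
    "find_examples": (
        "List existing examples of {what}, with where each one comes from, "
        "so the sources can be verified by hand."
    ),
}

def build_prompt(task: str, what: str) -> str:
    """Fill a vetted template; refuse tasks that have no template."""
    if task not in TEMPLATES:
        raise KeyError(f"No vetted template for task: {task}")
    return TEMPLATES[task].format(what=what)
```

For example, `build_prompt("first_draft", "a release note")` yields a draft-only prompt with the `[CHECK]` convention baked in, while an unvetted task name fails loudly instead of producing an improvised prompt.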
-
@nyrath @isaackuo @cainmark @glyph in the case of ELIZA its script/algorithm for directing how it converses with the user was simple and predictable enough that after a day or so, maybe even a few hours or less, an intelligent user can experience the AHA moment where they realize how they've been manipulated. the abstraction leaks through. cracks appear. and it shatters their willing suspension of disbelief.
with LLMs I'd argue that since their script/algorithm and database is so much bigger they can keep the user in their honeymoon period longer. one might go months, if ever, before the illusion is broken. for younger or less sophisticated users that honeymoon period will likely last longer, possibly forever
corollary: youngest people might be at most danger of getting warped by heavy/longtime LLM use. and the already mentally ill, or IQ-challenged. now add in a culture with easy access to guns, and hostile nation states running influence ops online, at scale. BAD
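For reference, the "simple and predictable" script the post describes can be sketched in a few lines of Python. The rules below are illustrative, not Weizenbaum's actual DOCTOR script; the point is that a short, ordered list of pattern-to-response rules is the whole trick:

```python
import re

# A minimal ELIZA-style "script": ordered (pattern, response template) rules.
# Illustrative rules only, not the original DOCTOR script.
RULES = [
    (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.*)", re.I), "Is that the real reason?"),
]
DEFAULT = "Please tell me more."

def respond(utterance: str) -> str:
    """Return the first matching rule's response, echoing the captured text."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(m.group(1).rstrip(".!?"))
    return DEFAULT
```

Once a user notices that "I need X" always comes back as "Why do you need X?", the abstraction leaks and the illusion breaks, exactly as described above.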
-
@FediThing @phl @glyph LLMs also don't do literal recall. They work not too dissimilarly from how you work when you implement something from what you learned before.
-
@datarama @xgranade I have other anxiety issues (OCD does not really resonate broadly, but anxiety+ADHD can produce a similar effect) and I sometimes feel waves of INTENSE rage when I open up my Apple TV and it's playing a loud trailer for a show I don't really want to get into, or launch Steam and find that it's forgotten the preference to open to the Library and not Store tab. We need strict regulation for ALL of these attractive "algorithmic" nuisances
@glyph @xgranade I've unfortunately struck a mental health jackpot: I'm diagnosed with OCD, generalized anxiety disorder and major depressive disorder. Also, I'm on the autism spectrum. I'm actually grateful that I'm able to work (but presently terrified, because I'm well aware that it's only a relatively small niche - and one that a trillion-dollar industry is currently betting on destroying - that I can work productively in).
-
@synlogic4242 @nyrath @cainmark @glyph FWIW, I think young people are doing a LOT better than older people when it comes to sussing out AI is bad.
For one thing, and this is the big one, schools constantly tell them not to use AI and that AI is cheating and that AI is often wrong. Many directly have class assignments where they are told to have LLM AI write a paper for them and their assignment is to research all the stuff it got wrong. That's an eye-opener.
So, the young folks already use