@glyph Did you quote post something?
-
@glyph I know that this is a tangent to your original point about LLMs (100% agreed), but fucking yikes, this is terrifying. It's a very good video, and makes me glad that I never seriously considered self-medicating my (at the time hypothetical) ADHD. Of course everyone thinks they can't get addicted, because they're smarter than those other addicts, and they have *rules*. There but for the grace, etc. I even have the exact same kind of food safe, to use for snacks.
What is frustrating is that the (extreme) expense of access to legitimate, regulated ADHD medication makes it unaffordable for a lot of people (and given that it runs in families, it's likely to impact the same family multiple times), and I'm sure that makes drugs like this very tempting. Especially because of the bullshit "natural, herbal" branding.
Apparently kratom is legal in South Africa. I'm going to be paying a *lot* more attention to the ingredients lists on artisanal iced teas from now on.
-
@mitsuhiko @glyph One million lines of code? I'm actually quite curious what requires so much. Even if it's closed source, can you share a link to the project?
@dragonfi @glyph Here are some projects I built with agents, or contributed to, which are public:
~20k for GitHub issue sync: github.com/mitsuhiko/gh-issue-sync/
~55k for the tank game: https://github.com/mitsuhiko/tankgame/
~10k contributed to pi: https://github.com/badlogic/pi-mono (it's mostly vibecoded)
Most code is not public and I have little desire to publish it for the moment. One piece of infrastructure we built for earendil is partially public in that I published API documentation: https://api.mailhook.sh/api/
It's around 100,000 lines of code, mostly Go.
-
@FediThing @glyph I disagree with that notion entirely. The reason I went into Open Source is because I believe in sharing. LLMs are just adding to this. IP laws are way too long-lasting, and I have written to this end many times over the years on my blog. 70+ year copyright terms are the exact opposite of what we should be aiming for as humans.
-
@mttaggart unfortunately I have no concept of what "rock bottom" looks like for an LLM user. Step 0 is going to have to be that society stops massively rewarding them with prestige and huge piles of cash.
@glyph @mttaggart unexpected and sudden price jacking might do it.
-
@FediThing @glyph LLMs are trained on human-created material in much the same way a person learns by reading books and then acting on what they've learned. They don't directly reproduce that material.
As I mentioned, I strongly believe that broad sharing of knowledge is a net benefit to humanity. Questions of credit and attribution are a separate issue, and to discuss them meaningfully you first have to be clear about what you consider reasonable attribution.
Take the tankgame, for instance, and tell me which part should be attributed but is not, and what you would be attributing it to.
On the "against the will": I want you to use the code I wrote, it's definitely not against my will that LLMs are trained on the code I wrote over the years.
-
@FediThing @glyph We don't know enough yet about how these systems work. If you look into the grokking paper, for instance, you can see that there is generalized learning going on, but we haven't yet been able to replicate this across large models everywhere. The idea that they are not learning anything in training is not state of the art anymore.
-
@FediThing @glyph LLMs can learn from other LLMs; that's why, for instance, OpenAI does not expose reasoning traces: they are afraid that the Chinese are learning from them.
This debate is not going anywhere so we'll exit it here now.
-
@glyph I want to share this, and I don't want to accidentally introduce it to someone who has the belief: "Oh, that won't happen to me. I don't have an addictive personality."
-
@glyph
I see your analogy and raise you the follow-up video at https://youtube.com/watch?v=atbAWMUJxs8 -- I feel pretty sure we're just cooked, in some ways, as a society. Because there's always this contingent of assholes so convinced that nobody could possibly have a bad experience that anyone slagging off their thing of choice must be getting paid, or have some other incentive or ulterior motive.
"You're just holding it wrong."
"You just need some self control."
"If you use them all in tandem and prompt them in this specific way it shouldn't hallucinate as much."
It's this thread of relentless justification and rationalization, this unwillingness to consider the costs until they are undeniable, which is generally the point at which they are the most difficult to pay. This sophistry, this abuse of nuance and reason and technicality to create semi-reasonable doubts and logical footholds. Like, I'm fucking tired of it, and I'm not sure it's ever actually gotten us anything good. And I don't really know what, if anything, I can do about it, besides watch it burn the world in a hundred different ways, culminating in a hundred different variations of that "For a beautiful moment in time, we created a lot of value for shareholders" meme.
-
@glyph I can see how that's worrisome, I would point people to a more boring source: https://en.wikipedia.org/wiki/Mitragyna_speciosa
The main issues appear to be that:
1) the plant contains a mixture of something like 50 active compounds and any of those could have positive and/or negative effects
2) users probably have no idea of the dosage they're getting; trust me folks, if your daily Adderall was a random dosage it would be a bad time
3) people do develop addictive behavior and that *will* fuck them up
-
@glyph I would also put this in the context of marijuana and alcohol, which most people are familiar with:
1) there are always negative effects to blowing smoke into your lungs
2) there are always dosages of drugs that will fuck up your kidneys/liver
3) stimulants in unknown doses will damage your heart sooner or later
4) opioids in unknown doses, especially combined with alcohol, will risk hypoxia and death
Addictive behavior causes people to make worse choices with respect to these risks.
-
@glyph I know this is completely beside the point but… man, did that supposedly addictive game look dumb AF. It looked hideous and barely functional, and watching it I'm just like "How the hell did anyone intentionally use that thi-"
…
Oh wait. Maybe it's not as beside the point as I thought. Wow, I just got that. Nicely done.
-
@glyph I was about to ask about it because I'd never heard of it before mew. But I do actually recognise it now, it's just weird to me because LLM addiction seems like it predates the analogy/example given here mew.
-
@isaackuo @nyrath @glyph
I can't find it anymore but the local science museum used to have a rudimentary psychologist program running on a computer - in the eighties!
The logic behind it was on par with adventure games of the era, including the eponymous _Adventure_.
It would answer anything you typed with more questions like "what makes you say that?" etc.
The exhibit's purpose was to show that even simple repetitive responses could offer the illusion of meaningful conversation.
Yes, back in the 1960s the ELIZA effect was a rude surprise.
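For anyone curious how little it takes: here's a toy sketch of that kind of keyword-and-canned-question loop, in Python. The rules and phrasing below are invented for illustration; they're not the exhibit's actual logic.

```python
import random
import re

# Toy ELIZA-style responder. These rules are made up for
# illustration; the museum program's real rules are unknown.
RULES = [
    (r"\bi feel (.+)", "Why do you feel {0}?"),
    (r"\bi am (.+)", "How long have you been {0}?"),
    (r"\bbecause (.+)", "Is that the real reason?"),
]
FALLBACKS = [
    "What makes you say that?",
    "Please, go on.",
    "How does that make you feel?",
]

def respond(text: str) -> str:
    """Return the first matching rule's question, else a stock prompt."""
    text = text.lower().strip().rstrip(".!?")
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    try:
        while True:
            print(respond(input("> ")))
    except (EOFError, KeyboardInterrupt):
        pass
```

A couple dozen lines is enough to keep a naive user typing, which was the whole point of the exhibit.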
-
@mitsuhiko @dragonfi @glyph If it's at the limits of complexity, as you claimed earlier, then by definition it is not maintainable. These are tradeoffs.
-
@mitsuhiko @dragonfi @glyph You… do know that good engineering expresses the system with as few lines of code as necessary?