This is bad.
-
@xgranade @dave @theorangetheme re: the CO guidance I think it's going to prove easy to cross the threshold of sufficient human authorship to 'heal' that ownership problem as long as you can spread that human authorship around.
even in the worst-case scenario, there's no liability in distributing an unowned work: by definition, nobody can sue you for infringement
so to me the bigger threat is the risk that parts of the generated work are sufficiently large verbatim repetitions of a protected work
as I punch that out, I'm realizing that there may be some interesting questions of whether the GPL can 'really' be applied to generated code, but it probably comes back to the human authorship thing. gods know the FSF probably isn't going to offer any useful guidance about that
There's already copyright case law regarding LLM-generated text.
Judges have ruled it is not human-authored and therefore not subject to copyright.
The latest one I read said that you must explicitly state which portions were generated and exclude those sections from the claimed copyright.
So "human put LLM code chunks together" is likely only protected for the arrangement of the chunks and not any of the code itself. (Not a lawyer, making a reasonable guess off of lots and lots of copyright knowledge and case law for things like remixing and collage work.)
-
I'm gonna be real with folks here. I fucked up, and bad, with my participation in the open-slopware list. As a result, I'm not the right person to do it, but there has to be some kind of accounting for what damage AI is doing to open source.
For all the whinging about "supply chains" over the past few years, it *is* a problem when your code suddenly depends on AI, even if only indirectly.
@xgranade What's wrong with the open-slopware list though? Are we talking about the one on codeberg?
-
@pathunstrom @xgranade I would appreciate case references if you happen to have any handy. I was trying to keep up with case law a few years ago, but not anymore.
-
@xgranade @theorangetheme yea I didn't mean to minimize the impact, just wanted to share the cantrip I've been using to check this when I run into the same thing
@SnoopJ echo 'alias cantrip=alias' | sudo tee /etc/profile
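(Aside for anyone tempted to actually run that one-liner: `tee` without `-a` truncates `/etc/profile` rather than appending to it. A safer sketch of the same cantrip, assuming a Bourne-style shell:)

```shell
# Append instead of overwriting, so the rest of /etc/profile survives:
echo 'alias cantrip=alias' | sudo tee -a /etc/profile >/dev/null

# Or keep it per-user and skip sudo entirely:
echo 'alias cantrip=alias' >> ~/.bashrc
```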
-
@pathunstrom @SnoopJ @xgranade same, please include me on any reply :) I am working on a truly epic blog-length crash-out and I would like to cite it there
-
@pathunstrom @SnoopJ @xgranade that matches my understanding (from reading about rulings), but it’s not clear to me what that means when an LLM reproduces already copyrighted material. Does the prior copyright mean the output can be a violation even tho it can’t be copyrighted itself? Does the non-copyrightability of output override the previous ownership? That sounds absurd, but to me treating LLM output as anything but a derivative work was already absurd
-
@ShadSterling @xgranade my understanding is that if you distribute someone else's protected work¹, you have infringed by distributing (that portion of) their work, full stop.
the particular means by which the infringement occurred are AFAIK entirely irrelevant to legal standing (i.e. the right for the owner of that work to sue the infringer), but the cases @pathunstrom is referring to may represent a gap between my understanding and the current practice of law in the US.
---
¹ or more precisely in this case: enough of someone else's protected work that they can make a convincing argument in court that it *is* their protected work, since the outputs of all the models people talk about are in some sense *always* built from the protected works of others
-
@SnoopJ @xgranade @pathunstrom that’s what I would have expected before the (IMO nonsensical) rulings about LLM outputs; AFAIK, whether LLM output can be infringing in that way has not yet been tested in court. I don’t know what to expect when such a case is heard, and the way things have been going I’m not looking forward to finding out
-
@xgranade @ireneista "do you have five million dollars of disposable income to fund an alternative to the PSF" is a good place to start, if you want to frame it as a "hostile fork" situation. the only solution is to get involved in the messy process of politics and governance and try to figure out a way to negotiate a durable peace
@glyph @xgranade @ireneista why do we need an alternative to Pumpkin Spice Farts? And why does it have to cost so much?
-
@xgranade why do you consider open-slopware a mistake, btw?
-
@outfrost I don't, per se, but I consider the way I participated in it to be a mistake, and one that got people hurt. Any time you make a list of people, no matter your intentions, that takes caution --- caution that I did not personally put into practice. I don't get to hide behind my intentions on that.
-
@xgranade gotcha, so it's not about the main list itself, but callouts on specific maintainers?
-
@xgranade I also dislike it, but the cat's out of the bag: even if it wasn't allowed, people would still be using it, just without revealing it
@MissingClara @xgranade That's a bad argument against having a policy. Policy is a statement of who does and doesn't belong. If they're using it without revealing it, trying to launder code with fraudulent provenance into your project, that's highly malicious behavior worthy of a ban from the project once they're caught. And it's a signal to your good contributors that the project is healthy and not going to be turned into irreparable garbage by slop bros.
-
@outfrost It's complicated. There's very valid critiques of the list, and also bad faith misrepresentations of what the list was. I can only speak to my own actions, given how complex everything got.
-
@iampytest1 @theorangetheme The idea of putting a noreply email address on your commits is extremely funny to me. What exactly is the point of putting an email address on the commit message at all, then? It's not supposed to be your ID badge. What do you think is the reason that standard was created?
-
@outfrost I put more of those thoughts together when I first apologized, and when I also tried to be clear about what parts I did not and still do not apologize for.
-
@iampytest1 @theorangetheme Just another example of how LLMs degrade code quality. If a commit is "authored" by Claude, there is absolutely no one accountable to that code. If you want to reach out to the committer, it goes to an unmonitored email address. Great! Very healthy for our systems.
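If you want to check a dependency for this, a quick sketch using plain `git log` (run inside any clone; nothing here is project-specific):

```shell
# Tally author emails across the history; noreply addresses stand out fast.
git log --format='%an <%ae>' | sort | uniq -c | sort -rn

# Co-authored-by trailers (the mechanism tools use to credit an LLM co-author)
# live in the commit message body and can be listed the same way:
git log --format='%(trailers:key=Co-authored-by,valueonly)' | grep -v '^$'
```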
-
@joXn I'm not here to critique specific individuals, and I'll ask that you don't use my replies to do so either. In particular, this is a large systemic problem across OSS, and while I can think of a few specific bad actors making things worse on purpose, I don't think that's the most common modality by far.
Besides, if having worked for msft is an automatic disqualification, I'll disclose that I worked there for about 5.5 years.
-
@iampytest1 @theorangetheme The only purpose behind it is anthropomorphizing Claude Code in order to sell "AI". You wouldn't put Visual Studio as a co-author on your commit. But because Claude is supposed to be "a person" (read: slave), we pretend this tool is an equal author, even though they don't exist and it's impossible to contact them.