This is bad.
-
@SnoopJ @xgranade @theorangetheme gotcha. On second look, I see that you were grepping, I misunderstood what I was reading there.
As I've thought about it some more, I think I'm standing by my take. IMO the fact that you contributed with Claude is barely more interesting than the fact that you contributed with VS Code. I think that "oh I used an LLM/Agent" is not a defense against, well, anything.
@dave @SnoopJ @theorangetheme It's not interesting, but it is important as part of understanding the vulnerability surface introduced by that code. There are many things about code that are simultaneously boring as fuck and also critically important.
-
@SnoopJ @xgranade @theorangetheme I don't think we should be personifying LLMs by calling them "co-authors". Claude didn't author, it recursively autocompleted.
@dave @SnoopJ @theorangetheme I don't even disagree, but that's the signal that Claude gives us, and there's no Git metadata for "this code was extruded by $x slop machine."
-
@astraluma @xgranade If you search for 'claude' you can find the commits where Claude is a "co-author" https://github.com/search?q=repo%3Apython%2Fcpython+claude&type=commits
-
@nausicaa @astraluma As @joelle pointed out, Claude is also a name that real people have. @SnoopJ's cantrip is going to be less susceptible to false positives by filtering on "anthropic.com" as well.
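A minimal sketch of that search against a local clone, assuming the default trailer Claude Code appends (`Co-authored-by: Claude <noreply@anthropic.com>`); matching on the `anthropic.com` domain rather than the name avoids flagging human contributors named Claude:

```shell
# List commits whose trailers credit a co-author at anthropic.com.
# -i makes --grep case-insensitive, since the trailer's capitalization varies.
git log --all -i --grep='co-authored-by:.*anthropic\.com' --format='%h %s'
```

This only finds commits that carry the trailer; agent-written code committed without one is invisible to it, which is the limitation being discussed above.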
-
@srtcd424 If that's what's useful to you? But I don't personally recommend moving away from Python, nor do I think that's an effective tactic for dealing with the problem.
As mentioned, this is a broad problem in OSS *in general*, and Python is now in the blast radius of that problem. Trying to create a dependency path that doesn't include any AI-vulnerable code is very difficult right now.
@xgranade
Yeah, sorry, it was dark humour. I'm honestly terrified about where all this is heading :( Not personally a Python fan, probably due to my vintage, but it's used for a frightening proportion of software I rely on.
-
@xgranade @SnoopJ @theorangetheme yeah I've been thinking about that, and I'm not sure I agree. The outputted code is the outputted code. "y = x + 1" doesn't gain additional attack surface because Claude autocompleted it.
I think there are all sorts of *human* exploits that can happen and are happening, but those are all based on our laziness checking Claude's work, not Claude's output itself. Things like maintainers going "Jesus take the wheel" when Claude writes commits because it's easier.
-
@srtcd424 No need to apologize, I just want to be clear about my own views on this rather than inadvertently implying criticism of Python *in particular* that I neither mean nor want to make.
-
@xgranade @SnoopJ @theorangetheme please don't read any of this as my endorsement of slop, I can't stand it. I'm just trying to pick apart how code autocompleted by Claude is different from the moral hazard of trusting Claude in the first place.
-
@dave @SnoopJ @theorangetheme My views here are complicated, but let me try and give a somewhat accurate condensed version?
First, to your `y = x + 1` example, if the code is simple enough, that vulnerability can be mitigated by human review — the problem is still there, I contend, but was contained by review. The problem is that humans *suck* at scanning for that kind of problem. Take the TSA looking for guns in x-ray scans... they keep failing at that, and incredibly badly.
-
@xgranade
Yeah, fair. It feels like we're fish trapped in a pool of trustworthy software that's rapidly drying up & shrinking :(
-
@dave @SnoopJ @theorangetheme As code changes grow, it's even harder to do that mitigation, especially when those code changes interact with a highly complex code base. There's times where `y = x + 1` would be a catastrophic error due to someone else doing pointer math and whatnot, say.
Beyond that, though, it's not clear to what degree, *if any*, extruded code can be copyrighted. If it can't be, what impact does that have on the project?
-
@joelle @nausicaa @astraluma @SnoopJ True, but that at least biases towards false negatives instead of false positives, which seems like a fair tradeoff?
-
@dave @SnoopJ @theorangetheme What happens if, as sometimes happens, the code extruded by a generator is a verbatim quotation of code in its training set, and that comes from a different license? I'm not a lawyer, so I don't understand these risks well enough to always know what is and isn't safe for me to accept, especially if slop extruders are involved.
-
@xgranade @dave @theorangetheme IANAL either but it is worth pointing out that generation and *distribution* are separate activities, and humans are still holding all the liability for the latter (which is also the only legally-enforceable part to begin with)
-
@SnoopJ @dave @theorangetheme That's fair, yeah. My point is more that I don't understand the exact shape of the risk... if I redistribute code that was generated by an AI agent, what additional risk, if any, do I incur?
-
@xgranade @astraluma @joelle @SnoopJ Fair. Given the current scale, I just clicked through to check the different commits, but that doesn't scale as well as SnoopJ's approach.
-
@nausicaa @astraluma @joelle @SnoopJ That's fair, too, this is so far a small handful and it's not too hard to manually validate that positives are actually true positives.
-
@xgranade @dave @theorangetheme IMO the risk profile from a legal liability standpoint is exactly the same as if you'd written it by hand
that is, if you distribute a machine-generated copy of a protected work, that doesn't really factor into the ability of that work's owner to sue you for said distribution. the owner has as much standing (in the legalistic sense) as they would if you'd copied and pasted by hand
now the actual *trial* that might arise could have some differences, especially where a judge's discretion is involved (e.g. in awarding damages), but considering how things have gone in the courts so far, I feel reasonably confident in saying that a litigant with a big enough war chest to be a pain in the ass in court over it is going to get treated about the same?
(which might be a complicated way to say "the legalistic arguments are moot, whoever has the deeper pockets wins" but I do enjoy pondering the legal theory even if I know how little it matters to the legal system that actually exists)
-
I'm gonna be real with folks here. I fucked up, and bad, with my participation in the open-slopware list. As a result, I'm not the right person to do it, but there has to be some kind of accounting for what damage AI is doing to open source.
For all the whinging about "supply chains" over the past few years, it *is* a problem when your code suddenly depends on AI, even if only indirectly.
@xgranade As someone who doesn't know anything about open-slopware, what was bad about it?
-
@ireneista @glyph I hope it doesn't, if only because I want to be focusing on my spec-fic and screenplays, but if it does come to that, I very, very much appreciate your support. ♥