This is bad.
-
@MissingClara @xgranade If someone actually wrote it and it looks right, you have whatever mental process they went thru writing it plus the "looks right" as reasons it's likely right. If it was LLM slop, you only have how it looks.
-
@MissingClara the intent of a real contributor is to provide a good, safe, working code change. the goal of an llm is to deceive you into believing that it produced good, safe, working code.
given how important details are in something like Python, how far do you want to ride this conflict of interest?
-
@MissingClara @xgranade That process still has some error rate. So your overall error rate is going to be much higher when the code has no provenance and is just slop than when both you and the author would need to have mistakes in your thought processes at the same time.
-
@dalias @MissingClara From that point, I guess I'm making a "just because the hole is there doesn't mean we should keep digging" kind of argument. Code review, even by excellent and competent reviewers, has at best *reduced* but not eliminated defects and vulnerabilities in code.
Code review is incredibly difficult, which is why we rely so much on having incredibly competent reviewers.
-
@xgranade @dalias I understand, but at least for Python in specific, the level of scrutiny put on contributions is greater than the kind of issues you are mentioning. if these contributions are being accepted, it is because it was practical to review contributions that made use of LLMs, if that starts not being the case, they will stop being accepted. that said, I still disagree with it on an ethical level.
@MissingClara @dalias I'll reserve a more detailed disagreement here, as I'm not sure it's productive at this point. Suffice to say, I do not agree with the use of LLMs at an ethical *or technical* level.
That said, I do want to pull back slightly — my original point as per "I'm not trying to pick on Python here" is that OSS *in general* is under significant threat from AI products, and not that Python in particular is worse off here than the field in general.
@MissingClara @dalias I'm dismayed that a project I depend on and make so much use of is now part of that larger problem. I'm dismayed that for a while now, Python (and here I am saying something particular about Python) has had an uncomfortable relationship with AI. I'm dismayed that the AI corporate project of disrupting software engineering labor and OSS development has largely continued unabated.
-
@MissingClara @xgranade What does "responsibly" mean?