an entire presentation at DEFCON appears to just be straight up AI slop: https://www.openwall.com/lists/oss-security/2025/09/25/1
when will it end?
-
@androcat you are going to reply with something about CoCs, no doubt.
so i am going to preemptively cut it off: i can, for example, put whatever i want in a CoC, sure. but the reality is:
- most people don’t actually read it, and
- bad actors will ignore it anyway
which brings us back to the beginning, where i am manually banning bad actors for doing bad work.
and at that point it doesn’t change anything from what i’ve already said.
but it feels good, right?
@androcat at the end of the day, what matters *to me* is whether work submitted to me for review is accurate or not. it is my job as maintainer to judge the accuracy.
how the work was created is not the part that is interesting to me, but instead its accuracy. if someone uses an LLM to workshop something, and they test it, they verify it is correct, and they are prepared to effectively defend it in review, then it does not really matter to me, because it still checks the boxes.
the problem isn’t the LLM, it’s the lack of care in generating the work. this is why we call it “workslop”. LLM abuse is just the latest generation of workslop production, automated code scanning is another type of workslop. fuzzing without appropriate context is another type of workslop. these don’t involve LLMs at all.
-
@androcat @adriano we literally live in a time where people submit automated code scanning results, that they have signed off on and assigned CVEs to, that are just total bullshit. in fact a large minority of CVEs, if not a majority at this point, are sadly this.
we live in a time where people’s *unsupervised* LLM agents are submitting bugs to public mailing lists, offering a 30-day embargo on their non-bug.
the problem isn’t the LLM, it’s the person who lets it go do its thing without supervision, without quality assurance. this is why i focus on the person, not the specific method by which they are being annoying.
-
@lamanche Not that this is false, but it's generalizing a specific problem without actually doing much. People have been bullshitting their way through life for ages, often with encouragement.
This particular thing of submitting security vuln reports or bug reports without even checking them is new and specific.
-
@adriano @lamanche @androcat yes, i would say the social problem long predates capitalism. in fact, history proves this.
and, blaming all problems on “late stage capitalism” is just another flavor of the same social problem, honestly.
things have actual causes which cause the actual effects we complain about. to short-circuit the analysis with a talking point is not intellectually stimulating…
-
@kevingranade @androcat it isn’t. there are plenty of cases where machine transformations are perfectly fine. i have been using transformers to rewrite code for 20+ years. Coccinelle, for example, is a type of transformer.
this is a problem of “garbage in, garbage out” paired with the age-old problem that some choose to bullshit their way through life and make it everyone else’s problem. those people don’t play by “the rules”.
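to make that concrete: a Coccinelle semantic patch looks something like this. it’s essentially the canonical kmalloc+memset → kzalloc example from the Coccinelle docs, written from memory, so treat the exact SmPL as a sketch rather than gospel:

```
// rewrite "allocate then zero" into the single zeroing allocator.
// x, size, and flags bind to whatever expressions appear at the call site.
@@
expression x, size, flags;
@@
- x = kmalloc(size, flags);
+ x = kzalloc(size, flags);
- memset(x, 0, size);
```

you’d apply it with something like `spatch --sp-file kzalloc.cocci --in-place --dir drivers/`. the transformation is mechanical, but a human wrote the rule, knows exactly what it means, and reviews the resulting diff. that’s the difference between using a transformer and producing workslop.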