I wish that those surveys so often cited by InfoSec pundits that ask
Do you fully trust AI output?
Do you always verify AI output?
also asked
Do you fully trust your colleagues' output?
Do you always verify your colleagues' output?
Just to have comparative numbers, you know.
-
One could go on!
Do you fully trust third-party dependencies?
Do you always verify third-party dependencies?
But somehow AI output is special and the harbinger of all security issues.
Anyway, read Russ Cox's take on AI tool use in the Go project.
https://groups.google.com/g/golang-dev/c/4Li4Ovd_ehE/m/8L9s_jq4BAAJ
-
@filippo the failure modes of LLM output are very different and harder to reason about, but ironically it is often wrong in such an obvious way that it's easy to anthropomorphize the LLM and make false assumptions about its future failure modes.
Reviewing LLM output definitely requires a different type of vigilance.
-
@filippo A colleague is responsible for the output even when I'm the reviewer; AI is not.
A colleague is expected to learn from their mistakes and grow in responsibilities; AI only improves if the big tech firm decides to retrain it.
Colleagues are very different from each other, and each one has their own flaws and strengths when you try to trick them into doing something. Meanwhile, there are like five AIs sharing 90% of the work, and all of them can be tricked by asking them to write a haiku.
-
@filippo Agree in terms of numbers, but I also don't think it is the same. People have incentives not to write bad code (you don't want to look dumb, you don't want to lose your job, you have some internal motivation to do a good job, etc.), while AI has no such incentives. And no, prompting it to care is not the same thing. Moreover, people reason, while AI does not, so unless someone just copies and pastes code from StackOverflow, they put at least a minimal amount of thought into their work, while AI can produce code that doesn't even compile or is blatantly wrong in other ways.
-
@filippo A reassuringly sensible take from Russ.
-
No, I don't trust third-party dependencies in general.
Define "verify"? The extent of verification depends on my trust in the third party and on the project I'm making dependent on it (risk profile, expected lifetime, etc.).
Red flags would include things like too many onward dependencies, dependencies that I consider privacy risks, etc.
On the other hand, I have high trust in things like official Swift project packages and SQLite, and substantial trust in things like the Vapor project.
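One narrow, concrete reading of "verify" is checksum pinning: record a hash for each third-party artifact and fail the build if it ever changes. Go modules already do this automatically via go.sum and go mod verify; what follows is only a minimal hand-rolled sketch in Go, and the archive path and pinned hash in it are placeholders, not real values.

package main

// Minimal sketch of checksum pinning for a vendored third-party archive:
// fail if the archive's SHA-256 no longer matches the recorded value.
// archivePath and pinnedSHA are placeholders for illustration only.

import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "os"
)

const (
    archivePath = "third_party/somedep-1.2.3.tar.gz"                                  // placeholder path
    pinnedSHA   = "0000000000000000000000000000000000000000000000000000000000000000" // placeholder hash
)

func main() {
    data, err := os.ReadFile(archivePath)
    if err != nil {
        fmt.Fprintln(os.Stderr, "read dependency:", err)
        os.Exit(1)
    }
    sum := sha256.Sum256(data)
    if hex.EncodeToString(sum[:]) != pinnedSHA {
        fmt.Fprintln(os.Stderr, "dependency checksum mismatch for", archivePath)
        os.Exit(1)
    }
    fmt.Println("dependency checksum OK:", archivePath)
}

How far to take this kind of check depends on the risk profile described above; for many projects the package manager's own lockfile or go mod verify is plenty.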
-
@filippo That's missing the point. Your colleagues understand there are consequences for fucking up, avoid doing it, and work to make things right if they do. The slop extruder just digs in and feeds you more slop.
-