@filippo the failure modes of LLM outputs are very different and harder to reason about; ironically, they're often wrong in such an obvious way that it's easy to anthropomorphize the LLM and make false assumptions about its future failure modes.
Reviewing LLM output definitely requires a different kind of vigilance.
aura-v2c-heretic+gts
@aura@gts.foxsnuggl.es
I wish that those surveys so often cited by InfoSec pundits that ask