Ahh, got to fight with a coworker who has LLM brain. Sorry dude. You’re wasting *days* of work on a task that could finish in *minutes* if you unironically used the greatest data science algorithm ever: counting.
An LLM, or any non-omniscient intelligence, fundamentally *can’t* answer that question, because it doesn’t have the information needed to answer it.
He did this same thing on a related problem, and his anecdotal results are showing the same behavior he spent weeks cleaning up.
Is his approach going to yield the same dirty results? Let’s consult the oracle…
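To be concrete about what I mean by "counting": it's nothing fancier than the sketch below. This is a made-up, minimal Python example with placeholder records and field names, since the actual task and data aren't part of this thread.

```python
from collections import Counter

# Hypothetical stand-in for the task: answering a
# "how often does X happen?" question over a pile of records.
# The records and field names here are placeholders for illustration only.
records = [
    {"user": "a", "event": "click"},
    {"user": "b", "event": "click"},
    {"user": "a", "event": "purchase"},
    {"user": "c", "event": "click"},
]

# The greatest data science algorithm ever: just count.
event_counts = Counter(r["event"] for r in records)

print(event_counts.most_common())
# [('click', 3), ('purchase', 1)]
```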
-
@jonathankoren I think LLMs are maybe made more dangerous by the fact that they work remarkably well on some problems, while on others there's a 0% chance they'll provide a useful solution, and they're incapable of saying so. So it's effectively intermittent reinforcement (even if unintentional, though I think it's probably intentional), the same mechanism that makes gambling so addictive. My next prompt is gonna hit, I just know it.