@jonathankoren I think LLMs are maybe made more dangerous by the fact that they work remarkably well on some problems, while on others there's zero chance they'll provide a useful solution, and they're incapable of telling you which is which. So it's effectively intermittent reinforcement (even if unintentional, though I suspect it's probably intentional), the same mechanism that makes gambling so addictive. My next prompt is gonna hit, I just know it.