@Viss What is the difference between "poisoned" training data and "accidentally incorrect" training data in the long run? Shouldn't the effect be the same? And what are the odds of stumbling across merely incorrect info on the Internet versus deliberate disinformation?
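To make that concrete, here's a minimal sketch (toy data and numbers are made up, plain Python, no ML libraries): a training loop only ever consumes (feature, label) pairs, so a deliberately poisoned label and an accidentally wrong one produce the exact same gradient update. The "provenance" tag below is metadata the model never sees.

```python
import math

# (feature, label, provenance) -- provenance never enters the computation
data = [
    (0.9, 1, "clean"),
    (0.8, 1, "clean"),
    (0.1, 0, "clean"),
    (0.2, 0, "clean"),
    (0.95, 0, "accidentally wrong"),  # true label should be 1
    (0.92, 0, "poisoned"),            # attacker flipped the label on purpose
]

w, b, lr = 0.0, 0.0, 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# plain logistic-regression training loop
for epoch in range(200):
    for x, y, _provenance in data:  # provenance is discarded right here
        p = sigmoid(w * x + b)
        # log-loss gradient: identical formula regardless of WHY y is wrong
        w -= lr * (p - y) * x
        b -= lr * (p - y)

for x, y, tag in data:
    print(f"x={x:.2f} label={y} ({tag}): model predicts {sigmoid(w * x + b):.2f}")
```

Whether the bad label got there by malice or by accident, the math is indifferent; only the statistics of the errors (random vs. correlated) could ever differ.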
Where am I going with this? The models are all already flawed; "poisoning" just gives the trainer/corp a bogeyman to blame for bad quality and dangerous outcomes (like someone overdosing on meds after consulting an AI). They have already tried to anthropomorphize the models to shift blame/liability away from themselves in those situations. I guess that wasn't enough.