They want to tell you that the models train themselves now. That it's clean. It's pure.
It never is, is it? There's always a damnable human cost.
-
@mttaggart That is not healthy. Please stop. Viewers may start hating anyone who looks like the abuser in the video. Watching or listening to such material is not something just anyone can process.
-
@mttaggart Three years ago the same thing was happening with the RLHF filter layer.
Careful, the article includes descriptions of SA (another CW on the site):
https://time.com/6247678/openai-chatgpt-kenya-workers/
-
@mttaggart Meta, Muskcorp, OpenAI, etc. - they are all exploiters. They help promote misogyny, hatred, child abuse, and scams, just so long as it makes money.
-
@mttaggart presumably it wouldn't be better to *not* have anyone look at material that might be abusive, though?
Like, how can Mastodon instance admins solve the problem of abusive content without checking it?
This task is thankless, low paid, and takes a heavy emotional toll. I don't see a way to get rid of it though without either deleting online platforms (not just big tech, but fediverse ones too) or farming it out to AI. And the AI needs to be trained.
-
@FishFace I really think there's a material difference between moderating a server like a Mastodon instance and creating an entire subindustry of misery for your inhuman product, all while presenting the thing as some magical entity that ends toil.