Has anyone ever investigated moderation by sortition?
Instead of the current system, where people have to be recruited into a scary "moderator" role, what if everyone on a server got randomly assigned mod requests on a jury basis of, say, 3-9 people?
The status quo just sets up maintainers for burnout: as your instance grows, so does the free emotional labour you're expected to perform.
What if instead of this we all agreed to take on this responsibility as part of joining a community?
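A rough sketch of how that random assignment could work, assuming nothing more than a plain list of member handles; the function name, jury size, and exclusion rules here are purely illustrative, not any existing implementation:

```python
import random

def draw_jury(members: list[str], reporter: str, reported: str,
              size: int = 5, seed: int | None = None) -> list[str]:
    """Randomly pick `size` members to review a report, skipping the
    reporter and the reported user so nobody judges their own case."""
    eligible = [m for m in members if m not in {reporter, reported}]
    if len(eligible) < size:
        raise ValueError("not enough eligible members to seat a jury")
    rng = random.Random(seed)
    return rng.sample(eligible, size)

# Example: a small instance seating a 5-person jury for one report.
members = [f"user{i}" for i in range(20)]
print(draw_jury(members, reporter="user3", reported="user7", size=5))
```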
-
@kim @andypiper there have been many experiments with that, but ultimately it comes down to safety: you don't want $randomPerson reviewing reports of extreme content like CSAM, TVEC, and gore.
https://bsky.app/profile/rahaeli.bsky.social has discussed this at length recently. Yeah, some moderation work could be handled by a jury of peers, but a lot of other moderation work requires training and trauma care.
-
@kim sounds like Slash!
-
@kim Advogato had an interesting trust-based mechanism.
-
@kim mostly, I'd say: let a thousand flowers bloom. Different instances have different moderation structures, and what you're describing sounds really interesting.
-
@kim Reddit's upvote/downvote system would probably work, too.
-
@thisismissem @kim @andypiper I think it's possible to have defence in depth: multiple layers of moderation. Peer moderation can be a helpful addition to dedicated moderators.
I added an issue to track peer moderation options on the AP T&S repo:
https://github.com/swicg/activitypub-trust-and-safety/issues/107
-
@evan @kim @andypiper no, not really. The sort of moderation described here would be "taking in reports and handling them in some way by voting on the action to take".
Currently only your server receives Flag activities in Mastodon-style AP, which gives you a mix of benign and potentially highly traumatising or illegal content in reports. The benign stuff can go to the community, sure, but the other stuff needs to be handled with care.
This is where third-party moderation services could operate: handling the stuff that doesn't necessarily need to be actioned by the server's mod team.
But this is well beyond just upvoting or downvoting posts: you really don't want to be liable for traumatising the people using your service.
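A rough sketch of the triage step being described, where benign reports can go to a peer jury while sensitive ones stay with trained moderators or a vetted third-party service; the category names, queue labels, and report shape are illustrative assumptions, not Mastodon's actual report schema:

```python
# Categories that should never be shown to randomly selected members.
SENSITIVE_CATEGORIES = {"csam", "tvec", "gore"}

def route_report(flag_activity: dict) -> str:
    """Return which queue a report should land in."""
    category = str(flag_activity.get("category", "other")).lower()
    if category in SENSITIVE_CATEGORIES:
        return "trained-moderators"  # trauma-aware staff or a vetted service
    return "peer-jury"               # benign reports: spam, conduct, etc.

# Example report object (shape simplified for illustration, not real AP JSON).
report = {"type": "Flag",
          "object": "https://example.social/notes/123",
          "category": "spam"}
print(route_report(report))  # -> "peer-jury"
```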