Has anyone ever investigated moderation by sortition?
-
Has anyone ever investigated moderation by sortition?
Instead of the current system, where people have to be recruited into a scary "moderator" role, what if everyone on a server got randomly assigned mod requests on a jury basis of, say, 3–9 people?
The status quo just sets maintainers up for burnout: as your instance grows, so does the free emotional labour you're expected to perform.
What if instead of this we all agreed to take on this responsibility as part of joining a community?
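A minimal sketch of how that jury-style assignment could work, assuming a flat list of members who have opted into jury duty; all the type and function names here are hypothetical, not an existing Mastodon or ActivityPub API:
```typescript
// Hypothetical sketch: randomly draw a small jury of members to review a
// moderation request, excluding the people directly involved in the report.

interface Member {
  id: string;
  optedIntoJuryDuty: boolean;
}

interface ModRequest {
  id: string;
  reporterId: string;
  reportedId: string;
}

// Draw `size` jurors at random (the post suggests somewhere between 3 and 9).
function drawJury(members: Member[], request: ModRequest, size = 5): Member[] {
  const eligible = members.filter(
    (m) =>
      m.optedIntoJuryDuty &&
      m.id !== request.reporterId &&
      m.id !== request.reportedId,
  );
  // Fisher–Yates shuffle, then take the first `size` eligible members.
  for (let i = eligible.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [eligible[i], eligible[j]] = [eligible[j], eligible[i]];
  }
  return eligible.slice(0, Math.min(size, eligible.length));
}
```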
-
@kim @andypiper there have been many experiments with that, but ultimately it comes down to safety: you don't want $randomPerson reviewing reports of extreme content like CSAM, TVEC, and gore.
https://bsky.app/profile/rahaeli.bsky.social has discussed this at length recently. Some moderation work could be handled by a jury of peers, but a lot of other moderation work requires training and trauma care.
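To make that split concrete, here is a hypothetical triage sketch: reports tagged with a sensitive category never reach a randomly drawn jury and go straight to trained moderators. The category names and routing targets are illustrative assumptions, not anything defined by ActivityPub or Mastodon.
```typescript
// Hypothetical triage sketch: keep potentially traumatising categories away
// from randomly selected members and route them to trained moderators only.

type ReportCategory = "spam" | "off-topic" | "harassment" | "csam" | "tvec" | "gore";

// Categories that should never be shown to an untrained jury.
const SENSITIVE = new Set<ReportCategory>(["csam", "tvec", "gore"]);

interface IncomingReport {
  id: string;
  category: ReportCategory;
}

type Destination = "peer-jury" | "trained-moderators";

function routeReport(report: IncomingReport): Destination {
  return SENSITIVE.has(report.category) ? "trained-moderators" : "peer-jury";
}
```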
-
@kim sounds like Slash!
-
@kim Advogato had an interesting trust-based mechanism.
-
@kim mostly, I'd say: let a thousand flowers bloom. Different instances have different moderation structures, and what you're describing sounds really interesting.
-
@kim Reddit's upvote/downvote system would probably work, too.
-
@thisismissem @kim @andypiper I think it's possible to have defence in depth: multiple layers of moderation. Peer moderation can be a helpful addition to dedicated moderators.
I added an issue to track peer moderation options on the AP T&S repo:
https://github.com/swicg/activitypub-trust-and-safety/issues/107
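As a hypothetical illustration of that layering: each layer gets a chance to handle a report, and anything unhandled falls through to the dedicated moderator team as the backstop. The layer names are made up for the example.
```typescript
// Hypothetical defence-in-depth sketch: a report passes through moderation
// layers in order; the first layer that handles it wins, and anything left
// over falls through to the dedicated moderator team.

interface Report {
  id: string;
  category: string;
}

interface ModerationLayer {
  name: string;
  handle(report: Report): boolean; // true if this layer dealt with the report
}

function moderate(report: Report, layers: ModerationLayer[]): string {
  for (const layer of layers) {
    if (layer.handle(report)) return layer.name;
  }
  return "dedicated-moderators"; // final backstop
}

// Example layering: automated filters first, then a peer jury, then staff.
const layers: ModerationLayer[] = [
  { name: "automated-filters", handle: (r) => r.category === "spam" },
  { name: "peer-jury", handle: (r) => r.category === "off-topic" },
];

console.log(moderate({ id: "1", category: "harassment" }, layers)); // "dedicated-moderators"
```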
-
@evan @kim @andypiper no, not really. The sort of moderation described here would be "taking in reports and handling them in some way by voting on an action to take".
Currently only your server receives Flag activities in Mastodon-style AP, which gives you a mix of benign and potentially highly traumatising or illegal content in reports. The benign stuff can go to the community, sure, but the other stuff needs to be handled with care.
This is where third-party moderation services could operate: stuff that doesn't necessarily need to be actioned by the server mod team.
But this is well beyond just upvoting or downvoting posts; you really don't want to be liable for traumatising people using your service.
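For context, a Mastodon-style Flag activity arrives at the reported user's server in roughly the shape sketched below (simplified; real activities carry more fields). This is an illustration of the report format being discussed, not a full implementation.
```typescript
// Rough, simplified shape of a Mastodon-style ActivityPub Flag activity,
// i.e. the report object a server receives and must handle with care.

interface FlagActivity {
  "@context": string;
  type: "Flag";
  actor: string;      // typically the reporting server's instance actor
  object: string[];   // URIs of the reported account and/or statuses
  content?: string;   // free-text comment from the reporter
}

const exampleFlag: FlagActivity = {
  "@context": "https://www.w3.org/ns/activitystreams",
  type: "Flag",
  actor: "https://reporting.example/actor",
  object: [
    "https://reported.example/users/someone",
    "https://reported.example/users/someone/statuses/123",
  ],
  content: "Spam links in replies",
};
```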
-
@thisismissem @andypiper I suppose in the context I'm talking about, they would not be a random person: they'd be a member of a service user co-op or similar, where agreeing to share the admin load is a condition of having an account. It's not great, but it feels like an improvement on admins (like me!) having sole responsibility for it. Obviously it would require a bit of a shift in constitution for existing users.
Thank you for this info!! I will dig in.
-
@kim @andypiper yeah, but even when joining a social co-op, you don't necessarily want or consent to be exposed to the worst of the internet. You don't want to see CSAM, TVEC, NCII, or gore. You don't want to be seeing harassment about your identity, etc.
Though it is possible to do this in a way where the mod says "hey, for this report that I've reviewed, I'd like community input on what we should do", which I think social.coop already does.
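A hypothetical sketch of that flow: the moderator reviews the report first, and only a redacted, non-sensitive summary is opened for community input, so members never see the raw content. The types and action names are assumptions for illustration.
```typescript
// Hypothetical sketch: a moderator reviews a report first, then optionally
// opens a redacted summary to the community for input on the action to take.

type Action = "dismiss" | "warn" | "suspend";

interface ReviewedReport {
  id: string;
  sensitive: boolean;       // set by the reviewing moderator
  moderatorSummary: string; // written by the moderator; contains no raw content
}

interface CommunityPoll {
  reportId: string;
  question: string;
  options: Action[];
}

// Only non-sensitive, already-reviewed reports are opened for community input.
function openForCommunityInput(report: ReviewedReport): CommunityPoll | null {
  if (report.sensitive) return null; // stays with trained moderators
  return {
    reportId: report.id,
    question: `For this reviewed report (${report.moderatorSummary}), what should we do?`,
    options: ["dismiss", "warn", "suspend"],
  };
}
```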
-
@thisismissem @andypiper agreed, but nobody wants to see that. Neither of these models changes the volume of it, and errant server admins trying to make a nice thing for their friends are no more or less qualified than the people using the server, yet currently they have to deal with all of it with no visibility of the problem. I've put a better-formatted version of this in the GitHub ticket by @evan anyway, which might be a more constructive place for it 😅