Piero Bosio Personal Social Web Site

Federated social forum with the rest of the world. Instances don't matter, people do.

More Distributed Moderation for PieFed

PieFed Meta
  • I was thinking about moderation in PieFed after reading @rimu@piefed.social mention he doesn’t want NSFW content because it creates more work to moderate. But if done right, moderation shouldn’t fall heavily on admins at all.

    One of the biggest flaws of Reddit is the imbalance between users and moderators—it leads to endless reliance on automods, AI filters, and the usual complaints about power-mods. Most federated platforms just copy that model instead of proven alternatives like Discourse’s trust level system.

    On Discourse, moderation power gets distributed across active, trusted users. You don’t see the same tension between "users vs. mods," and it scales much better without requiring admins to constantly police content. That sort of system feels like a much healthier direction for PieFed.

    Implementing this could involve trust levels based on user engagement within each community, earned mostly by time spent reading discussions. Trust could be community-specific, so users build standing in each community separately, or instance-wide, recognizing overall activity across the instance. Done carelessly, though, it risks overmoderation in the style of Stack Overflow, where genuine contributions get stifled, or karma farming in the style of Reddit, where users game the system with bots that repost popular content.

    Worth checking out this related discussion:
    Rethinking Moderation: A Call for Trust Level Systems in the Fediverse.
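    To make the idea above concrete, here is a minimal sketch of community-specific trust levels. All names and thresholds are hypothetical illustrations, not anything PieFed or Discourse actually implements:

    ```python
    from dataclasses import dataclass

    # Hypothetical thresholds: (days visited, posts read) needed per level.
    # Real values would need tuning against karma farming and bot activity.
    LEVEL_REQUIREMENTS = [
        (0, 0),      # level 0: new user
        (2, 30),     # level 1: basic participation
        (15, 200),   # level 2: regular reader
        (50, 1000),  # level 3: trusted, might get limited moderation powers
    ]

    @dataclass
    class CommunityActivity:
        days_visited: int = 0
        posts_read: int = 0

    def trust_level(activity: CommunityActivity) -> int:
        """Return the highest level whose requirements the activity meets."""
        level = 0
        for lvl, (days, posts) in enumerate(LEVEL_REQUIREMENTS):
            if activity.days_visited >= days and activity.posts_read >= posts:
                level = lvl
        return level

    # Trust is tracked per community, so a user can be trusted in one
    # community while still being a newcomer in another.
    user_activity = {"piefed_meta": CommunityActivity(days_visited=20, posts_read=450)}
    print(trust_level(user_activity["piefed_meta"]))  # 2
    ```

    An instance-wide variant would simply aggregate the per-community activity records before applying the same thresholds.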

  • Concerning NSFW specifically: I'm not sure; there might be more issues. I've had a look at lemmynsfw and a few others and had a short run-in with moderation. Most content there is copied by random people without the consent of the original creators, so we predominantly get copyright issues, plus some ethical woes when it's amateur content that gets taken and spread without the depicted people having any control over it. If I were in charge of that, I'd remove more than 90% of the content, and I couldn't federate content without proper age restriction given how the law works where I live.

    But that doesn't take away from the broader argument. I think an automatic trust-level system, and maybe even a web of trust between users, could help with some things. It's probably a bit tricky to get right without introducing silly hierarchies or incentivising karma farming.

  • Most federated platforms just copy that model instead of proven alternatives like Discourse’s trust level system.

    Speaking for myself, I'm not opposed to taking some elements of this level-up system, which gives users more rights as they show they're not trolls. To what extent would vary, though; Discourse seems to be a somewhat different type of forum from Lemmy or PieFed.

    One of the biggest flaws of Reddit is the imbalance between users and moderators—it leads to endless reliance on automods, AI filters, and the same complaints about power-hungry mods.

    I cannot imagine any Reddit clone not, at some point, needing to rely on automoderation tools.

  • I'm always concerned with these kinds of systems and how minorities would be treated within them. Plenty of anti-trans content gets upvoted by non-trans people from a number of other instances, both on and off trans instances. Any such system would favor the most popular opinions and disallow anything else, at least as I understand these proposals when they're explained to me.

    There's also the issue that mods would still have to exist, and they would need to be able to both ban users and remove spam and unacceptable content. So how do you make sure those features aren't also used to just do moderation the old-fashioned way?

    And how do trusted users work in a federated system? Are users trusted on one server trusted on another? If so that makes things worse for minorities again and allows for abusive brigading. Are users only trusted on their home instance? If so that's better, but minorities are still at a disadvantage outside of their own instances.

    There's also the issue of scale. PieFed/Lemmy isn't large. What is the threshold to remove something? What happens when there are few reports on a racist post? How long does it stay up before it accrues enough reports? Any such system would need to scale individually and automatically to the activity level of each community, which might be an issue in small comms. There are cases where non-marginalized people struggle to understand when something is marginalizing, so they defend it as free speech. What happens then? Will there be enough minorities present to remove it? I doubt it.

    I'm sure there is some way to make some form of self-moderation, but it would need to be well thought out.
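    One way the scale concern above could be handled is to derive the removal threshold from each community's recent activity rather than using a fixed number. A rough sketch; the function name and all constants are made up for illustration:

    ```python
    import math

    def removal_threshold(active_users_last_week: int,
                          min_reports: int = 2,
                          scale: float = 1.5) -> int:
        """Reports needed before a post is auto-hidden pending review.

        Grows logarithmically with community activity, so small communities
        aren't paralyzed waiting for reports and large ones can't be
        trivially brigaded. The constants are illustrative, not tuned.
        """
        if active_users_last_week <= 1:
            return min_reports
        return min_reports + math.ceil(scale * math.log10(active_users_last_week))

    for size in (5, 100, 10_000):
        print(size, removal_threshold(size))
    ```

    A logarithmic curve is only one choice; the open question in the comment above, i.e. what happens when the local population lacks the context to recognize marginalizing content, is not something any threshold formula can solve on its own.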

  • I appreciate your insights, but I see many issues raised without clear suggestions for how to enhance the moderation system effectively.

  • I haven't used Discourse, but what you describe sounds like the way Slashdot has been doing moderation since the late '90s: randomly selecting users with positive karma to perform a limited number of moderation actions, including meta-moderation, where users rate other moderation decisions.

    I always thought this was the ideal way to do moderation and avoid the power-mod problem that Reddit and Lemmy have. I acknowledge the other comments here about minorities being neglected as a result of random sampling of the user base, but that likely happens with self-selected moderation teams too.

    Within minority communities, though, a plurality of members will belong to that minority, so moderating their own community should result in fair selections. Another way to mitigate the exclusion of minorities might be a weighted sortition process, where users declare their minority statuses and the selection method weights picks to boost representation of minority users.

    A larger problem would be that people wanting to have strong influence on community moderation could create sock-puppet accounts to increase their chance of selection. This already happens with up/downvotes no doubt, but for moderation perhaps the incentive is even higher to cheat in this way.

    I think a successful system based on this idea needs strong backend support for detecting sock-puppetry. That will be a constant cat-and-mouse game requiring intrusive fingerprinting of the user's browser and behaviour, and that type of tracking probably isn't welcome in the fediverse, which limits the tools available for tracking bad actors. It's also difficult for an open-source project to keep these systems secret so that bad actors can't find ways around them.
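    The weighted-sortition idea above could be sketched roughly like this. Everything here is hypothetical: the declared-minority field, the weight value, and the karma cutoff are assumptions for illustration, and a real deployment would still face the sock-puppet problem just described:

    ```python
    import random

    # Each candidate has positive karma; some have declared a minority status.
    # Both fields are invented for this sketch.
    candidates = [
        {"name": "a", "karma": 120, "minority": False},
        {"name": "b", "karma": 80,  "minority": True},
        {"name": "c", "karma": 300, "minority": False},
        {"name": "d", "karma": 50,  "minority": True},
    ]

    def sortition(users, k, minority_boost=2.0, rng=random):
        """Randomly pick k distinct moderators, boosting declared-minority
        users' selection weight to counter pure majority dynamics."""
        pool = [u for u in users if u["karma"] > 0]
        weights = [minority_boost if u["minority"] else 1.0 for u in pool]
        chosen = []
        while pool and len(chosen) < k:
            idx = rng.choices(range(len(pool)), weights=weights, k=1)[0]
            chosen.append(pool.pop(idx))
            weights.pop(idx)
        return chosen

    print([u["name"] for u in sortition(candidates, 2)])
    ```

    Sampling without replacement (popping each pick from the pool) keeps one account from being selected twice, but it does nothing against one person operating several accounts, which is exactly the detection problem raised above.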

  • Well, are you against the idea that an individual or a few people, whether they gain the position democratically or on a first-come-first-served basis, should be allowed to moderate a community as they see fit?

  • Discourse's Trust Levels are an interesting idea, but not a novel one. The system was lifted almost entirely from Stack Overflow; at the time, Discourse and Stack Overflow had a common founder, Jeff Atwood.

    There's a reason Stack Overflow is rapidly fading into obscurity... its moderation team (built off of trust levels) destroyed the very foundation of what made Stack Overflow good.

    I am also not saying that what we have now (first-mover moderation or top-down moderation granting) is better... merely that should you look into this, tread lightly.


The last eight messages received from the Federation
  • As if multiple servers weren't complicated enough, there are multiple apps and multiple web frontends you can use too! p.piefed.social is an installation of Photon.

    Log in with your piefed.social credentials.

    Like Blorp, it looks great on a phone and can be installed as a PWA.

  • Try https://b.piefed.social instead! Log in with the same username and password as piefed.social.

    It works best on a phone / tablet but on a desktop it's fine too once you switch to compact mode.

    It can be installed as a PWA.

    There are also lots of themes you can try at https://piefed.social/user/settings. Here are some screenshots of them - https://join.piefed.social/screenshots/

  • Oh, so it is; I wasn't seeing it yesterday when I checked. Perhaps unpinning and re-pinning it, or removing and re-allowing its ability to show on Popular, fixed it and I didn't notice.

    Nonetheless, thank you for all the help!

  • Ok, it seems to be working now then. Let us know if you continue to see weirdness with the pinned posts.

  • Oh, we actually unpinned the main@piefed.ca post after I confirmed it was working and made that comment. It's the other one, from main@lemmy.ca, that is still pinned but doesn't show up for users who aren't logged in.

  • Welcome!

  • I mean, I get you, but the only reason it would be done wrong is if it was intentionally done that way. The system already does notifications, and as I said, I have gotten them for deleted posts; they don't go away. The devs would practically have to make it not work that way on purpose. I really don't think it's a risk at all. Also, once something is done, it's not impossible to undo. The fediverse devs, and the PieFed devs in particular, are pretty earnest individuals, and it's not like a corporate system where management would bury their heads and dig in their heels on something bad. There's also no incentive to keep something that doesn't work out well for the sake of some financial gain.

  • Even logged in and with filters so that the post is visible, I don't see the post in the piefed community as pinned. Could it have been unpinned when you did your db reload?

    If it is pinned on your side, then it must be a bug of some kind.
