Piero Bosio Personal Social Web Site · Fediverso

A social forum federated with the rest of the world. Instances don't matter; people do.

Home / Categories / Fediverso

AI-assisted moderation in the fediverse is happening.

fediverse · 26 posts · 24 authors

Note: this thread has since been deleted. Only users with moderation rights can view it.
mjdxp@labyrinth.zone:
    @piefedadmin do we have a list of instances known to do this?

sugar@snug.moe (external user) · #17

    @mjdxp @piefedadmin they claim the instance in question is lemmy.dbzer0.com, according to piefed.world/modlog?mod_action=ban_user&suspect_user_name=&communities=&user_name=flatworm7591%40lemmy.dbzer0.com&submit=Search

    the problematic reason is "Instance rule 8. For evidence log, see:
    s.faf-pb.xyz/lXxek (expires in 30 days)"

    and looking at the link, they use the following LLM prompt, with the gpt-5.3-mini model:

    I'D LIKE YOU TO ANALYSE THIS CONTENT FOR EVIDENCE OF PRO-ZIONIST OR ANTI-PALESTINIAN SENTIMENT. ALSO IDENTIFY ANY COMMON HASBARA TROPES
    (no idea why it's all-caps, posting as they wrote it)
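
    For reference, that modlog link is just a plain query-string search over PieFed's public moderation log. A minimal sketch of how the same URL is assembled (parameter names copied verbatim from the link above; everything else is Python standard library):

    ```python
    from urllib.parse import urlencode

    # Rebuild the piefed.world modlog search URL quoted above.
    # urlencode percent-encodes the '@' in the federated username as '%40'.
    params = {
        "mod_action": "ban_user",
        "suspect_user_name": "",
        "communities": "",
        "user_name": "flatworm7591@lemmy.dbzer0.com",
        "submit": "Search",
    }
    url = "https://piefed.world/modlog?" + urlencode(params)
    print(url)
    # https://piefed.world/modlog?mod_action=ban_user&suspect_user_name=&communities=&user_name=flatworm7591%40lemmy.dbzer0.com&submit=Search
    ```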


      mjdxp@labyrinth.zone (external user) · #18
      @sugar @piefedadmin well, at least they're anti-genocide of palestinians, except they're using a product made by a company that's presumably pro-genocide of palestinians to try to prevent it on their platform?
piefedadmin@join.piefed.social (original post):

        AI-assisted moderation in the fediverse is happening. Now what?

        UPDATE: proof is at https://piefed.social/c/fediverse/p/2035409/proof-of-ai-assisted-political-profiling-by-unruffled-lemmy-dbzer0-com. The main instance is lemmy.dbzer0.com but anarchist.nexus and quokka.au share admin/mod teams so those two are suspect also.

        I recently discovered that some popular federated instances have been using LLM-assisted moderation tooling that evaluates whether someone has said something bannable. They do this by running a script/app that sends the user’s comment history to OpenAI with the question “analyze this content for evidence of *specific political ideology* sentiment. Also identify any *related political ideology* tropes“.
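
        To make that data flow concrete, here is a minimal sketch of what such a script could look like. This is an illustration, not the actual tool: the comment-history fetch assumes Lemmy's public v3 API (GET /api/v3/user returns a user's comments), the prompt keeps the redacted placeholders quoted above, and the model name is the one reported in this post.

        ```python
        import requests
        from openai import OpenAI

        def fetch_comment_history(instance: str, username: str) -> str:
            """Collect a user's public comments via Lemmy's v3 API (assumed endpoint shape)."""
            resp = requests.get(
                f"https://{instance}/api/v3/user",
                params={"username": username, "sort": "New", "limit": 50},
                timeout=30,
            )
            resp.raise_for_status()
            # Each entry is a CommentView; the text lives at comment.content.
            return "\n\n".join(v["comment"]["content"] for v in resp.json()["comments"])

        def profile_user(history: str) -> str:
            """Send the history to OpenAI with the kind of prompt quoted above."""
            client = OpenAI()  # reads OPENAI_API_KEY from the environment
            prompt = (
                "Analyze this content for evidence of *specific political ideology* "
                "sentiment. Also identify any *related political ideology* tropes.\n\n"
                + history
            )
            completion = client.chat.completions.create(
                model="gpt-5.3-mini",  # model name as reported in this post
                messages=[{"role": "user", "content": prompt}],
            )
            return completion.choices[0].message.content
        ```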

        OpenAI’s LLM (they’re using GPT-5.3-mini) then responds with something like:

        Below is a structured analysis of the uploaded content, focused on *specific ideology* rhetoric. This is an analytic classification, not a moral judgement.

        1. Overall Pattern

        blah blah

        2. Evidence of *specific ideology* sentiment

        blah blah

        3. several pages more, concluding with (in this case)

        Yes, the content contains:

        Clear *specific ideology* alignment
        Repeated *specific ideology* framing, especially through blah blah
        Extensive use of canonical *ideology* tropes, in blah blah domains.

        The pattern is not accidental or isolated; it is consistent, internally coherent, and reproduces well‑documented *country with the ideology* public‑diplomacy narratives rather than neutral analysis.

        ===========================================

        FULL DUMP OF COMMENT HISTORY BELOW

        ===========================================

        Date: 2026-xx-xxT0xxxxx

        Comment ID: https://instance.told/comment/2497xxxx

        Post ID: 603xxx

        Community ID: 1xx

        Content of the comment has been redacted

        ========================================

        Date: 2026-xx-xxT0xxxxx

        Comment ID: https://instance.told/comment/2497xxxx

        Post ID: 603xxx

        Community ID: 1xx

        Content of the comment has been redacted

        ========================================

        Date: 2026-xx-xxT0xxxxx

        Comment ID: https://instance.told/comment/2497xxxx

        Post ID: 603xxx

        Community ID: 1xx

        Content of the comment has been redacted

        ========================================

        and so on, hundreds of comments.
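
        The per-comment records in the dump follow a fixed template. For illustration, a small sketch that reproduces one record in that shape (field names taken from the log excerpt above; values are hypothetical, since the specifics were masked):

        ```python
        from dataclasses import dataclass

        @dataclass
        class EvidenceRecord:
            """One entry in the evidence log, with the fields shown above."""
            date: str          # ISO-8601 timestamp (masked in the published log)
            comment_url: str   # "Comment ID" in the dump is the canonical comment URL
            post_id: str
            community_id: str
            content: str = "Content of the comment has been redacted"

            def render(self) -> str:
                return (
                    f"Date: {self.date}\n\n"
                    f"Comment ID: {self.comment_url}\n\n"
                    f"Post ID: {self.post_id}\n\n"
                    f"Community ID: {self.community_id}\n\n"
                    f"{self.content}\n\n" + "=" * 40
                )
        ```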

        I have not named the instances or people involved, to give them time to consider the results of this discussion, make any corrective changes they want and disclose their practices at their own pace and in their own way. I have also redacted the evidence to avoid personal attacks and dogpiling. Let’s focus on the system, not the individuals involved. Today these instances are using it and maybe we’re ok with that because it’s being used by communities we agree with but what if people we strongly disagree with used it on their instances tomorrow?

        The use and existence of this tooling raises a lot of questions.

        What are the risks? Fedi moderators are often unsupervised, untrained volunteers and these are powerful tools.

        What safeguards do we need?

        Would asking an LLM “please evaluate this person’s political opinions” give different results than “find evidence we can use to ban them” (as used in the cases I’ve seen)?

        What are our transparency expectations?

        Is this acceptable and normal?

        Should this tooling be disclosed? (it was not – should it have been?)

        If you were given a choice, would you have opted out of it?

        Can we opt out?

        Are there GDPR implications? Privacy implications? Should these tools be described in a privacy policy?

        Are private messages being scanned and sent to OpenAI?

        How long should these assessments be retained and can we request to see it, or ask for it to be deleted?

        Once the user’s comments are sent to OpenAI, is it used to train their models?

        What will the effect be on our discourse and culture if people know they are being politically profiled?

        Where are the lines between normal moderation assistance tools, political profiling and opaque 3rd-party data processing?

        I hope that by chewing over these questions we can begin to establish some norms and expectations around this technology. The fediverse doesn’t have any centralized enforcement so we need discussions like this to develop an awareness of what people want in terms of disclosure, privacy, consent and acceptable use. Then people can make choices about which instances they join and which ones they interact with remotely.

        And of course there are the other issues with LLMs relating to environmental sustainability, erosion of workers’ rights, increasing the cost of living and on and on. I can’t see PieFed adding any functionality like this anytime soon. But it’s happening out there anyway so now we need to talk about it.

        What do you make of this?

        #fediverse
        db0@hachyderm.io (external user) · #19

        @piefedadmin since rimu named our instance, I have to point out that they're deliberately misrepresenting what happened and I strongly urge people to look at the discussions in lemmy about it to get the whole picture.

        To be clear, our instance does not utilize any GenAI tools in moderation. Rimu is referring to a single manual action by one admin, using the same user access as any user on the fediverse. The action was likewise completely public.

jackemled@furry.engineer (external user) · #20

          @piefedadmin I would consider collecting everyone's posts & sending complete transcripts to a sketchy company, even if it's "totally for moderation purposes, we promise", to be malicious scraping behavior.

teledyn@mstdn.ca:

@piefedadmin

(sigh) So now I am wary of using the #Fediverse at all, not knowing which of whatever I may have 'politically' said would be routed to ICE.

Not to mention how each comment-test uselessly burns another 300 watt-hours, burning down my planet. Next they'll be hosting on orbiting space servers? I want none of it.

Not great news for a Monday morning. Hopefully @chad can clarify #mstdnca but I'm really on pause here until these enemies of Earth confess and can be server-blocked.

sirtao@social.sirtao.it (external user) · #21

> (sigh) So now I am wary of using the #Fediverse at all, not knowing which of whatever I may have 'politically' said would be routed to ICE.
No offense, but... malicious actors (or anybody with a grudge against you) have always been able to do that, as you are posting publicly (same as me).
Posting on public-facing social networks, including the #fediverse, has always been like talking out loud in a public place.

I'm more worried/irritated by the LLM training scraping.
theriac@plasmatrap.com (external user) · #22

              @piefedadmin@join.piefed.social
              I'd prefer to know which instances are involved. I am not ok with anything AI.

soy@ffuent.es (external user) · #23
I remember Reddit's automated moderation triggering lots of false positives, especially in other languages.
risc@wetdry.world (external user) · #24

@piefedadmin

> Once the user’s comments are sent to OpenAI, is it used to train their models?

I highly doubt it. From https://developers.openai.com/api/docs/guides/your-data: "As of March 1, 2023, data sent to the OpenAI API is not used to train or improve OpenAI models (unless you explicitly opt in to share data with us)."

Opting in to sharing data would seem silly.

lexyeen@plush.city (external user) · #25

@piefedadmin
> Today these instances are using it and maybe we’re ok with that because it’s being used by communities we agree with

                    I sure as fuck ain't okay with it. There is nothing excusable about feeding anyone's posts into The Plagiarism Engine That Lies.

felipeb@hachyderm.io (external user) · #26

@piefedadmin
idk, they searched for the zionists, they found them, and banned them; I don't really see the problem here.
They aren't "feeding the comments to an LLM" like some comments are saying.

oblomov@sociale.network shared this thread.
