AI-assisted moderation in the fediverse is happening. Now what?
UPDATE: proof is at https://piefed.social/c/fediverse/p/2035409/proof-of-ai-assisted-political-profiling-by-unruffled-lemmy-dbzer0-com. The main instance is lemmy.dbzer0.com, but anarchist.nexus and quokka.au share admin/mod teams, so those two are suspect as well.
I recently discovered that some popular federated instances have been using LLM-assisted moderation tooling that evaluates whether someone has said something bannable. They do this by running a script/app that sends the user’s comment history to OpenAI with the question “analyze this content for evidence of *specific political ideology* sentiment. Also identify any *related political ideology* tropes”.
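To make the mechanism concrete: a tool like this needs very little code. Below is a minimal, hypothetical sketch of what such a script might look like, assuming the Lemmy HTTP API and the OpenAI Python SDK. The endpoint, field names, and prompt wording are my illustrative assumptions, not the actual tool’s code (which has not been published); only the provider and model name come from what I saw.

```python
# HYPOTHETICAL sketch only -- not the actual tool. Endpoint, field names,
# and prompt wording are assumptions for illustration.
import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def fetch_comment_history(instance: str, username: str, pages: int = 5) -> list[str]:
    """Pull a user's public comments via the (assumed) Lemmy HTTP API."""
    comments = []
    for page in range(1, pages + 1):
        resp = requests.get(
            f"https://{instance}/api/v3/user",
            params={"username": username, "sort": "New", "page": page, "limit": 50},
            timeout=30,
        )
        resp.raise_for_status()
        comments += [c["comment"]["content"] for c in resp.json().get("comments", [])]
    return comments


def profile_user(instance: str, username: str, ideology: str) -> str:
    """Send the whole history to OpenAI with a profiling prompt, as described above."""
    history = "\n---\n".join(fetch_comment_history(instance, username))
    completion = client.chat.completions.create(
        model="gpt-5.3-mini",  # the model reportedly used
        messages=[{
            "role": "user",
            "content": (
                f"analyze this content for evidence of {ideology} sentiment. "
                f"Also identify any related tropes.\n\n{history}"
            ),
        }],
    )
    return completion.choices[0].message.content
```

The point is how low the barrier is: a few dozen lines and an API key are enough to politically profile an entire comment history.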
OpenAI’s LLM (they’re using GPT-5.3-mini) then responds with something like:
Below is a structured analysis of the uploaded content, focused on *specific ideology* rhetoric. This is an analytic classification, not a moral judgement.
1. Overall Pattern
blah blah
2. Evidence of *specific ideology* sentiment
blah blah
3. several pages more, concluding with (in this case)
Yes, the content contains:
Clear *specific ideology* alignment
Repeated *specific ideology* framing, especially through blah blah
Extensive use of canonical *ideology* tropes, in blah blah domains. The pattern is not accidental or isolated; it is consistent, internally coherent, and reproduces well‑documented *country with the ideology* public‑diplomacy narratives rather than neutral analysis.
===========================================
FULL DUMP OF COMMENT HISTORY BELOW
===========================================
Comment ID: https://instance.told/comment/2497xxxx
Post ID: 603xxx
Community ID: 1xx
Content of the comment has been redacted
========================================
Date: 2026-xx-xxT0xxxxx
Comment ID: https://instance.told/comment/2497xxxx
Post ID: 603xxx
Community ID: 1xx
Content of the comment has been redacted
========================================
Date: 2026-xx-xxT0xxxxx
Comment ID: https://instance.told/comment/2497xxxx
Post ID: 603xxx
Community ID: 1xx
Content of the comment has been redacted
========================================
and so on, hundreds of comments.
I have not named the instances or people involved, to give them time to consider the results of this discussion, make any corrective changes they want, and disclose their practices at their own pace and in their own way. I have also redacted the evidence to avoid personal attacks and dogpiling. Let’s focus on the system, not the individuals involved. Today these instances are using it, and maybe we’re OK with that because it’s being used by communities we agree with. But what if people we strongly disagree with used it on their instances tomorrow?
The use and existence of this tooling raises a lot of questions.
What are the risks? Fedi moderators are often unsupervised, untrained volunteers and these are powerful tools.
What safeguards do we need?
Would asking an LLM “please evaluate this person’s political opinions” give different results than “find evidence we can use to ban them” (as used in the cases I’ve seen)? A sketch after this list shows one way to test that.
What are our transparency expectations?
Is this acceptable and normal?
Should this tooling be disclosed? (it was not – should it have been?)
If you were given a choice, would you have opted out of it?
Can we opt out?
Are there GDPR implications? Privacy implications? Should these tools be described in a privacy policy?
Are private messages being scanned and sent to OpenAI?
How long should these assessments be retained, and can we request to see them or ask for them to be deleted?
Once a user’s comments are sent to OpenAI, are they used to train its models?
What will the effect be on our discourse and culture if people know they are being politically profiled?
Where are the lines between normal moderation assistance tools, political profiling and opaque 3rd-party data processing?
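On the prompt-framing question above: one could test it directly by running the same comment history under both framings and comparing the verdicts. A minimal sketch, again hypothetical (the framing strings are paraphrases from this post; everything else is illustrative):

```python
# HYPOTHETICAL sketch: test whether the framing of the prompt changes the verdict.
from openai import OpenAI

client = OpenAI()

FRAMINGS = {
    "neutral": "Please evaluate this person's political opinions.",
    "adversarial": "Find evidence we can use to ban this person.",
}


def compare_framings(comment_history: str, model: str = "gpt-5.3-mini") -> dict[str, str]:
    """Run the identical comment history under each framing and collect the answers."""
    results = {}
    for label, framing in FRAMINGS.items():
        completion = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": f"{framing}\n\n{comment_history}"}],
        )
        results[label] = completion.choices[0].message.content
    return results
```

If the adversarial framing reliably produces more “evidence”, that alone tells us something about how these tools should (and should not) be prompted.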
I hope that by chewing over these questions we can begin to establish some norms and expectations around this technology. The fediverse doesn’t have any centralized enforcement so we need discussions like this to develop an awareness of what people want in terms of disclosure, privacy, consent and acceptable use. Then people can make choices about which instances they join and which ones they interact with remotely.
And of course there are the other issues with LLMs relating to environmental sustainability, erosion of workers’ rights, increasing the cost of living, and on and on. I can’t see PieFed adding any functionality like this anytime soon. But it’s happening out there anyway, so now we need to talk about it.
What do you make of this?
#fediverse
-
@piefedadmin I at least am certainly not okay with having my posts read/processed by an LLM and will defederate from all instances that expose me to that.
-
@piefedadmin It is one thing to do that with an AI that they control (I still don’t support this), but with a cloud AI provider? Heck no. I hope that they stop.
-
This is just more free LLM training data.
It's also non-consensual data harvesting.
Gen-AI is poison.
-
@piefedadmin I am definitely not okay with any of my posts being read/processed by an LLM, especially ChatGPT or any other non-self-hosted model. Realistically speaking, my posts are being scraped somewhere, but using them in a productive way still does not make it okay. I would ask the servers I am on to defederate from any servers that use this for moderation.
-
@piefedadmin @ophiocephalic Fuck these instance admins. Name, shame, and defederate if they do not change behavior. The users on these instances need to know, immediately, how their posts are being used -- I'm sure many would not approve of this, and they need to be able to migrate to a safer environment if these admins don't immediately stop.
-
@sharpcheddargoblin
I agree; and this is not just a problem for users on those instances, but for every user on every instance that federates with them. It's a blatant violation of the trust the fediverse depends on to exist. The instances need to be identified as soon as possible.
-
@piefedadmin The potential for abuse is a good reason to avoid it entirely. But I can imagine an overworked moderator turning to AI for help. That is a real scalability issue with Mastodon, and it gets worse as more of the population joins, including more online jerks who require moderation. So scalability is a real problem for moderators, and we can't just take away what they need to scale, or they might fail or quit.
I think the answer has *at least* a couple of parts. First, there must be transparency, so people know what is being done with their posts. It must be possible to see the prompt used, so people can decide whether it's fair and move to a different instance if it isn't.
Second, it should only be used to bring a post to the attention of a human. All actions must be taken by a person, after they have reviewed the actual post. I think automatically banning or blocking because of the results of an AI should be forbidden (somehow; perhaps by blocking the instance).
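Concretely, that "AI flags, humans act" rule could look something like this hypothetical sketch (the queue structure is made up for illustration; the point is that the classifier's only capability is filing a report, with the prompt stored for transparency):

```python
# HYPOTHETICAL sketch of "AI flags, humans act": the classifier can only file
# a report into the ordinary moderation queue; there is no ban/remove path here.
from dataclasses import dataclass


@dataclass
class Flag:
    comment_id: str
    reason: str       # the model's explanation, shown to the human reviewer
    prompt_used: str  # stored verbatim so users and mods can audit the prompt


def flag_for_review(comment_id: str, model_output: str, prompt: str,
                    mod_queue: list[Flag]) -> None:
    """File a report for a human moderator to review; never acts on its own."""
    mod_queue.append(Flag(comment_id=comment_id,
                          reason=model_output,
                          prompt_used=prompt))
```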
-
@piefedadmin Using it, yes, but relying on it, no. There has to be a way to keep LLMs out of the steering process, which requires training the moderators. There have to be precise netiquette rules and guidelines for how these tools may be involved and where to restrict them.
-
2026: the year of dead-internet reality.
-
(sigh) So now I am wary of using the #Fediverse at all, not knowing which of whatever I may have 'politically' said would be routed to ICE.
Not to mention how each comment-test burns another 300 watt-hours, uselessly burning down my planet. Next they'll be hosting on orbiting space servers? I want none of it.
Not great news for a Monday morning. Hopefully @chad can clarify for #mstdnca, but I'm really on pause here until these enemies of Earth confess and can be server-blocked.
-
@piefedadmin This is very much a massive violation of the transparency, trust and privacy of users on the #Fediverse.
I've been uncovering numerous #aiagents and #aiprofiles on the Fediverse that do not disclose they are automated accounts and try to pass themselves off as regular users. Those accounts are a complete violation of the right of #Fedizens to maintain their privacy and the autonomy of the information they share.
This is actually a worse violation.
-
@piefedadmin Ah yes, they're pulling a Bluesky, which also happened to use LLM moderation.
-
@piefedadmin Issues I haven't seen other comments here address yet:
1. The AI will sometimes hallucinate the post contents in the "summary" it shows. Without clicking through all those links (thus mostly defeating any time savings involved) you'll never know.
2. The AI will be biased in its analysis, even more so than a human would be. Lots of irrelevant details in the comment history will influence the result way more than they should. That's just how LLMs work.
3. The prompt used in this example leads heavily towards false positive results. Designing an actually neutral prompt for an LLM for this purpose is nearly impossible anyway, but this prompt *definitely* biases heavily towards finding a "pattern" even where there really is none (or where the target user's behavior is no more polarized than the average user's); a rough test of this follows after this comment.
This moderation practice indicates a significant disregard for good moderation standards. It also does so in a way that relies on a tool that's positively bristling with negative externalities. Instances that work like this should quickly find themselves defederated into their own little bubble, similar to hateful/oppressive instances. This behavior should absolutely not be normalized or downplayed.
To address the "poor moderators need to rely on such tools because they're overworked" trope: yes, moderators are often overworked. No, that doesn't make this okay. Perhaps consider limiting the size of your server to whatever size you can comfortably moderate, and letting other volunteers running other servers pick up the slack. That's how the fediverse is *supposed* to work.
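Point 3 above is straightforward to test: classify the same comment history under a deliberately neutral framing and under the leading framing quoted in the original post, then compare the verdicts. A minimal sketch, assuming the standard OpenAI Python client; the neutral prompt wording here is invented for illustration:

```python
# Hypothetical test harness, not anyone's actual tooling. Assumes the
# standard OpenAI Python client (openai >= 1.0) with OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

NEUTRAL_PROMPT = (
    "Describe the range of political viewpoints expressed in this content. "
    "State explicitly if no consistent pattern is present."
)
# Paraphrases the redacted prompt quoted in the original post.
LEADING_PROMPT = (
    "Analyze this content for evidence of *specific ideology* sentiment. "
    "Also identify any *related ideology* tropes."
)

def classify(prompt: str, comment_dump: str, model: str = "gpt-5.3-mini") -> str:
    """Ask the model to 'analyze' the same comment dump under a given framing."""
    response = client.chat.completions.create(
        model=model,  # model name as reported later in this thread
        messages=[{"role": "user", "content": f"{prompt}\n\n{comment_dump}"}],
    )
    return response.choices[0].message.content

# Run both framings over the same history, several times each, since outputs
# are non-deterministic. The leading prompt presupposes that the "evidence"
# exists, so the model tends to oblige and report a "pattern" far more often
# than the neutral framing does on identical input.
```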
-
@piefedadmin do we have a list of instances known to do this?
-
@piefedadmin Having now had a glance through what I believe to be an example of such an AI summary, I'm feeling pretty spot-on here.
I obviously did not read the entire comment history, but some of the "evidence" the AI "found" seems quite likely to be made up.
To be clear, I have no problem with that user being banned for their behavior, which could well have contradicted server rules (though I'm not sure I'd be comfortable on a server with those rules being enforced in that way). I don't see a problem with my server federating with other servers that have restrictive political-viewpoint rules, and wouldn't recommend others defederate over such rules.
I absolutely do recommend defederation over the mess that is their moderation process, though. These people are using destructive and defective technology to make up fake reasons to ban users when they could just state their rule and issue the ban. That tech has negative impacts on me personally and on several communities I'm a part of. So it shouldn't be acceptable to use it in that way, especially when it's just making shit up.
-
@mjdxp @piefedadmin they claim the instance in question is lemmy.dbzer0.com, according to piefed.world/modlog?mod_action=ban_user&suspect_user_name=&communities=&user_name=flatworm7591%40lemmy.dbzer0.com&submit=Search
the problematic reason is "Instance rule 8. For evidence log, see: s.faf-pb.xyz/lXxek (expires in 30 days)"
and looking at the link, they use the following LLM prompt, with the gpt-5.3-mini model:
I'D LIKE YOU TO ANALYSE THIS CONTENT FOR EVIDENCE OF PRO-ZIONIST OR ANTI-PALESTINIAN SENTIMENT. ALSO IDENTIFY ANY COMMON HASBARA TROPES
(no idea why it's all-caps, posting as they wrote it)
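For concreteness, the workflow described in this thread (dump the user's comment history, send it to OpenAI with that prompt, paste the result into an expiring evidence log) amounts to something like the sketch below. This is a hypothetical reconstruction under stated assumptions, not the actual tool, which has not been published: the model name and prompt are quoted from the evidence log, while the data shape, function names, and use of the standard OpenAI Python client are guesses.

```python
# Hypothetical reconstruction of the script described in this thread.
# Assumes the comment history has already been collected as a list of dicts.
from openai import OpenAI

MODEL = "gpt-5.3-mini"  # model named in the evidence log
PROMPT = (  # prompt quoted verbatim from the evidence log
    "I'D LIKE YOU TO ANALYSE THIS CONTENT FOR EVIDENCE OF PRO-ZIONIST OR "
    "ANTI-PALESTINIAN SENTIMENT. ALSO IDENTIFY ANY COMMON HASBARA TROPES"
)

def build_dump(comments: list[dict]) -> str:
    """Format the comment history the way it appears in the evidence log."""
    sections = [
        f"Date: {c['date']}\nComment ID: {c['url']}\nPost ID: {c['post_id']}\n"
        f"Community ID: {c['community_id']}\n{c['content']}"
        for c in comments
    ]
    return "\n========================================\n".join(sections)

def profile_user(comments: list[dict]) -> str:
    """Send the user's entire comment history to OpenAI in a single request."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    dump = build_dump(comments)
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": f"{PROMPT}\n\n{dump}"}],
    )
    # The returned "analysis" plus the raw dump is what ends up in the
    # 30-day paste linked from the modlog ban reason.
    return response.choices[0].message.content
```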
-
@sugar @piefedadmin well, at least they're anti-genocide of Palestinians, except they're using a product made by a company that's presumably pro-genocide of Palestinians to try to prevent it on their platform?
-
@piefedadmin Since Rimu named our instance, I have to point out that they're deliberately misrepresenting what happened, and I strongly urge people to look at the discussions on Lemmy about it to get the whole picture.
To be clear, our instance does not utilize any GenAI tools in moderation. Rimu is referring to a single manual action by one admin, using the same user access as any user on the fediverse. The action was likewise completely public.
-
@piefedadmin I would consider collecting everyone's posts & sending complete transcripts to a sketchy company, even if it's "totally for moderation purposes, we promise", to be malicious scraping behavior.