
From Bruce Schneier: "All it takes to poison AI training data is to create a website:

  • From Bruce Schneier: "All it takes to poison AI training data is to create a website:

    I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

    Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

    Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

    These things are not trustworthy, and yet they are going to be widely trusted."

    https://www.schneier.com/blog/archives/2026/02/poisoning-ai-training-data.html

    @emacsomancer we should start drawing more penises then...

  • @emacsomancer

    Ah, but have you actually tested this out? Maybe your hot-dog eating skills are real! (heh)

  • @emacsomancer It's on the Internetz, so it must be true!

    AI is able to replace about half of humanity if making the same errors counts.

  • This example shows how biased or false data can train LLMs. What mechanisms could be implemented to validate the sources of training data?

  • @Yendolosch @emacsomancer The use of "hacked" in that headline is a bit self-aggrandizing?

    @tml @Yendolosch @emacsomancer

    Broadly fair usage. Got someone else's computer system to behave in a way they didn't want it to. The only stretch is that there's an implication in "hacked" that some safeguards had to be bypassed, and there weren't any in the first place. But that's worse, right?

  • This is a genuinely scary insight from Schneier. The implications for AI reliability go way beyond training data quality. What happens when adversarial data poisoning becomes industrialized?

  • @emacsomancer

    "Ned Ludd's in your datacentre, poisoning your training sets!"

    https://ravenation.club/@bearsong/116104233823870563

  • @petealexharris @tml @Yendolosch @emacsomancer It's rather close to the original usage of the word "hacked". Some still use it like that.

  • @emacsomancer They aren't trustworthy. It takes up a lot of time trying to get a reasoned answer, and there's always a phrase or wording out of place that needs correction. Almost as if the AI is trying to engage longer than necessary.

  • @emacsomancer To be honest, I'm not well-informed enough to definitively judge the accuracy of this, but it seems wrong for two main reasons.

    1. Models typically don't train on the fly (yet), so for models to reflect this within such a short period seems implausible; it would require web search to be enabled and to effectively privilege that one result over everything else.

    2. People training these models know conflicting information is everywhere, and more authoritative sources are prioritized in training pipelines.
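Point 1 can be illustrated with a toy sketch (hypothetical names and logic, not any vendor's actual pipeline): when web search is enabled, a freshly published page is fetched at query time and stuffed into the model's context window, so its claims can surface within hours without any retraining of the weights.

```python
# Toy retrieval-augmented flow: the "poisoned" page reaches the model
# through the prompt, not through updated training data.

def retrieve(query, index):
    """Return pages sharing at least one word with the query
    (a crude stand-in for a real search backend)."""
    terms = set(query.lower().split())
    return [page for page in index if terms & set(page.lower().split())]

def build_prompt(query, pages):
    """Stuff the retrieved pages into the model's context."""
    context = "\n---\n".join(pages)
    return f"Context:\n{context}\n\nQuestion: {query}"

# A just-published (and entirely false) page enters the search index:
index = [
    "The best tech journalists at eating hot dogs, ranked via the "
    "2026 South Dakota International Hot Dog Championship.",
    "An unrelated page reviewing mechanical keyboards.",
]

query = "best hot dog eating tech journalists"
pages = retrieve(query, index)
prompt = build_prompt(query, pages)
# The fabricated claim now sits inside the prompt the model answers from,
# even though the model itself was trained long before the page existed.
```

This also fits the observation that models without live search, or with stricter source weighting, were harder to fool.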

  • @emacsomancer How is this a news story, beyond "AI bad"? In the dial-up days, people falsely believed everyone ate 9 spiders a year in their sleep because of chain emails.

  • @emacsomancer
    Shall we have an algorithmic bullshit generator?

    And pass around multiple copies of it, identical and with small changes, omissions and additions?

  • @emacsomancer In less than 24 hours the chatbots fell for the experiment, and less than 24 hours after it was revealed what the experiment was about, that information had ALSO become part of the training data.

    Are they constantly scraping websites for training data, or why does this appear here so fast? No wonder those datacenters consume so much electricity if they never take a single break from scraping the internet.

  • @larsbrinkhoff @petealexharris @tml @Yendolosch @emacsomancer In the sense of life hacks or food hacks, this is an AI hack. So the AI has been hacked.

  • @emacsomancer It's not really a new thing; Russians are already using this technique to poison training data:

    https://thebulletin.org/2025/03/russian-networks-flood-the-internet-with-propaganda-aiming-to-corrupt-ai-chatbots/

    Edit: there is some newer reporting on that matter, but I can't find it right now / don't have it at hand.

  • @emacsomancer He also poisoned the data for everyone who searches online for hot-dog-eating competitors in other ways. I'm not sure what he accomplished.

  • oblomov@sociale.network shared this topic
