
Piero Bosio Personal Social Web Site (Fediverse)

A social forum federated with the rest of the world. Instances don't matter, people do.

At this point, open-source development itself is being DDoS'ed by LLMs and their human users.

Uncategorized

The latest eight messages received from the Federation
Suggested posts
  • 0 Votes
    42 Posts
    3 Views
    @MugsysRapSheet @Lazarou GIGOWALOE (Garbage In, Garbage Out, Wasting A Lot of Energy)
  • 0 Votes
    1 Post
    7 Views
    I've just unsubscribed from the Mozilla newsletter. That 2025/26 AI Future trash website and its messaging is disgusting and patronising to all the people who have supported them over the years and do not wish to have anything to do with gen AI. #mozilla #firefox #genAI #AI #AISlop #slopzilla
  • 0 Votes
    1 Post
    8 Views
    Language models cannot reliably distinguish belief from knowledge and fact

    Abstract: «As language models (LMs) increasingly infiltrate into high-stakes domains such as law, medicine, journalism and science, their ability to distinguish belief from knowledge, and fact from fiction, becomes imperative. Failure to make such distinctions can mislead diagnoses, distort judicial judgments and amplify misinformation. Here we evaluate 24 cutting-edge LMs using a new KaBLE benchmark of 13,000 questions across 13 epistemic tasks. Our findings reveal crucial limitations. In particular, all models tested systematically fail to acknowledge first-person false beliefs, with GPT-4o dropping from 98.2% to 64.4% accuracy and DeepSeek R1 plummeting from over 90% to 14.4%. Further, models process third-person false beliefs with substantially higher accuracy (95% for newer models; 79% for older ones) than first-person false beliefs (62.6% for newer; 52.5% for older), revealing a troubling attribution bias. We also find that, while recent models show competence in recursive knowledge tasks, they still rely on inconsistent reasoning strategies, suggesting superficial pattern matching rather than robust epistemic understanding. Most models lack a robust understanding of the factive nature of knowledge, that knowledge inherently requires truth. These limitations necessitate urgent improvements before deploying LMs in high-stakes domains where epistemic distinctions are crucial.»

    #ai #LLMs #epistemology #knowledge
    https://www.nature.com/articles/s42256-025-01113-8
  • 0 Votes
    2 Posts
    17 Views
    @ai6yr This is why AI doesn't work. It forgot to recommend wine pairings.