Piero Bosio Personal Social Web Site (Fediverso)

A social forum federated with the rest of the world. Instances don't matter; people do.
Fi 🏳️‍⚧️

@munin@infosec.exchange
About

Posts: 38 · Topics: 15 · Shares: 0 · Groups: 0 · Followers: 0 · Following: 0

Posts


  • These folks are helping evacuate trans people from unsafe situations.
munin@infosec.exchange

    These folks are helping evacuate trans people from unsafe situations.

    I would appreciate if you could offer them support in doing so.

    https://tcpipeline.org/


  • Question.
munin@infosec.exchange

    @mcc

    oh for fuck's sake


  • Goddamn "private mention" is hard to use safely.
munin@infosec.exchange

    @adamshostack @alice

    This is also why I have a signal username in my profile; I expect to be contacted for anything actually sensitive on that channel.


  • If your business cannot handle a 40-or-fewer hour workweek for the workforce,
munin@infosec.exchange

    If your business cannot handle a 40-or-fewer hour workweek for the workforce,

    then that business is inadequately staffed,

    and it will exhibit death-spiral behaviors.

    https://www.theguardian.com/technology/ng-interactive/2026/feb/17/ai-startups-work-culture-san-francisco


  • Goddamn "private mention" is hard to use safely.
munin@infosec.exchange

    @adamshostack @alice

    I don't use it for anything nontrivial; any conversations with anything I'm not OK ending up in public go to Signal.

    Not only does this create an actually end-to-end assured container, but it changes the context from Masto's assumed-public to Signal's assumed-private, thus allowing for a separation of content cognitively as well as logistically.


  • Poll: Would you vote for a Satanist for elected office?
munin@infosec.exchange

    @mcc

    thank you for anticipating the need to differentiate


  • ME: I do not need "AI code assistants" because I have a keyboard with a functioning "control" key.
munin@infosec.exchange

    @mcc

    I see now why the adoption of these agents is out of control


  • hey everyone remember therac-25
munin@infosec.exchange

    hey everyone remember therac-25

    https://www.reuters.com/investigations/ai-enters-operating-room-reports-arise-botched-surgeries-misidentified-body-2026-02-09/


  • Reminding y'all that:
munin@infosec.exchange

    Reminding y'all that:

    Discord is not documentation.

    ID laws disproportionately harm minorities.

    Leaking current ID details constitutes harm.

    Data is radioactive.

    Complying proactively with fascism is perpetration of harm.

    "Legal" and "right" are two very different qualities.


  • Welp, it's all kicking off this week in AI!
munin@infosec.exchange

    @cstross

the rocket motor's burned out and the vehicle is coasting; we'll have to wait a bit before we see how hard it comes down, I expect.


  • Friend sent me a link to a document from the government's Epstein archive release.
munin@infosec.exchange

    @mcc

    there's a particularly horrific adjunct to this that I've been sitting with for a while,

    that such victims are effectively systemically disenfranchised from being able to even -find out- the depth of how they were wronged until potentially years later,

    because any discussion of sexuality or consent is - especially by those victimizers - gated as an "adult" matter and deliberately kept away from those of the age they choose to victimize.


  • wow folks really hate stale bots, huh
munin@infosec.exchange

    @kouhai

    having your contributions dismissed without human intervention is kind of a slap in the face, yes.


  • LLMs do not "think"
munin@infosec.exchange

    I give praise to people whose behaviors I want to reinforce; that is the purpose of praise.

    Yes, that's an extremely cold-blooded way of putting it, but that framing should indicate why the LLM preemptive praise model is fucking ass-backwards and creates perverse thought patterns in the users.


  • LLMs do not "think"
munin@infosec.exchange

    Had a conversation with a coworker today who was expressing frustration over having to deal with, as they put it, "gassing up" MS copilot to make it give appropriate results.

    They expressed confusion over why this was necessary.

    The reason - in part - is because the training corpus used to build MS copilot's database of correlations included github, stackoverflow, and other coding fora.

    Adding words that are strongly associated with good-quality results - words that would appear close to the well-written code in the corpus - will be more likely to retrieve the better-quality submissions; e.g. "this is a great solution; you're a real expert!" type comments.

    Frankly, I find this to be intensely dehumanizing and unpleasant to force people into giving out preemptive praise to a fucking machine in order to retrieve, frankly, barely adequate scripts out of the database, but then I remember having to actually learn shit in order to do it.

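The retrieval effect described above can be sketched as a toy. Everything here is my own invention for illustration (the documents, the query strings, the scoring function); this is not how MS copilot actually works internally, but with even a crude bag-of-words overlap score, a query padded with praise phrases overlaps more with answers that sat next to praise comments in the corpus.

```python
docs = {
    # imagined forum answer that attracted praise comments in the corpus
    "good": "use a parameterized query here this is a great solution you're a real expert",
    # imagined low-quality answer with no praise around it
    "bad": "just concatenate the string into the sql it works for me",
}

def score(query, doc):
    """Crude bag-of-words overlap between a query and a document."""
    return len(set(query.split()) & set(doc.split()))

plain = "sql query string"
gassed = plain + " this is a great solution you're a real expert"

for query in (plain, gassed):
    best = max(docs, key=lambda name: score(query, docs[name]))
    print(repr(query), "->", best)
# the plain query lands on the low-quality answer; the "gassed up" one
# overlaps the praise words and pulls back the better answer
```

Real systems use learned correlations rather than literal word overlap, but the direction of the effect is the same: praise tokens pull the lookup toward the regions of the corpus where praise appeared.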

  • LLMs do not "think"
munin@infosec.exchange

    @larsmb

    Crappy databases are still databases.

    Elasticsearch also usually exists in a lossy state; their description of it as "eventual consistency" is of like fashion - a term to excuse the lack of determinacy in the query.


  • LLMs do not "think"
munin@infosec.exchange

    @davidgerard

    I noted "phrases" specifically due to the way in which larger-scope correlations are made.

    It's also worth noting that computer code is best understood as 'phrases' in this context; there's a number of phrase-equivalent patterns that exist within code repositories that are used in like fashion.


  • LLMs do not "think"
munin@infosec.exchange

    @pikhq

    I think part of this is that the LLM operators have worked out how to filter the text chain generation through a 'style' filter that adjusts it into a consistent fist.

    No doubt you've noticed that the fist from openai's product is different than the one from anthropic or google or microsoft; having that consistency of 'style' is something that gives many people the perception of there being a singular authorial voice behind the words.

    (However, given my neurodivergency, it hits square in the uncanny valley for me, and resembles a specific distressing situation I've encountered in the past, which means that I find it fucking unreadable)


  • LLMs do not "think"
munin@infosec.exchange

    None of this is magic, and I'm fucking sick and tired of people treating it as such.

    It's a very complicated system, and yes, it's impressive that they've managed to kludge this into providing something that fools the rubes.

    But my fucking gods - fuck Feynman for being a misogynist creep, but his point about being able to explain things in plain language to non-experts does actually fucking hold up; if you actually know what you're doing, you should be able to eliminate all the godsdamned jargon and tell someone what the fuck is happening.

    And by inference, if you cannot explain what you're doing in plain language, you definitely should not the fuck be releasing it in public, let alone charging actual money for this.


  • LLMs do not "think"
munin@infosec.exchange

    LLMs do not "think"

    The LLM instantiation methodology* correlates patterns in the data that the developers provide to build a database** of linkages between collections of words and phrases*** that appear in that corpus.

    The way in which this database is used is to inform a probabilistic selector process by seeding it with a set of probabilities**** associated with a given word or phrase; that set of probabilities has pointers to related words or phrases.

    If a given word or phrase is found in close proximity in the original data consistently, then those probabilities will be higher.

    When a query***** is made to this database, a randomization process is used to drop certain parts****** of the query being sent into the lookup process. The remainder is divided into segments† and passed into the database for query.

    So.

With all this in mind, it should be -screamingly obvious- why this story, of how it's entirely feasible to get an LLM to rederive copyrighted works out of the database that was seeded with those works, happens: https://futurism.com/artificial-intelligence/ai-industry-recall-copyright-books

    * I am deliberately not using the word 'training'. You can train dogs; you can train employees; you can train chimpanzees; what you do to an LLM is not training - it is building a database to feed into another process.

    ** I am deliberately not using the word "model" here, so as to restate the process in plain language absent the jargon these dipshits insist on using to obfuscate their techniques.

    *** "Tokens" is another jargon word here.

    **** "weights" is less objectionable as jargon, given it's used for a number of things with this approximate conceptual shape, but it's fucking annoying to me in this context.

    ***** "prompt" is their fucking bullshit term for a natural-language database query

    ****** "zero weighting" is jargon for "we drop it on the floor" - this is why I keep referring to people doing "prompt engineering" as playing games instead of doing actual security; if the fucking thing drops random parts of your shit on the ground, then inherently you have no way to enforce a policy that is subject to that process.

    † "tokenized", see ***

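The plain-language pipeline above can be sketched as a toy in a few lines. Everything here is illustrative and my own construction (a real system's database is built over a vastly larger corpus and the selector is far more elaborate), but the moving parts are the same: co-occurrence linkages, a probabilistic selector seeded by them, and random dropping of query parts.

```python
import random
from collections import defaultdict

# a tiny corpus standing in for the developers' data
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# build the database: word -> {following word: how often it appeared next};
# words that consistently appear close together get higher counts
links = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    links[prev][nxt] += 1

def next_word(word, rng):
    """Probabilistic selector: pick a follower, weighted by co-occurrence."""
    followers = links[word]
    choices = list(followers)
    return rng.choices(choices, weights=[followers[w] for w in choices])[0]

def lossy_query(words, rng, drop_p=0.3):
    """'Zero weighting' sketch: random parts of the query get dropped on the
    floor before lookup - which is why a policy expressed in the query text
    cannot be reliably enforced."""
    return [w for w in words if rng.random() > drop_p]

rng = random.Random(0)
out = ["the"]
for _ in range(6):
    out.append(next_word(out[-1], rng))
print(" ".join(out))  # a chain of corpus phrases, not "thought"
```

Note that nothing in the sketch stores or reasons about meaning; the output is whatever chains of corpus fragments the counts make probable, which is also why seeding the database with a copyrighted work makes rederiving that work probable.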

  • my husband's new Windows 11 computer suddenly stopped outputting audio to the 3.5mm headphones.
munin@infosec.exchange

    @0xabad1dea

    if they make you install a whole-ass other application to do troubleshooting, then it best be parsing logs for you to identify precisely where the issue is.

    this is where I would go into a rant about having safe defaults and guiding user experience if I were more caffeinated.
