Piero Bosio Personal Social Web Site — Fediverso

A social forum federated with the rest of the world. Instances don't matter, people do.
Epic Null (@epic_null@infosec.exchange)
  • Spent many (many!) hours pulling legal explanations and apologies from lawyers who were caught using AI that hallucinated, in which they explained to a judge why they used it.
    Epic Null

    @winterschon @jasonkoebler @neil

    and we never hear about the millions of success stories where people used LLMs for the same reasons, without issues, validating factual information, and were as a result more productive.

    We never hear about the word processors these people use, for better or ill, for one very simple reason:

    Good technology tends to either not fail, or fail safe.

    A pen failing is obvious. You can see the ink stop on the page. Very very very rarely does it spill on your work. So rare in fact, that people don't think about it.

    A word processor is similar. While failures are more common, it's still extremely rare to lose a document for reasons other than "I forgot to save" (and even then, there are failsafes that can recover your work.)

    LLMs REGULARLY output things that are not true. You need not be an expert to get them to output something like the wrong number of letters in a word.
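The contrast with deterministic software is stark: the letter count an LLM can get wrong is a one-line, always-correct operation in any programming language. A minimal Python sketch ("strawberry" is just the commonly cited example, not something from this thread):

```python
def count_letter(word: str, letter: str) -> int:
    """Deterministically count occurrences of a letter in a word."""
    return word.lower().count(letter.lower())


print(count_letter("strawberry", "r"))  # prints 3, every single time
```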

    it's fairly easy to point out people who don't understand the tech they use, because they tend to fail, and the media calls those people failures instead of overworked underpaid and under-trained. the media loves blathering on about negative news and failures, not successful people who move through life eloquently.

    In the tech world, if you are offering something to an end user, IT. MUST. NOT. FAIL. DANGEROUS.

    It becomes not about the user, but the tech. Your phone? Must never explode on you, even when it gets hot and banged around. You drop it in water? It doesn't electrocute you.

    You ever notice those extra confirmation windows around destructive actions like deleting a file, or how every system has a trash bin? Yeah, that's just required to meet basic software standards.
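That confirm-then-trash pattern is simple enough to sketch. A hypothetical illustration in Python (the function names and trash layout are invented for this sketch, not any particular OS's implementation): the destructive action defaults to doing nothing, and even when confirmed, the file is moved somewhere recoverable rather than destroyed.

```python
import os
import shutil
import time


def soft_delete(path: str, trash_dir: str) -> str:
    """Fail-safe delete: move the file into a trash directory instead of
    removing it outright, so the action stays recoverable."""
    os.makedirs(trash_dir, exist_ok=True)
    # A timestamp prefix avoids name collisions inside the trash.
    dest = os.path.join(trash_dir, f"{int(time.time())}-{os.path.basename(path)}")
    shutil.move(path, dest)
    return dest  # recovery is just moving the file back


def confirm_then_delete(path: str, trash_dir: str, answer: str) -> bool:
    """Gate the destructive action behind an explicit confirmation.
    Anything other than 'yes' takes the safe path: do nothing."""
    if answer.strip().lower() != "yes":
        return False
    soft_delete(path, trash_dir)
    return True
```

Note the two layers of fail-safe: the default answer does nothing, and a confirmed delete is still reversible. That is what "fail safe" means in practice.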

    maybe stop listening to the corporate interests pushing this narrative — they are the core organizations who have the most to lose in the current economic structure.

    Yes but no.

    AI is being pushed entirely by corporate interests. If that were not the case, then the introduction of the technology would be on a FAR more consent-driven basis. There would be far fewer surprise applications of it, and far more options to configure features to use a preferred model.

    AI helps people when those people know how to use it as the set of advanced tools which it offers.

    Would you support a company that makes steak knives with handles optimised for babies? I would not.

    Taken in the best of light, tools for experts are always designed in ways that make the tool look... undesirable to those who should not be using them.

    These LLMs are not designed to encourage expert use, but are instead designed to be inviting to a user.

    It's like adjusting a car to be comfortably driven by a six year old, advertising the car to parents of young children, being surprised when children are terrible drivers, and then... blaming the children for being terrible drivers.

    Yeah. Eliminate the baby car!

    especially disabled people (but we never hear about that being important).

    Correct.

    Anything that theoretically allows people to do things could theoretically be used by disabled people to do things.

    But.

    This was not made for disabled people. It is not advertised to them. It was not designed with reasonable safeguards. It fails dangerous instead of safe. It is designed to encourage people to use it at terrible times.

    It is not safe for ANYONE to use, let alone the more vulnerable individuals in our society.

    And I fucking mean that.
