@jacqueline 100%
-
i feel like i don't have the words to properly describe how it feels to see people whose opinions i respected and valued slowly fall into ai psychosis. it's so slow and so subtle at first. "i'm just experimenting! i'm not an ai booster!"
then wait a few months, and they start explaining with the usual flawed, incoherent reasoning how actually it's all very interesting and thought-provoking, whilst pointing at an LLM that is so obviously just a reflection of their own ego.
I read this and I understand.
Deeply.
I have used AI myself and I write tools for automated language translation (PL/SQL -> Java) for a paid consulting job.
During that period I learned a lot about LLM (im)possibilities.
I personally think this is a dangerous path we are exploring, for all the obvious reasons: energy overconsumption, loss of truth/trust, psychosis, loss of problem-solving skills, ... 1/..
-
@jacqueline i feel this in my bones
-
@jacqueline
I also see why some people like this technology.
A team of 3 people (and a power plant) ported millions of lines of legacy code within weeks from one EOL language to another. A task that I had deemed impossible, before we did it.
Am I FOR AI? Not really.
Am I AGAINST AI? Depends. It's similar to asking myself whether I am for computers or not.
They have benefits.
They also made life worse for a lot of creatures on this planet. On the other hand, they CAN be useful tools.
2/..
-
Some eco-anarchists in the 1960s said we should abolish computers and get back to nature.
Maybe they were right.
Otoh, you and I communicate because of computers.
My mom gets a CT scan because of computers.
The cancer is found because of machine learning trained on those scans.
The treatment is developed by analyzing millions of DNA/RNA samples. The same machines guide bombs and were used to manage Auschwitz.
It's really not black and white to me, but I see what you see.
3/3
-
@jacqueline our attention is already hijacked, and it’s costing us dearly. But the dependence on LLMs is also hijacking our attachments, our deep need to be understood through human connection. Here’s a machine that will tell you exactly what you want to hear; it’s enabling psychosis and isolating its users.
@jacqueline it's an addictive relationship that uses abusive tactics, this is a feature for the makers, not a bug
-
And seeing people that you love/respect doing things that you do not agree with hurts a lot.
Especially when you see how the sickness creeps in. I have been there too.
Like these old friends of mine who turned "conservative"...
Brrrrr.
-
@jacqueline this is very sad and disappointing. I saw it unfold exactly like that in one of the dudes I respected most and least expected to embrace this crap. He feels like "ai is like a beginner that I can teach, and it will write code faster than I can do it, and most of the time it's actually good".
This feels like laziness to move your fingers over the keyboard. Like, wtf. If you hate boilerplate that much, save some templates, write or find useful libs, IDK, but do it yourself, ffs.
-
@f4grx @jacqueline I can think of many criticisms of LLMs, but "laziness" seems like a weird thing to criticize. Computers were invented to do work for humans. I have spent a twenty year career making computers do what I tell them to. It has always been a goal to maximize the work vs effort ratio.
-
@jacqueline if you lose respect because someone is excited about something, did you respect them in the first place?
I can relate to some degree, except that I think I'm less for-or-against in this. I think the tech is mildly useful and I don't really understand why people are excited about it, but that's probably also the reason why some are so hyped about it: if it does snap into place for you, why wouldn't you be excited about it?
Not wading into the murky environmental and infringement arguments.
@dynom you have not understood what i’ve said
-
@jacqueline Well you did mention you couldn't describe it properly, so that was a risk I considered while replying.
Can you elaborate on the "usual flawed, incoherent reasoning"?
-
@jacqueline Hmmm....... @ErikJonker
-
@f4grx @jacqueline > If you hate boilerplate that much, save some templates, write or find useful libs, IDK, but do it yourself, ffs.
Use actually good libraries and languages that don't generate spurious boilerplate and that provide tools to mitigate what truly is needed.
RIP >50% of the Java ecosystem (assuming everyone uses an IDE leads to an outsized tolerance of boilerplate that is self-reinforcing).
> This feels like laziness to move your fingers over the keyboard.
Too lazy and disrespectful to bother writing something remotely reliable. "Just slather it in slop, no one will look at it."
-
@dynom @jacqueline Their opinions were respected. Then they joined a cult and started to go completely off the rails and into constant boundless delusion.
The individual can still be respected, one can still feel compassion for them or like them. But their opinions on technical matters or anything that requires being somewhat in touch with reality cannot be, because their opinions no longer relate to reality.
They bought the hype and actively refuse to recognize any criticism that refutes cult doctrine.
They need help, and such deprogramming isn't trivial.
-
@f4grx @jacqueline
I'm starting to believe the other way around: those people don't want to code, they want to check boxes. They don't care about the code, about languages, about the actual tech. They want a result. They want the attention of being told they are a genius. They are not coders, they are managers/bosses. There's nothing good to expect from them.
-
@rakoo @jacqueline I fully agree.
But also, the person I see using more and more slop has always been passionate about technology and cares deeply about explanations and detailed science. I don't understand it. When you care that much about the depth of things, how can you accept and trust a tool that does everything badly and magically?
-
@kthy @jacqueline okay good for you.
-
@AngelaScholder @jacqueline ... a strange way to describe people experimenting with a new and groundbreaking technology. Of course those people share their experiences; that is in a way promoting, but that goes for any IT that people are enthusiastic about. And it's complete nonsense to call an LLM a reflection of my own ego if I use it in a RAG configuration for analysing large numbers of documents...
-
@AngelaScholder @jacqueline ...playing and experimenting is a good way to learn about (new) technology. It is also very human: the way we develop, the way we find out what works and what does not.
-
@ErikJonker @jacqueline Well, consider the ways I've seen these sites reacting to people: even just praising the writing and thoughts of people about articles they uploaded/fed, where it later came out the AI somehow couldn't read the article and just hallucinated superlatives.
Basically, an AI working like that is only geared to work on people's egos.
That in the end will result in the AI mirroring the ego of the 'user' (user, or abused, is an interesting discussion).
And, as >2
-
@ErikJonker @jacqueline 2) People often are very easily influenced; they will just as much become like their chatbot as the chatbot reflects them.
The worst outcome of that is that the people basically become zombies of their chatbot.
Obviously we are all so strong that this will never happen to us...