This blogpost makes an astoundingly good case about LLMs I hadn't considered before. The collapse of public forums (like Stack Overflow) for programming answers coincides directly with the rise of programmers asking for answers from chatbots *directly*. Those debugging sessions become part of a training set that now *only private LLM corporations have access to*. This is something that "open models" seemingly can't easily fight. https://michiel.buddingh.eu/enclosure-feedback-loop
-
@cwebber Sure didn't help that Stack Overflow was killing itself too.
@mayintoronto @cwebber
True, that decline was on its way. But at least the information was publicly available.
This is also why I was ranting about it two years ago, when people were encouraging others to delete their Stack Overflow contributions (as if that meant the AI companies wouldn't have access to them; SO made a deal for direct sharing rather than scraping). At this point you're just denying it to other human beings. Build a better web: post on personal blogs and community forums, don't lock up knowledge in opaque systems!
-
@cwebber Extra painful: bigger tech companies can afford to pay for plans that limit use of context data to train future versions of an LLM service's models so THEIR work is "protected" while their employees consume the commons. But smaller companies and individual users will be giving up their data.
-
@cwebber enclosure strikes again
-
@cwebber this article has some good predictions about the knowledge silos LLMs might be able to create, but it does not address the fact that the business model is not profitable. When the price hikes hit to make LLMs generate profits, people will have to balance the price of the subscription against the real value provided... just like any purchase. We might lose a few years of knowledge down the LLM silos before the collapse, but personally I think that's OK; we have plenty of well-documented 1/2
@cwebber tech from 2010 to 2023 that will still be fully usable. People choosing to use said tech will decrease costs and might fast-forward the downfall of the LLMs-everywhere trend we are currently in.
-
@cwebber I keep repeating this in such contexts, apologies if I sound like a broken record: AI is a fascist project.
The purpose isn't merely enclosure of the commons. Making public stuff private is more of a means to an end.
There are centuries of historical precedent showing that when a state has natural resources, it needs fewer people to extract wealth from them, and so to pay for what keeps rulers in power.
If a state has fewer resources, it has to rely on a large, healthy population's...
-
@cwebber ... labour. A population that needs to be educated and mobile enough to fulfil its tasks. Such a population tends to demand more say in the affairs of state.
So natural resources lead to tyrannies, and lack thereof to democracies.
Privatisation of knowledge is a way of creating artificial resources to extract with fewer labourers. Plus, the more that extraction is automated, the smaller the population a ruler needs - or the more precarious their existence.
That is the goal.
-
@cwebber
One thing I've noticed in the same vein: I talk much less code with colleagues now and have far fewer interactions with them for working through problems, which limits my exposure to alternate problem solving. Whenever I want to discuss a problem, it's more often than not boiled down to some LLM answer, meaning I might as well 'cut out' the middle and ask the LLM itself, if all I get are LLM answers anyway.
That truly sucks.
"Have you asked Claude/Copilot/ChatGPT?"...
-
@cwebber of course guys - it was never about the LLM, it was about crowd-sourcing intelligence at an epic scale. Every piece of code a developer writes and fixes becomes training data. Same with every conversation. I'm surprised people don't see the danger in having one single overlord and gatekeeper of all information in the world. It's crazy.
People seem to have forgotten what the real meanings of democracy and multilateralism are.
-
@cwebber this isn’t really new. For all the things on StackOverflow there was a huge domain of knowledge that was just not on there.
For most of my corporate developer life the knowledge/bug fixes would not be found on public forums but in internal collective knowledge, documentation or simply knowing a person in the same field. Most of this was not public domain.
The biggest issue now is that those firms commit their group knowledge to LLMs and we will not get it back.
-
@cwebber you calling it an 'astoundingly good case' makes me feel insightful in a way no LLM has been able to accomplish. I'm going to be insufferably smug for the rest of the day :)
-
@cwebber Already the forums for VoIP software and embedded stuff (Arduino etc.) are fully enshittified (they were toxic enough pre-AI), and folk have simply stopped contributing there (I think this happened just *before* LLMs became popular, so the quality of whatever does go into the training sets isn't going to be much good).
I suspect another factor is that when people are getting paid for their work *and* depending on their employers upselling SaaS or other commercial services, they are less inclined to share stuff with the competition (I had to figure out my PJSIP trunk and how to secure it by myself; most of my findings are tooted here on Fedi, as I'm not even sure where else to put them).
@vfrmedia Make a small static-site blog to share your experience? Put something like Anubis in front to stop/slow scraping.
-
-
@futzle @cwebber
When newbies encounter toxicity for asking their question on a public forum, you cannot really blame them for turning to an LLM.
https://youtu.be/N7v0yvdkIHg
-
@cwebber I agree. There is less public information available for anyone's future training, and more AI-written software in the open space makes the remaining code look increasingly similar. I expect an input standstill until someone invents a non-LLM AI for coding.
-
@cwebber Well, people went to StackOverflow with a question and looked forward to answers based on the experience of others. While one can still ask an LLM and give a rubber-duck training session for its provider, I still fail to see the influx of answers based on experience.
-
@cwebber
Wait, if AI caused the collapse of wrong-answers-only sites like Stack Overflow, doesn't that mean it has positive uses?
-
@cwebber but also, as uninviting as the Stack Overflow culture may have been, the moderators were there to try to get people to ask better questions. I doubt LLMs will handle things like X/Y-problem issues, so to me it seems things will get worse even for the people able/willing to pay.
-
@cwebber Maybe it's because I'm a bit long in the tooth. In 35 years of programming I have never hesitated to turn to others, including online forums.
But I will never turn to LLMs. LLMs are machines that regurgitate answers out of huge amounts of data. What LLMs lack is understanding. So they cannot justify their answers, pick the best answer for you out of their data, or meaningfully engage with you to help you adapt answers to your needs. You know... like humans can.
-
@bornach @futzle @cwebber not just newbies. I'm 35 years into this work, so while this is not "my first rodeo", I regularly have to work on something completely new to me. What a lot of these pricks don't understand is that many of us don't have the time to deep-dive into their pet platform, framework, tool, or language, and we don't know how to ask the "right" questions. But still, they're at least human, and with a little patience you might just tease the right answer out of them.