@glyph Did you quote post something?
-
@glyph I gotta be better about speaking up when I'm hanging out in your chat. My lack of relevant insights has, thus far, made it feel like I'd be lobbing a distraction at you
@wuest tbh hearing chat *trying* to understand what I'm doing is really motivating. Best case scenario, sometimes the explaining prompts me to realize that I didn't actually know what I was doing as well as I thought I did, and change strategy. In the average case the rest of the chat starts to understand it a bit better and can lob in more useful suggestions. Even the worst case is just a more entertaining stream
-
@glyph Also, learning implies the ability to generalize? LLMs cannot do that, so they have to be trained on O(the entire internet) to make sure there's something to plagiarize for every situation. That's not learning, it's leaning over another kid's shoulder and copying all the answers.
-
> “learning” is inherently mutual
@glyph as a part-time teacher I often think about something a teacher of mine once said: I love teaching because I learn so much.
-
@glyph Also, learning implies the ability to generalize? LLMs cannot do that, so they have to be trained on O(the entire internet) to make sure there's something to plagiarize for every situation. That's not learning, it's leaning over another kid's shoulder and copying all the answers.
@xgranade I tend to agree but this veers directly into that frustrating philosophical territory. Worse, actually, because this invites a conversation about what exactly "generalizing" is, and whether the (objectively impressive) pseudosemantic word extrusion that they do counts, and then we have to talk about benchmarks and "quals" and I'm already tired
-
> “learning” is inherently mutual
@glyph as a part-time teacher I often think about something a teacher of mine once said: I love teaching because I learn so much.
@justvanrossum as I said! a cliché! I've done like… probably a few hundred hours of teaching in my entire life, but even I know this!
-
@glyph the story in four acts we are about to see (just one example, but could be Wikipedia, or anything else as well)
-
@glyph All great points!
In case you are interested, here is my philosophical take (also arguing that AI training is different from teaching, though in a different way).
https://listed.to/@24601/61247/public-domain-business-models-and-teaching-ai
-
@justvanrossum as I said! a cliché! I've done like… probably a few hundred hours of teaching in my entire life, but even I know this!
@glyph I missed the post where you did say that: I now see my response was rather redundant…
-
@glyph the story in four acts we are about to see (just one example, but could be Wikipedia, or anything else as well)
@tymwol @glyph what I'm more concerned about from the Wikipedia side of things is sources that used to be reliable becoming untrustworthy LLM-generated garbage (fake citations are popping up in peer-reviewed papers a lot more nowadays, for example). There are some news sites that have decided, meh, who needs human writers. Stuff like that.
-
@glyph the story in four acts we are about to see (just one example, but could be Wikipedia, or anything else as well)
Call it a conspiracy, but I'm still convinced that #StackOverflow is dying primarily because the marketing for CodeGen AI demotivated a lot of people to the point that they do not even want to program anything anymore.
I've seen a lot of (what could have been) juniors avoid doing anything with code because AI will be able to do it in 5 years anyway, so there is no point in learning it. And then came the large layoffs, so the remaining ones are probably too busy to write answers there.
-
Call it a conspiracy, but I'm still convinced that #StackOverflow is dying primarily because the marketing for CodeGen AI demotivated a lot of people to the point that they do not even want to program anything anymore.
I've seen a lot of (what could have been) juniors avoid doing anything with code because AI will be able to do it in 5 years anyway, so there is no point in learning it. And then came the large layoffs, so the remaining ones are probably too busy to write answers there.
@agowa338 @glyph Well, yeah, kind of. Also, the StackExchange company itself pushed AI onto SO, so they helped it happen. But... the same argument can be made about any other service. For example, if we stop using Wikipedia and switch to LLMs, the same thing will happen. If we stop using our brains and switch to LLMs, we will end up brain dead.
-
@agowa338 @glyph Well, yeah, kind of. Also, the StackExchange company itself pushed AI onto SO, so they helped it happen. But... the same argument can be made about any other service. For example, if we stop using Wikipedia and switch to LLMs, the same thing will happen. If we stop using our brains and switch to LLMs, we will end up brain dead.
You missed my point. My point was that people aren't moving towards LLMs but are instead stopping coding altogether and doing other things...
If people were actually moving towards LLMs, you'd see a lot of threads with people not understanding their own code and having to figure out the complex issues that LLM-generated code causes (aka you'd see an uptick in people having to troubleshoot their own AI-generated code hallucinations)...
-
@glyph I missed the post where you did say that: I now see my response was rather redundant…
@justvanrossum no worries, just enthusiastically agreeing :)
-
You missed my point. My point was that people aren't moving towards LLMs but are instead stopping coding altogether and doing other things...
If people were actually moving towards LLMs, you'd see a lot of threads with people not understanding their own code and having to figure out the complex issues that LLM-generated code causes (aka you'd see an uptick in people having to troubleshoot their own AI-generated code hallucinations)...
@agowa338 @tymwol @glyph I have been trying to understand this area with precision. I think the kinds of problems the code assistants tend to fail at are the kind that were not generally answerable on StackOverflow in the first place, and which a less experienced programmer might not even recognize or be able to describe well enough to even ask a question.
-
Oh, and my biggest indicator that my assumption is right is watching all of the "AI bros" who, after a while, end up shit-talking all of the codegen stuff themselves, with phrases like, for example, "what's the point of all of this when I have to hire a programmer to make it work in the end anyway?"...
-
Oh, and my biggest indicator that my assumption is right is watching all of the "AI bros" who, after a while, end up shit-talking all of the codegen stuff themselves, with phrases like, for example, "what's the point of all of this when I have to hire a programmer to make it work in the end anyway?"...
@agowa338 @tymwol @glyph I've done some experimentation and I found the tools to be extremely powerful in the way having a tracked excavator is more powerful than a hand shovel. Perhaps applying it takes some skill, but it magnifies human effort quite a lot. I've grown quite concerned that many people really underestimate the impact these tools are having and will have on a secular trend of power shifting away from labor.
-
@agowa338 @tymwol @glyph I've done some experimentation and I found the tools to be extremely powerful in the way having a tracked excavator is more powerful than a hand shovel. Perhaps applying it takes some skill, but it magnifies human effort quite a lot. I've grown quite concerned that many people really underestimate the impact these tools are having and will have on a secular trend of power shifting away from labor.
I did some experiments too. I was actually quite happy with the very first version of ChatGPT, but every iteration on it just made it less capable and worse. They optimized it for cost, which heavily impacted the advanced capabilities that I enjoyed.
Same for codegen. I initially thought of it as advanced IntelliSense, but the more I (tried to) use it, the more annoying it became, as it didn't even manage to suggest the right APIs of the project itself.
-