I use Claude and Thaura for search in my daily life.
-
@evan hmm, I'm not seeing anything like genuine equivalence between human learning and LLM statistical output based on training data.
@davep That's OK, you don't have to.
-
I generally don't like LLM-generated images or video. I mostly don't like LLM-generated text -- it's usually flat, wordy, and full of business jargon. I don't use LLMs for either of these things.
I think that using LLMs to create text, images, video or code is an empowering technology for a lot of people. I think especially with code, it gives people who don't code and who've never had control over their computing experience more agency in their life and work. I think that's a net positive result.
-
As a software engineer and engineering manager, I am mildly concerned for my employability in the future. However, I've had a good career, with much more luck than I deserve. People in my profession have been notably pampered and paid extremely well. As automation turns its Sauron-like eye to my own industry, it's hard to say with a straight face that we have been unfairly treated. I also think that there are, and will be for the foreseeable future, a lot of options for people who like to code.
-
I don't use the term "stealing" for ideas, and I don't think it applies to LLM output.
I think that training LLMs on content to produce a model, then using that model to generate new content, is more akin to human learning than to verbatim copying.
I don't think that LLM output is a "derivative work" of the training data.
@evan I'm not sure how it could not be derivative. If the training data didn't slurp up all the repos on Github, it would not be able to generate something that looked an awful lot like a repo in Github.
If I trained a model on the collected literary works in the public domain and then asked it to produce a for loop written in Zig, no amount of prompting could make that happen.
On what grounds would that not be derivative?
-
I see a lot of strong anti-LLM takes on the Fediverse. I like some of them, and I don't like others. I sometimes respond if I think I can contribute to the conversation, but not always.
-
@evan There are so many applications for which LLMs are used and useful. I think you make good points about learning from experience for code production and other uses. On the other hand, if I had worked intensively to produce copyrighted content (I'm thinking of books, whether fiction, textbooks, etc.) that was then scooped up and regurgitated in response to a query (e.g., medical advice), I might think otherwise. Not to mention the risk that LLMs probably shouldn't be used unsupervised for medical advice right now.
@DrFerrous I've been writing and publishing code on the Web since about the time the Web began. I'm sure that a lot of my original work is used for training data.
-
@evan Common GenAI images aren't great, since they're generated by users with no visual education.
But I've seen really interesting work by photographers already. It's no surprise: their own art faced the same criticisms a century ago ("it's not your work, it's the machine's"; "it's not art, it's engineering"). But that also implies that a lot of curation and post-production takes place in the process, just like in photography.
And of course, using the medium to say something through it.
-
@evan Working on a piece on this, but broadly in agreement. I think the AI exceptionalism is at fever pitch here right now, as if everything up to this point was somehow fine and dandy and now we've passed an invisible threshold into fascism. Like, c'mon...
-
@NIGHTEN It's OK. I support the project because I appreciate the values. I haven't used it for code; I'm not sure that's easy to do.