I use Claude and Thaura for search in my daily life.
-
I use Claude as a rubber duck for coding. On occasion, I'll let Copilot or Claude add a few lines of code directly. I don't "vibe code".
I enjoy both uses. I don't feel guilty about either.
-
@evan yeah. It's good for bouncing design ideas off of. But for producing code it's basically automated Stack Exchange.
Which explains why Jeff Atwood hates AI.
-
I'd like to set up my own Open WebUI and Ollama server at home sometime, but I haven't gotten around to it. I think it'd be nice to use open-weight models instead of cloud services. But I'm doing OK otherwise.
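For reference, a minimal sketch of that kind of setup, assuming Docker Compose and the two projects' publicly documented images and default ports (an illustration, not a tested config from this thread):

# docker-compose.yml -- hypothetical, using the projects' documented defaults
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama          # persist downloaded model weights
    ports:
      - "11434:11434"                 # Ollama's default API port
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434   # point the UI at the Ollama container
    ports:
      - "3000:8080"                   # browse the UI at http://localhost:3000
    depends_on:
      - ollama
volumes:
  ollama:

After docker compose up -d, a model can be pulled with something like docker exec -it <ollama-container> ollama pull llama3, then the UI is at http://localhost:3000.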
-
@evan Somehow I get the feeling you're trying to farm "OK, boomer" style responses 🤷🏻
-
@Musicaloris I'm not.
-
@evan "this is fine"
-
@evan I guess I feel guilty about it. I agree with the comment; it's automated Stack Exchange. I did set up some local models a while back and need to revisit that, especially since newer ones are probably better.
-
@evan My employer will pay for pretty much any tools one can think of, and yeah: for a quick "hey, should this fail and panic!() and crash everything, according to the docs?", LLMs are (mostly) great.
Not so great when the thing I'm trying to do is niche and/or specific to the [in-house] libraries we use, but considering the main offender is mostly a thin API wrapper around an open-source C thing, it manages more or less fine.
Would I want the whole "AI" industry to go up in flames without any hope for recovery? Yeah.
Am I gonna take everything my BigTechCorpInc. employer will give me? Also yeah.
Tho now that I think about it, maybe I could ask them for a 128 GB RAM Mac mini or smth as a one-time purchase as an Ollama server instead of the subscriptions.
-
I don't agree with many of the arguments against LLMs I see on the Fediverse.
LLM usage is not a GHG-intensive activity, especially compared to other everyday activities.
It disturbs me when people imply that LLMs are primarily responsible for climate change. This is not remotely true, and it ignores our real responsibility for working on the climate emergency.
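As a rough back-of-envelope (every figure here is an assumed, order-of-magnitude estimate, not a number from this post): at roughly 3 Wh per query and roughly 400 gCO2 per kWh of grid electricity,

3 Wh ÷ 1000 Wh/kWh × 400 gCO2/kWh ≈ 1.2 gCO2 per query
400 gCO2 per mile of driving ÷ 1.2 gCO2 per query ≈ 330 queries per car-mile

so even a few hundred queries a day comes out on the order of one mile of driving.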
-
Support your local humans
-
I don't use the term "stealing" for ideas, and I don't think it applies for LLM output.
I think that training LLMs on content to produce a model, then using that model to generate new content, is more akin to human learning than to verbatim copying.
I don't think that LLM output is a "derivative work" of the training data.
-
I don't think the fact that LLMs make mistakes is a reason to not use them. It is a good reason to get reference links, and to cross-check any output.
-
@evan hmm, I'm not seeing anything like genuine equivalence between human learning and LLM statistical output based on training data.
-
I generally don't like LLM-generated images or video. I mostly don't like LLM-generated text -- it's usually flat, wordy, and full of business jargon. I don't use LLMs for either of these things.
-
@davep That's OK, you don't have to.
-
I think that using LLMs to create text, images, video or code is an empowering technology for a lot of people. I think especially with code, it gives people who don't code and who've never had control over their computing experience more agency in their life and work. I think that's a net positive result.
-
As a software engineer and engineering manager, I am mildly concerned for my employability in the future. However, I've had a good career, with much more luck than I deserve. People in my profession have been notably pampered and paid extremely well. As automation turns its Sauron-like eye to my own industry, it's hard to say with a straight face that we have been unfairly treated. I also think that there are, and will be for the foreseeable future, a lot of options for people who like to code.
-
@evan I'm not sure how it could not be derivative. If the training data hadn't slurped up all the repos on GitHub, the model would not be able to generate something that looked an awful lot like a repo on GitHub.
If I trained a model on the collected literary works in the public domain and then asked it to produce a for loop written in Zig, no amount of prompting could make that happen.
On what grounds would that not be derivative?
-
I see a lot of strong anti-LLM takes on the Fediverse. I like some of them, and I don't like others. I sometimes respond if I think I can contribute to the conversation, but not always.
-
@evan There are so many applications for which LLMs are used/useful. I think you make good points about learning based on experience for code production and other uses. OTOH, if I had worked intensively to produce copyrighted content (I am thinking about books, whether fiction, textbooks, etc.) that was then scooped up and regurgitated after a query (e.g., medical advice), I might think otherwise (not to mention the risk that LLMs probably shouldn't be used unsupervised for med advice rn).