Speaking of AI. We're in a tight spot. "We", as in, tech workers, I mean.
A bunch of things are true and they all pull in different directions.
- It works. I'm not really willing to entertain the argument that it doesn't, anymore, after having built several large and small projects with it. For coding, it is a lever that can provide a dramatic productivity increase. I'm comfortable saying 3x+, overall. Maximalists are saying 10x or 100x. Even if it's only 3x, that's industry-shaking.
1/?
-
- It's difficult for me to say I "like" it. It smooths out the creativity of what I've always thought of as at least partly an art. Even while I can see my productivity skyrocket when using it.
- It is even more difficult for me to say, "no, I won't use it". I spent the first 25 years of my tech career living on sub-poverty wages trying to make a living in Open Source software and my own businesses. I am aging without notable savings. I need health insurance and a good salary to catch up.
2/?
-
- I'm pretty sure the option to say "no" is rapidly fleeing for everyone making a living in tech. It's too powerful a tool for a tech-related business to opt out of...they'll be killed by companies that spend the $100/month on Claude or whatever.
- And, I mean, it's not all bad. Part of my "OK, let me give this an honest try" experiments led to me knocking out a half dozen projects on my todo list. Some of it was finishing projects over a weekend that I'd been tinkering with for months.
3/?
-
- It sucks to have a significant portion of the folks I look up to in tech absolutely trashing folks who're using AI. I understand that sentiment, I was trash-talking AI and its users until a few months ago (and I am still repulsed by AI prose, imagery, and video...it's nauseating). But, most of those folks don't have to worry about money...they were millionaires in their 30s by being in the right place at the right time with the right skills. I wasn't in the right place.
4/?
-
- I can't ignore the environmental cost or the cost to our democracy, but I don't see a way out of it. Me being out of work isn't going to stop it.
- Also, it's fun. Sorry, it is. It's not programming, anymore, it's managing...but, it's management with near instant gratification. I can do experiments I've wanted to try, but never had the time or energy for and toss the failures. I can launch a smallish project in a weekend that would have taken a month of weekends before.
fin...probably.
-
Oh, also, I have skin in the game. I'm not just randomly dismissing the ethical concerns, I'm right in the middle of them.
A book I wrote was among those pirated by Anthropic. I'm getting ~$1500 (and my publisher is getting the other ~$1500) from the settlement. And, since I have a bunch of code in Open Source projects spanning decades, I'm sure my code is also in the training data for all of them.
I'm not ecstatic about it. But, it's where we are and I don't imagine I can do much about it.
-
@swelljoe I learned a weird thing about my stuff and Anthropic https://berryvilleiml.com/2025/12/05/the-anthropic-copyright-settlement-is-telling/
-
@swelljoe As a non-user of AI (lucky), my impression was that in the areas it works well in -- repetitive codebases that resemble ones in the training dataset -- the productivity increase also incurs technical debt at a rate higher than if you'd gotten some junior coders to do it; is that wrong?
-
@noplasticshower yeah, I'm confident that many, if not all, of the major models have ingested the thousands of posts I've made to the forum I maintain for the OSS projects I work on. I feel ambivalent about that. On one hand, if someone is asking ChatGPT for help with my software, I'd rather it give reasonable answers than dangerous ones.
But, also, it sucks that the way it does that kind of thing is DDoSing my websites periodically and blatantly disregarding copyright or licenses.
-
@clayote it is quite wrong, as of October of last year, when the current crop of models arrived. As of Opus 4.5, Codex 5.2, and Gemini 3, when used in an agentic context (e.g. Claude Code), they're not limited to simple/repetitive code or code that is prominent in the training data.
The training data is "the entire internet and all of public Github", so it knows every language, every framework. Yeah, it's better at simple CRUD apps in TypeScript, but it also kicks my ass in my best languages.
-
@clayote I mean, there are still problems it can't solve, but that set is much smaller than you would think if you last looked at it seriously any time up until a few months ago. The models now can search the web, instrument software so they can test without human intervention, and plan quite large/complicated projects for implementation across several context windows.
When driven by an expert, there is very little it can't do, and it does it all very, very, rapidly.
-
@swelljoe That's interesting considering that if I'm not mistaken (based on your work on Webmin/Virtualmin), one of your best languages is Perl. I never seriously got into Perl, but it has a reputation for being quite expressive. So in theory, you should be able to express what you want directly in the language. It feels wrong that giving instructions to an LLM, in ambiguous natural language, and having it grind away, is kicking your ass even in a language like Perl. Like a failure of PL design.
-
@swelljoe yup. But if you use it as a tool to assist you, you can assist yourself. I found that out yesterday.
https://berryvilleiml.com/2026/02/18/using-gemini-in-the-silver-bullet-reboot/
-
@matt "quantity has a quality all its own". Maybe I can write better code, given sufficient time. I can certainly write more concise code (especially in Perl).
But, the models write code an order of magnitude faster than I can, and they can write code 24/7. And, honestly, it's pretty good code, most of the time.
It's still true that the hardest part is deciding what to make rather than making it, but it's drastically easier to write software now with the AI than doing it myself.