AI is making us write more code. That's the problem.
I analyzed research papers on AI-generated code quality. The findings:
→ 1.7x more issues than human-written code
→ 30-41% increase in technical debt
→ 39% increase in cognitive complexity
→ Initial speed gains disappear within a few months

We're building the wrong thing faster and calling it productivity.
-
The bottleneck was never writing code. It's understanding what to build.
If you're using AI coding tools, focus on:
• Smaller features (if it's 1000 lines, it's too big to review)
• Clear acceptance criteria before you prompt
• Tests first, AI-generated code second
• Security audits (AI can't do this)

More code isn't the goal. Solving real problems is.
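The "tests first" point can be sketched as a workflow: a human writes the acceptance test before prompting the AI, so any generated code is judged against fixed criteria rather than on vibes. A minimal illustration; `parse_price` and its cases are hypothetical, not from any study cited above.

```python
# Tests-first workflow: the acceptance test exists BEFORE any
# AI-generated implementation, encoding clear acceptance criteria.
# (Illustrative sketch; `parse_price` is a made-up feature.)

def test_parse_price():
    # Acceptance criteria, written by a human up front
    assert parse_price("$1,299.99") == 1299.99
    assert parse_price("  $5 ") == 5.0

# Only now does AI-generated code enter the picture: a small,
# reviewable implementation (well under the 1000-line threshold).
def parse_price(text: str) -> float:
    return float(text.strip().lstrip("$").replace(",", ""))

test_parse_price()
print("acceptance tests pass")
```

If the generated code fails the test, that failure surfaces during review instead of in production, which is the point of keeping features small and criteria explicit.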
-
Amen to that, brother…
-
@mlevison So true; it astounds me that people didn't see this coming from the start. And then there's the cognitive-deterioration double whammy.
-
That's my experience too.
Dave Farley's MSE channel, which I usually respect, recently claimed the opposite though, based on a study they took part in.
-
@mlevison Goodhart's law in action. Actually, I'm not sure it even is an example of Goodhart's law. Raw quantity of code output would never have correlated strongly with quality. What do they think they're doing‽
-
@mlevison I use LLMs to help me with basic code-writing tasks, generating the structural frameworks and saving me a lot of typing time. However, I never rely on that code out of the box: I always review it thoroughly and often just snip and prune. I would never attempt to give an LLM a complicated set of instructions; it's going to fail every time.
-
@mlevison IntelliSense, Prettier, etc. are all just tools for a smart developer.
-
oblomov@sociale.network shared this topic