@zeank I like this take even though it’s a little hyperbolic. Probably more like “Now that McDonald’s exists…”
On the other hand, the fanboys are equally hyperbolic, I suppose. “Hey, I can drive up to this window and people hand me food for money, this has completely obsoleted cooking!”
@slyborg I didn’t say McDonald’s, because their key differentiating factor is that you know exactly what you get, no matter if you’re in Seattle, Melbourne, Tokyo, or Paris.
But then, it’s just a shit take anyway. Don’t overthink it.
-
Oh @colincornaby is on here
That post is great and I have gotten a lot of use out of it.
@gbargoud That microwave post still does numbers.
I’m sort of amazed how everything I wrote about this last year remains absolutely true today.
-
RE: https://mastodon.social/@stroughtonsmith/116030136026775832
This is one of the worst takes from LLM enthusiasts.
Compilers are deterministic, extremely well tested, made out of incredibly detailed specifications debated for months and properly formalized.
LLMs are random content generators with a whole lot of automatically trained heuristics. They can produce literally anything. Not a single person who built them can predict what the output will be for a given input.
Comparing both is a display of ignorance and dishonesty.
@arroz yes, mechanistic vs probabilistic
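The mechanistic-vs-probabilistic distinction can be sketched in a few lines of toy Python (both functions are hypothetical stand-ins, not real compiler or LLM code):

```python
import random

def compile_like(source: str) -> str:
    # Deterministic: the same input always maps to the same output,
    # because the transformation is a fixed, specified function.
    return source.upper()

def llm_like(prompt: str, vocab=("foo", "bar", "baz")) -> str:
    # Probabilistic: the output is *sampled* from a distribution.
    # (Toy stand-in for an LLM decoder; names are hypothetical.)
    return random.choice(vocab)

# Two calls to the deterministic function always agree:
assert compile_like("x = 1") == compile_like("x = 1")
# Two calls to the sampler carry no such guarantee.
```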
-
RE: https://mastodon.social/@stroughtonsmith/116030136026775832
@arroz Makes one wonder if these people ever read the Dragon Book.
-
@arroz @gudenau our company isn't pushing that hard. But the implicit threat looms... that the old folks who don't embrace more efficient tooling will go the way of John Henry when he lost his job to the steam drill.
A round of layoffs a while back that got rid of several 35+ year veterans makes the implicit threat credible.
-
Vibe coded skyscrapers.
@aspensmonster@tenforward.social @zzt @arroz @Orb2069: I have said, even since before the slop merchants came along, that the quality of software "engineering" is such that if civil engineers were as willfully incompetent, we'd have a lot more Tay Bridge disasters.
-
@nils_ballmann @arroz @binford2k what one faces when doing formal verification of LLM output. However, LLMs might enable us to write larger formally verified systems in practice. LLMs could help with the spec writing and validation as well. We'll see.
LLMs are basically generators in neuro-symbolic hybrid systems. And many people like to use them for productivity. I.e. a component or tool. No reason to get emotional about it. Like humans, LLMs are unreliable but still useful.
@sigismundninja @nils_ballmann @arroz @binford2k Why not roll dice to decide what to do?
-
And executives. Seeing who the bandwagon jumpers are and who is being thoughtful about things.
@zygmyd @angry_drunk @zzt @arroz Executives were never on your side.
-
@arroz I think you may be overlooking another point here: there is absolutely NO reason LLMs should not compile directly to machine code, or better yet a chip. Why have a “human readable” interface (that is, a programming language or a universal hardware layer) at all?
If we stop building UTMs and adopt machines farther down the Chomsky hierarchy (and identify the inherent security advantages of doing so), we can probably make interesting progress, especially in security engineering.
If we fab machines directly that don't require software to rebind them ...
Since the '40s we have been building machines that do too much (on purpose) and getting mad when they do parts of what we built them to do...
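The “farther down the Chomsky hierarchy” idea is essentially the langsec argument: a recognizer for a regular language is guaranteed to terminate and can do nothing but accept or reject its input, unlike a Turing-complete interpreter whose behavior on hostile input is unbounded. A minimal sketch, using the hypothetical example language a*b*:

```python
def dfa_accepts(s: str) -> bool:
    """Recognize the regular language a*b* with a hand-rolled DFA.

    The machine has two states, reads each character exactly once,
    and has no way to loop forever or touch anything outside its
    input -- its power is bounded by construction."""
    state = "A"
    for ch in s:
        if state == "A":
            if ch == "a":
                state = "A"      # stay: still reading the a-prefix
            elif ch == "b":
                state = "B"      # switch: now only b's are allowed
            else:
                return False
        else:  # state == "B"
            if ch != "b":
                return False     # an 'a' after a 'b' is rejected
    return True

assert dfa_accepts("aabbb")
assert not dfa_accepts("aba")
```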
@noplasticshower @arroz The last time I read about such an attempt (using genetic algorithms) was an excellent showcase of how difficult it is to come up with a scoring function that prohibits weird and practically useless results: https://news.ycombinator.com/item?id=43152407
I'm not saying it's not possible, just that it might not be as easy as one initially thinks.
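The scoring-function difficulty is easy to reproduce with a toy hill climber: if the fitness function rewards a proxy (here, a hypothetical score mixing occurrences of "ok" with raw length), the search happily maximizes the proxy with degenerate padding rather than anything useful:

```python
import random

random.seed(1)  # reproducible for the sketch

ALPHABET = "abcok"
MAX_LEN = 20

def fitness(s: str) -> int:
    # Intended goal: strings that "say ok". Naive proxy: reward each
    # "ok" AND raw length. The length term is exploitable, and nothing
    # penalizes junk padding.
    return 10 * s.count("ok") + len(s)

def mutate(s: str) -> str:
    # One random edit: insert, substitute, or delete a character.
    op = random.choice(("add", "change", "drop"))
    if op == "add" and len(s) < MAX_LEN:
        i = random.randrange(len(s) + 1)
        return s[:i] + random.choice(ALPHABET) + s[i:]
    if op == "change" and s:
        i = random.randrange(len(s))
        return s[:i] + random.choice(ALPHABET) + s[i + 1:]
    if op == "drop" and s:
        i = random.randrange(len(s))
        return s[:i] + s[i + 1:]
    return s

best = "ok"
for _ in range(2000):
    child = mutate(best)
    if fitness(child) >= fitness(best):  # greedy acceptance
        best = child

# The winner maximizes the proxy, typically by padding toward MAX_LEN
# with whatever characters happen to stick -- high scoring, but weird.
print(best, fitness(best))
```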
-
@arroz @soulsource great pointer. Thank you.
It is always easier to build a more powerful machine and suffer the consequences when it misbehaves.
This is the sort of thing we can put to ML.
-
RE: https://mastodon.social/@stroughtonsmith/116030136026775832
@arroz I still use disassemblers.
-
RE: https://mastodon.social/@stroughtonsmith/116030136026775832
@arroz it’d be great if you could also let people know that Steve is really a great developer, with a long-time focus on macOS, who knows what he’s doing. As far as AI goes, he’s just sharing his experiences and looking at things through an “everyperson’s” lens. You’re also totally right here, but some of the replies here are really unwarranted.
-
RE: https://mastodon.social/@stroughtonsmith/116030136026775832
@arroz no. No no no no.
Use unreviewed LLM code at your own peril. The s**t it generates, especially if you don’t understand it… you might as well throw away your project.
-