RE: https://mastodon.social/@stroughtonsmith/116030136026775832
This is one of the worst takes from LLM enthusiasts.
Compilers are deterministic, extremely well tested, made out of incredibly detailed specifications debated for months and properly formalized.
LLMs are random content generators with a whole lot of automatically trained heuristics. They can produce literally anything. Not a single person who built them can predict what the output will be for a given input.
Comparing both is a display of ignorance and dishonesty.
@arroz How does he think source-level debuggers will work under that analogy?
-
@arroz I know who *will* be manually reviewing the generated code: the people in the black hats.
-
@arroz “LLMs are natural language compilers”, brought to you by the same kids insisting their product is “the operating system for the web” because nothing means anything if you ignore all implementation and engineering details
-
Vibe coded skyscrapers.
@Orb2069 @aspensmonster @zzt @arroz There was a preview of that. Search the history of high-rises in the UK, especially the ones built in the 1960s and 1970s.
You can save so much on tall buildings by not building the 2 storeys of cellars those silly continental architects added to the design. Or you can just copy-paste a building on top of itself to double the number of livable floors from 6 to 12, right?
-
@arroz it’s always a bit depressing when I find out about a new pocket of mediocre tech jackasses posting twitter crap on masto. all of the guys posting “LLMs are like compilers for natural language” should have their CS degrees yanked cause they’ve proven they don’t meet the academic requirements for a CS undergrad.
-
@angry_drunk @zzt @arroz I despise all of my coworkers and the company I work for. I'm just going to retire early when I'm finally let go due to slopcoding and then work on limiting my life's contact with software, since it's all going to be buggy and insecure garbage. I guess I'll be a hermit and write a manifesto.
-
The trick is to get the LLM to generate a spec and an acceptance test for the change you want to make, and verify the test.
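That loop can be sketched roughly like this; none of these names are a real API, and `llm` is a hypothetical stand-in for whatever model you call:

```python
# Hypothetical sketch of the spec-then-test loop described above.
def guarded_change(request, llm, review, run_tests):
    spec = llm("spec for: " + request)            # 1. pin down the intent
    test = llm("acceptance test for: " + spec)    # 2. derive a check from it
    review(test)  # a human must vet the test, or the model grades itself
    patch = llm("implementation of: " + spec)     # 3. only then generate code
    if not run_tests(test, patch):
        raise RuntimeError("patch failed its own acceptance test")
    return patch

# Toy stand-ins so the sketch runs end to end.
patch = guarded_change(
    "add retry logic",
    llm=lambda prompt: "<" + prompt + ">",
    review=lambda test: None,
    run_tests=lambda test, patch: True,
)
```

The human checkpoint on the generated test is the load-bearing part; skip it and the whole loop is the model checking its own work.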
-
And executives. Seeing who are the bandwagon jumpers and who are being thoughtful about things.
-
@arroz These systems are Dunning-Kruger-as-a-service, and that thread is a textbook example of why.
-
@arroz Well put. Ambiguity is a well-studied topic in the context of compilers. You don't want your code generator to be able to interpret a construct in a dozen different ways. Natural language is nothing but ambiguous.
"Then we'll constrain it accordingly." First, there are context-free languages for which eliminating ambiguity is outright impossible, and the ones where it is possible rely on typical, well-known techniques. At that point you're just "innovating" by reinventing regular languages and context-free languages.
Furthermore, is gcc, or any LLVM-based compiler, part of taking water from the mouths of Mexican families? Does ghc put a huge amount of stress on the electrical grid of Ireland? Will an LLM generate code as correct as CompCert? Are rustc or sbcl part of an abject bubble that will likely have catastrophic effects on the economy?
-
@arroz @stroughtonsmith Totally off their rockers. Slop machine psychosis really seems to be in the air right now.
You know who is *perfectly cool* with developers continuing to write code for their apps like normal creative people? THE USERS. In fact, putting a slop-free badge on your product *is a selling point* because nobody wants this crap. 😂
-
@arroz @stroughtonsmith
Jesus fucking Christ, these people are incompetent idiots. I’m even more glad to be out of the programming business given that these are the morons with whom I’d be interacting. Everything is going to go to shit.
-
@arroz except that LLMs are also deterministic (they just incorporate pseudorandom bits for some variety in the prediction)
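That distinction (deterministic network, randomness injected only at the sampling step) can be shown with a toy sampler; the names here are illustrative, not any real inference API:

```python
import random

# The model's output distribution over next tokens is fixed; variability
# comes from how you sample it. Greedy decoding (always take the most
# probable token) or a fixed seed makes the whole pipeline reproducible.
def sample(probs, seed=None):
    if seed is None:                        # "temperature 0" / greedy
        return max(probs, key=probs.get)
    rng = random.Random(seed)               # seeded: reproducible randomness
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights)[0]

probs = {"cat": 0.6, "dog": 0.3, "eel": 0.1}
greedy = sample(probs)                      # always "cat"
assert sample(probs, seed=42) == sample(probs, seed=42)  # same seed, same token
```

In practice batching and floating-point nondeterminism on GPUs can still perturb results, but the sampling step is where the headline "randomness" lives.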
-
@arroz I mean... people still audit the machine code sometimes! It's not the first resort, but it's on the list, and in any sufficiently complex system you need people who can chase the program logic all the way to the CPU. It stopped being common precisely because the results became so consistently good that reflexively second-guessing the compiler came to be recognized as a bad idea.
That process has not happened with LLMs; they constantly spit out broken code.
-
@arroz my boss yesterday just said that if you don't learn to use the LLM tools, you will be fired and replaced by people who do. It's terrifying. Especially if I was allowed to say what I was working on, you would be terrified too.
-
@arroz I desperately want a compiler for natural language and to make traditional languages obsolete. LLMs can't do that
-
@arroz It is funny, even people who have worked for months on an LLM project are surprised that the LLM does not consistently give the same result.
Which can be OK, in some cases. In the one I see right now, replacing boring data entry, the LLM gets a result 90% right, and if a second one independently gets the same result, the result is considered confirmed; it is in fact very unlikely that two models get the same thing wrong.
That leaves 20% for review, and the LLMs are faster than humans.
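A minimal sketch of that dual-model confirmation scheme, with hypothetical stand-in callables in place of the real models:

```python
# Auto-accept an extraction only when two independent models agree;
# anything else goes to a human review queue.
def triage(documents, model_a, model_b):
    confirmed, needs_review = [], []
    for doc in documents:
        a, b = model_a(doc), model_b(doc)
        if a == b:
            confirmed.append((doc, a))
        else:
            needs_review.append((doc, a, b))
    return confirmed, needs_review

# Toy models: model_b botches one document, so it lands in the queue.
docs = ["invoice-1", "invoice-2", "invoice-3"]
model_a = lambda d: d.upper()
model_b = lambda d: "???" if d == "invoice-2" else d.upper()
confirmed, queue = triage(docs, model_a, model_b)
```

The scheme leans on the assumption that the two models' errors are mostly independent; correlated failure modes (same training data, same ambiguous input) would slip through unconfirmed-but-wrong.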
-
@arroz In this case, the LLMs are replacing a boring job, to a certain extent.
I wouldn't trust a "90% right" machine with a job where people's lives can depend on it, though.
Also, there are traditional OCR-based solutions used before and concurrently. In this project the jury is still out; it's not certain which is more efficient. The obstacles and issues are bigger than expected. Not all smooth sailing.