RE: https://mastodon.social/@stroughtonsmith/116030136026775832
This is one of the worst takes from LLM enthusiasts.
Compilers are deterministic, extremely well tested, made out of incredibly detailed specifications debated for months and properly formalized.
LLMs are random content generators with a whole lot of automatically trained heuristics. They can produce literally anything. Not a single person who built them can predict what the output will be for a given input.
Comparing both is a display of ignorance and dishonesty.
The trick is to get the LLM to generate a spec and an acceptance test for the change you want to make, and verify the test.
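One way to picture that workflow is test-first gating: the acceptance test is fixed and verified up front, and a generated change is accepted only if it passes. A minimal sketch, with a made-up `slugify` task (nothing here comes from the thread):

```python
import re

# Hypothetical illustration: the acceptance test is written and reviewed
# *before* the change; a generated implementation is accepted only if
# this test passes.
def acceptance_test(slugify):
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Already  clean ") == "already-clean"

# A candidate implementation, as an LLM might produce it:
def slugify(text):
    text = text.strip().lower()
    return re.sub(r"[^a-z0-9]+", "-", text).strip("-")

acceptance_test(slugify)  # gate: reject the change if this raises
```

The test itself still has to be reviewed by a human, of course; it is the spec, and a wrong spec is cheerfully satisfied by wrong code.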
-
And executives. Seeing who the bandwagon jumpers are and who is being thoughtful about things.
-
@arroz These systems are Dunning-Kruger-as-a-service, and that thread is a textbook example of why.
-
@arroz Well put. Ambiguity is a well-studied topic in the context of compilers. You don't want your code generator to be able to interpret a construct in a dozen different ways, and natural language is nothing but ambiguous.
"Then we'll constrain it accordingly." First, there are many context-free languages for which eliminating ambiguity is impossible, and the ones where it is possible rely on well-known techniques. At that point you're just "innovating" by reinventing regular languages and context-free languages.
Furthermore, are gcc or any compiler in llvm part of taking water from the mouths of Mexican families? Does ghc put a huge amount of stress on the electrical grid of Ireland? Will an LLM generate code as correct as CompCert? Are rustc or sbcl part of an abject bubble that will likely have catastrophic effects on the economy?
-
@arroz @stroughtonsmith Totally off their rockers. Slop machine psychosis really seems to be in the air right now.
You know who is *perfectly cool* with developers continuing to write code for their apps like normal creative people? THE USERS. In fact, putting a slop-free badge on your product *is a selling point* because nobody wants this crap. 😂
-
@arroz @stroughtonsmith
Jesus fucking Christ, these people are incompetent idiots. I’m even more glad to be out of the programming business given that these are the morons with whom I’d be interacting. Everything is going to go to shit.
-
@arroz except that LLMs are also deterministic (they just incorporate pseudorandom bits for some variety in the prediction)
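The point about pseudorandomness can be made concrete with a toy sketch (nothing here is a real LLM API; the "model" is just a fixed mapping from a prompt to made-up next-token scores). Greedy decoding is an argmax with no randomness at all, and even the sampled variant is reproducible once the PRNG seed is fixed:

```python
import math
import random

# Made-up logits standing in for a model's next-token scores.
SCORES = {"the": 2.0, "a": 0.5, "banana": -1.0}

def greedy(scores):
    # Greedy decoding: pure argmax, fully deterministic.
    return max(scores, key=scores.get)

def sample(scores, seed):
    # "Temperature" sampling via a seeded PRNG: same seed, same choice.
    rng = random.Random(seed)
    tokens = list(scores)
    weights = [math.exp(scores[t]) for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

assert greedy(SCORES) == "the"
assert sample(SCORES, seed=7) == sample(SCORES, seed=7)
```

Determinism in this narrow sense is a different property from predictability, which is the thing the parent post is actually about: nobody can say in advance *what* the deterministic output will be.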
-
@arroz I mean... people still audit the machine code sometimes! It's not the first resort, but it's on the list, and in any sufficiently complex system you need people who can chase the program logic all the way down to the CPU. It stopped being common precisely because the results became so consistently good that it came to be generally recognized as a bad idea to reflexively second-guess the compiler.
That process has not happened with LLMs; they constantly spit out broken code.
-
@arroz My boss said just yesterday that if you don't learn to use the LLM tools, you will be fired and replaced by people who do. It's terrifying. If I were allowed to say what I'm working on, you would be terrified too.
-
@arroz I desperately want a compiler for natural language and to make traditional languages obsolete. LLMs can't do that
-
@arroz It is funny; even people who have worked for months on an LLM project are surprised that the LLM does not consistently give the same result.
Which can be OK, in some cases. In the one I see right now, replacing boring data entry, the LLM gets a result 90% right, and if a second one independently gets the same result, the result is considered confirmed; it is in fact very unlikely that two models get the same thing wrong.
That leaves 20% for review, and the LLMs are faster than humans.
-
@arroz “LLMs are natural language compilers”, brought to you by the same kids insisting their product is “the operating system for the web” because nothing means anything if you ignore all implementation and engineering details
-
@arroz In this case, the LLMs are replacing a boring job, to a certain extent.
I wouldn't trust a "90% right" machine with a job where people's lives can depend on it, though.
Also, there are traditional OCR-based solutions used before and concurrently. In this project the jury is still out; it's not certain which is more efficient. The obstacles and issues are bigger than expected. Not all smooth sailing.
-
@petros I would need more context to know what we’re talking about here. Scanning and OCRing documents? Manually filled forms? Historical docs? If so, I don’t see how “one word wrong out of 10” is in any way acceptable.
To me, automation means something I can set and forget. If I have to verify the work of the “automation”, it’s not automating anything.
Imagine how successful computing would have been if those 40-year-old computers I played with had gotten 10% of their math operations wrong. 1/2
-
@petros Of course this doesn’t mean you can’t have a tool that assists you with hard and repetitive work. If someone is scanning documents from the 6th century for historical preservation, a tool that helps identify characters worn out by time, or helps with the various aspects of translation and interpretation, etc., might help. But that’s not something that does the job by itself. The historian is the central piece of that puzzle, with the necessary knowledge and context for doing it.
-
@arroz LLMs are a compiler in the same way that my 3-year old with a bunch of crayons is a camera.
-
@arroz In this case they are invoices and purchase orders coming in as PDFs: unstructured data.
Currently there is OCR software and manual data entry. Both make mistakes, so there is always "double keying". If the two results are the same, the result is considered right. Otherwise it goes to review.
Now there are two LLMs that do the "keying" job. Both get it ca. 90% right.
A difference from compilers: two compilers do not produce the same machine code, so one cannot compare two results and decide the result is right.
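The double-keying scheme described above can be sketched in a few lines. Everything here is a hypothetical stand-in: the extractor functions represent two independent passes (two LLMs, or OCR plus a human) over the same PDF, and the field names are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical structured record extracted from an invoice PDF.
@dataclass(frozen=True)
class Record:
    invoice_no: str
    quantity: int
    total_cents: int

def double_key(extract_a, extract_b, document):
    """Accept a record only when two independent passes agree."""
    a, b = extract_a(document), extract_b(document)
    if a == b:
        return a, "confirmed"      # agreement: unlikely both are wrong the same way
    return None, "needs_review"    # disagreement: route to a human reviewer

# Two fake extractors that disagree on quantity (25 vs 250):
pass_a = lambda doc: Record("INV-0042", 25, 12500)
pass_b = lambda doc: Record("INV-0042", 250, 12500)
record, status = double_key(pass_a, pass_b, b"%PDF...")
assert status == "needs_review"
```

The scheme only works because the two passes are independent and the task has a single right answer to agree on, which is exactly what the compiler comparison in the post lacks: two compilers legitimately emit different machine code.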
-
@arroz Also, if there is still an error in one invoice or purchase order, it is usually not catastrophic. You get 250 screws instead of 25... that happened even before we had computers. It's annoying, but... well, magic doesn't happen, sh** does ;-)
Given that we work on behalf of customers, we need to have an acceptably low error rate, of course.
-
@arroz Had a genAI-curious colleague voice this exact take last week.
I pointed out the same things you did, but honestly they're so eager to believe that I don't think they internalized the difference...
Another, koolaid-drinking colleague replied "well, sometimes compilers are not deterministic!!!", as if finding a compiler bug every 15 years were the same as an LLM crapping out on every prompt.