This is one of the worst takes from LLM enthusiasts.
-
@arroz But why generate code at all? Just execute the prompts directly. Suits me... 😘
-
RE: https://mastodon.social/@stroughtonsmith/116030136026775832
This is one of the worst takes from LLM enthusiasts.
Compilers are deterministic, extremely well tested, made out of incredibly detailed specifications debated for months and properly formalized.
LLMs are random content generators with a whole lot of automatically trained heuristics. They can produce literally anything. Not a single person who built them can predict what the output will be for a given input.
Comparing both is a display of ignorance and dishonesty.
@arroz even if LLMs were comparable, people do review the output of compilers
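A minimal sketch of the distinction being argued here (purely illustrative, not from any of the posts): a compiler is a deterministic function from source to output, while an LLM samples tokens from a learned distribution, so the same prompt can yield different completions unless sampling is fully pinned down.

import random

def compile_expr(src: str) -> str:
    # Toy "compiler": a pure function, so the same input always
    # produces the same output.
    return f"PUSH {src.strip()}"

def llm_next_token(prompt: str) -> str:
    # Toy "LLM": samples from a probability distribution over tokens,
    # so repeated calls with the same prompt can return different outputs.
    vocab = ["foo()", "bar()", "baz()"]
    weights = [0.5, 0.3, 0.2]  # stand-in for learned probabilities
    return random.choices(vocab, weights=weights, k=1)[0]

print({compile_expr("x + 1") for _ in range(5)})              # always one element
print({llm_next_token("call something") for _ in range(5)})   # usually several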
-
@petros What you need is to get rid of the PDFs and deploy an online store. 😅
What is the failure rate of the traditional OCRs compared to the LLMs? And how modern were those OCRs? Modern OCR in the last 5 years or so has a success rate way higher than 90%. And are the failures in the OCR itself or in interpreting the context (aka knowing how to read the invoice or order, not just identifying the right characters)?
@arroz I don't have the exact numbers for "traditional" OCR, but it will be around 90% as well. And, yes, you are right: the issue is not getting the letters right, it's turning them into structured information. With OCR that needs templating, which tells the OCR where to find an address, what to do with multiple lines and pages, etc. Every new format requires that work again.
LLMs are "smarter" in that regard.
Fun fact / rookie error: sending a T&C page to an LLM. It chews on it forever...
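A rough sketch of the difference described above, assuming a hypothetical llm_complete() client and made-up field names: template-based extraction pins each field to a per-layout rule, while the LLM route just asks for the structured fields from the raw OCR text.

import json
import re

# Template approach: per-layout rules that pin each field to a pattern.
# Every new invoice layout needs another template like this one.
INVOICE_TEMPLATE = {
    "invoice_number": {"page": 1, "regex": r"Invoice\s*#?\s*(\S+)"},
    "total":          {"page": 1, "regex": r"TOTAL\s+([\d.,]+)"},
}

def extract_with_template(ocr_text: str) -> dict:
    out = {}
    for field, rule in INVOICE_TEMPLATE.items():
        m = re.search(rule["regex"], ocr_text)
        out[field] = m.group(1) if m else None
    return out

def llm_complete(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    return '{"invoice_number": "A-123", "supplier": "Acme", "total": "99.00"}'

def extract_with_llm(ocr_text: str) -> dict:
    # One prompt works across layouts; no per-supplier template needed.
    prompt = ("Extract invoice_number, supplier and total from this invoice "
              "text and answer with JSON only:\n" + ocr_text)
    return json.loads(llm_complete(prompt))

sample = "Invoice # A-123\nAcme Pty Ltd\nTOTAL 99.00"
print(extract_with_template(sample))
print(extract_with_llm(sample))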
-
RE: https://mastodon.social/@stroughtonsmith/116030136026775832
@arroz @binford2k some people already understood this in 2016: https://www.commitstrip.com/en/2016/08/25/a-very-comprehensive-and-precise-spec/
-
@arroz And, yeah, why there are so many companies that still send these PDFs, God knows. I worked in the automotive industry until 2015 and they still faxed orders... And it's not just Australia, e.g. just recently we "OCRed" a big Canadian company's invoices.
-
@arroz I've had a horrible idea... Why are we building LLMs that output C, Python, etc. when we could be building LLMs that produce bytecode? More efficient and completely unauditable!
-
RE: https://mastodon.social/@stroughtonsmith/116030136026775832
@arroz he claims to “make apps and break things”...
-
@arroz @binford2k some people already understood this in 2016: https://www.commitstrip.com/en/2016/08/25/a-very-comprehensive-and-precise-spec/
@nils_ballmann @arroz @binford2k That's what one faces when doing formal verification of LLM output. However, LLMs might enable us to write larger formally verified systems in practice; LLMs could help with the spec writing and validation as well. We'll see.
LLMs are basically generators in neuro-symbolic hybrid systems, and many people like to use them for productivity, i.e. as a component or tool. No reason to get emotional about it. Like humans, LLMs are unreliable but still useful.
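A minimal sketch of that generator-plus-checker pattern (the propose_code helper is a hypothetical stand-in for an LLM call): the model proposes, a deterministic check decides whether to accept.

import pathlib
import subprocess
import sys
import tempfile

def propose_code(spec: str) -> str:
    # Hypothetical stand-in for an LLM call meant to satisfy `spec`.
    return "def add(a, b):\n    return a + b\n"

def verified(code: str, test: str) -> bool:
    # Deterministic checker: run the tests (or a type checker / prover)
    # against the candidate and only trust it if the check passes.
    with tempfile.TemporaryDirectory() as d:
        path = pathlib.Path(d) / "candidate.py"
        path.write_text(code + "\n" + test + "\n")
        return subprocess.run([sys.executable, str(path)]).returncode == 0

candidate = propose_code("add two integers")
print("accepted" if verified(candidate, "assert add(2, 3) == 5") else "rejected")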
-
RE: https://mastodon.social/@stroughtonsmith/116030136026775832
@arroz well, except gcc -Ofast, obviously
Notable that dynamic code generation has fallen out of favour in database engines (select -> assembly -> machine code), with SIMD opcodes being the replacement, because it's a nightmare to debug when a failure happens inside generated code.
AVX512 opcodes support breakpoints and debugging if you add them through intrinsics.
-
RE: https://mastodon.social/@stroughtonsmith/116030136026775832
@arroz I’d actually hazard a guess that there are more assembly programmers alive today than at any time in history.
-
RE: https://mastodon.social/@stroughtonsmith/116030136026775832
@arroz I think you may be overlooking another point here: there is absolutely NO reason LLMs should not compile directly to machine code or, better yet, a chip. Why have a "human readable" interface (that is, a programming language or a universal hardware layer) at all?
If we stop creating UTMs and adopt machines farther down the Chomsky hierarchy (and identify the inherent security advantages of doing so), we can probably make interesting progress. Especially in security engineering.
If we fab machines directly that don't require software to rebind them ...
Since the '40s we have been building machines that do too much (on purpose) and getting mad when they do parts of what we built them to do...
-
RE: https://mastodon.social/@stroughtonsmith/116030136026775832
@arroz @stroughtonsmith I can even see his point about LLMs being the new compilers (although I don't agree). But then a compiler doesn't suffer from the societal, ethical and environmental issues these models do. It seems like looking away from the screen is not a skill programmers and CSs in general have worked on much. In that sense it's even funny that we may all lose our jobs precisely because of our collective lack of empathy and global perspective.
-
RE: https://mastodon.social/@stroughtonsmith/116030136026775832
@arroz
My skip manager tried using this argument for why we should adopt LLMs. It was too absurd to reply to, though maybe I should have. There are cases where correctness isn't as critical and maybe it is OK to use something vibe-coded (I recently met someone vibe coding algorithmic art, treating some bugs as happy accidents).
But my day job is a case where the whole point of what we build is to avoid human mistakes.
-
RE: https://mastodon.social/@stroughtonsmith/116030136026775832
@arroz LLMs are NOT random content generators. That is false. The LLM output is based on the user prompt. Seems to me you don't know how to prompt correctly.
-
Oh @colincornaby is on here
That post is great and I have gotten a lot of use out of it.
-
@zeank I like this take even though it’s a little hyperbolic. Probably more like “Now that McDonalds exists…”
On the other hand, the fanboys are equally hyperbolic I suppose. “Hey, I can drive up to this window and people hand me food for money, this has completely obsoleted cooking!”
-
@slyborg I didn't say McDonald's because their key differentiating factor is that you know exactly what you get. No matter if you're in Seattle, Melbourne, Tokyo or Paris.
But then, it’s just a shit take anyway. Don’t overthink it.
-
@gbargoud That microwave post still does numbers.
I’m sort of amazed how everything I wrote about this last year remains absolutely true today.