Remember how we were all supposed to be "left behind" if we don't jump on the Metaverse bandwagon?
-
Remember how we were all supposed to be "left behind" if we don't jump on the Metaverse bandwagon? Especially businesses?
Yeah, about that:
https://www.theverge.com/tech/863209/meta-has-discontinued-its-metaverse-for-work-too
But today we should treat absolutely seriously all the bullshit about "being left behind" if we don't adopt "AI"! 🤔
-
If anyone ever tries to tell you LLMs are just as good (or better!) in generating text (or code) as humans are in creating text (or code), ask them about "dogfooding".
Dogfooding means training LLMs on their own output. It is absolutely disastrous to such models:
https://www.nature.com/articles/s41586-024-07566-y
Every "AI" company will have layers upon layers of defenses against LLM-generated text ending up in training data.
Which is why they desperately seek out any and all human-created text out there.
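To make the feedback loop concrete, here is a toy sketch in Python (not the setup from the Nature paper, just an illustration of the general flavour): estimate a word-frequency distribution, generate a corpus from that estimate, re-estimate from the generated corpus alone, and repeat. Rare words that happen not to get sampled drop to zero probability and can never come back, so the distribution loses a bit more of its tail with every generation.

import random
from collections import Counter

random.seed(42)

# Generation 0: "human" text over a vocabulary with a long tail of rare words.
vocab = [f"word{i}" for i in range(1000)]
weights = [1.0 / (i + 1) for i in range(1000)]  # Zipf-like frequencies

for generation in range(10):
    alive = sum(w > 0 for w in weights)
    print(f"generation {generation}: {alive} words still have non-zero probability")
    # "Generate" a finite corpus from the current model...
    corpus = random.choices(vocab, weights=weights, k=5000)
    # ...and "train" the next model on that generated corpus alone.
    counts = Counter(corpus)
    weights = [counts[w] for w in vocab]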
-
Let me spell this out so that even a techbro could understand:
If LLMs are supposedly as good as (or, as some claim, better than!) humans as far as text output is concerned, why is dogfooding a problem in the first place?
Why all these defenses? Why seek out, and pay serious cash money for, real human-created text?
It's because they are not. On the most basic level. Anyone claiming they are is either bullshitting you, or has no clue what they're talking about. Or both.

-
@rysiek Regarding coding, there is a lot of documentation, code examples, etc. They often follow the same patterns (think: REST APIs). An LLM will generate the skeleton of a program faster, with exception handling etc., so the programmer can focus on the parts requiring creativity, not the repeatable ones.
Regarding dogfooding: it's probably just a matter of getting stuck in a local optimum, and of cost. No point in wasting CPU cycles on learning things the model already "knows".
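(A hypothetical illustration of the kind of repeatable skeleton meant here; the endpoint and function names are made up, the try/except plumbing is the point:)

import json

def handle_get_user(request_body: str) -> tuple[int, str]:
    """Return (HTTP status, JSON body) for a GET-user style request."""
    try:
        payload = json.loads(request_body)
        user_id = int(payload["id"])
    except (json.JSONDecodeError, KeyError, ValueError) as exc:
        return 400, json.dumps({"error": f"bad request: {exc}"})
    try:
        user = get_user(user_id)  # the part that actually needs thought
    except LookupError:
        return 404, json.dumps({"error": "user not found"})
    except Exception:  # catch-all; real code would log this
        return 500, json.dumps({"error": "internal error"})
    return 200, json.dumps(user)

def get_user(user_id: int) -> dict:
    # Placeholder for the actual business logic.
    raise LookupError(user_id)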
-
@rysiek looks like it's Meta and their half-assed product that was *puts on sunglasses* 🎶 left behind 🎶
-
@rozie Customizable code templates for a known pattern don't require scraping the whole Internet at a pace reminiscent of a DDoS, undermining copyright, ignoring the lack of consent, incurring the ecological impact of LLMs, etc.
And they can be quality-assured.
For starters.
Just because a problem *can* be solved using a sledgehammer doesn't mean that a sledgehammer is an *appropriate* tool for solving the problem.
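A minimal sketch of what such a template can look like, using only the Python standard library (the template body and the helper names inside it are invented for illustration): it is deterministic, reviewable once, and runs without scraping anything.

from string import Template

# A deterministic, reviewable template for the repeatable part of a handler.
HANDLER_TEMPLATE = Template('''\
def handle_${name}(request_body: str) -> tuple[int, str]:
    try:
        payload = parse_and_validate(request_body)
    except ValueError as exc:
        return 400, error_response(exc)
    try:
        result = ${name}(payload)  # the actual logic goes here
    except Exception as exc:
        return 500, error_response(exc)
    return 200, json_response(result)
''')

def render_handler(name: str) -> str:
    """Stamp out the boilerplate for one endpoint."""
    return HANDLER_TEMPLATE.substitute(name=name)

if __name__ == "__main__":
    print(render_handler("get_user"))
    print(render_handler("delete_user"))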
-
@rysiek We could have had accessible, virtual conferences for the current and coming pandemic era by giving a fraction of that budget to furries.
-
@rysiek Oh, I'm sorry, I'm still trying not to get left behind the blockchain bandwagon.