A thought that popped into my head when I woke up at 4 am and couldn’t get back to sleep…
Imagine that AI/LLM tools were being marketed to workers as a way to do the same work more quickly and work fewer hours without telling their employers.
“Use ChatGPT to write your TPS reports, go home at lunchtime. Spend more time with your kids!” “Use Claude to write your code, turn 60-hour weeks into four-day weekends!” “Collect two paychecks by using AI! You can hold two jobs without the boss knowing the difference!”
Imagine if AI/LLM tools were not shareholder catnip, but a grassroots movement of tooling that workers were sharing with each other to work less. Same quality of output, but instead of being pushed top-down, being adopted to empower people to work less and “cheat” employers.
Imagine if unions were arguing for the right of workers to use LLMs as labor saving devices, instead of trying to protect members from their damage.
CEOs would be screaming bloody murder. There’d be an overnight industry in AI-detection tools and immediate bans on AI in the workplace. Instead of Microsoft CoPilot 365, Satya would be out promoting Microsoft SlopGuard - add-ons that detect LLM tools running on Windows and prevent AI scrapers from harvesting your company’s valuable content for training.
The media would be running horror stories about the terrible trend of workers getting the same pay for working less, and the awful quality of LLM output. Maybe they’d still call them “hallucinations,” but it’d be in the terrified tone of 80s anti-drug PSAs.
What I’m trying to say in my sleep-deprived state is that you shouldn’t ignore the intent and ill effects of these tools. If they were good for you, shareholders would hate them.
You should understand that they’re anti-worker and anti-human. TPTB would be fighting them tooth and nail if their benefits were reversed. It doesn’t matter how good they get, or how interesting they are: the ultimate purpose of the industry behind them is to create less demand for labor and aggregate more wealth in fewer hands.
Unless you happen to be in a very very small club of ultra-wealthy tech bros, they’re not for you, they’re against you. #AI #LLMs #claude #chatgpt
@jzb LOL at the idea that getting your work done means you can go home at noon or have a four-day weekend, rather than more work appearing on your desk.
-
“Collect two paychecks by using AI! You can hold two jobs without the boss knowing the difference!”
This is already happening. So what’s the point? ;)
Back to business: You are right. Every person who is "just doing the job" is in danger of losing exactly that job, as AI will do it better and more efficiently. So the solution is to have a society of individuals who are smart enough to cope with it in an intelligent way. If not, the tech bros might win for a while, before it all collapses.
@_RyekDarkener_ buncha nonsense but hey at least your bio acknowledges you're specializing in science *fiction*
-
@jzb What are you saying, that parts of the establishment defend other parts of the establishment? Yes, we know.
-
@jzb Also, if they were to be any good for workers to use, they'd have to... you know... *actually work*.
-
Imagine if unions were arguing for the right of workers to use LLMs as labor saving devices, instead of trying to protect members from their damage.
the AFL-CIO is doing this and they're widely considered to have lost the mandate of heaven long ago.
-
@jzb i think many of your examples are in fact very much how people are being sold these products (the phrasing i've heard is "boutique" used to describe "code that someone wrote to solve a problem"). the idea of getting rich quick is commonly employed by capital to defang revolutionary movements that would otherwise band together in groups like unions, understanding there is no shortcut to success
-
@jzb "The media would be running horror stories about the terrible trend of workers getting the same pay for working less, and the awful quality of LLM output." fox news has stuff like this and that's because its purpose is to inspire fear and distrust of your peers and the idea that you're being left behind if you have any sort of moral principles
-
@jzb i don't think people should be trying to cheat their employers. i think employers think that because they're constantly trying to cheat their employees. if your employer isn't going to pay you enough, it's a waste of your time not to leave instead of trying to engage in fraud
-
@jzb Microsoft SlopGuard took 5 seconds to find with a web search because i know for a damn fact they create the problem so they can profit off appearing to have solved it https://www.microsoft.com/en-us/microsoft-365-life-hacks/everyday-ai/what-is-an-ai-detector
-
@jzb an interesting comparison is a 1970s show about the rise of the microprocessor (the 8080) that then had a discussion. The one person arguing it was good was the union rep, who correctly argued it would automate a load of tedious stuff and enable other work.
The difference this time is that generative AI doesn't do useful work. Neural nets and boring uses of the tech do, but LLMs don't.
-
@etchedpixels @jzb I'm inclined to slightly disagree and think this is denialism (understandably, given how bad their ethics are; it'd be much easier if indeed they had no useful function).
The problem is that, despite all the scenarios where they're inappropriate and wrong, they do have useful functions.
And we're unwilling (as a society) to fully consider their risks and costs, because "there's no glory in prevention".
That's the challenge we need to overcome.
-
@larsmb @etchedpixels @jzb in a $work context I've found llms quite good at automating what would otherwise be "find the plausible stack overflow answer and copy-paste it, changing the names" or "write a shit load of boilerplate" or "explain the awful mess that this module is and work out what it was supposed to be for" or even "do a refactor in less time than it would take me to figure out the LSP support in this language and do it myself".
All things that should not be useful if we'd collectively made better choices, but given where we are now have value in context
-
@dan @larsmb @jzb That's not really an LLM problem though - that's a very targeted problem being solved using an LLM as a large hammer, and a hammer that makes mistakes where formal methods and formal-method-derived tools in general do not.
As to "find the plausible stack overflow answer and copy-paste it, changing the names", part of my job at Intel was catching people doing this and dropping them in the shit. Automated versus wilful human copyright violation 8)
-
@jzb i think the problem is more that workers have much greater work ethics than generally acknowledged, and if a tool allows them to work faster, they'll do more work, not reclaim more time.
but more than one explanation can be true at the same time.
@tshirtman @jzb it's also that if people who send work your way learn that you get it done quickly and reliably, they'll send work your way more often
-
@wolf480pl @jzb yeah, but they do mostly rely on workers' honesty to learn that.
And there is always more work to do, in my experience as a dev.
-
@etchedpixels @larsmb @jzb like I said, things that should not be useful but are.
-
@tshirtman @jzb They rely on shit getting done to learn that.
If you finish a task early, that'll unblock your coworker who's been waiting for you to finish it, so they'll know you did it.
If you start a task late, sure, you spend less time doing things, but do you get to relax for the first half of the day, knowing that you have a backlog of things to do?
-
@etchedpixels @larsmb @jzb as an industry we've spent this many decades failing to "sharpen the saw", is it surprising we're now all gung ho about the enchanted broadswords we've just been gifted? They're so much better at opening bottles than the old way!
-
@jzb It is an inherent limitation of how LLMs currently exist and are implemented.
They do strive to minimize it through scale, but it's also a reason why they do get "creative" in their answers.
Like with any stochastic algorithm, they perform best if you can (cheaply) validate the result, e.g., does a program still pass the tests? This is much harder for complex questions about the real world.
Side note: I'd call them anything but 'creative'.
If anything, the behavior is better described as 'evasive', since the model effectively keeps talking, without any substantial data backing up what's being conveyed.
Or, as Hicks, Humphries and Slater put it: They're bullshitting.
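The "validate the result cheaply" point above can be sketched as a generate-and-validate loop. This is a minimal illustration, not any real LLM API: `propose` is a random chooser standing in for a stochastic model, and the names (`propose`, `validate`, `generate_until_valid`) are hypothetical.

```python
import random

def validate(candidate_fn, cases):
    """Cheap check: does the candidate pass every input/output pair?"""
    return all(candidate_fn(x) == expected for x, expected in cases)

def generate_until_valid(propose, cases, max_tries=100):
    """Keep sampling candidates from the stochastic proposer until one validates."""
    for _ in range(max_tries):
        candidate = propose()
        if validate(candidate, cases):
            return candidate
    return None  # no candidate survived the checks

def propose():
    # Stand-in for an LLM: randomly proposes one of several plausible "programs".
    return random.choice([
        lambda x: x + x,   # wrong: doubles the input
        lambda x: x * x,   # right for our spec
        lambda x: x ** 3,  # wrong: cubes the input
    ])

random.seed(0)  # make the sketch deterministic
cases = [(2, 4), (3, 9), (5, 25)]  # spec: square the input
fn = generate_until_valid(propose, cases)
```

The test suite here plays the role of the cheap validator; without one, you are back to trusting whatever the model keeps talking about.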