
Piero Bosio Social — Personal Web Site (Fediverse)

Social forum federated with the rest of the world. It's not the instances that count, it's the people.

A thought that popped into my head when I woke up at 4 am and couldn’t get back to sleep…

Uncategorized
  • A thought that popped into my head when I woke up at 4 am and couldn’t get back to sleep…

    Imagine that AI/LLM tools were being marketed to workers as a way to do the same work more quickly and work fewer hours without telling their employers.

    “Use ChatGPT to write your TPS reports, go home at lunchtime. Spend more time with your kids!” “Use Claude to write your code, turn 60-hour weeks into four-day weekends!” “Collect two paychecks by using AI! You can hold two jobs without the boss knowing the difference!”

    Imagine if AI/LLM tools were not shareholder catnip, but a grassroots movement of tooling that workers were sharing with each other to work less. Same quality of output, but instead of being pushed top-down, being adopted to empower people to work less and “cheat” employers.

    Imagine if unions were arguing for the right of workers to use LLMs as labor saving devices, instead of trying to protect members from their damage.

    CEOs would be screaming bloody murder. There’d be an overnight industry in AI-detection tools and immediate bans on AI in the workplace. Instead of Microsoft Copilot 365, Satya would be out promoting Microsoft SlopGuard - add-ons that detect LLM tools running on Windows and prevent AI scrapers from harvesting your company’s valuable content for training.

    The media would be running horror stories about the terrible trend of workers getting the same pay for working less, and the awful quality of LLM output. Maybe they’d still call them “hallucinations,” but it’d be in the terrified tone of 80s anti-drug PSAs.

    What I’m trying to say in my sleep-deprived state is that you shouldn’t ignore the intent and ill effects of these tools. If they were good for you, shareholders would hate them.

    You should understand that they’re anti-worker and anti-human. TPTB would be fighting them tooth and nail if their benefits were reversed. It doesn’t matter how good they get, or how interesting they are: the ultimate purpose of the industry behind them is to create less demand for labor and aggregate more wealth in fewer hands.

    Unless you happen to be in a very very small club of ultra-wealthy tech bros, they’re not for you, they’re against you.

  • @jzb Interesting thought experiment. Thanks for sharing that!

  • @jzb It's important to note though that you also paint a different possible reality: one in which we don't have to fear automation, but would get to welcome it.

    Automation shouldn't be bad.

    It wouldn't change the insane energy requirements as currently implemented. But it's not the tech that's necessarily evil: it is the people driving it.

  • @jzb I agree with you, but I also say, "Why not both?" While one is not good, the two can coexist.

    What I think the AI Boosters and AI Detractors forget is that there is a group of folks quietly using this as a tool to retake their lives. Take a look at the various "overemployed" Reddits as an example.

    This doesn't get rid of the valid criticisms of the AI companies and resource concerns.

  • This basically?
  • @matthewcroughan No. That's "wow, look at how you can overload your workers when they say 'I can't be in 3 places at once'."

    They're not trying to sell Copilot to the person in the picture, they're trying to sell it to their bosses and trying to sell the productivity glory story to go along with it.

  • True, I guess that's why she isn't smiling
  • @bexelbie From my POV the answer to "why not both?" is that you can't really separate them right now.

    Adoption of the commercial tools for whatever purpose does more to pave the way to the negative outcomes than any positive ones.

    I think the "overemployed" thing is more of a statistical anomaly than a real thing.

    Perhaps I'm just old and inflexible, though. Ideologically, I mean. I know I'm not very flexible physically these days...

  • @larsmb @jzb
    I keep having this conversation with my husband. He is of the opinion that a “useful tool” should be used within its capabilities, not pushed too far. For example, he is writing a thesis: he has all the information together, and now he wants to use AI to take his sources and find where in his document he has derived the information from. He likes to use the example of “a personal librarian.”

  • @jzb you had me at "Microsoft SlopGuard" :)

  • @larsmb @jzb
    I think if it’s a “closed system” where you feed it information, tell it to only use the information it has, and have it say when it “comes up empty,” it should be okay. And for speeding up the process of citations, that does seem useful. (He would also double-check its accuracy: say it claims something is on page 34, it should actually be there, otherwise it’s not valid.)
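    That "double-check" step can be sketched in a few lines. This is a hypothetical helper, assuming the thesis pages have already been extracted as plain text; the function name and the sample data are made up for illustration:

    ```python
    # Hypothetical sketch of the "double check" step: given a claimed citation
    # (a quote plus a page number), verify the quote really appears on that page.
    # The `pages` dict stands in for text extracted from a thesis PDF.

    def citation_is_valid(pages: dict[int, str], page: int, quote: str) -> bool:
        """Return True only if `quote` actually occurs on `page`."""
        text = pages.get(page, "")
        # Normalize whitespace so line breaks in the source
        # don't cause false negatives.
        return " ".join(quote.split()) in " ".join(text.split())

    pages = {
        33: "Background material on federated networks.",
        34: "Instances don't matter, people do.",
    }

    print(citation_is_valid(pages, 34, "people do"))    # True
    print(citation_is_valid(pages, 34, "shareholder"))  # False
    ```

    The point of the sketch: the validation itself is deterministic and cheap, so an invalid page-34 claim gets caught regardless of how the citation was produced.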

  • @jzb Your 4am sleep-deprived state was bang on 💥💯

  • @em_and_future_cats @jzb LLMs are notoriously bad at using only the information they have, at using all the information they have, and at telling whether they did.

    So yes, they can be useful, but every answer needs to be validated against facts.

  • @larsmb @jzb
    I just worry that the younger generation will not understand how and when to use it, and when to use their brains 🫩. In the science realm, AI can be useful for a lot of data-heavy and number-crunching stuff, but there must be a limit, so that students can understand what happens and why. (And regulations so hallucinations are limited.)
    The humanities are a no-go zone imo; the use of a closed-system citation mechanism is probably the only exception to this.

  • @larsmb @em_and_future_cats Well, as designed, they are -- I'm not sure whether that's a built-in limitation of LLMs or not. To be fair, I am not an expert on the tech.

    As something of an aside...

    It would be really interesting if you could pair natural-language instruction input with predictable output.

    That is, for example -- if I could query, say, all the data in Wikipedia but get only accurate output. Or if you had something like Ansible with natural-language playbook creation.

    "Hey, Ansible -- I want a playbook that will install all of the packages I have currently installed and retain my dotfiles" (or something) and be guaranteed accurate output... that would be amazing.

    Except that I also worry about losing the skills to do those things. I worry about the loss of incidental knowledge when researching, if a computer can return *only* what you ask for and sacrifice accidental discovery.

    (I also still think search engines were something of a mistake and miss Internet directories. Yeah, I'm fun at parties....)
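    The deterministic half of that Ansible idea can be sketched: once a concrete package list exists, building the playbook needs no guessing. In this hypothetical generator, the module names `ansible.builtin.package` and `ansible.builtin.copy` are real Ansible modules, but the function and its inputs are invented for illustration:

    ```python
    # Sketch: build an Ansible-style play, as plain Python data, that installs
    # a given package list and copies a given list of dotfiles. The natural-
    # language part would only need to produce the two input lists.

    def build_playbook(packages: list[str], dotfiles: list[str]) -> list[dict]:
        """Return an Ansible-style play installing `packages` and copying `dotfiles`."""
        return [{
            "name": "Recreate my environment",
            "hosts": "localhost",
            "tasks": [
                {"name": "Install packages",
                 "ansible.builtin.package": {"name": sorted(packages),
                                             "state": "present"}},
                {"name": "Copy dotfiles",
                 "ansible.builtin.copy": {"src": "{{ item }}", "dest": "~/"},
                 "loop": sorted(dotfiles)},
            ],
        }]

    play = build_playbook(["git", "neovim"], [".bashrc", ".vimrc"])
    print(play[0]["tasks"][0]["ansible.builtin.package"]["name"])  # ['git', 'neovim']
    ```

    Serializing `play` to YAML would give a runnable playbook; the "guaranteed accurate output" part lives entirely in this deterministic builder, not in the model.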

  • @larsmb @jzb 💯 this is what I keep stressing to my husband! Validate the heck out of it if you use it... and then, is it *actually* saving you time in the end?

  • @jzb that’s a whole lot of text to say the problem is capitalism

  • @jzb @larsmb
    This too! Granted, if you got to the PhD level through education before LLMs, you are probably okay with using it to “finish up,” but I really worry about younger generations (even myself) when it comes to all of this.

  • @jzb It is an inherent limitation of how LLMs currently exist and are implemented.
    They do strive to minimize it through scale, but it's also a reason why they do get "creative" in their answers.
    Like with any stochastic algorithm, they perform best if you can (cheaply) validate the result: e.g., does the program still pass its tests?

    This is much harder for complex questions about the real world.

    @em_and_future_cats
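    That "cheaply validate" pattern amounts to: treat the model as a stochastic candidate generator and accept only answers that pass a fast deterministic check. A toy sketch, with `propose` standing in for an LLM call; everything here is hypothetical:

    ```python
    # Generate-and-validate: keep sampling candidates from a stochastic source
    # until one passes a cheap deterministic check, or give up after a budget.

    import random

    def propose(rng: random.Random) -> int:
        """Stand-in for a stochastic generator: guesses a divisor of 84."""
        return rng.randint(2, 84)

    def generate_and_validate(check, attempts: int = 1000, seed: int = 0):
        """Return the first candidate that passes `check`, else None."""
        rng = random.Random(seed)
        for _ in range(attempts):
            candidate = propose(rng)
            if check(candidate):  # the cheap test, e.g. "do the tests pass?"
                return candidate
        return None

    answer = generate_and_validate(lambda n: 84 % n == 0)
    print(answer is not None and 84 % answer == 0)
    ```

    The pattern works exactly because the check is far cheaper than the generation; for open-ended real-world questions there is no such check, which is larsmb's point.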

  • @jzb
    Yeah, no. It's the same theme Marx recognized some 150 years ago:

    John Stuart Mill says in his “Principles of Political Economy”:
    “It is questionable if all the mechanical inventions yet made have lightened the day’s toil of any human being.”
    That is, however, by no means the aim of the capitalistic application of machinery. Like every other increase in the productiveness of labour, machinery is intended to cheapen commodities, and, by shortening that portion of the working-day, in which the labourer works for himself, to lengthen the other portion that he gives, without an equivalent, to the capitalist. In short, it is a means for producing surplus-value.

    [Capital, IV.15]

