Spent the day talking to works council members about "AI".
-
Spent the day talking to works council members about "AI". And it's kinda wild hearing their stories from the trenches: Management is 100% in "AI can do everything" fantasy land and makes huge plans for how to use "AI" to cut workers, while real projects that supposedly can do 50% of a specific task end up being able to do 8%. And they still go live. It's fucking bonkers. CEOs are really not okay.
-
But it was super fun to lead them through a framework for forcing reasonable evaluation on "AI" projects (which kills most of them) and to see how they felt empowered and able to actually do their job again.
-
Which was really fucked up to see: these folks actually want to protect their organizations from burning a lot of resources on bullshit instead of fixing actual problems that would help the workers _and_ the organization. And they have to actively fight management who got their brains ruined on LinkedIn.
-
@tante is that framework public?
-
@tante CEOs were never okay. I always found senior management to be a narcissistic bunch of assholes, always looking for the next cool project to burnish their CVs. Many were totally scared of tech and easily fooled. And many more were full-on tech cultists, because the tech bros were always promising them they could cut costs and fire people.
-
@cm I have presented it in talks but have not fully formalized it yet.
-
@cm I should write it down, I know, but that takes a lot of time, and time is currently my most limited resource :(
-
But: if you have any chance to speak to unions/workers from different domains and organizations, do so.
It's fascinating how
a) different organizations are and how differently they operate, and
b) they all end up with the same handful of structural problems.
-
@tante 9 times out of 10 (yes, that's an anecdotal stat), the people most resistant to AI-all-the-things are the most talented, most dedicated workers. Orgs that penalize or fire those people are committing self-sabotage. 🙁
-
@tante It is a weird time to be alive. I wrote The Futzing Fraction functionally *for free* to help CEOs do their own cost modeling. And they don't even read it themselves — employees read it, and carefully create customized internal presentations to make its framing *even gentler* to their orgs, and it still only works to help soften AI mandates like half the time (at least based on the feedback I have received).
-
@tante Critics are characterized as surly bomb-throwers when we are trying SO HARD to help corporations succeed, just so they won't make our world *even worse*. It's a literal win-win that they are trying to avoid.
-
@tante For many founders and CEOs, the one thing that irritated them about starting a tech company was having to build an expensive and often ungovernable engineering team. Some (in my experience) reluctantly embraced the culture by wearing hoodies and paying attention to tech journalism. Others maintained a resentful distance. Most try to micromanage it regardless.
The fact that AI is being embraced enthusiastically top-down is frankly one of the least surprising developments of my career thus far.
-
@glyph you can only help people if they are willing to accept help, I guess. But it's tragic.
-
@glyph the number of times I've asked a CEO/CTO about their "AI" project (how they actually measure cost, or what their measurable criteria for success are) and only got a look as if I were speaking in tongues is really scary.
Like: isn't turning everything into metrics and measurements in order to make data-driven decisions exactly what management is supposed to do?
-
@tante yeah it's a real "YOU HAD ONE JOB" situation
-
@glyph@mastodon.social @tante@tldr.nettime.org this is the thing that drives me a little batty: "AI", or (mis)applied statistics, is just... well, statistics. And all these "AI experts" never even try to use any sort of metric, much less a statistically rigorous method, to gauge if the damn thing works or not...
-
@aud @tante @glyph well, they do have metrics, it's just that they're generally ad-hoc and terrible metrics.
And even when they aren't, Goodhart's Law ensures that relying on them turns the exercise into farce relatively soon.
Arguably that kind of farce is the entire history of the false spring: "simply scale it up" worked surprisingly well, then worked surprisingly well again, and therefore we can extrapolate that it will work forever and [financial irresponsibility] and oops, now it's not working anymore, oh shit oh fuck, uhhhh AGENTS, we're doing agents now! Yeah, that's the ticket. (And so on.)
-
@tante unfortunately, and increasingly, management is more interested in whatever looks good in PowerPoint than in how their product does in the real world.