Turns out Amazon had two outages in December caused by their IaaS management slop generator:
Amazon’s cloud ‘hit by two outages caused by AI tools last year’
https://www.theguardian.com/technology/2026/feb/20/amazon-cloud-outages-ai-tools-amazon-web-services-aws
> Reported issues at Amazon Web Services raise questions about firm’s use of artificial intelligence as it cuts staff
Sounds like things are not going well over at AWS.
-
> Michał Woźniak, a cybersecurity expert, said it would be nearly impossible for Amazon to completely prevent internal AI agents from making errors in future, because AI systems make unexpected choices and are extremely complex.
> “Amazon never misses a chance to point to “AI” when it is useful to them – like in the case of mass layoffs that are being framed as replacing engineers with AI. But when a slop generator is involved in an outage, suddenly that’s just ‘coincidence’,” he added.
Henlo.
-
@rysiek That's you! :3
(And I agree 100% with your statements...)
-
@rysiek at the end of the day:
- llms are just 'talktotransformer.com' but massive
- all they do is autocomplete
- no mcp server, agents.md, skills.md or any other layer of abstraction fixes the core problem
- once the input makes it through all the abstraction layers it still lands on the same llm core model (sketch below)
- those core models were trained to fellate the user and tell people what they want to hear, not "the truth"
this is cloud and containers and virtualization all over again.
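To make that point concrete, here is a minimal, purely illustrative sketch (hypothetical names, not any real agent framework or API): every layer of scaffolding just contributes more text, which gets flattened into one prompt and handed to the same autocomplete step.

```python
# Illustrative sketch only: hypothetical names, not a real agent framework.
# The point: MCP servers, AGENTS.md, skills, guardrails etc. all reduce to
# text that is concatenated into one prompt for the same next-token model.

def call_llm(prompt: str) -> str:
    """Stand-in for the core model: autocomplete over whatever text it gets."""
    # A real model samples the most plausible-sounding continuation;
    # this placeholder just makes that explicit.
    return f"<plausible-sounding continuation of: {prompt[-40:]!r}>"

def run_agent(user_input: str, tool_results: list[str], system_rules: str) -> str:
    # Every "abstraction layer" contributes more text...
    prompt = "\n".join([system_rules, *tool_results, user_input])
    # ...but the final decision is still a single autocomplete call.
    return call_llm(prompt)

print(run_agent(
    user_input="Should we drain this availability zone?",
    tool_results=["monitoring says: all metrics nominal"],
    system_rules="You are a careful infrastructure operator.",
))
```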
-
@rysiek this also means that no cirque du soleil trapeze act of mcp servers, llms checking other llms' output, ralph wiggum model, openclaw bots or other 'stuff on top' fixes the core issue.
I'm actively working on a con talk on this topic, so your good press today is getting a hat tip in my slides :D