"AI models may be developing their own ‘survival drive’, researchers say" - Guardian

The "research paper" was a tweet by an AI company.
The "experiment" was asking the LLM to shut down.

A model is ~never~ shut down by asking it to shut itself down. *The only possible response is a hallucination.*

You shut down a model by turning off the deterministic software running it; that works every time, w/o fail.

Yet the Guardian's shill tech writers just report AI industry tweets as if they were fact.