I don't use the term "stealing" for ideas, and I don't think it applies to LLM output either.
I think that training LLMs on content to produce a model, then using that model to generate new content, is more akin to human learning than to verbatim copying.
I don't think that LLM output is a "derivative work" of the training data.