@cwebber This might actually be subject to change though.
Njoy: https://arxiv.org/abs/2510.22954
Artificial Hivemind: The Open-Ended Homogeneity of Language Models (and Beyond)
tl;dr: LLMs are coming closer and closer to producing homogeneous, reproducible outputs. One could be under the impression that if models are trained on the same data and scaled towards a certain size, asymptotic behaviour would be a reasonable expectation, because that is what happens with large numbers in statistics.
What a ... surprise.