No, my point has nothing to do with creativity. It's about the fact that their output is tailored to look and sound a certain way during the later stages of model training; it's not representative of the original text data the base model was trained on.
At some point people got the idea that LLMs just repeat or imitate their training data, and that's completely false for today's models.