It is, but partly because it is a common form in the training data. LLM output seems to use the form more often than people do, presumably either due to some bias in the training data (or the way it is tokenised) or due to other common token sequences leading into it (remember: it isn't the official acronym, but Glorified Predictive Text is an accurate description). While it is a smell, it certainly isn't a reliable marker; there needs to be more evidence than that.
It must be, but any given article is unlikely to match the average of the training material, and so the expected frequency of such a construction will differ.
> A production-grade WAL isn't just code, it's a contract.
I hate that I'm now suspicious of this formulation.