LLMs have no concept of facts or lies, or right or wrong.
And contrary to your rather morbid view of society, most WP editors generally try hard to get it right, and they support rather than undermine each other (don't let the more contentious topics distract you from the much larger pool of contributions).
More to the point, with humans you can demand a source, and at scale that iterative process should arrive at the right answer for a good percentage of content. That won't work for LLMs, because none of it carries any meaning for them.