My favorite science writer regularly consults ChatGPT about word history and it really, really bums me out.

I heard from one of my sources in the past year that LLMs are not reliable as "databases of word usage in the wild," but I can't remember who said it or what the reason was. I should probably track that down, because this seems like another common misconception about LLMs.