You do have a point here, but because of the way LLMs work, they tend to produce answers that sound highly plausible while containing hallucinations, and that's where I see
Brandolini's law kicking in, possibly with more than the single order of magnitude it predicts. The Wikipedia article linked above quotes:
"In fast-changing fields, like information technology, refutations lag nonsense production to a greater degree than in fields with less rapid change."
Try to dig for gold in
sundialsvc4's posts - there
are gems, but they're buried in misleading crap that, to a newbie's eyes, can look like diamonds. ChatGPT's training data certainly contains valuable insights, but also enough contaminated crap that it is irresponsible to leave the job of sorting one from the other "as an exercise for the reader".