Anyway, TL;DR, impressed and fascinated as I am by LLMs and GPTs as technology, the things we're using them for are just dumb and, in many cases, dangerous.
The more of these artificial liars get rolled out, the more we have to choose between believing everything the machine says -- which is lunacy -- or believing nothing that it says -- which is exhausting.
6/
I don't understand why you would deploy something that looks like an information retrieval tool, but simply invents things when it doesn't know the answer.
No one would design a database system that returned random data if a query failed to match any records. (Except perhaps in a specialized application such as a game).
The argument “but it gets it right MOST of the time” cuts no ice. That actually makes it WORSE, because it's harder to tell when the robot is fabricating answers.
5/