“OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws – Computerworld”

computerworld.com/article/4059

> OpenAI’s own advanced reasoning models actually hallucinated more frequently than simpler systems.

An observation I made in my book, even in the first edition two years ago, was that the research gave us reason to believe that hallucinations get worse with increasing model size.

So, y'know... I literally told you so


@baldur the thing about more advanced models hallucinating more, I suppose it's kind of the same effect as when you fit a polynomial to a set of points? If you try to get fancy with a higher-degree polynomial you can make it pass through more of the points, but the curve will shoot off even more wildly in between them.
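(A minimal sketch of that analogy, using synthetic noisy data and arbitrarily chosen degrees, purely for illustration: the higher-degree fit matches the sample points more closely but deviates far more from the underlying curve between them.)

```python
# Illustrative sketch: overfitting noisy points with a high-degree polynomial.
# The data, degrees, and error metric here are all arbitrary choices.
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)                                 # 10 sample points
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)    # noisy observations

low = Polynomial.fit(x, y, 3)    # modest fit
high = Polynomial.fit(x, y, 9)   # degree 9 can hit all 10 points

# Evaluate both fits on a fine grid between the sample points.
x_dense = np.linspace(0, 1, 1000)
truth = np.sin(2 * np.pi * x_dense)

for name, p in [("degree 3", low), ("degree 9", high)]:
    fit_err = np.max(np.abs(p(x) - y))                    # error at the points
    gap_err = np.max(np.abs(p(x_dense) - truth))          # error between them
    print(f"{name}: error at points = {fit_err:.3f}, worst error between points = {gap_err:.3f}")
```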
