“AI chatbots from tech companies such as OpenAI and Google have been getting so-called reasoning upgrades over the past months – ideally to make them better at giving us answers we can trust, but recent testing suggests they are sometimes doing worse than previous models. The errors made by chatbots, known as ‘hallucinations’, have been a problem from the start, and it is becoming clear we may never get rid of them.”
#AI
newscientist.com/article/24795


@marcusosterberg I find the status and usage of LLM-based AI confusing. I'm surprised to hear how much it is apparently used, and people also say it is good and useful. I think it was in today's news that ordinary search engines are used less than before. On the other hand, I keep hearing about the problems with LLMs.
Engineer though I am, I can be surprised at how slow I am to adopt new things. As far as I know, I haven't even tried an LLM (though I have used e.g. Google Translate). I remain sceptical of LLMs for the time being.

Librem Social

Librem Social is an opt-in public network. Messages are shared under Creative Commons BY-SA 4.0 license terms. Policy.
