This article takes an evidence-based approach to understanding LLMs, arguing that it's a category error to apply terms like "hallucination" to them or to treat them as having personalities

LLMs are just following statistically probable pathways through a training-defined space of possible text outputs (see the sketch at the end of this post)

It's us humans who are hallucinating. Our evolution has predisposed us to see agency where none exists, to see Thor in a thunderstorm, to see a person in a language generator

ai-cosmos.hashnode.dev/underst
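
As a rough illustration of what "statistically probable pathways" means in practice, here is a minimal, hypothetical sketch of next-token sampling: the trained model assigns a score to each candidate token, and the next token is drawn from the resulting probability distribution. The vocabulary, scores, and temperature below are invented for illustration, not taken from any real model.

```python
# Toy sketch of next-token sampling: an LLM repeatedly turns model scores
# (logits) over its vocabulary into probabilities and draws the next token
# from that distribution. Values here are made up for illustration.
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature=1.0):
    """Pick one token at random, weighted by its probability."""
    probs = softmax(logits, temperature)
    return random.choices(vocab, weights=probs, k=1)[0]

vocab = ["storm", "Thor", "cloud", "rain"]   # hypothetical candidate tokens
logits = [2.1, 0.3, 1.7, 1.9]                # hypothetical model scores

print(sample_next_token(vocab, logits))      # usually "storm", occasionally others
```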

Stop deep sea mining before it starts - greenpeace.org/international/a

This is hugely important, and a major coming battle

Your ongoing support helps keep radical independent publishing alive, which is critically important in these turbulent times. Join via the link in our bio.

kolektiva.media/w/7q2vzbFUvTNX

Our comrades from @catl have released the first episode of a brand new anarchist news show, based out of the land of tacos. Here it is, now with English subs.

For the original version in Spanish:
kolektiva.media/w/anmXzq7hRgKh

Seriously, don’t ever tell me “conservatives” are good with the economy again. There’s been a ton of evidence for a really long time that it’s obviously not true, enough that people who’ve claimed it at any point should be embarrassed — but it’s so horribly clear now that the claim should make you angry.
