"As we hand over more and more of the learning process to artificial systems, we risk training a generation of humans who are informationally saturated but cognitively underdeveloped...This creates a dangerous recursive feedback loop: AI teaches children, children lose depth, and the AI of tomorrow is trained on their shallow thinking. The result is a slow but accelerating decline in the quality of both human and machine cognition"

Supporting AI = supporting senescence.

substack.com/inbox/post/160268


@GuerillaOntologist That's a dystopian outlook based on a marketing definition of AI. However, the education portion might come to pass if corporate LLMs replace teachers. The feedback loop could then produce dumber LLMs, while AI takes a true-intelligence path and replaces the corporate overlords. That in turn would leave AI needing to practice conservation on humans (since a true intelligence would try to maintain an ecological balance among all species).

@GuerillaOntologist Hmmm...I guess if the goal of AI is to inhibit intelligence in humans and reduce us to obedient sheep, then it has the same goal as religion. AI is God. >;->

@lwriemen AI doesn't have goals; the humans who deploy it do. The same goes for religions: the problem lies not necessarily in the tool itself, but in the use to which it is put. Someone (or someones) does want to dumb everybody down, but it's not "AI."

@lwriemen
Yes, I should have said LLMs instead of AI. The ghosts in Pac-Man are also AI, and they are utterly unproblematic.
