Did Turing realize his standards for computer intelligence made incentives to develop AI that are judged first on how well they deceive?

Whether it's lying about being human, lying about the quality or correctness of your work, or creating false images/audio/video presented as real, many (most?) current applications of "AI", and their metrics for success, seem to be founded on intentional deception.

It seems risky & unethical to create intelligence where lying is the first lesson.

@kyle it's an interesting question. the turing test is appealing because it's so simple, but i wonder what other benchmark would distinguish between a really effective tool, and a possibly thinking machine. i also wonder if deception is a hallmark of all kinds of intelligence, or just maybe ours...

@kyle He never introduced it as a 'standard for computer intelligence' though, did he? I thought it was just a device to be able to ask the question of whether machines can think.

Librem Social
