Did Turing realize that his standard for computer intelligence would create incentives to develop AI that is judged first on how well it deceives?
Whether it's lying about being human, lying about the quality or correctness of your work, or creating false images/audio/video presented as real, many (most?) current applications for "AI" and their metrics for success seem to be founded on intentional deception.
It seems risky & unethical to create intelligence where lying is the first lesson.
@kyle He never introduced it as a 'standard for computer intelligence' though, did he? I thought it was just a device to be able to ask the question of whether machines can think.
@kyle it's an interesting question. the turing test is appealing because it's so simple, but i wonder what other benchmark would distinguish between a really effective tool, and a possibly thinking machine. i also wonder if deception is a hallmark of all kinds of intelligence, or just maybe ours...