@andrewg is this real, or a joke?
@PatrickOBeirne @eliasr @andrewg @Tupp_ed
Being good at “talking about AI” is a critical requirement to be an adviser on AI.
Now, “detailed knowledge about AI” matters less for the role. One might even argue it would be distracting on the job.
@yacc143 @PatrickOBeirne @eliasr @andrewg @Tupp_ed
People with detailed knowledge tend to answer yes/no questions with "it depends". Like WTF? Is AI neutral or is it not? I don't want the guy who says "it depends on the input", nor the guy saying "no, since the input will never be fully neutral". I want the guy saying "yes, it's perfect for your application! What was the purpose again?"
@floS @PatrickOBeirne @eliasr @andrewg @Tupp_ed
Sadly, these obviously wrong answers will be used to develop policy and laws. There the fun stops.
@PatrickOBeirne @eliasr @andrewg @Tupp_ed There's literally only one person on that entire Advisory Council who's qualified to be on it: Dr Abeba Birhane. The rest are jobsworths and industry shills (whose companies just happen to all be launching more and more consumer AI products these days).
Dismissing this as stupid probably misses the point. Different variations of this theme seem to be, first and foremost, ideological statements in favor of depoliticisation.
On this view, models don't discriminate, given enough unbiased data, just as markets self-regulate, given a lack of distortions.
The (not so) hidden assumption is that indeed, "there is no alternative" and that the system itself, the context in which we apply the tools, the labels we use, the goals and so on, are either non-discriminatory and free of bias, or can be made so by adding more layers of algorithmic whitewashing.
If reality doesn't follow, too bad for reality.
@dustyattic @eliasr @andrewg “given enough unbiased data”
…and who decides what “unbiased” even means?
@eliasr @andrewg Indeed. And there have been quite a *loooot* of statements in the last year or so...