@andrewg oh. Wow. It may be one of the most stupid statements I have seen on the topic.

@eliasr @andrewg Indeed. And there have been quite a *loooot* of statements in the last year or so...

@PatrickOBeirne @eliasr @andrewg @Tupp_ed

Being good at “talking about AI” is a critical requirement to be an adviser on AI.

Now, “detailed knowledge about AI” is a less important thing for the role. One might argue it might be distracting on the job.

@yacc143 @eliasr @andrewg @Tupp_ed
That's appropriate for pundits or entertainers. Not IMO for advisers on policy decisions that affect citizens.

@yacc143 @PatrickOBeirne @eliasr @andrewg @Tupp_ed
People with detailed knowledge tend to answer yes/no questions with "it depends". Like WTF? Is AI neutral or is it not? I don't want the guy who says "it depends on the input", nor the guy saying "no, since the input will never be fully neutral". I want the guy saying "yes, it's perfect for your application! What was the purpose again?"

@floS @PatrickOBeirne @eliasr @andrewg @Tupp_ed

Sadly, these obviously wrong answers will be used to develop policy and laws. There the fun stops.

@PatrickOBeirne @eliasr @andrewg @Tupp_ed There's literally only one person on that entire Advisory Council who's qualified to be on it: Dr Abeba Birhane. The rest are jobsworths and industry shills (whose companies just happen to all be launching more and more consumer AI products these days).

@eliasr @andrewg

Dismissing this as stupid probably misses the point. Different variations of this theme seem to be, first and foremost, ideological statements in favor of depoliticisation.

Within that frame, models don't discriminate, given enough unbiased data, just as markets are self-regulating, given a lack of distortions.

The (not so) hidden assumption is that indeed, "there is no alternative" and the system itself, the context in which we apply the tools, the labels we use, the goals and so on, are either non-discriminatory and free of bias, or can be made so by adding more layers of algorithmic whitewashing.

If reality doesn't follow, too bad for reality.

@dustyattic @eliasr @andrewg “given enough unbiased data”

…and who decides what “unbiased” even means?

@andrewg @eliasr

> the interesting thing is that you can take that a step further and say that if you thoughtfully build AI systems and ensure a lack of bias in how you build it, you can actually create objective systems...

Ahahaha haha ha ha haah oh fsck. :blobcat0_0:

@rysiek @andrewg @eliasr I had to take this and share it on LinkedIn.

The author of the piece has accused me of lying and has shut down any conversation. Quite hilarious.

linkedin.com/posts/tanepiper_a

@tanepiper I am with the author on the headline thing, my pieces also get headlined in ways that I am not entirely crazy about.

But I stand by my "hand-waving" comment above. At this point saying "build it thoughtfully" does not cut it in the context of all the reporting and science around how difficult (if not impossible) it is to do it right…

@andrewg @eliasr

@rysiek @andrewg @eliasr Oh, I agree too, the headline is not in his control — but it is in the control of the editorial team, and as I was pointing out, headlines can be misused. But yes, the whole piece is weak.
