
@feld
I work for a company creating a recommendation algorithm. Eventually, we'll be training it to respond to users' past choices, but until users have made any choices, we're feeding it our own recommendations. So the whole thing is fundamentally based on our biases.
Is this wrong? What do you think our obligations to our users are?
@alex
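
For concreteness, here is a minimal sketch of the kind of cold-start setup the post describes: the company's own picks are served until a user has any history, after which their past choices take over. All names here (`SEED_PICKS`, `recommend`, `user_history`, the tag-overlap ranking) are hypothetical illustrations, not details from the post.

```python
# Hypothetical sketch of the cold-start setup described above; names and the
# ranking heuristic are assumptions, not taken from the original post.

from collections import Counter

# Editorial seed recommendations chosen by the company -- the source of the
# bias the post is asking about.
SEED_PICKS = ["item_a", "item_b", "item_c"]

# Per-user interaction history; empty until the user has made choices.
user_history: dict[str, list[str]] = {}

def recommend(user_id: str, catalog: dict[str, set[str]], k: int = 3) -> list[str]:
    """Return up to k items: seed picks for new users, tag-overlap ranking otherwise."""
    history = user_history.get(user_id, [])
    if not history:
        # Cold start: the user has made no choices yet, so fall back to the
        # company's own recommendations.
        return SEED_PICKS[:k]

    # Count tags from items the user has chosen, then rank unseen items by
    # how many of those tags they share.
    liked_tags = Counter(tag for item in history for tag in catalog.get(item, set()))
    scored = [
        (sum(liked_tags[t] for t in tags), item)
        for item, tags in catalog.items()
        if item not in history
    ]
    return [item for _, item in sorted(scored, reverse=True)[:k]]
```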
