@feld
I work for a company creating a recommendations algorithm. Eventually, we'll be training it to respond to users' past choices, but until users have made any choices, we're feeding it our own recommendations. So, the whole thing is fundamentally based on our biases.
Is this wrong? What do you think our obligations are to our users?
@alex