Giving users the power to moderate their own feeds is the key. Centralized moderation will always be flawed--a company can never represent your sensitivities as well as you and your peers (and will likely bow to outside pressure to censor, whether it's China or groups of users).
https://www.vice.com/en_us/article/a35yke/tech-companies-didnt-plan-for-chinese-censorship
@kyle I completely agree that censorship is bad. Is there a way to limit the number of posts you see from certain accounts? I had to unfollow a few because they post 50 times per day.
@kyle While I agree with this stance in general, the entire point of moderation is to reduce the need to deal with unwanted content. Decentralization is important, but practical open-world systems need support for delegating moderation responsibilities to one (or more) trusted (and decentralized) authorities.
@okennedy The approach I'd like to take is something I documented in this feature request: https://source.puri.sm/liberty/smilodon/issues/6
In essence: allow users to add custom hashtags to posts, allow their followers (optionally) to see those tags, and let them search/filter on them.
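A rough sketch of how a client might honor those tags (minimal Python sketch; these names are illustrative, not from the feature request):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PeerTag:
    post_id: str
    tagger: str   # account that applied the tag, e.g. "alice@example.social"
    hashtag: str  # e.g. "hatespeech", "nsfw", "rude"

def honored_tags(tags: list[PeerTag], trusted_taggers: set[str]) -> list[PeerTag]:
    """Keep only tags applied by peers the user explicitly opted into."""
    return [t for t in tags if t.tagger in trusted_taggers]
```

The key design point is the opt-in: tags from anyone you haven't chosen to trust simply never reach your filters.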
@kyle An interesting approach to distributed moderation. I'm concerned it replaces a centralized tyrant with the tyranny of the crowd, but it's a step in the right direction. I wonder how one might design it to limit peer bullying or tribal behavior. It's also an interesting exercise in query language/UI design: you'd want thresholding (hide posts tagged [rude] by at least 3 of my friends) and client-specific tests (hide posts tagged [nsfw] on my work computer).
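For example, the rule engine might look something like this (purely hypothetical names; just a sketch of the thresholding and client-specific tests):

```python
from dataclasses import dataclass, field

@dataclass
class FilterRule:
    hashtag: str                                    # tag to match, e.g. "rude"
    threshold: int = 1                              # min distinct trusted taggers
    clients: set[str] = field(default_factory=set)  # empty set = every client

def should_hide(tags: list[tuple[str, str]],
                rules: list[FilterRule], client: str) -> bool:
    """tags: (tagger, hashtag) pairs on a post, already limited to trusted peers."""
    for rule in rules:
        if rule.clients and client not in rule.clients:
            continue  # this rule is scoped to other devices
        taggers = {tagger for tagger, tag in tags if tag == rule.hashtag}
        if len(taggers) >= rule.threshold:
            return True
    return False

# "hide posts tagged [rude] by at least 3 of my friends"
# "hide posts tagged [nsfw] on my work computer"
rules = [FilterRule("rude", threshold=3),
         FilterRule("nsfw", clients={"work-laptop"})]
```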
@okennedy It's a crowd that you explicitly choose (like how you can choose whether you see boosts from someone you follow). You won't see bullying or tribal behavior unless you follow a bully or tribe and choose to see their tags. I give examples in that link.
@kyle A few points here: First, take your #hatespeech example. Yes, you get to choose which peers' tags your client will respect, but what if one of your peers starts abusing the privilege? How would you know? (Maybe add a UI indication of hidden toots + why they're hidden.) Second, I'm wary of any infrastructure that makes it easy to create a filter bubble... A peer-created filter bubble is almost as bad as a centrally-mandated one (maybe allow peers to downvote tags?).
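E.g., a hidden toot could collapse to a one-line explanation instead of vanishing entirely (hypothetical sketch):

```python
def placeholder(hashtag: str, taggers: set[str]) -> str:
    """Render a collapsed toot showing the reason it was hidden."""
    who = ", ".join(sorted(taggers))
    return f"[hidden: tagged #{hashtag} by {who} (tap to show anyway)]"

print(placeholder("hatespeech", {"bob@example.social", "alice@example.social"}))
# [hidden: tagged #hatespeech by alice@example.social, bob@example.social (tap to show anyway)]
```

Surfacing who applied the tag keeps the whole scheme auditable: abuse becomes visible the moment it happens.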
@kyle 2/2: There's an argument to be made for distributed authorities performing moderation duties rather than direct peers: Any such authority needs to first build trust from the community, which serves as a safeguard against abuse. Any such authority must also balance the needs of many participants. Admittedly, this won't eliminate filter bubbles, but it at least makes them harder to create.
@okennedy This problem already exists: today the one "abusing" the #hatespeech hashtag is a central Mastodon sysadmin who isn't an expert in these areas and likely doesn't share your views. With my approach the user at least has recourse if they discover someone in their feed is abusing tagging--one click and it's resolved.
Phase one is to get the tagging feature in place. Phase two is to expand it to give individuals even more control over what they filter and more visibility into it.
@kyle Agreed that moving away from centralization is the first step in the right direction. Glad to hear that there are plans for the next steps as well!
@kyle I don't think what we want will ever satisfy the advertisers.