
"Feel free to use, study, modify, distribute and share free software. Happy , and may the source be with you!"

puri.sm/posts/software-freedom

Innocent Users Have the Most to Lose in the Rush to Address Extremist Speech Online

Internet Companies Must Adopt Consistent Rules and Transparent Moderation Practices

Big online platforms tend to brag about their ability to filter out violent and extremist content at scale, but those same platforms refuse to provide even basic information about the substance of those removals. How do these platforms define terrorist content? What safeguards do they put in place to ensure that they don’t over-censor innocent people in the process? Again and again, social media companies have proven unable or unwilling to answer these questions.

A recent Senate Commerce Committee hearing regarding violent extremism online illustrated this problem. Representatives from Google, Facebook, and Twitter each made claims about their companies’ efficacy at finding and removing terrorist content, but offered very little real transparency into their moderation processes.

Facebook Head of Global Policy Management Monika Bickert claimed that more than 99% of terrorist content posted on Facebook is deleted by the platform’s automated tools, but the company has consistently failed to say how it determines what constitutes a terrorist—or what types of speech constitute terrorist speech.

This isn’t new. When it comes to extremist content, companies have been keeping users in the dark for years. EFF recently published a paper outlining the unintended consequences of this opaque approach to screening extremist content—measures intended to curb extremist speech online have repeatedly been used to censor those attempting to document human rights abuses. For example, YouTube regularly removes violent videos coming out of Syria—videos that human rights groups say could provide essential evidence for future war crimes tribunals. In his testimony to the Commerce Committee, Google Director of Information Policy Derek Slater mentioned that more than 80% of the videos the company deletes using its automated tools are down before a single person views them, but didn’t discuss what happens when the company takes down a benign video.

Unclear rules are just part of the problem. Hostile state actors have learned how to take advantage of platforms’ opaque enforcement measures in order to silence their enemies. For example, Kurdish activists have alleged that Facebook cooperates with the Turkish government’s efforts to stifle dissent. It’s essential that platforms consider the ways in which their enforcement measures can be exploited as tools of government censorship.

That’s why EFF and several other human rights organizations and experts have crafted and endorsed the Santa Clara Principles, a simple set of guidelines that social media companies should follow when they remove their users’ speech. The Principles say that platforms should:

provide transparent data about how many posts and accounts they remove;
notify users whose content has been removed, telling them what was removed and under which rules; and
give those users a meaningful opportunity to appeal the decision.

While Facebook, Google, and Twitter have all publicly endorsed the Santa Clara Principles, they all have a long way to go before they fully live up to them. Until then, their opaque policies and inconsistent enforcement measures will lead to innocent people being silenced—especially those whose voices we need most in the fight against violent extremism.

trialing as an replacement

"We're eager to see the results of this trial, the feedback will be very valuable for everyone regardless of the final outcome."

Well, it certainly works for us 😜

matrix.org/blog/2019/09/13/thi

"But accountability without transformation is simply spectacle. We owe it to ourselves and to all of those who have been hurt to focus on the root of the problem."

eff.org/deeplinks/2019/09/effs

" is a set of protocols that sincerely implement Principle of Least Authority in services with ... No plain text on a server... No unnecessary metadata on a server... Nothing to steal from the server"

github.com/3nsoft/3nweb-protoc
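To make the "no plain text on a server" idea concrete, here is a minimal Web Crypto sketch — not the actual 3NWeb protocol, and the function names are made up — in which the client encrypts locally and only ever hands the server ciphertext:

```typescript
// Hypothetical illustration only -- not 3NWeb's real protocol or API.
// The AES-GCM key is generated and kept on the client; the server only
// ever receives the ciphertext and nonce, so there is nothing readable
// (and little worth stealing) on the server side.

async function makeLocalKey(): Promise<CryptoKey> {
  return crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 },
    false, // not extractable: the key material never leaves this client
    ["encrypt", "decrypt"],
  );
}

async function encryptForUpload(plaintext: string, key: CryptoKey) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per message
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plaintext),
  );
  return { iv, ciphertext: new Uint8Array(ciphertext) }; // all the server ever stores
}

async function decryptAfterDownload(
  blob: { iv: Uint8Array; ciphertext: Uint8Array },
  key: CryptoKey,
): Promise<string> {
  const plaintext = await crypto.subtle.decrypt(
    { name: "AES-GCM", iv: blob.iv },
    key,
    blob.ciphertext,
  );
  return new TextDecoder().decode(plaintext);
}
```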

I wrote a blog post that's a fairly detailed how-to on conducting usability testing for free software: samuelhewitt.com/blog/2019-08-

It's gonna take a lot to drag us away from you
There's nothing that a hundred nodes on Tor could ever do
I wish domains weren't all trackin' ya
Gonna take some time to build a `net without those ads


"The truth is that a motivated mob can target anyone, marginalized or not. We would all benefit from effective anti-harassment tools... We suggest that via client-side features is a more robust and safer approach."

puri.sm/posts/curbing-harassme
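A rough sketch of what filtering "via client-side features" can mean in practice (illustrative types only, not code from any real client): the filtering runs on the user's own device, so it keeps working no matter what the server does.

```typescript
// Illustrative sketch of client-side filtering -- Post and MuteList are
// made-up types, not from any real Mastodon/Librem Social client.

interface Post { author: string; text: string; }
interface MuteList { accounts: Set<string>; words: string[]; }

// Runs entirely on the user's device: the server never needs to know
// who or what this user has chosen to filter out.
function visiblePosts(timeline: Post[], mutes: MuteList): Post[] {
  return timeline.filter(post => {
    if (mutes.accounts.has(post.author)) return false;
    const text = post.text.toLowerCase();
    return !mutes.words.some(word => text.includes(word.toLowerCase()));
  });
}
```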

Prepaid SIM cards & mandatory #SIMcardregistration are especially widespread in Africa, allowing for more pervasive #masssurveillance of people using prepaid SIM cards, as well as the exclusion of people who can't

Want to know more? 👉🏼 privacyinternational.org/long-

"Milosevic's well-researched study... points towards new policy solutions... [The author] argues that cyberbullying should be viewed... as part of the larger social problem of the culture of humiliation."

mitpress.mit.edu/books/protect

Very much enjoying Nicky Case's explorable explanations and thought-provoking minigames!

ncase.itch.io/wbwwb

"Moving forward, we aim to make simple security the default. Security features are enabled and cannot be disabled; enhancements are applied when you update. Experimental security features are disabled by default, but you can enable them at any time."

puri.sm/posts/librem-one-desig
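If it helps to picture "enabled and cannot be disabled", here is a tiny, purely illustrative sketch (hypothetical settings shape, not Librem One's actual code): baseline protections have no off switch in the type itself, while experimental features are opt-in and default to off.

```typescript
// Hypothetical settings shape, for illustration only.

interface SecuritySettings {
  readonly transportEncryption: true;  // literal `true`: there is no way to turn it off
  readonly atRestEncryption: true;     // same -- enhancements arrive automatically on update
  experimental: {
    // disabled by default, but the user can enable them at any time
    newKeyExchange: boolean;
  };
}

const defaults: SecuritySettings = {
  transportEncryption: true,
  atRestEncryption: true,
  experimental: { newKeyExchange: false },
};
```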

@davidrevoy Your illustrations bring the user personas in our recent blog post to life! Thank you 😺

"In this post we will outline the touchstones we have used to do just that–engineer trustworthy services that everyone can use... We hope it will facilitate communication with friends and colleagues as we hack towards a common goal…"

puri.sm/posts/librem-one-desig

I wrote a piece on the @purism blog on why consent is critical for , the tech industry's failure to get consent, and how, as a result, "Privacy has become the tattoo removal of the information age". puri.sm/posts/consent-matters-

Client-side heuristics beat human-maintained lists in - perhaps they could be useful elsewhere?

eff.org/deeplinks/2019/07/shar

"The techniques used by trackers are always evolving, so Privacy Badger’s countermeasures have to evolve, too. In the process of developing the new cookie-sharing heuristic, we learned more about how to evaluate and iterate on our detection metrics."

This is a fantastic long read from Valentina Pavel via @privacyint

"If we keep our focus primarily on figuring out data ownership, we face the risk of sidetracking the discussion onto a very questionable path. This is an open invitation to develop new language for clearer conversations and to better shape our demands for the future we want to see."

privacyinternational.org/long-

Do you like and Pleistocene megafauna? Then you might be interested in this position!

Purism is a very progressive team; we encourage all interested people to apply, regardless of location, income, gender, age, race, religion, skin color, height, weight, sexual orientation, or any other personal trait. We do not discriminate and are proud to operate a safe workplace. More details in the link.

puri.sm/job/ruby-application-d

Just revisited "Encrypt All Sites Eligible (EASE) Mode" in

Great workflow that 1) warns you when you visit an HTTP-no-S domain and 2) allows you to disable the warning for that single domain, if you trust it... and all intermediaries. 😲

I tested with internetbadguys.com since example.com uses HTTPS these days. 🔒

Read more here: eff.org/deeplinks/2018/12/how-
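The per-domain exception flow, sketched (a made-up helper, not HTTPS Everywhere's actual code): warn on any plain-HTTP navigation unless the user has excused that one hostname.

```typescript
// Hypothetical sketch of the workflow above, not HTTPS Everywhere's code.

const httpWarningDisabledFor = new Set<string>();

// 1) warn whenever the destination is plain HTTP...
function shouldWarn(url: URL): boolean {
  return url.protocol === "http:" && !httpWarningDisabledFor.has(url.hostname);
}

// 2) ...unless the user has chosen to trust this single domain
// (and, implicitly, every intermediary on the path to it).
function trustDomain(hostname: string): void {
  httpWarningDisabledFor.add(hostname);
}
```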

"WebRTC WG has asked for privacy and security considerations around the disclosure of a user's local IP address in "

w3.org/wiki/Privacy/IPAddresse

You can prevent this with, for example, or - see github.com/gorhill/uBlock/wiki for some discussion.
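If you want to see the disclosure for yourself, here is a minimal sketch you can run in a browser; note that recent browsers often substitute obfuscated mDNS .local names for the raw local address, so you may see those instead.

```typescript
// Minimal demonstration of local-address exposure via ICE candidate gathering.
// No getUserMedia permission prompt is involved. Recent browsers may return
// mDNS (.local) candidates instead of raw local IPs.

function collectCandidateAddresses(): Promise<string[]> {
  return new Promise(resolve => {
    const addresses: string[] = [];
    const pc = new RTCPeerConnection();
    pc.createDataChannel("probe"); // forces ICE candidate gathering to start
    pc.onicecandidate = event => {
      if (event.candidate) {
        // host candidates look like "... <address> <port> typ host ..."
        const match = event.candidate.candidate.match(/(\S+)\s\d+\styp host/);
        if (match) addresses.push(match[1]);
      } else {
        pc.close();
        resolve(addresses); // a null candidate means gathering has finished
      }
    };
    pc.createOffer().then(offer => pc.setLocalDescription(offer));
  });
}

// Usage (in an async context): console.log(await collectCandidateAddresses());
```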
