Hmmm... priorities 🤔 "Here we reveal, rank, and applaud the homeservers with the lowest ping, as measured by pingbot, a that you can host on your own server. Join to experience the fun live, and to find out how to add YOUR server to the game."

"Rather than reading the Git history... or having one single file which developers all write to, reads "news fragments" which contain information useful to end users."

@Framasoft It's sad to see services go, but thank you for sharing your lessons learned. Regarding and ease-of-maintenance, we're making slow but steady progress on packaging, so that one day you'll just `apt install ldh-middleware` and launch some Frama Un services ;) Keep on federatin' and vive la Framasoft! 🇫🇷

"Feel free to use, study, modify, distribute and share free software. Happy , and may the source be with you!"

Innocent Users Have the Most to Lose in the Rush to Address Extremist Speech Online

Internet Companies Must Adopt Consistent Rules and Transparent Moderation Practices

Big online platforms tend to brag about their ability to filter out violent and extremist content at scale, but those same platforms refuse to provide even basic information about the substance of those removals. How do these platforms define terrorist content? What safeguards do they put in place to ensure that they don't over-censor innocent people in the process? Again and again, social media companies have been unable or unwilling to answer these questions.

A recent Senate Commerce Committee hearing regarding violent extremism online illustrated this problem. Representatives from Google, Facebook, and Twitter each made claims about their companies’ efficacy at finding and removing terrorist content, but offered very little real transparency into their moderation processes.

Facebook Head of Global Policy Management Monika Bickert claimed that more than 99% of terrorist content posted on Facebook is deleted by the platform's automated tools, but the company has consistently failed to say how it determines who constitutes a terrorist, or what types of speech count as terrorist speech.

This isn’t new. When it comes to extremist content, companies have been keeping users in the dark for years. EFF recently published a paper outlining the unintended consequences of this opaque approach to screening extremist content—measures intended to curb extremist speech online have repeatedly been used to censor those attempting to document human rights abuses. For example, YouTube regularly removes violent videos coming out of Syria—videos that human rights groups say could provide essential evidence for future war crimes tribunals. In his testimony to the Commerce Committee, Google Director of Information Policy Derek Slater mentioned that more than 80% of the videos the company deletes using its automated tools are down before a single person views them, but didn’t discuss what happens when the company takes down a benign video.

Unclear rules are just part of the problem. Hostile state actors have learned how to take advantage of platforms’ opaque enforcement measures in order to silence their enemies. For example, Kurdish activists have alleged that Facebook cooperates with the Turkish government’s efforts to stifle dissent. It’s essential that platforms consider the ways in which their enforcement measures can be exploited as tools of government censorship.

That’s why EFF and several other human rights organizations and experts have crafted and endorsed the Santa Clara Principles, a simple set of guidelines that social media companies should follow when they remove their users’ speech. The Principles say that platforms should:

provide transparent data about how many posts and accounts they remove;
give notice to users whose content has been removed, explaining what was removed and under which rule; and
give those users a meaningful opportunity to appeal the decision.

While Facebook, Google, and Twitter have all publicly endorsed the Santa Clara Principles, they all have a long way to go before they fully live up to them. Until then, their opaque policies and inconsistent enforcement measures will lead to innocent people being silenced—especially those whose voices we need most in the fight against violent extremism.

trialing as an replacement

"We're eager to see the results of this trial, the feedback will be very valuable for everyone regardless of the final outcome."

Well, it certainly works for us 😜

"But accountability without transformation is simply spectacle. We owe it to ourselves and to all of those who have been hurt to focus on the root of the problem."

" is a set of protocols that sincerely implement Principle of Least Authority in services with ... No plain text on a server... No unnecessary metadata on a server... Nothing to steal from the server"

I wrote a blog post that's a fairly detailed how-to on conducting usability testing for free software:

It's gonna take a lot to drag us away from you
There's nothing that a hundred nodes on Tor could ever do
I wish domains weren't all trackin' ya
Gonna take some time to build a `net without those ads

"The truth is that a motivated mob can target anyone, marginalized or not. We would all benefit from effective anti-harassment tools... We suggest that via client-side features is a more robust and safer approach."

Prepaid SIM cards & mandatory #SIMcardregistration are especially widespread in Africa, allowing for a more pervasive #masssurveillance system of people using prepaid SIM cards, as well as exclusion of people who can't

Want to know more? 👉🏼

"Milosevic's well-researched study... points towards new policy solutions... [The author] argues that cyberbullying should be viewed... as part of the larger social problem of the culture of humiliation."

Very much enjoying Nicky Case's explorable explanations and thought-provoking minigames!

"Moving forward, we aim to make simple security the default. Security features are enabled and cannot be disabled; enhancements are applied when you update. Experimental security features are disabled by default, but you can enable them at any time."

@davidrevoy Your illustrations bring the user personas in our recent blog post to life! Thank you 😺

"In this post we will outline the touchstones we have used to do just that: engineer trustworthy services that everyone can use... We hope it will facilitate communication with friends and colleagues as we hack towards a common goal…"

I wrote a piece on the @purism blog on why consent is critical for , the tech industry's failure to get consent, and as a result how "Privacy has become the tattoo removal of the information age".

Client-side heuristics beat human-maintained lists in - perhaps they could be useful elsewhere?

"The techniques used by trackers are always evolving, so Privacy Badger’s countermeasures have to evolve, too. In the process of developing the new cookie-sharing heuristic, we learned more about how to evaluate and iterate on our detection metrics."
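As a loose illustration of the idea (not Privacy Badger's actual implementation; the function and names below are hypothetical), one simple cookie-sharing heuristic flags third-party domains that receive a first-party cookie value inside a request's query string:

```python
from urllib.parse import urlparse, parse_qsl

def shared_cookie_values(first_party_cookies, outgoing_urls, min_len=8):
    """Flag third-party domains that receive first-party cookie
    values as URL query parameters (a sign of cookie sharing).

    first_party_cookies: dict of cookie name -> value
    outgoing_urls: request URLs issued while the page loads
    min_len: ignore short values that could collide by chance
    """
    flagged = set()
    suspicious = {v for v in first_party_cookies.values() if len(v) >= min_len}
    for url in outgoing_urls:
        parsed = urlparse(url)
        for _, value in parse_qsl(parsed.query):
            if value in suspicious:
                flagged.add(parsed.netloc)
    return flagged

# Example: a tracker receives the site's user-id cookie in a query param.
cookies = {"uid": "a1b2c3d4e5f6"}
urls = [
    "https://tracker.example/sync?uid=a1b2c3d4e5f6",
    "https://cdn.example/lib.js",
]
print(shared_cookie_values(cookies, urls))  # {'tracker.example'}
```

The `min_len` threshold is one way such a heuristic avoids false positives on short, common parameter values; a real detector would also need to evolve alongside tracker techniques, as the quoted post notes.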

This is a fantastic long read from Valentina Pavel via @privacyint

"If we keep our focus primarily on figuring out data ownership, we face the risk of sidetracking the discussion onto a very questionable path. This is an open invitation to develop new language for clearer conversations and to better shape our demands for the future we want to see."

Librem Social

Librem Social is an opt-in public network. Messages are shared under Creative Commons BY-SA 4.0 license terms. Policy.

Stay safe. Please abide by our code of conduct.

(Source code)
