Time to install some more software on my Librem 5.

This is fun. :)

This is among the reasons I never post pictures of my son. I understand and accept the risks to my own identity, but I don't own his online identity; I'm merely a steward of it until he's an adult. I hope at that point I can hand it off to him untarnished and unexploited.
nytimes.com/interactive/2019/1

Innocent Users Have the Most to Lose in the Rush to Address Extremist Speech Online

Internet Companies Must Adopt Consistent Rules and Transparent Moderation Practices

Big online platforms tend to brag about their ability to filter out violent and extremist content at scale, but those same platforms refuse to provide even basic information about the substance of those removals. How do these platforms define terrorist content? What safeguards do they put in place to ensure that they don’t over-censor innocent people in the process? Again and again, social media companies are unable or unwilling to answer these questions.

A recent Senate Commerce Committee hearing regarding violent extremism online illustrated this problem. Representatives from Google, Facebook, and Twitter each made claims about their companies’ efficacy at finding and removing terrorist content, but offered very little real transparency into their moderation processes.

Facebook Head of Global Policy Management Monika Bickert claimed that more than 99% of terrorist content posted on Facebook is deleted by the platform’s automated tools, but the company has consistently failed to say how it determines what constitutes a terrorist, or what types of speech constitute terrorist speech.

This isn’t new. When it comes to extremist content, companies have been keeping users in the dark for years. EFF recently published a paper outlining the unintended consequences of this opaque approach to screening extremist content—measures intended to curb extremist speech online have repeatedly been used to censor those attempting to document human rights abuses. For example, YouTube regularly removes violent videos coming out of Syria—videos that human rights groups say could provide essential evidence for future war crimes tribunals. In his testimony to the Commerce Committee, Google Director of Information Policy Derek Slater mentioned that more than 80% of the videos the company deletes using its automated tools are down before a single person views them, but didn’t discuss what happens when the company takes down a benign video.

Unclear rules are just part of the problem. Hostile state actors have learned how to take advantage of platforms’ opaque enforcement measures in order to silence their enemies. For example, Kurdish activists have alleged that Facebook cooperates with the Turkish government’s efforts to stifle dissent. It’s essential that platforms consider the ways in which their enforcement measures can be exploited as tools of government censorship.

That’s why EFF and several other human rights organizations and experts have crafted and endorsed the Santa Clara Principles, a simple set of guidelines that social media companies should follow when they remove their users’ speech. The Principles say that platforms should:

provide transparent data about how many posts and accounts they remove;
give notice to each user whose content is removed, explaining what was removed and under what rule; and
give those users a meaningful opportunity to appeal the decision.
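
As a concrete illustration of the notice and appeal requirements, here is a minimal sketch of the information a takedown notice could carry. This is a hypothetical model, not any platform's actual API or the Principles' own wording, and every name and URL in it is made up.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class TakedownNotice:
    """Hypothetical takedown record modeled on the Santa Clara
    Principles' notice requirements: what was removed, under what
    rule, and how to appeal."""
    post_id: str               # identifier of the removed post
    removed_at: datetime       # when the removal happened
    rule_violated: str         # the specific policy the platform cites
    rule_url: str              # link to the full text of that rule
    detection_method: str      # e.g. "automated" or "human review"
    appeal_url: str            # where the user can contest the decision
    appeal_deadline: datetime  # how long the user has to appeal

# Example of the notice a platform could send to an affected user.
notice = TakedownNotice(
    post_id="1234567890",
    removed_at=datetime(2019, 9, 18, 14, 30),
    rule_violated="Violent extremism policy, section 3",
    rule_url="https://example.com/rules#extremism",        # placeholder
    detection_method="automated",
    appeal_url="https://example.com/appeals/1234567890",   # placeholder
    appeal_deadline=datetime(2019, 10, 2),
)
print(f"Removed post {notice.post_id} under: {notice.rule_violated}")
print(f"Appeal by {notice.appeal_deadline:%Y-%m-%d}: {notice.appeal_url}")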

While Facebook, Google, and Twitter have all publicly endorsed the Santa Clara Principles, they all have a long way to go before they fully live up to them. Until then, their opaque policies and inconsistent enforcement measures will lead to innocent people being silenced—especially those whose voices we need most in the fight against violent extremism.

I bought and plugged Echo in tonight
And she hears every whisper of each quiet conversation
She streams a song, then books a flight
Her LEDs reflect the stars that guide me toward salvation
I stopped an old man along the way
Hoping to find some old forgotten words to reclaim privacy
He turned to me as if to say
"Foolish boy, it's listening to you!"

The Register covered my article announcing @linuxjournal closing and included a number of quotes from my first goodbye that add extra context. theregister.co.uk/AMP/2019/08/

I can't speak for the rest of the @linuxjournal archive, but I own the decade+ worth of articles I wrote. A lot of them are just as relevant today (I refer to them myself quite often). Would anyone be interested in some kind of "Best of Hack and /" polished and updated compilation?

I was just reading through the toots and came across one that made no sense whatsoever, not even grammatically.

That's when I realized it was from a account.

Crazies were easier to deal with when they yelled at people from street corners.

So excited for the Librem 5 from @purism, the hype is so good!!

I see the hardware kill switches on the right, a possible power button top right, and a possible headphone jack top left. Just trying to see if the volume control is software-only, or if it sits below the kill switches?

Please STOP, stop, stop using #Google #reCaptcha on your websites!
You are giving away your visitors' #privacy, and they cannot even opt out to avoid it if they want to reach your content.
fastcompany.com/90369697/googl
#privacyMatters #webdevelopment

As an existing user I am confused by Librem Social. Why is the instance empty? Why do I need to search for everything? Feels empty in here!
