"Take a moment to think before you dive in. That’s the best advice for Google Photos users, as the company confirms its latest update can scan all your photos to “use actual images of you and your loved ones” in AI image generation. That means Gemini seeing who you know and what you do. You likely have tens or hundreds of thousands of photos. They’re all exposed if you update.

We’re talking Personal Intelligence, Google’s latest AI upgrade path, which lets users opt in to connecting Google apps to Gemini. Why search for a doctor’s appointment when Google has access to all your calendar events? Why search for a party invite when it reads all your emails? And why search for a specific photo of you and your loved ones to create an image, when it sees all your photos?

This is the latest iteration in the ongoing battle between convenience and privacy playing out on our phones and computers. “Previously, to get a result that felt truly personal, you had to write long, detailed descriptions and manually upload a reference photo just to give Gemini the right context.” Not any more, Google says. Its AI can scan everything to form its own views of you and everyone you know."

forbes.com/sites/zakdoffman/20

#AI #GenerativeAI #Google #Gemini #Privacy #DataProtection

#FossilFuels - Our World in Data

This is a metric often missed in the West’s attempt to blame #China and other #AsianCountries for having the temerity to want to develop. #Australia is often overlooked as the second-biggest culprit behind the #US. #Europe isn’t innocent either.

ourworldindata.org/fossil-fuel

#Environment #GlobalWarming #ClimateChange #GreenhouseGases

California lawmakers are fast-tracking AB 1709—a sweeping bill that would ban anyone under 16 from using social media and force every user, regardless of age, to verify their identity before accessing social platforms. eff.org/deeplinks/2026/04/act-

please don't tell me to read that great essay on why AI is so bad, when it features:

> I use AI tools sparingly for assistance while refactoring code in languages I understand. I occasionally use it to help compose command line arguments for tools like ffmpeg.

straight after why not to use it

YOU CAN LITERALLY NOT DO ANY OF THAT AND NOT MISS A THING

and directly before

> The explosion of AI has played a significant role in my own burnout.

i mean,

Nukes, CCS, AI for climate: all hollow false promises designed to excuse a carbon bomb of fossil-fuelled data centres exploding in our faces.

FRIENDS - please sign up for this panel, which will be held tomorrow at the same time this post was published

Registration link: us06web.zoom.us/meeting/regist

Become a Friend of PM, and support independent radical publishing. We’ll send you books monthly, and you'll get 50% off everything on our website year-round. You’ll also receive a PM Press Sockin' Suckas 20th Anniversary mug free. Offer ends 5/1. pmpress.org/index.php?l=produc

RE: masto.ai/@phoronix/11647612006

Okay, gotta migrate off to stock Debian then, I guess.
Microsoft kills its OS with all the Copilot garbage users do not want, and Canonical has to go: "Yeah, that's where we need to go."

Strategic genius.

Please don't describe the boot on your neck with heated rhetoric. It's divisive.

I wonder how much productivity is being lost by people using LLMs to write long things where the meaningful content remains very small in comparison.

I've noticed that looking up how to do $THING with a command-line $TOOL now almost always gives me an LLM-generated page with pages of boilerplate nonsense (what is $TOOL? How to install $TOOL on Ubuntu, how to install $TOOL on macOS, and so on), with the actual two sentences of content right at the end. These are obviously generated to provide more space for ads, but there's a lot of this cropping up in other contexts.

Saving a few seconds of writing time in exchange for wasting a few minutes of reading time for each of your readers is a staggering drop in overall efficiency.

“In this work, we conduct a large-scale simulation of how users might delegate work to LLMs across 52 professional domains. We find that current LLMs are unreliable delegates: even frontier models corrupt an average of 25% of document content over long workflows, with sparse but severe errors that silently compound over time.”

Good to see the issue addressed explicitly, even though the results aren’t surprising—why would anyone expect LLMs to be reliable!?

arxiv.org/abs/2604.15597

Hi! We are Koumbit, a self-managed, not-for-profit collective based in Montreal/Tio'tia:ke. We aim to be an ethical and human alternative to big tech. We've been offering web hosting, web dev, and sysadmin services since 2004!

We work with open source software, and prioritize technological autonomy and data privacy. Our servers are in Montreal and belong to us.

If you are looking for local, socially conscious, ecological and ethical web hosting, please think of us!


@signalapp It would probably help if Signal itself didn't use what looks like a real conversation or story to communicate to the user. It legitimizes phishing attacks like these. And they're annoying features regardless.

the thing about “never attribute to malice what can be easily explained by incompetence” is that it’s rat-fuckable

when there is functionally no difference between the two, engaging with someone as if they’re incompetent means accepting their frame: that what they’re ultimately trying to accomplish isn’t *bad*, they’re just going about it in a way with bad side-effects. And people use, in bad faith, our good-faith willingness to treat them as incompetent to push their agendas

engaging with someone as if they’re malicious, on the other hand, means rejecting the harmful frame, recasting the argument in terms of “why are you trying to do this bad thing?”, and not quibbling about the details of why the thing is bad

these age-verification laws, whose implementations are a form of category error, are a good example; if you engage with a proponent of them with “well, here’s why your implementation is bad”, you’re tacitly approving the larger idea that surveillance is good and you just disagree with the techniques; bad-faith actors use this

If instead you come back with “why are you trying to surveil everyone’s computer use? Why are you laying the groundwork to prevent people from using their own computers?”, you re-cast the frame. Sure, there are probably incompetent people who don’t realize the results of what they’re going to do, but casting the larger idea into question AND KEEPING IT IN QUESTION is the only effective path I’ve found to debating people on things like this

so, instead:

don’t ascribe to incompetence something that is functionally malicious

So, you want to start a tech co-op?

Monthly videocall in the internationalist union hall for folks like you, coming up in 20 minutes!

📆 Last Sunday of each month 17:00 UTC
📃 meeting notes: pad.data.coop/56HJK2TYTvSeFx-y
🤙 jitsi link: meet.jit.si/techcoops

#tech #coop #cooperatives #cooperativa #democracy #freedom #solidarity #anarchism

As proof that LLM generated code is easy to audit & secure, I notice that anytime I let my smart friends look at it, they immediately spot an exploitable security vulnerability. Try getting feedback that fast on old style human code.
