If you’ve ever clicked an explanation of why you’re seeing a particular online ad, the information is typically so vague that you have no clue. “When they hide this information, you develop ideas of how they got to you,” EFF’s @jgkelley told @washingtonpost. washingtonpost.com/technology/

@eff @jgkelley maybe this piece would be more convincing if it weren't in the #washingtonpost, which just this week has been in the spotlight for silencing criticism of #bezos by its cartoonist… anntelnaes.substack.com/p/why-

And one explanation by an expert isn't accurate: it says that if your phone were constantly listening, "you'd notice that your battery was running down quickly, and that companies wouldn't waste the gobs of money it would cost to constantly listen in". The problem is that constant listening is exactly how all voice assistants work: they continuously analyze what you say, looking for the activation phrase (hey #Siri, ok #Google, etc.). And while, yes, they probably aren't recording and uploading *everything* you say, they could record, analyze, and upload selected phrases (as text) without having much impact on the device battery or costing much money.
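To make the always-listening point concrete, here's a rough sketch of how a wake-word loop is typically structured (purely illustrative, not any vendor's actual code; `WakeWordClassifier` is a hypothetical stand-in for the tiny on-device model a real assistant runs on a DSP):

```swift
import AVFoundation

// A minimal sketch of always-on wake-word detection: short audio frames are
// scored by a tiny on-device classifier and then discarded.
struct WakeWordClassifier {
    func score(_ buffer: AVAudioPCMBuffer) -> Float {
        // Hypothetical: a real implementation would run a small neural net.
        return 0.0
    }
}

let engine = AVAudioEngine()
let classifier = WakeWordClassifier()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

// Tap the microphone in ~0.1 s frames; nothing is stored or uploaded here,
// each frame is scored and thrown away.
input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
    if classifier.score(buffer) > 0.9 {
        print("wake word heard, hand off to the full assistant")
    }
}

do {
    try engine.start()   // requires microphone permission
    RunLoop.main.run()   // keep listening
} catch {
    print("audio engine failed to start: \(error)")
}
```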

In theory, a company built on selling #advertising data might be tempted to listen for a second activation phrase like "I want to buy", transcribe the speech that follows on-device, and upload it as text, while still claiming that "no one listens to your conversations". Of course this is pure speculation, but it demonstrates that such a scheme wouldn't be noticeable to the end user or expensive for the company.
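As a sketch of what that could look like (again, pure speculation; every function name here is a made-up stand-in, not a claim about any vendor):

```swift
import Foundation

// Illustrative only: the point is that each step is computationally cheap and
// that only a tiny text payload would ever leave the device.
func secondaryPhraseDetected(_ frame: [Float]) -> Bool { false } // e.g. "I want to buy"
func recordShortClip(seconds: Int) -> [Float] { [] }             // a few seconds, not everything
func transcribeLocally(_ clip: [Float]) -> String { "" }         // on-device speech-to-text
func uploadAsText(_ text: String) {                              // text, not audio
    print("would upload \(text.utf8.count) bytes")
}

func handleAudioFrame(_ frame: [Float]) {
    guard secondaryPhraseDetected(frame) else { return }
    let clip = recordShortClip(seconds: 5)
    uploadAsText(transcribeLocally(clip))
}
```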

#Apple paying a huge #settlement doesn't mean they're guilty, but it probably means they don't have much to gain by proving their innocence, and a lot more to lose by being put under the microscope.


@samirx @eff @jgkelley Phrase detection is very specialized, often even run on a DSP to be low-power. Continuous recording and upload would use much more power than that, and analyzing speech on-device would use even more.

An alternative activation phrase approach would use a more modest amount of power, but it isn't useful for analyzing speech: there are plenty of ways to talk about a product that don't begin with "I want to buy", multiplied by the number of languages.

@elgregor @eff @jgkelley Your points are valid and I probably oversimplified it, but I still think that, from an engineering perspective, it's not an impossible task (iOS already does offline transcription). I'd love to know more about the inner workings of Siri and other voice assistants; if you have any resources, please share. But in the meantime, given the $95M settlement, I'd rather not give Apple the benefit of the doubt.
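(For context, the offline transcription I mean is even exposed in Apple's public Speech framework. A short sketch; this only shows the public API and says nothing about Siri's private internals. "clip.wav" is a placeholder path:)

```swift
import Speech

// On-device transcription via Apple's public Speech framework (iOS 13+).
// With requiresOnDeviceRecognition set, no audio leaves the phone.
// Speech-recognition permission must be granted before this runs.
let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
let request = SFSpeechURLRecognitionRequest(url: URL(fileURLWithPath: "clip.wav"))
request.requiresOnDeviceRecognition = true  // force offline transcription

// Hold on to the task so it could be cancelled if needed.
let task = recognizer.recognitionTask(with: request) { result, _ in
    if let result = result, result.isFinal {
        print(result.bestTranscription.formattedString)
    }
}
_ = task
RunLoop.main.run()  // keep the process alive for the async callback
```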
