If you’ve ever clicked an explanation of why you’re seeing a particular online ad, the information is typically so vague that you still have no clue why you were targeted. “When they hide this information, you develop ideas of how they got to you,” EFF’s @jgkelley told @washingtonpost. https://www.washingtonpost.com/technology/2025/01/07/phone-listening-target-ads-iphone-siri/
@samirx @eff @jgkelley Phrase detection is very specialized, often even run on a DSP to be low-power. Regular recording and upload would use much more power than that. Analyzing speech on-device would use even more power.
An alternative activation-phrase approach would use a more modest amount of power, but it isn't useful for analyzing speech in general. There are plenty of ways to talk about a product that don't begin with “I want to buy” multiplied by the number of languages.
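The power argument above can be made concrete with a rough back-of-envelope calculation. All draw figures below are purely illustrative assumptions I picked for the sketch, not measurements from any real device; the point is the orders of magnitude between an always-on DSP keyword spotter and continuous recording or on-device speech recognition.

```python
# Back-of-envelope battery-drain comparison.
# All mW figures are hypothetical placeholders, NOT measured values.

DSP_KEYWORD_SPOTTING_MW = 1.0    # assumed: always-on wake-word detector on a DSP
CONTINUOUS_UPLOAD_MW = 150.0     # assumed: mic capture + radio uplink, all day
ON_DEVICE_ASR_MW = 500.0         # assumed: full speech recognition, running nonstop

BATTERY_MWH = 15_000             # assumed: ~15 Wh phone battery

def hours_to_drain(draw_mw: float, battery_mwh: float = BATTERY_MWH) -> float:
    """Hours until the battery is empty at a constant power draw."""
    return battery_mwh / draw_mw

if __name__ == "__main__":
    scenarios = [
        ("DSP keyword spotting", DSP_KEYWORD_SPOTTING_MW),
        ("continuous record + upload", CONTINUOUS_UPLOAD_MW),
        ("continuous on-device ASR", ON_DEVICE_ASR_MW),
    ]
    for name, mw in scenarios:
        print(f"{name:27s} {mw:6.1f} mW -> battery gone in {hours_to_drain(mw):8.1f} h")
```

Even with generous numbers, anything that keeps the main CPU or radio busy around the clock would show up as a dramatic battery-life hit, which is exactly why phrase detection gets pushed onto a dedicated low-power DSP.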
@elgregor @eff @jgkelley Your points are valid and I probably oversimplified it, but I still think that from an engineering perspective it's not an impossible task (iOS already does offline transcription). I'd love to know more about the inner workings of Siri and other voice assistants, so if you have any resources please share. But in the meantime, given the $95M settlement, I'd rather not give Apple the benefit of the doubt.