The story here is about Apple paying a settlement because Siri was sending audio snippets surrounding invalid wake word activations back to their servers without users realizing it.
In the story, there was an implication that these audio snippets had been used for targeted ads. This is very clearly not true.
A separate issue is whether Apple takes audio from opt-in, full "hey Siri" sessions and sells it to advertisers for targeting.
I very much doubt they do, but even if they did, that shouldn't be part of the "your phone is secretly spying on you through your microphone" conspiracy theory, because of course your phone is listening to you if you just said "hey Siri" and started talking to it.
None of this is in the realm of "conspiracy theory." We aren't talking about the Earth being flat. Your phone is very clearly listening at all times, and sometimes activates and sends that data to servers. Given that, we can debate all day about what happens with that data - hopefully it's being treated properly - but the point is that it isn't clear, there isn't enough transparency, and there are occasionally scandals.
The problem is that the infrastructure to harvest all of this data clearly exists, and the only reason we shouldn't believe that it's being mishandled is whatever degree of faith we have in these companies to behave ethically, comply with legal requirements (or face a slap on the wrist), and most importantly, have developers that don't inadvertently make any mistakes. I really don't have faith in the latter point in particular.
The point is that maybe this data improperly ends up being used to serve ads, or maybe it doesn't, but in light of all the above, entertaining the idea is in no way akin to thinking that the moon is made of cheese.
The reason the "phones are listening to you and targeting ads based on what you say" thing is a classic conspiracy theory is that it would take a genuine secret conspiracy to pull it off.
Think about how many people would need to be in on the secret and actively lying about it: employees of phone hardware companies, ad tech engineers, execs at these companies, their legal teams, and every contractor they used to help build this secret system.
All of whom passionately deny that this is happening.
If that's not a conspiracy I don't know what is.
Just because something is labeled a conspiracy theory doesn't mean it's not true (this one isn't true though). You're welcome to continue believing in it, but saying "it's not a conspiracy theory" doesn't work for me.
you'd implement it so that each group is compartmentalized and no one can confirm the whole of the story, and only a single-digit number of people know the full truth. one exec, one lawyer, one software engineer. and it'd have to be the software engineer dealing with releases, so they can modify the code last minute before it gets submitted. this presumes the CI system is unable to send apps to Apple/Google itself, so it still has to be run by hand on your laptop (for some mysterious reason). if the code in the repo isn't able to do real-time monitoring, and if the data that gets sent to the ads team is intentionally delayed and sufficiently anonymous that they can't tell by looking at past reports that there's real-time surveillance happening, then everyone else involved could be vehemently asserting what they know, even though it doesn't match the reality after that last-minute change goes in.
I don't actually believe this, mind you, but theorizing on how you'd pull something like this off, the answer is compartmentalization.
All it would take on iOS is an innocent-looking bug buried somewhere deep in any number of subsystems that makes it so the red dot for recording doesn't come on as often as it should. Just a totally accidental buffer overflow that makes it fail to set the recording-active flag when called a certain way. The XZ thing was down to a single character, and that's one of the most watched projects in the world. A latent iOS bug that no one's looking for could sit there for years.
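To make that concrete, here's a deliberately toy C sketch (hypothetical names, nothing resembling actual iOS code) of how a single missing bounds check can silently clear an adjacent "show the recording indicator" flag, but only when the function is called with one particular input:

```c
#include <stdio.h>
#include <string.h>

/* All names here are made up for illustration; this is not iOS code. */
struct capture_state {
    char source_id[8];      /* label for what triggered the capture */
    int  show_indicator;    /* nonzero = light the recording dot    */
};

static void begin_capture(struct capture_state *st, const char *source)
{
    st->show_indicator = 1;            /* the intended behaviour */

    /* The bug: no bounds check. An 8-character label writes its
     * terminating NUL one byte past source_id, which on this struct
     * layout is the low byte of show_indicator, silently zeroing it. */
    strcpy(st->source_id, source);

    if (st->show_indicator)
        puts("recording dot ON");
    else
        puts("recording dot stays OFF");   /* only for one input shape */
}

int main(void)
{
    struct capture_state st;
    begin_capture(&st, "siri");       /* fits in the buffer: dot turns on */
    begin_capture(&st, "hey_siri");   /* exactly 8 chars: flag clobbered  */
    return 0;
}
```

On a typical little-endian build without fortification, the second call's terminating NUL byte lands on the low byte of the flag and zeroes it. It's undefined behavior, which is part of why a bug like this can pass review and most testing while misbehaving for exactly one input shape.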
Again, not saying I believe this is even happening in the first place, just that it's not technically impossible, just highly improbable.
An interesting thing about that compartmentalization approach is that it would open a company that implemented it up to much more severe problems.
If your organization structure allows a tiny number of people to modify your deployed products in that way, the same tricks could be used by agents of foreign powers to inject government spyware.
That's a threat that companies the size of Apple need to be very cognizant of. If I were designing build processes at a company like that, I'd be much more concerned about closing off ways for a tiny group to mess with the build than about designing in processes like that just so I could do something creepy with ad targeting.
I don't think any such conspiracy or secret-keeping is required; one need not attribute to malice what can be adequately explained by incompetence. There already exists a system, on everyone's phones, which is listening to audio at all times and occasionally activates in response to a wake word and runs search queries and such from it. The point of this system, when used properly, is to take a recording of your voice, transcribe it into text, send it into a search engine, associate the query with your account and search history, and use it to influence ad preferences in response to future queries. The functioning of this system involves sending data through an opaque and complex chain of custody, sometimes involving third parties, which - even if intended to comply with privacy and security protocols - could easily be mishandled either maliciously or accidentally, as happens in software development all the time. This includes but is not limited to:
1. The occasional false positive response to a wake word that wasn't really a wake word, causing search queries to be run that you didn't intend.
2. This data being accidentally mishandled behind the scenes on Apple's servers in some way, such as developer error leading to the data landing in the wrong folder, being labeled with an incorrect flag in some database somewhere, or otherwise being given the wrong level of sensitivity (a sketch of what that might look like follows this list).
3. This data being deliberately "correctly-handled" behind the scenes in Apple's servers in some way that users wouldn't like, but technically agreed to when they first used the phone.
4. This data being used for valid "QA" purposes that, for all intents and purposes, include situations users would probably not be comfortable with, but also technically agreed to.
5. An unforeseen security vulnerability affecting any part of this process.
6. Malware on the phone interfering with any of the above.
7. Not-quite-malware that you agreed to install, doing things you're not quite happy with but technically agreed to, which is somehow in the loop of any part of this process.
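To make item 2 concrete, here's a hypothetical C sketch (invented names, not anything Apple actually runs) of how a clip could end up with the wrong sensitivity label through nothing more than an ordinary developer oversight:

```c
#include <stdio.h>
#include <string.h>

/* Made-up names again; a toy model of a server-side labeling step. */
enum sensitivity {
    OK_FOR_ANALYTICS = 0,   /* the zero value happens to be the permissive one */
    QA_GRADING_ONLY,
    USER_PRIVATE
};

struct audio_clip {
    char source[32];        /* which pipeline produced the clip */
    enum sensitivity level;
};

/* The bug: a new upstream source ("accidental_trigger") was added, but
 * nobody added a branch for it here, so it keeps the zero-initialized,
 * most permissive label. */
static void classify(struct audio_clip *clip)
{
    if (strcmp(clip->source, "user_query") == 0)
        clip->level = USER_PRIVATE;
    else if (strcmp(clip->source, "grading_sample") == 0)
        clip->level = QA_GRADING_ONLY;
    /* no else: anything unrecognized silently stays OK_FOR_ANALYTICS */
}

int main(void)
{
    struct audio_clip clip = { .source = "accidental_trigger" };
    classify(&clip);
    printf("source=%s level=%d\n", clip.source, clip.level);   /* prints level=0 */
    return 0;
}
```

Nothing here is malicious and each piece looks reasonable in isolation, which is the point: the wrong-flag failure mode doesn't need a conspiracy, just an enum whose default happens to be permissive and a code review that missed one branch.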
Again, we can debate all day which of these are true - hopefully none of them are. But we're talking about software development here, where these sorts of things happen on a daily basis. None of this is "they faked the moon landing" kind of stuff, and they all lead to the same result from the standpoint of user experience.
> The occasional false positive response to a wake word that wasn't really a wake word, causing search queries to be run that you didn't intend.
Nobody ever describes that behaviour, though. They don't say "a) we had a conversation, b) our smart speaker suddenly interrupted and gave us some search results, c) I started seeing ads based on the conversation" - b) is never mentioned.
Your points 1-3 are genuinely the best good-faith explanation I've seen of how the "I saw a targeted advert based on something I said with my phone in earshot" thing might happen without it being a deliberate conspiracy between multiple parties.
I still doubt it's actually happening, but I'm not ready to 100% rule out that sequence of events.