
The whole point of the feature is to analyze images that you possess.


No. The whole point is to avoid scanning all your images on their servers and instead only scan when there is a suspicion, i.e., a hash match.

You will not have any of your images scanned at all unless you upload a certain number of images whose fingerprints match known CSAM. That's a superior position to the alternative, where everything you upload is scanned for visual features.
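The control flow described above can be sketched roughly as follows. This is a simplified illustration, not Apple's actual implementation: the real system uses a perceptual hash (NeuralHash) and a threshold secret-sharing scheme, whereas here plain SHA-256 exact matching and the names `KNOWN_BAD_HASHES`, `MATCH_THRESHOLD`, and `should_flag` are all hypothetical stand-ins.

```python
import hashlib

# Hypothetical fingerprint database of known images (placeholder entry).
KNOWN_BAD_HASHES = {hashlib.sha256(b"known-bad-sample").hexdigest()}

# Below this many matches, nothing is ever flagged or reviewed.
MATCH_THRESHOLD = 30

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash; exact SHA-256 for simplicity.
    return hashlib.sha256(image_bytes).hexdigest()

def count_matches(images: list[bytes]) -> int:
    # Count how many uploaded images match a known fingerprint.
    return sum(fingerprint(img) in KNOWN_BAD_HASHES for img in images)

def should_flag(images: list[bytes]) -> bool:
    # Only a library exceeding the threshold is flagged;
    # non-matching images are never examined further.
    return count_matches(images) >= MATCH_THRESHOLD
```

The point the comment makes is visible in the last function: an account with fewer matches than the threshold, including zero, never triggers anything, regardless of what else it uploads.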


At this point I'm not sure why so many people seem to be intentionally missing this point.

The usual reply is that since this pre-filtering happens on the device it "crossed the line" and it's evil because since the code exists in the device it's one step before midnight as it could be used to scan everything on the device.

I think this argument is not rational for a few reasons:

1. The mere absence of some code doesn't mean much on a platform that is designed to receive system software updates over the air.

2. You already have to trust Apple to do the right thing. The situation was already pretty grim even before this. Apple may decide at any point in time to go full surveillance on any device. Whether they decide to do so is completely orthogonal to whether they ship this particular client-side pre-upload CSAM fingerprinting tool. If you really don't trust Apple, stay away from their devices and fight them where it matters (e.g. to ensure they comply with interoperability, so people who choose free and open devices are not locked out from the world).

3. If Apple didn't honestly want to push for e2e encryption, why on earth would they bother jumping through this extra hoop? If they had the master keys to all of your data, they wouldn't need to run anything on your device before encrypting the data. The fact that they are proposing such a system is a good indication they understand the consequence of end-to-end encryption: that they totally lose control over what they end up hosting on their cloud storage.

4. The tone of most discussions here is that of people who already hate Apple and are just looking for a high-profile mishap to unload their guns. See point 2: there are already plenty of reasons to prefer open and free alternatives and stay away from Apple. IMO there is no need to fabricate additional outrage.


> You already have to trust Apple to do the right thing.

Yeah, I trust them not to do stuff like this. Society is based on trust; it is normal. If you trust your friend, it doesn't mean it is okay if they stab you, does it?


That's circular reasoning:

"This feature is dangerous because you cannot trust Apple to only use it for what they say they will use it for; hence you cannot trust Apple, since they are doing something which can be abused."

Using your analogy that would be:

"A knife is dangerous because you cannot trust a criminal wielding a knife not to stab you; hence your friend is by definition a criminal, since she wields a knife (she claims she's cutting bread, but by definition she's no longer your friend because she broke your trust by wielding a knife which, slippery slope, could be used to stab you if she ever becomes a criminal, but we all know she will because ... the knife...)"

All I'm saying is that this whole communication debacle has nothing to do with proving or disproving Apple's trustworthiness: they may be nefarious or not, independently of this feature. This feature doesn't make anything possible that wasn't possible before, and it breaks trust with users only insofar as users misunderstand what it is all about, and apparently that's what's happening.


I don't care if I won't be raped every day. Getting raped only on weekends is not a compromise I would accept. What is so hard to understand about that?



