
The proposal leaked a few weeks ago[1] is extremely vague on this matter and does not clarify how providers should detect CSAM "prior to transmission". Is anyone aware of any sort of scanning technology that can be implemented purely on the client side? Note that the leaked text says that it should be able to detect known and new abuse material.

[1] https://cdn.netzpolitik.org/wp-upload/2024/05/2024-05-28_Cou...



Microsoft: Replay

Apple: Their CSAM detection system that was lambasted not too long ago[0]

[0] https://www.apple.com/child-safety/pdf/Expanded_Protections_...


Ahh, the Misunderstanding Olympics of 2021

The Apple system was pretty much the best way this could be done short of having a 100% reliable "AI" system on-device detecting bad stuff.


> 100% reliable "AI" system

It wasn't 100% reliable, and in fact people quickly found collisions. You should expect the same to be possible with an even more advanced system. (There's a rough sketch of what a collision means at the end of this comment.)

> Ahh, the Misunderstanding Olympics of 2021

There are much nicer ways to say this that are congruent with community standards[0]. If you believe I have misunderstood, then point out specifically what I have misunderstood instead of just making an assertion.

But the question was whether anyone was aware of any client-side scanning technology that could in fact check for material such as CSAM. The Apple system was developed explicitly for this purpose, so yes, this does exist. While Replay doesn't explicitly advertise this feature (that I'm aware of), it is not a big step to imagine smashing the two things together: Apple demonstrated a system that detects based on images, and Replay is taking images of one's computer.

[0] https://news.ycombinator.com/newsguidelines.html
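
For the curious, here is a minimal sketch of what a perceptual-hash collision means in this context. It uses a toy 8x8 average hash as a stand-in (an assumption for illustration; Apple's NeuralHash is a learned neural network, not an average hash) and shows two images with completely different pixel content producing the same hash, which is the property the published collisions exploited:

  import numpy as np

  def average_hash(img, size=8):
      # Toy perceptual hash: block-average down to size x size, threshold at the mean.
      # Illustration only; NeuralHash is a learned embedding, not an average hash.
      h, w = img.shape
      small = img.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
      bits = (small > small.mean()).astype(int).flatten()
      return int("".join(map(str, bits)), 2)

  rng = np.random.default_rng(0)

  # Image A: a smooth gradient, dark on the left, bright on the right.
  img_a = np.tile(np.linspace(0, 255, 64), (64, 1))

  # Image B: completely different content (two noisy halves), but the same
  # coarse dark-left / bright-right structure.
  img_b = np.hstack([rng.uniform(0, 60, (64, 32)),
                     rng.uniform(200, 255, (64, 32))])

  print("collision:", average_hash(img_a) == average_hash(img_b))  # True

The point is that any perceptual hash has to map visually similar images to the same value, which necessarily makes colliding inputs constructible.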


Yes, they made collisions with cat pictures and complete gibberish. Whoopdedoo.

They forgot that:

a) It required _multiple_ matches before it was mathematically possible for Apple to see any of the photos

b) AFTER the multiple matches to known and verified CSAM signatures there would be an actual human looking at the picture (at reduced resolution or something like that)

c) Only after that human review would they consider getting law enforcement involved.

Now your fancy collision has slightly inconvenienced a minimum wage CSAM checker for 15 seconds.

Not exactly a master plan for getting people SWATed for CSAM possession =)
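
For what it's worth, here is a rough sketch of the flow described above (the threshold value, names, and structure are all made up for illustration; the real design used NeuralHash plus private set intersection and threshold secret sharing, so the server couldn't even count matches below the threshold, which plain code like this obviously doesn't reproduce):

  from dataclasses import dataclass, field

  MATCH_THRESHOLD = 30  # arbitrary illustrative value, not Apple's actual number

  def human_reviewer_confirms(account):
      # Step b: a human reviews low-resolution derivatives; placeholder here.
      return False

  def escalate_to_authorities(account):
      # Step c: only reached after human confirmation; outside this sketch.
      pass

  @dataclass
  class Account:
      matched_hashes: set = field(default_factory=set)

  def handle_upload(account, image_hash, known_hashes):
      # Step a: nothing is reviewable until enough distinct known hashes match.
      if image_hash in known_hashes:
          account.matched_hashes.add(image_hash)
      if len(account.matched_hashes) >= MATCH_THRESHOLD:
          if human_reviewer_confirms(account):
              escalate_to_authorities(account)

  known = {0xDEADBEEF, 0xFEEDFACE}       # hypothetical known-CSAM hash list
  acct = Account()
  handle_upload(acct, 0xDEADBEEF, known)  # one match: nobody sees anything

A single collision, or even a handful, never gets past the threshold, which was the whole point of the design.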

--

And the "Misunderstanding Olympics" was referring to the fact that I think I was one of 12 people in the world who actually read the specs of how the system was designed and didn't just imagine in my head how they might've done it and started panicking about "Tim Apple looking at every photo on my phone!!11".


You misunderstand. The objection is not to the methods by which the task is accomplished. The objection is to the principle of the technology. The objection would exist under the best of forms. There is no misunderstanding of the operations you mention; you rebut but a strawman. In fact, the fear was never about "Tim Apple looking at every photo" and has little to do with Apple itself.

It is about how such a technology can enable abuse. It does not matter if the technology is exclusively used for good if the harm it can do when abused is too great. And we have plenty of evidence, irrespective of which country you reside in, that the scope of such projects typically widens. We also live in a global world and are not exclusively subject to the laws of our own governments. We don't have to go far back in history to see examples of governments turning on their own citizens. It is not just the US, not just Germany, not just Russia, not just China; such actions have been prolific. I do not believe there is sufficient reason to believe your own government is incapable of abusing such power, and I'd accuse you of lunacy if you claim that no government would seek to abuse it. Was not the US founded on the explicit principle of treating the government as an adversarial entity? Because if not, well then one of us must be illiterate, since Federalist 10 and 51 famously state this explicitly. Not to mention a litany of quotes by Jefferson.

  experience hath shewn, that even under the best forms, those entrusted with power have, in time, and by slow operations, perverted it into tyranny
So no, it is not a misunderstanding on my part as to the specs of the technology. Because the matter is about how much harm could be caused when such perversion happens. It is the understanding that the road to hell is paved with good intentions. That evil is not solely created by evil men seeking to do evil, but the unfortunate reality is that it is often created when good men are seeking to do good. Even under the best forms.


> The Apple system was pretty much the best way this could be done

This may be true, and yet it's also true that it was still a terrible plan. This is exactly why it simply shouldn't be done at all.


The question is always about the circumstances.

In the "think of the children" scenario the parents are incentivized to consent to some filter. (So they or someone(!!!) gets an alert if the boogeyman is talking to their kids, asking them to send nudes, or sending dick pics.)

See recital 13 on top of page 7 for the definition.

And see 17 on bottom of page 8 for this:

"To allow for innovation and ensure proportionality and technological neutrality, no exhaustive list of the compulsory mitigation measures should be established"

and

(page 46) "... measures shall be ... targeted and proportionate in relation to that risk, taking into account, in particular, the seriousness of the risk as well as the provider’s financial and technological capabilities and the number of users; ..."

This is a framework. It seems to be coming from overly-anxious law nerds who can't stop thinking of the children. (And yes, this usually makes them a problem, because they're nigh unreasonable.)

It seems to be set up as a DIY thing for providers. And, again, for parents it makes sense: let your kids surf on the marked-safe-for-kids part of the Internet. (And nowadays kids really spend most of their time on (in!) certain apps, not in a web browser.)

The ugly part is that there are fines to compel the providers to adjust their risk metrics. (Page 104; page 110 mentions a maximum of 6% of global turnover.)

This clearly seems to be a softish push to assign a cost to internet ecosystems for online child sexual abuse.

On page 45 there are some requirements.

The provider needs to think about risks (but guidelines will come from authorities anyway), have some appropriate budget to actually work on this in the context of its own service, and then if it looks like there are problems it should spend money on remediation. (I.e. spend on content moderation, work with other providers in the industry, have a team and provide UX for users to notify that team, and allow users to limit what they share with others based on age.)


How does something like this avoid false positives?

A pretty common example in my circle is parents taking pictures of baby rashes/pimples/blisters etc to send to family doctor or doctor friends.

It sounds like a situation where every parent with a toddler will end up on some list.


It doesn't.

On page 17, section 28 says "... constantly assess the performance of the detection technologies and ensure that they are sufficiently reliable, as well as to identify false positives and avoid to the extent erroneous reporting to the EU Centre, providers should ensure human oversight and, where necessary, human intervention, adapted to the type of detection technologies and the type of online child sexual abuse at issue. Such oversight should include regular assessment of the rates of false negatives and positives generated by the technologies, based on an analysis of anonymised representative data sample"

and for the draft law language see page 60, which says that after the user reports something the provider forwards it, anonymized, to this new EU Centre, where human verification has to take place.

So supposedly this means our taxes will pay for folks to look at a ton of rashes and pimples.
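
To put rough numbers on why that review queue gets big, here is a back-of-the-envelope base-rate calculation (every input is an assumption for illustration; none of these figures come from the proposal):

  images_per_day = 500_000_000   # assumed EU-wide daily shared images
  false_positive_rate = 1e-4     # assumed 0.01% FPR for a classifier of new material
  prevalence = 1e-6              # assumed fraction of images that are actually abusive
  true_positive_rate = 0.9       # assumed 90% detection rate

  false_alarms = images_per_day * (1 - prevalence) * false_positive_rate
  true_hits = images_per_day * prevalence * true_positive_rate

  print(f"flagged but innocent per day: {false_alarms:,.0f}")   # ~50,000
  print(f"correctly flagged per day:    {true_hits:,.0f}")      # ~450
  print(f"share of flags that are real: {true_hits / (true_hits + false_alarms):.2%}")

With those made-up numbers, roughly 99% of everything a human reviewer at the EU Centre would look at is a false alarm, i.e. mostly rashes and pimples.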


Maybe that's what is needed for the press and other people to pay attention to it



