The question is always about the circumstances.

In the "think of the children" scenario the parents are incentivized to consent to some filter. (So they or someone(!!!) gets an alert if the boogeyman is talking to their kids, asking them to send nudes, or sending dick pics.)

See recital 13 on top of page 7 for the definition.

And see recital 17 at the bottom of page 8 for this:

"To allow for innovation and ensure proportionality and technological neutrality, no exhaustive list of the compulsory mitigation measures should be established"

and

(page 46) "... measures shall be ... targeted and proportionate in relation to that risk, taking into account, in particular, the seriousness of the risk as well as the provider’s financial and technological capabilities and the number of users; ..."

This is a framework. It seems to be coming from overly-anxious law nerds who can't stop thinking of the children. (And yes, this usually makes them a problem, because they're nigh unreasonable.)

It seems to be set up as a DIY thing for providers. And, again, for parents it makes sense: let your kids surf the marked-safe-for-kids part of the Internet. (And nowadays kids really spend most of their time on (in!) certain apps, not in a web browser.)

The ugly part is that there are fines to compel providers to adjust their risk metrics. (Page 104; page 110 mentions a maximum of 6% of global turnover.)

This seems to be a softish push to assign a cost to internet ecosystems for online child sexual abuse.

On page 45 there are some requirements.

The provider needs to think about risks (though guidelines will come from the authorities anyway), have an appropriate budget to actually work on this in the context of its own service, and then, if it looks like there are problems, spend money on remediation. (I.e. spend on content moderation, work with other providers in the industry, have a team and provide UX for users to notify that team, and allow users to limit what they share with others based on age.)



How does something like this avoid false positives?

A pretty common example in my circle is parents taking pictures of baby rashes/pimples/blisters etc. to send to the family doctor or doctor friends.

It sounds like a situation where every parent with a toddler will end up on some list.
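
A back-of-envelope sketch of the base-rate problem (all numbers below are made-up assumptions, not figures from the proposal):

    # Even a very accurate classifier flags mostly innocent images
    # when actual abuse material is a tiny fraction of the traffic.
    # All numbers are hypothetical, for illustration only.
    images_per_day = 1_000_000_000   # assumed daily scan volume
    abuse_rate     = 1e-6            # assumed fraction that is actually abusive
    sensitivity    = 0.99            # assumed true positive rate
    fpr            = 0.001           # assumed false positive rate (0.1%)

    abusive  = images_per_day * abuse_rate
    innocent = images_per_day - abusive

    true_flags  = abusive * sensitivity   # ~990
    false_flags = innocent * fpr          # ~1,000,000

    precision = true_flags / (true_flags + false_flags)
    print(f"{true_flags + false_flags:,.0f} flags/day, "
          f"{precision:.2%} of them actual abuse")   # ~0.1%

Under these assumptions, roughly 999 out of every 1,000 flagged images would be something benign like a baby rash.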


It doesn't.

On page 17, recital 28 says "... constantly assess the performance of the detection technologies and ensure that they are sufficiently reliable, as well as to identify false positives and avoid to the extent possible erroneous reporting to the EU Centre, providers should ensure human oversight and, where necessary, human intervention, adapted to the type of detection technologies and the type of online child sexual abuse at issue. Such oversight should include regular assessment of the rates of false negatives and positives generated by the technologies, based on an analysis of anonymised representative data samples"

and for the draft law language see page 60, which says that after a user reports something, the provider forwards it, anonymised, to this new EU Centre, where human verification has to take place.

So supposedly this means our taxes will pay for folks to look at a ton of rashes and pimples.
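
For what it's worth, the "regular assessment of the rates of false negatives and positives ... based on an analysis of anonymised representative data samples" part is mechanically simple. A minimal sketch, assuming the provider keeps a small human-labeled, anonymised sample (the data layout is my assumption, nothing here is specified by the draft):

    # Estimate false positive/negative rates from an anonymised,
    # human-labeled sample, recital-28 style.
    def audit(sample):
        # sample: list of (classifier_flagged, human_confirmed_abuse) bools
        fp = sum(f and not a for f, a in sample)
        fn = sum(a and not f for f, a in sample)
        negatives = sum(not a for _, a in sample)
        positives = sum(a for _, a in sample)
        return (fp / negatives if negatives else 0.0,   # false positive rate
                fn / positives if positives else 0.0)   # false negative rate

The hard part isn't the measurement; it's that, per the arithmetic above, even a decent-looking false positive rate still buries the reviewers in benign images.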


Maybe that's what is needed for the press and other people to pay attention to it.



