Hacker News

I've dusted off the old form. Here you are:

  This advocates a:

  ( ) technical
  (*) legislative
  ( ) market-based
  ( ) vigilante

  ...solution to control explicit or controversial content online. It won’t work. Here’s why:

  Why it fails:

  (*) Can be bypassed with basic tools (VPNs, mirrors, alt accounts)
  (*) Users and creators won’t tolerate the restrictions
  (*) Requires unrealistic global cooperation
  (*) Censors legitimate content (art, education, etc.)
  (*) Lawmakers don’t understand the tech they’re regulating
  (*) Platforms may quietly ignore or undermine it
  (*) Trolls and bots will weaponize it

  What you didn’t consider:

  (*) Jurisdiction conflicts across countries
  (*) Encrypted and decentralized content sharing
  (*) Abuse of takedown/reporting systems
  (*) Privacy and free expression concerns
  (*) Content filters are always one step behind

  And finally:

  (*) Sorry, it just doesn’t work.
  ( ) This idea causes more harm than good.
  ( ) You're solving a symptom, not the problem


I think the second-to-last one should also be checked. Most implementations include Government-Sponsored Identity Theft.


The road to hell is paved with good intentions.


I’d say all three in the “and finally” category are relevant. This does cause more harm than good, because it is more likely to be weaponized by the government as they carve out more exceptions to free speech. It is also solving a symptom (where kids go when they are curious about adult topics) rather than the problem (parenting… not providing a safe space for your kids to ask those questions).


I agree. There are already enough leaked IDs to feed into an AI and generate any ID you wish, with any name on it. ID verification via a picture is dead on arrival.



