Hacker News

It's trivially easy to filter this with an LLM or even just a basic CLIP model. Will it be 100% foolproof? Probably not. Is it better than doing nothing and then blaming the users? Obviously. Image generation tools have had this feature since the first UI wrappers around Stable Diffusion 1.0.
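As a rough illustration of the CLIP approach mentioned above: zero-shot classification against a pair of text prompts can act as a cheap content filter. This is a minimal sketch using the Hugging Face `transformers` CLIP API; the label wording, the checkpoint choice, and the threshold are illustrative assumptions, not a vetted moderation policy.

```python
# Sketch: zero-shot image filtering with CLIP via Hugging Face transformers.
# The labels and threshold below are illustrative assumptions only.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "openai/clip-vit-base-patch32"  # assumed small public checkpoint
model = CLIPModel.from_pretrained(MODEL_ID)
processor = CLIPProcessor.from_pretrained(MODEL_ID)
model.eval()

def is_allowed(image: Image.Image, threshold: float = 0.5) -> bool:
    """Return True if the image scores below `threshold` on the unsafe label."""
    labels = ["a safe, benign image", "explicit or disturbing content"]
    inputs = processor(text=labels, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        # logits_per_image has shape (1, num_labels)
        logits = model(**inputs).logits_per_image
    unsafe_prob = logits.softmax(dim=-1)[0, 1].item()
    return unsafe_prob < threshold
```

In practice you'd tune the labels and threshold on real examples, and a filter like this is a first pass, not a guarantee, which is the comment's point.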
