
One recent example in the news was the AI-generated porn of Taylor Swift. From what I read, the people who made it used Bing, which is based on OpenAI's tech.


This is more sensationalism than an ethical issue. Whatever they did, they could do, and probably do better, using publicly available tools like Stable Diffusion.


Or just Photoshop. The only thing these tools did was make it easier. I don't think the AI aspect adds anything to this comparison.


An argument can be made that "more is different." By making it easier to do something, you're increasing the supply, possibly even taking something that used to be a rare edge case and making it a common occurrence, which can pose problems in and of itself.


It's more dangerous if it's uncommon. It's knowledge that protects people, not a bunch of annoying "AI safety" "researchers" selling the lie that "AI is safe". Truth is, those people only have a job because they help companies save face and build a moat around this new technology, where new competitors will be required to have "AI safety" teams and solutions. What has "AI safety" achieved so far besides making models dumber and more annoying to use?


Put in a different context: The exploits are out there. Are you saying we shouldn't publish them?

Deepfakes are going to become a concern of everyday life whether you stop OpenAI from generating them or not. The cat is out of the proverbial bag. We as a society need to adjust to treating this sort of content skeptically, and I see no more appropriate way than letting a bunch of fake celebrity porn circulate.

What scares me about deepfakes is not the porn, it's the scams. The scams can actually destroy lives. We need to start ratcheting up social skepticism asap.


You probably don't care about the porn because I'm assuming you're a man, but it can ruin lives too.


It can only ruin lives if people believe it's real. Until recently, that was a reasonable belief; now it's not. People will catch on and society will adapt.

It's not like the technology is going to disappear.


I mean, the same applies to scams: scams only work if people believe them.


Right - as I said, we need to ramp up social skepticism, fast. Not as in some kind of utopian vision, but as in "the amount of fake information is about to go from a trickle to a flood, there's nothing you can do about that, so brace yourselves".

The specific policies of OpenAI or Google or whatnot are irrelevant. The cat is out of the bag.


You're talking like it's a bad thing. Kids are learning AI and computing instead of drugs and guns, and nobody is getting hurt.



