I don't think the argument is that dumb. For a start, there's a difference between white hat hackers and black hat hackers. And here he's talking specifically about people who pentest known exploits against broken systems.
Think about it this way: do you think Theo de Raadt (of OpenBSD and OpenSSH fame) spends his time trying to see if Acme Corp is vulnerable to OpenSSH exploit x.y.z, which was patched 3 months ago?
I don't care about attacking systems: it is of very little interest to me. I've done it in the past: it's all too easy, because we live in a mediocre world full of insecure crap. However, I love spending some time making life harder for black hat hackers.
We know what creates exploits, and yet people everywhere keep repeating the same mistakes over and over again.
My favorite example is Bruce Schneier writing, when Unicode came out, that "Unicode is too complex to ever be secure". That is the mindset we need. But it didn't stop people from using Unicode in places where we should never have used it, like in domain names for example. Then when you test a homoglyph attack on IDN, it's not "cool". It's lame. It's pathetic. Of course you can do homoglyph attacks and trick people: an actual security expert (not a pentester testing known exploits on broken configs) warned about that 30 years ago.
There's nothing to "understand" by abusing such an exploit yourself besides "people who don't understand security have made stupid decisions".
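For what it's worth, here's a minimal sketch of what the homoglyph problem looks like in practice (plain Python, standard library only; the domain strings are made up for illustration):

    import unicodedata

    genuine = "apple.com"          # all Latin characters
    spoofed = "\u0430pple.com"     # first letter is Cyrillic U+0430

    # The two strings render identically in most fonts but are different data.
    print(genuine == spoofed)      # False

    for ch in (genuine[0], spoofed[0]):
        print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

    # The punycode (IDNA) form exposes the substitution that the glyphs hide.
    print(spoofed.encode("idna"))

Nothing here is a clever attack; it's just two code points that happen to share a glyph, which is exactly why the problem was predictable from the start.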
OpenBSD and OpenSSH are among the most secure software ever written (even if OpenSSH has had a few issues lately). I don't think Theo de Raadt spends his time pentesting so that he can then write secure software.
What strikes me the most is the mediocrity of most exploits. Exploits that, had the software been written with the mindset of the person who wrote TFA, would for the most part not have been possible.
He is spot on when he says that "default permit" and "enumerate badness" are dumb ideas. I think it's worth trying to understand what he means when he says "hacking is not cool".
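To make those two ideas concrete, here's an illustrative sketch (hypothetical file-upload rules, not from TFA): the blocklist version tries to enumerate badness and silently permits everything it forgot about, while the allowlist version defaults to deny.

    # Enumerate badness / default permit: list what you think is dangerous,
    # accept everything else. Anything you forgot slips through.
    BLOCKED_EXTENSIONS = {".exe", ".bat", ".sh"}

    def upload_allowed_blocklist(filename: str) -> bool:
        ext = filename[filename.rfind("."):].lower() if "." in filename else ""
        return ext not in BLOCKED_EXTENSIONS

    # Default deny: list the few things you actually need, reject the rest.
    ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf"}

    def upload_allowed_allowlist(filename: str) -> bool:
        ext = filename[filename.rfind("."):].lower() if "." in filename else ""
        return ext in ALLOWED_EXTENSIONS

    print(upload_allowed_blocklist("report.phar"))   # True: badness nobody enumerated
    print(upload_allowed_allowlist("report.phar"))   # False: denied by default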