
>We live in a world where NIST is happy to give us a new hash function every few years. Maybe it's time we put this level of effort and funding into the protocols that use these primitives? They certainly seem to need it.

This is a great point. Are there any modern, reasonable alternatives to TLS to use in applications? On the one hand, developers are told not to implement crypto themselves and to use something like TLS. Yet on the other hand, it seems most TLS implementations suck (they don't properly validate certificates, for example) and the standard itself has a bunch of holes.



No. Developers should continue to use TLS.

If you look at the last few years of TLS --- which have been rocky, to be sure --- you have flaws that are really difficult to exploit and (usually) straightforward to mitigate. If you look at a representative sample of non-TLS transport protocols, you get clownish flaws:

* Block ciphers deployed in the default mode (ECB), which allows straightforward byte-at-a-time decryption

* Error-based CBC padding oracles for which off-the-shelf tools will do decryption

* Unauthenticated ciphertext --- not "used a MAC in the wrong order", like Lucky 13 exploits, but "literally no integrity checks at all", so attackers can trivially rewrite packets

* RSA implemented "in the raw" with no formalized padding, or with the broken PKCS#1 v1.5 padding

* Key exchanges with basic number theoretic flaws

* Repeated IVs and nonces that allow whole message decryption by analyzing captures of just a few hundred messages

The list goes on and on. Not only that: two of the four recent TLS problems (BEAST's chained CBC IVs and CRIME's compression side channel) are equally likely to affect custom cryptography --- they aren't the product of any weird SSL/TLS requirement. Chained CBC IVs also happened in IPsec; compressing before encryption was IIRC an _Applied Cryptography_ recommendation. The only reason the RC4 bug is unlikely to apply is that nobody outside of TLS server operators would choose RC4.
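The nonce-reuse failure in particular is easy to demonstrate with nothing but the standard library. A toy stream cipher (SHA-256 in counter mode, purely for illustration; not a real cipher) encrypting two messages under the same key and nonce:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy stream cipher: SHA-256 run in counter mode (illustration only).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key, nonce = b"secret key", b"\x00" * 8   # the nonce is (wrongly) reused
p1 = b"attack at dawn!!"
p2 = b"retreat at noon!"

c1 = xor(p1, keystream(key, nonce, len(p1)))
c2 = xor(p2, keystream(key, nonce, len(p2)))

# The keystream cancels out: XOR of ciphertexts == XOR of plaintexts.
assert xor(c1, c2) == xor(p1, p2)
# Knowing (or guessing) p1 now decrypts p2 outright:
assert xor(xor(c1, c2), p1) == p2
```

With a few hundred captured messages under one nonce, statistical analysis of the XORed plaintexts does the rest.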

To be sure: your best options (PGP and TLS) are creaky and scary looking. But they are nowhere near as scary as the "new" cryptosystems people deploy. What's especially annoying about the new stuff is that they follow a release cycle that conceals how terrible they are:

* Initial release with great fanfare about the new kinds of applications they'll enable, press coverage

* Security researchers flag unbelievably blatant flaws in crypto constructions

* Blatant flaws are fixed, cryptosystem is rereleased, now with promotional text about the external security testing it has

For a cryptosystem published by someone without a citation record in cryptography, a basic crypto flaw should be considered disqualifying; it's a sign that the system was designed without an understanding of how to build sound crypto. But that's not how things actually work, because everyone wants to believe that cryptographic protection is the Internet's birthright and that we're all just a few library calls away from "host-proof" or "anonymous" communications.

If you're really worried about TLS security but have the flexibility of specifying arbitrary crypto, why not use a library that does TLS with an AEAD cipher, like AES-GCM?


  why not use a library that does TLS with an AEAD cipher, like AES-GCM?
Some possible reasons:

* lack of confidence in TLS's design and designers (for example, TLS1.2 still allows compression and fails to counsel against its use).

* TLS has far too many options. I want a secure channel. I don't want a secure channel toolkit.

* TLS tends to be paired with a broken and discredited root-of-trust infrastructure (which often gives the misleading impression that TLS itself was broken).

(nb. I don't have any evidence that the AEAD TLS1.2 ciphersuites are broken, I'm playing devil's advocate here.)
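That said, the third point is about deployment rather than the protocol: a client can refuse the default roots and pin its own CA. A Python stdlib sketch (where "pinned_ca.pem" is a hypothetical path to your own root certificate):

```python
import ssl

# A client context that will trust ONLY an explicitly pinned CA,
# never the system's default root store.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# PROTOCOL_TLS_CLIENT already enables hostname checking and
# certificate verification, and the trust store starts out empty:
assert ctx.cert_store_stats()["x509_ca"] == 0
assert ctx.verify_mode == ssl.CERT_REQUIRED

# Nothing verifies until you load your own root:
# ctx.load_verify_locations(cafile="pinned_ca.pem")
```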

Regarding your 'new cryptosystems' point: I agree, and it's completely and frustratingly hopeless. But that's why the world needs a decent secure channel standard with good security bounds, no knobs on the side that break confidentiality or integrity, and no backwards compatibility with insecure modes.


* You should have even less confidence in new cryptosystems.

* Downthread, I suggested that TLS doesn't have as many extraneous options as it appears.

* If you can specify AES-GCM, it is even easier to specify not using default CA roots.

* Using an AEAD cipher removes crypto logic from the SSL protocol (the order of operations and message formatting for getting a block cipher to work with a hash-based MAC) and moves it into the block cipher mode, which (unlike TLS) is NIST-standardized.
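A sketch of what that buys you, using the third-party Python "cryptography" package (pip install cryptography): with AES-GCM, integrity comes built into the mode, so there is no separate MAC step to get in the wrong order.

```python
# Hedged sketch: the "cryptography" package is a third-party dependency.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)
nonce = os.urandom(12)  # 96-bit nonce; must never repeat under one key

ct = aead.encrypt(nonce, b"application data", b"header")  # ciphertext || tag
assert aead.decrypt(nonce, ct, b"header") == b"application data"

# Flip one ciphertext bit: decryption raises instead of returning garbage.
tampered = bytes([ct[0] ^ 1]) + ct[1:]
try:
    aead.decrypt(nonce, tampered, b"header")
    raise SystemExit("tampering went undetected")
except InvalidTag:
    pass
```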


Is there a problem with the body content of an HTTP response being compressed, or is it mainly a header thing?


There can be if a secret is on the same page as text under the attacker's control. The attacker can run a hidden JavaScript reload attack on the page, then fiddle with the text under their control until compression is maximized.
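The underlying signal is just the compressed length. A stdlib sketch (the cookie value is made up): when attacker-controlled text matches the secret in the same compressed stream, the output shrinks measurably.

```python
import zlib

SECRET = "sessionid=7f3a9c"  # hypothetical cookie in the response body

def body_len(attacker_text: str) -> int:
    # Attacker-controlled text compressed in the same stream as the secret.
    return len(zlib.compress((attacker_text + "&" + SECRET).encode(), 9))

# A guess that matches the secret compresses far better than one that
# shares nothing with it, and the difference shows in the length.
hit = body_len("sessionid=7f3a9c")
miss = body_len("0a1b2c3d4e5f6g7h")
assert hit < miss
```

Repeating this guess-and-measure loop character by character is essentially what CRIME automates.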


I wasn't suggesting that because TLS is, as you say, "creaky and scary looking" I'm going to go off and write something from scratch. I think a good job has been done of making developers fear writing their own cryptosystems.

What I'm wondering is if there's any serious effort out there that could in the near future replace TLS? Like the article was saying, NIST has promoted new crypto primitives, who can be trusted to create the next generation of crypto systems?


Browsers are going to rely on a secure transport that features a directory-style PKI and session resumption for the foreseeable future --- CAs aren't going anywhere, and handling millions of inbound connections is going to be a requirement.

As long as we need a directory-based PKI and a session feature, what complexity can we really cut out of TLS? The record layer is sane and simple; it's more than HTTPS needs, but isn't hard to implement. The handshake is complicated, but it's complicated because it addresses 15+ years of downgrade attacks.

After thinking about that, ask, what's the real benefit of having two (really three, including SSH) mainstream encrypted transports? No matter what happens in any other protocol, a vulnerability in the transport used by browsers is going to be a hair-on-fire emergency. So why not just have everyone use the transport the browser uses?

The last point I'd make is, it's 2013. SSL 3.0 goes back to, what, 1996? The vulnerabilities we're finding in SSL are protocol flaws, and they've taken more than a decade to surface. Who feels better about new protocols?


The NIST contests are great, but I would expect that running such a contest for a hash or a cipher is easier than doing so for most protocols. It is possible to define exactly what a cryptographic hash must do in order to be considered a cryptographic hash. I think one would have a harder time making an analogous characterization of the solution space for HTTP security.

Whatever eventually replaces TLS, I doubt it will be something that could have emerged from a limited-duration contest.


NIST has weighed in on how to use TLS: http://tools.ietf.org/html/rfc6460

I am not aware of any proposed attacks on the approved cipher suites that are anywhere near feasible. TLS deployment is far behind known best practice. We should do something about that.


For sure! I was responding more to the lament that TLS is different from e.g. SHA3. As I see it, this difference is inevitable.


> Are there any modern reasonable alternatives to TLS to use in applications?

We're using NaCl, and love it. With libsodium out now, hopefully more people will give it a try.



