Somehow the law gets enforced on ordinary people by a government composed of people who should be the first to have it enforced on themselves
Encrypted DNS has existed for quite a while now through DNS over HTTPS; the missing link was that to connect to a website, you still had to send the server the hostname in plaintext (the SNI field) so it could present the right certificate for the site. So someone listening on the wire couldn't see your DNS requests but would effectively still learn which site you connected to anyway.
The new development, Encrypted Client Hello (ECH), is that you no longer have to send the hostname in the clear. Someone listening in the middle would only see that you connected to an AWS/etc IP. This will make blocking websites very difficult if they use shared services like cloudflare or cloud VPS hosting.
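That plaintext leak is easy to see in practice. Here's a minimal sketch of the `server_name` extension that sits unencrypted in a classic TLS ClientHello (per RFC 6066) and what a passive observer can read out of it. The hostname is made up, and a real ClientHello has much more framing around this; the point is just that the name is right there in the clear.

```python
import struct

def build_sni_extension(hostname: str) -> bytes:
    """Encode a server_name extension (type 0) as it appears,
    unencrypted, in a classic TLS ClientHello (RFC 6066)."""
    name = hostname.encode("ascii")
    # one entry: name_type (0 = host_name) + 2-byte length + the name
    entry = b"\x00" + struct.pack("!H", len(name)) + name
    server_name_list = struct.pack("!H", len(entry)) + entry
    # extension: type 0x0000 + 2-byte length + body
    return struct.pack("!HH", 0, len(server_name_list)) + server_name_list

def sniff_sni(extension: bytes) -> str:
    """What a passive on-path observer can do today: read the
    hostname straight out of the plaintext bytes."""
    ext_type, _ = struct.unpack("!HH", extension[:4])
    assert ext_type == 0  # server_name
    name_len = struct.unpack("!H", extension[7:9])[0]
    return extension[9:9 + name_len].decode("ascii")

print(sniff_sni(build_sni_extension("forbidden.example")))  # forbidden.example
```

With ECH, this extension only carries a cover name (or ciphertext), so the same sniffer learns nothing useful.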
> blocking websites very difficult if they use shared services like cloudflare or cloud VPS hosting.
I see this as a very good development and a big win for privacy. I have been running my own DNS server for years to prevent passive logging, but could basically do nothing against the SNI leak.
Though I worry that Western governments will instead get ahead of this and start requiring DNS providers or even HTTPS servers to keep logs that can be subpoenaed, much like a telecom company keeps a log of each phone call ("metadata"), or else be blocked...
Western governments just send a court order to the hosting provider to shut the site down / revoke their domain name. Site blocking is more of a problem for small countries trying to block sites the rest of the world allows to be hosted.
In terms of privacy, your DNS history probably isn't very interesting. It's almost all going to be requests for the top social media sites, and governments already have full access to the stuff you post there.
In principle, it means you could run multiple sites from the same IP and someone intercepting traffic to that IP (but not the client’s DNS path) couldn’t tell what site each connection was to. It mostly makes sense for CDNs, where the same IP will be used for many sites.
If you don’t use a CDN at all, the destination IP leaks what site you’re trying to connect to (if the domain is well known). If you use a CDN without ECH, you send an unencrypted domain name in the HTTPS negotiation so it’s visible there. ECH+CDN is an attempt to have the best of both worlds: your traffic to the site will not advertise what site you’re connecting to, but the IP can still be shared between a variety of sites.
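To make those three configurations concrete, here's a toy model of what a passive on-path observer learns in each case. All IPs and hostnames here are made up; the "CDN public name" stands in for the cover name ECH puts in the outer ClientHello.

```python
# Hypothetical shared CDN address (all names in this sketch are made up).
CDN_IP = "203.0.113.7"

def observer_view(hostname: str, use_cdn: bool, use_ech: bool) -> dict:
    """What a passive on-path observer learns about one connection."""
    if not use_cdn:
        # Dedicated hosting: the destination IP alone identifies the site.
        return {"ip": "unique IP of " + hostname, "visible_sni": None}
    if not use_ech:
        # Shared IP, but the plaintext SNI still gives the site away.
        return {"ip": CDN_IP, "visible_sni": hostname}
    # Shared IP plus ECH: only the CDN's cover name is ever visible.
    return {"ip": CDN_IP, "visible_sni": "cdn-public-name.example"}

print(observer_view("blog.example", use_cdn=True, use_ech=True))
```

Only the CDN+ECH row hides the site: the observer sees a shared IP and a cover name used by every site behind that CDN.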
It’ll be interesting to see how countries with lighter censorship schemes adapt - China etc. of course will just block the connection.
Even for China, so-called "overblocking" (where censoring a small thing forces you to block a much larger thing) is a real concern with these technologies. There's a real trade-off here: you have to expend effort and destroy potential, and in some cases the reward isn't worth it. You can interpret ECH as an effort to move that ratio. Maybe China was willing to spend $5,000 and annoy a thousand people to block a cartoon site criticising its internal policies, but is it willing to spend $50,000 and annoy ten thousand people? How about half a million and 100K people?
That requires the client to emit only ECH, even if the ISP-provided (and therefore government-controlled) DNS blocks HTTPS/SVCB records. China can easily make the default for a browser in China be to never even try ECH. Then they'll only annoy people actively trying to circumvent their system. They already do TCP sessionization to extract the SNI domain; detecting ECH and then just dropping the connection at L3 is functionally equivalent.
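The "detect ECH and drop" logic is cheap because the ECH extension announces itself by its type code (0xfe0d in the current draft spec). A censor doesn't need to decrypt anything; a sketch of that middlebox logic, operating on the raw extensions block of a ClientHello:

```python
import struct

ECH_EXTENSION = 0xFE0D  # encrypted_client_hello codepoint (draft-ietf-tls-esni)

def extensions(blob: bytes):
    """Iterate (type, body) pairs of a TLS extensions block."""
    i = 0
    while i + 4 <= len(blob):
        ext_type, length = struct.unpack("!HH", blob[i:i + 4])
        yield ext_type, blob[i + 4:i + 4 + length]
        i += 4 + length

def middlebox_verdict(ext_blob: bytes) -> str:
    """Crude censor logic: no decryption needed, just drop any
    handshake that carries an ECH extension at all."""
    for ext_type, _ in extensions(ext_blob):
        if ext_type == ECH_EXTENSION:
            return "DROP"
    return "PASS"

# A hello carrying ECH (0xfe0d) vs. one with only a server_name ext (0x0000).
print(middlebox_verdict(b"\xfe\x0d\x00\x03xyz"))   # DROP
print(middlebox_verdict(b"\x00\x00\x00\x00"))      # PASS
```

This is exactly the overblocking trade from the comment above: the verdict is all-or-nothing per connection, since the middlebox can't see inside the ECH payload.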
In theory, sites could eventually require ECH to serve anything at all. But we're very far from that.
So, for example, Firefox since version 119, or Chrome since 117.
Now, for most services ECH doesn't carry an encrypted target server. But an important design choice in ECH is that in this case the client just fills that space with noise (the spec calls this GREASE). An encrypted message also looks like noise. So you can block all the noise, in case it's secrets, or you can let through all the noise (some of which might be secrets), or I suppose you can choose randomly, but you can't do what such regimes want, which is to forbid only the secrets. That's not a thing.
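You can demonstrate the "noise looks like ciphertext" point numerically. Below, a toy stream cipher (SHA-256 in counter mode, standing in for the real HPKE encryption, which this is not) encrypts a fake inner hello, and we compare its byte entropy against pure random GREASE bytes. A censor measuring statistics on the ECH payload sees essentially the same thing either way.

```python
import hashlib
import math
import os

def keystream_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy stream cipher (SHA-256 counter mode) standing in for real
    ECH encryption -- only here to show ciphertext looks like noise."""
    out = bytearray()
    for block in range((len(plaintext) + 31) // 32):
        pad = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = plaintext[block * 32:(block + 1) * 32]
        out.extend(a ^ b for a, b in zip(chunk, pad))
    return bytes(out)

def entropy_bits_per_byte(data: bytes) -> float:
    """Shannon entropy of the byte distribution."""
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    return -sum(c / len(data) * math.log2(c / len(data))
                for c in counts if c)

grease = os.urandom(4096)  # fake (GREASE) ECH payload: pure noise
real = keystream_encrypt(os.urandom(32),
                         b"inner hello with secret SNI" * 150)
# Both come out close to the 8 bits/byte of a uniform distribution.
print(entropy_bits_per_byte(grease), entropy_bits_per_byte(real))
```

Since the distributions are statistically indistinguishable, the censor's only choices really are block all, allow all, or flip a coin.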
We've been here before. When sites started moving to TLS 1.3, lots of HN people said oh, China will just block that, easy. But the choice wasn't "use TLS 1.3 or keep doing whatever China is happy with instead"; the choice was "use TLS 1.3 or don't connect", and it turns out for a lot of the Web China wasn't OK with "don't connect" as its choice, so TLS 1.3 is deployed anyway.
The great firewall was updated to support inspection of TLS 1.3. They didn’t just decide it was whatever and let everything through. It was easier to just update their parsing than to force everyone to turn it off, so they did that instead. Perfect forward secrecy was a thing before TLS 1.3, and they’ve found other methods to accomplish what they want.
For ECH, China can just require you turn it off. Or distribute their own blessed distribution. It’s the more marginal censorship regimes that will be in an interesting spot. Especially ones where the ISPs are mostly responsible for developing the technical measures.
> The great firewall was updated to support inspection of TLS 1.3.
To actually "inspect" TLS 1.3 you need the keys which are chosen randomly for each session by the parties - so either (1) you have a mathematical breakthrough, (2) you have secured co-operation from one or both parties (in which case they could equally tell you what they said) or (3) in fact you don't have inspection.
As you observe forward secrecy was already possible in TLS 1.2 and China's "Great firewall" didn't magically stop that either. In fact what we see is that China blocks IP outright when it doesn't want you to talk to an address, the protocol doesn't come into that. What we changed wasn't whether China can block connections, but how easy it is to snoop those connections.
> For ECH, China can just require you turn it off
So did they? Remember, I'm not talking about some hypothetical future, this technology is actively in use today and has been for some time.
I don’t understand what your point about TLS 1.3 is. It’s only relevant if you’re doing a downgrade attack (or equivalently, using an active middlebox). TLS 1.3 itself is not vulnerable to this because it (a) doesn’t have non-PFS suites to downgrade to and (b) protects the cipher suites by including them in the key exchange material. But if the server supports TLS 1.2, an active MITM can still downgrade to it if the client doesn’t demand TLS 1.3 specifically (which browsers do not by default). It won’t matter to China until there are lots of TLS 1.3-only websites (which hasn’t happened yet).
China was already leaning on passive DPI and L3 blocking before TLS 1.3 complicated (but as I said, did not preclude) downgrading to PFS ciphers. The reason being that for about the last 10 years, many sites (including default CDN settings) used SSL profiles that only allowed PFS ciphers. For such a server, downgrade attacks are already not useful to the Great Firewall, so adding TLS 1.3 to the mix didn’t change anything.
> So did they? Remember, I'm not talking about some hypothetical future, this technology is actively in use today and has been for some time.
Google Chrome (for example) will now use ECH if the website has the relevant DNS record - but it doesn’t use the anti-censorship mechanism in the spec to make requests to servers that haven’t enabled it look like they may be using ECH. This, combined with the fact that China can just not serve the relevant DNS record by default, means it doesn’t really impact the great firewall.
This is actually a good example of the non-technical side of this: Chrome could send a fake ECH on every request, like the spec suggests. This would perhaps make China block all Chrome traffic to prevent widespread ECH. But then Chrome would lose market share, so Google doesn’t do it. Technical solutions are relevant here, but even the most genius anti-censorship mechanism needs to contend with political/corporate realities.
> if the server supports TLS 1.2, an active MITM can still downgrade to it
Nope. That's specifically guarded against, so double good news. 1) You get to learn something new about an important network protocol and 2) I get to tell you a story I enjoy telling
Here's the clever trick which is specified in RFC 8446 (the TLS 1.3 RFC)
In TLS we always have this "Random" field in both Client Hello and Server Hello, it's 32 bytes of random noise. At least, that's what it usually is. When a server implements TLS 1.3 but it receives a connection (in your scenario this is from a middlebox, but it might equally be somebody's long obsolete phone) which asks for TLS 1.2 then when it fills out the Random for this connection the last eight bytes aren't actually random, they spell "DOWNGRD" in ASCII and then a 01 byte. If the client seems to ask for any older version of TLS which is supported then the server writes DOWNGRD and then a 00 byte instead.
As you hopefully realise this signals to a client that a MITM is attempting to downgrade them and so they reject the failed attack. You very likely have never seen your web browser's diagnostic for this scenario, but it's very much a failure not some sort of "Danger, Chinese government is spying on you" interstitial, because we know that warning users of danger they can't fix is pointless. So we just fail, the Chinese government could choose to annoy its citizens with this message but, why bother? Just drop the packets entirely, it's cheaper.
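The DOWNGRD sentinel from RFC 8446 is simple enough to sketch directly. This toy version only models the 32-byte randoms and the version check; in real TLS the rejection also falls out of the transcript and signature checks, but the sentinel values themselves are exactly the ones the RFC specifies.

```python
import os

# Sentinel suffixes from RFC 8446, section 4.1.3.
DOWNGRADE_TLS12 = b"DOWNGRD\x01"  # server negotiated TLS 1.2
DOWNGRADE_TLS11 = b"DOWNGRD\x00"  # server negotiated TLS 1.1 or below

def server_random(negotiated: str) -> bytes:
    """A TLS 1.3-capable server filling in ServerHello.random."""
    rand = bytearray(os.urandom(32))
    if negotiated == "1.2":
        rand[24:] = DOWNGRADE_TLS12   # last 8 bytes spell the sentinel
    elif negotiated in ("1.1", "1.0"):
        rand[24:] = DOWNGRADE_TLS11
    return bytes(rand)

def client_accepts(offered_13: bool, rand: bytes) -> bool:
    """A TLS 1.3-capable client aborts on any random carrying the
    sentinel: someone stripped TLS 1.3 from its ClientHello."""
    if offered_13 and rand[24:] in (DOWNGRADE_TLS12, DOWNGRADE_TLS11):
        return False
    return True
```

So a MITM forcing the connection down to TLS 1.2 produces a random the client refuses, while a genuinely old client (which never offered 1.3) sees nothing unusual.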
You might wonder, why Random? Or, can't the MITM just replace this value and carry on anyway? Or if you've got a bit more insight you might guess that these questions answer each other.
In TLS the Client and Server both need to be sure that each connection is different from any others, if they didn't assure themselves of this they'd be subject to trivial replay attacks. They can't trust each other, so to achieve this both parties inject Random data into the stream early, which means they don't care if the other party really used random numbers or just (stupidly) didn't bother. Shortly after this, during setup, the parties agree on a transcript of their whole conversation so far.
So, if the Random value you saw is different from the Random number your conversation partner expected, that transcript won't match, connection fails, nothing is achieved. But if the Random value isn't changed but somehow we ended up with TLS 1.2 it says DOWNGRD and a TLS 1.3 capable client knows that means it is under attack and rejects the connection, same outcome.
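The transcript binding in the last two paragraphs can be sketched with a hash over the hello messages. This is a stand-in for the real TLS transcript hash (which covers every handshake message, not just the randoms), but it shows why the MITM is stuck either way:

```python
import hashlib
import os

def transcript_hash(client_random: bytes, server_random: bytes) -> bytes:
    """Stand-in for the TLS transcript hash: each side hashes the
    handshake bytes it actually sent and received, randoms included."""
    return hashlib.sha256(client_random + server_random).digest()

client_random = os.urandom(32)
server_random = os.urandom(32)

# Honest case: both sides hashed the same bytes, transcripts agree.
assert transcript_hash(client_random, server_random) == \
       transcript_hash(client_random, server_random)

# MITM rewrites the server random in flight: the two sides' transcripts
# diverge, the Finished MACs won't verify, and the handshake fails.
tampered = os.urandom(32)
assert transcript_hash(client_random, tampered) != \
       transcript_hash(client_random, server_random)
```

Replace the random and the transcripts disagree; leave it and the DOWNGRD sentinel survives. Either way the connection dies.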
Now, I said there was an anecdote. It's about terrible middle boxes, because of course it is. TLS 1.3 was developed to get past terrible middle boxes and it was mostly successful, however shortly after TLS 1.3 non-draft launch (when the anti-downgrade mechanism was enabled, it would not be OK to have anti-downgrade in a draft protocol for reasons that ought to be obvious) Google began to see a significant number of downgrade failures, connected to particular brands of middlebox.
It turns out that these particular brands of middlebox were so crap that although they were proxying the HTTP connection, they were too cheap to generate their own Random data. So your TLS 1.3 capable browser calls their proxy, the proxy calls the TLS 1.3 capable server, and the proxy tells both parties it only speaks TLS 1.2, but it passes this bogus anti-downgrade "Random" value back as if it had made this itself, thus triggering the alarm.
Obviously on the "Last to change gets the blame" basis Google had customers blaming them for an issue caused ultimately by using a crap middlebox. So they actually added a Chrome feature to "switch off" this feature. Why do I mention this? Well, Chrome added that feature for 12 months. In 2018. So, unless it is still 2019 where you are, they in fact have long since removed that switch and all browsers enforce this rule. That 12 months grace gave vendors the chance to fix the bug or, if they were able to, persuade customers to buy a newer crap middlebox without this particular bug, and it gave customers 12 months to buy somebody else's middlebox or (if they were thus enlightened) stop using a middlebox.
> In theory, sites could eventually require ECH to serve anything at all. But we're very far from that.
I doubt the Chinese government would care about that. They don't depend on the west for their online services any more than we depend on them. All that would happen is that the internet would bifurcate to an even greater degree than it already has.
It's extremely helpful at home in the west as a countermeasure against data monetization and dragnet surveillance. It certainly isn't perfect but at least it reduces the ability of ISPs to collect data on end users as well as forcing the government to formally move against the cloud providers if they want the data. Not that I want the cloud providers having my data to begin with but that's a different rant.
BTW, Forgejo seems to be very similar to GitHub when it comes to bug tracking. There are so many project management systems and bug trackers out there, and I think GitHub's (and thus Forgejo's) way of doing this is limiting.
I wonder if people would rather prefer Jira, Redmine, MantisBT, Bugzilla, or something completely different, or a choice to have X and Y and why, and so forth.
Well it’s certainly horrible that they’re not even trying, but not surprising (I deleted my X account a long time ago).
I’m just wondering if from a technical perspective it’s even possible to do it in a way that would 100% solve the problem, and not turn it into an arms race to find jailbreaks. To truly remove the capability from the model, or in its absence, have a perfect oracle judge the output and block it.
Again, I'm not the most technical, but I think we need to step back and look at this holistically. Given Grok's integration with X, there could be other methods of limiting the production and dissemination of CSAM.
For argument's sake, let's assume Grok can't reliably have guardrails in place to stop CSAM. There could be second- and third-order review points: before an image is posted by Grok, another system could scan the image to verify whether it's CSAM, and if the confidence is low, human intervention could come into play.
I think the end goal here is prevention of CSAM production and dissemination, not just guardrails in an LLM and calling it a day.
Normalizing AI as human-equivalent means the AI is legally culpable for its own actions, rather than its creators or the people using it, and not guilty of copyright infringement for having been trained on proprietary data without consent.