I've been using SSH for ~15 years and never knew about these escape sequences. I'm eagerly awaiting my next hung session so that I can test `~.`. It's much nicer than my current approach of having to close that terminal window.
If hung SSH connections are common, it's likely due to CGNAT, which uses aggressively low TCP timeouts. For example, I've found all UK mobile carriers set their TCP timeout as low as 5 minutes. The "default" is supposed to be 2 hours: you could literally sleep your computer, send zero packets, and an SSH connection would continue to work an hour later. Generally speaking this is still true, unless CGNAT is in the way.
If you are interested there are a few ways you can fix this:
Easiest is to use a VPN: because the VPN's exit node becomes the effective NAT, and VPN providers are usually less resource constrained, they tend to have normal TCP timeouts. Another nice benefit of this method is that you can move between physical networks and your connection doesn't die... If you use Tailscale then you already have this in a more direct way.
Another is to tune the tcp_keepalive kernel parameters. Lowering the keepalive timeout below the CGNAT timeout causes keepalive probes to prevent CGNAT from dropping the connection even while your SSH session is technically idle. On Linux I pop these into /etc/sysctl.d/z.conf (I have no idea for Windows or Mac):
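Reconstructing from the values described in this comment (tcp_keepalive_time=240, tcp_keepalive_intvl=60, tcp_keepalive_probes=120), the file would look something like:

```
net.ipv4.tcp_keepalive_time = 240
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 120
```

Apply with `sudo sysctl --system` (or reboot). Despite the "ipv4" in the name, these settings apply to IPv6 connections too.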
This is really a misuse of these settings: they are meant for checking that TCP connections are still alive and cleaning dead ones out of the local connection table. Instead, the idea is to exploit the probes by sending them more frequently, forcing idle connections to stay alive in a CGNAT environment (don't worry, the probes are tiny and still very infrequent).
tcp_keepalive_time=240 sends the first probe after 4 minutes of idle connection instead of the default 2 hours, undercutting the CGNAT timeout. tcp_keepalive_intvl=60 and tcp_keepalive_probes=120 mean it will then send up to 120 probes 60 seconds apart (2 hours' worth) before considering the connection dead. This gives the best of both worlds: under a well-behaved NAT the old behaviour is preserved (if I temporarily lose my network, the SSH connection is still valid up to 2 hours later), while under CGNAT the connection at least won't be dropped after 5 minutes, as long as I keep my computer on and don't lose the network.
There are also some SSH client keepalive settings but I'm less familiar with them.
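(For reference, the OpenSSH client-side equivalents are ServerAliveInterval and ServerAliveCountMax, which run over the encrypted channel rather than as raw TCP probes. Something like this in ~/.ssh/config would mirror the kernel settings above:)

```
# Send an application-level keepalive after 60s of idle,
# tolerate up to 120 missed replies (2 hours) before disconnecting.
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 120
```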
Check out Mosh. It supports this kind of cut and will reconnect seamlessly. It uses far less bandwidth too.
I successfully tried it over a 2.7 kbps connection.
Depends on whether your sockets survive that, though. Especially on Wi-Fi, many implementations will reset your interface when sleeping, and sockets usually don't survive that.
Even if they do, if the remote side has heartbeats/keepalive enabled (at the TCP or SSH level), your connection might be torn down from the server side.
Yes; by "generally" I really mean that all the defaults are pretty permissive, but I understand some people tune both TCP and SSH on their servers to drop connections faster because they are worried about resource exhaustion.
But if you throw up a default Linux install for your SSH box and have a not-horrible wifi router with a not-horrible internet provider then IME you can sleep your machine and keep an SSH connection alive for quite some time... I appreciate that might be too many "not-horrible" requirements for the real world today though.
Yes, this makes your connection less likely to survive client suspends. (ClientAliveInterval, which makes the server ping the client, will make it fail almost certainly, since the server will be active while the client is sleeping.)
Well, for different reasons, but you have similar issues with IPv6 as well. If your client uses temporary addresses (most likely, since they're enabled by default on most OSes), OpenSSH will pick one of them over the stable address, and when they're rotated the connection breaks.
For some reason, OpenSSH devs refuse to fix this issue, so I have to patch it myself:
--- a/sshconnect.c
+++ b/sshconnect.c
@@ -26,6 +26,7 @@
#include <net/if.h>
#include <netinet/in.h>
#include <arpa/inet.h>
+#include <linux/ipv6.h>
#include <ctype.h>
#include <errno.h>
@@ -370,6 +371,11 @@ ssh_create_socket(struct addrinfo *ai)
if (options.ip_qos_interactive != INT_MAX)
set_sock_tos(sock, options.ip_qos_interactive);
+ if (ai->ai_family == AF_INET6 && options.bind_address == NULL) {
+ int val = IPV6_PREFER_SRC_PUBLIC;
+ setsockopt(sock, IPPROTO_IPV6, IPV6_ADDR_PREFERENCES, &val, sizeof(val));
+ }
+
/* Bind the socket to an alternative local IP address */
if (options.bind_address == NULL && options.bind_interface == NULL)
return sock;
I'm not sure what happens to the socket, maybe it's closed and reopened, but with this patch I have SSH sessions lasting for days with no issues. Without it, even roaming between two access points can break the session.
It would also seem to break address privacy (usually not much of a concern if you authenticate yourself via SSH anyway, but still, it leaks your Ethernet or Wi-Fi interface's MAC address in many older setups).
Not anonymous, but it's pretty unexpected for different servers, each with potentially a different identity for you, to learn your MAC address (if you're using the default EUI-64 method for SLAAC).
This is a very common misconception. The issue is not IPv4 or CGNAT, it's stateful middleboxes... of which IPv6 has plenty.
The largest IPv6 deployments in the world are mobile carriers, which are full of stateful firewalls, DPI, and mid-path translation. The difference is that when connections drop it gets blamed on the wireless rather than the network infrastructure.
Also, fun fact: net.ipv4.tcp_keepalive_* applies to IPv6 too. The "ipv4" is just a naming artifact.
Mobile carriers usually have stateful firewalls for IPv6 as well (otherwise you can get a lot of random noise on the air interface, draining both your battery and data plan), so it's an issue just the same.
The constrained resource there is only firewall-side memory, though, as opposed to that plus (IP, port) tuples for CG-NAT.
Or my predecessor/address space neighbor, or that of somebody who used my wireless hotspot once, or that of me clicking a random link once and connecting to 671 affiliated advertisers' analytics servers...
I think a default policy of "no inbound connections" does make sense for most mobile users. It should obviously be configurable.
Have been using that weekly for probably 20 years now. It will change your life :)
My other favourite: I very often SSH with -v to figure out why a connection is hanging. You rapidly figure out whether DNS is failing, whether the TCP connection doesn't open, whether it opens but no traffic flows at all, or whether it opens and SSH negotiation starts but never finishes. You can learn a lot about what is wrong just from this.
And of course, you can use the ~v / ~V commands (as listed in the ~? menu) to increase/decrease verbosity after the connection is established.
That lets you `ssh -vvvv` to a host, then once you've figured out the issue use ~V to decrease verbosity so that debug messages don't clutter your shell.
Also helps with auth failures; I've used it several times with co-workers who couldn't figure out why their SSH key wasn't working. It lists the keys offered and some extra information.
> It's much nicer than my current approach of having to close that terminal window.
You can also just kill the ssh process (say from another terminal). That way you get to keep your terminal window. And this works with everything "blocking" your terminal, not just ssh.
I last used this menu about 20 years ago, when a dialup modem was the only way to roll, and have pretty much forgotten about it since the days of always-on, direct-to-the-desktop TCP/IP.
`setHTML` is meant as a replacement for `innerHTML`. In the use case you describe, you would have never wanted `innerHTML` anyway. You'd want `innerText` or `textContent`.
It’s simple: you use innerHTML if you know for sure where the input comes from and that it’s safe (for example, when you define it as a hard-coded string in your own code).
You use setHTML when you need to render HTML that is potentially unsafe (for example forum posts or IM messages). Honest question, which part of that isn’t clear?
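A sketch of that distinction (assumes a browser with Element.setHTML() from the Sanitizer API; the element id and the userPost/userName variables are made up for illustration):

```javascript
const el = document.querySelector("#message"); // hypothetical element

// Trusted, hard-coded markup: innerHTML is fine.
el.innerHTML = "<strong>Saved!</strong>";

// Untrusted input (forum post, IM message): setHTML sanitizes first,
// stripping script tags, event-handler attributes, etc.
el.setHTML(userPost);

// Plain text only: textContent never interprets markup at all.
el.textContent = userName;
```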
How is adding an element to the parent the same as replacing all the content of the element? You guys are exhausting. Think a bit before spouting nonsense?
In Apple's case, starting with macOS Tahoe, Filevault saves your recovery key to your iCloud Keychain [0]. iCloud Keychain is end-to-end encrypted, and so Apple doesn't have access to the key.
As a US company, it's certainly true that given a court order Apple would have to provide these keys to law enforcement. That's why getting the architecture right is so important. Also check out iCloud Advanced Data Protection for similar protections over the rest of your iCloud data.
> We’re gradually transitioning the AWS European Sovereign Cloud to be operated exclusively by EU citizens located in the EU. During this transition period, we will continue to work with a blended team of EU residents and EU citizens located in the EU.
I find it fascinating that the goal is to staff this exclusively with EU citizens, thereby excluding non-citizen residents of the EU.
> Replicating a broadly practiced mitigation mechanism that is established in EU institution and government hiring practices, operational control and access will be restricted to EU citizens located in the EU to ensure that all operators have enduring ties to the EU and to meet the needs of our customers and partners.
It's similar to FedRAMP systems like AWS GovCloud (US), which can only be accessed by someone who is a US person (US citizen or lawful permanent resident) and on US soil (physically in the US at the time of access).
The docs explicitly describe this cloud's independence from the US.
> The AWS European Sovereign Cloud will be capable of operation without dependency on global AWS systems so that the AWS European Sovereign Cloud will remain viable for operating workloads indefinitely even in the face of exceptional circumstances that could isolate the AWS European Sovereign Cloud from AWS resources located outside the EU, such as catastrophic disruption of transatlantic communications infrastructure or a military or geopolitical crisis threatening the sovereignty of EU member states.
I work on security at PostHog. We resolved these SSRF findings back in October 2024 when this report was responsibly disclosed to us. I'm currently gathering the relevant PRs so that we can share them here. We're also working on some architectural improvements around egress, namely using smokescreen, to better protect against this class of issue.
Here's the PR[0] that resolved the SSRF issue. This fix was shipped within 24 hours of receiving the initial report.
It's worth noting that at the time of this report, this only affected PostHog's single tenant hobby deployment (i.e. our self hosted version). Our Cloud deployment used our Rust service for sending webhooks, which has had SSRF protection since May 2024[1].
Since this report we've evolved our Cloud architecture significantly, and we have similar IP-based filtering throughout our backend services.
> The Deception of the Challenged Representations and Unlawful Marketing & Sale of the Products. The Challenged Representations misled reasonable consumers into believing the Products possessed certain AI qualities, capabilities, and features, they simply do not have. As a result, Apple charged consumers for Products they would not have purchased, or at least not at its premium price, had the advertising been honest. Beyond exploiting unsuspecting consumers, Apple also gained an unfair advantage over competitors in the market who do not tout non-existent AI features, or who actually deliver them as promised.
I utilized SSE when building automatic restart functionality[0] into Doppler's CLI. Our api server would send down an event whenever an application's secrets changed. The CLI would then fetch the latest secrets to inject into the application process. (I opted not to directly send the changed secrets via SSE as that would necessitate rechecking the access token that was used to establish the connection, lest we send changed secrets to a recently deauthorized client). I chose SSE over websockets because the latter required pulling in additional dependencies into our Golang application, and we truly only needed server->client communication.
One issue we ran into that hasn't been discussed is HTTP timeouts. Some load balancers close an HTTP connection after a certain timeout (e.g. 1 hour) to prevent connection exhaustion. You can usually extend this timeout, but it has to be explicitly configured. We also found that our server had to send intermittent "ping" events to prevent either Cloudflare or Google Cloud Load Balancing from closing the connection, though I don't remember how frequently these were sent. Otherwise, SSE worked great for our use case.
Generally you're going to want to send ping events pretty regularly (I'd default to every 15-30 seconds depending on application) whether you're using SSE, WebSockets, or something else. Otherwise if the server crashes the client might not know the connection is no longer live.
The way I've implemented SSE is to make use of the fact it can also act like HTTP long-polling when the GET request is initially opened. The SSE events can be given timestamps or UUIDs and then subsequent requests can include the last received ID or the time of the last received event, and request the SSE endpoint replay events up until the current time.
You could also add a ping with a client-requestable interval, e.g. 30 seconds (for foreground app) and 5 minutes or never (for backgrounded app), so the TCP connection is less frequently going to cause wake events when the device is idle. As client, you can close and reopen your connection when you choose, if you think the TCP connection is dead on the other side or you want to reopen it with a new ping interval.
Tradeoff of `?lastEventId=` - your SSE serving thing needs to keep a bit of state, like having a circular buffer of up to X hours worth of events. Depending on what you're doing, that may scale badly - like if your SSE endpoint is multiple processes behind a round-robin load balancer... But that's a problem outside of whether you're choosing to use SSE, websockets or something else.
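A sketch of that per-process replay state (class name, capacity, and id scheme are illustrative):

```javascript
// Ring buffer of recent events so reconnecting clients can catch up
// from their Last-Event-ID instead of missing everything in between.
class EventBuffer {
  constructor(capacity = 1000) {
    this.capacity = capacity;
    this.events = []; // oldest first: [{ id, data }, ...]
    this.nextId = 1;
  }

  push(data) {
    const ev = { id: this.nextId++, data };
    this.events.push(ev);
    if (this.events.length > this.capacity) this.events.shift(); // drop oldest
    return ev;
  }

  // Everything the client hasn't seen yet (empty if it's up to date; note
  // that if lastId has already been evicted, the gap is silently lost).
  since(lastId) {
    return this.events.filter((ev) => ev.id > lastId);
  }
}
```

As noted above, sharing this state across multiple processes behind a round-robin load balancer is its own problem (sticky sessions, or an external store).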
To be honest, if you're worrying about mobile drain, the most battery efficient thing I think anyone can do is admit defeat and use one of the vendor locked-in things like firebase (GCM?) or apple's equivalent notification things: they are using protocols which are more lightweight than HTTP (last I checked they use XMPP same as whatsapp?), can punch through firewalls fairly reliably, batch notifications from many apps together so as to not wake devices too regularly, etc etc...
Having every app keep their own individual connections open to receive live events from their own APIs sucks battery in general, regardless of SSE or websockets being used.
I also used SSE 6 or so years ago and had the same issue with our load balancer; a bit hacky, but what I did was set a timer that would periodically send a single colon character (which is the comment delimiter, IIRC) to the client. Is that what you meant by “ping”?
> The Secure Enclave randomizes the data volume’s encryption keys on every reboot and does not persist these random keys, ensuring that data written to the data volume cannot be retained across reboot. In other words, there is an enforceable guarantee that the data volume is cryptographically erased every time the PCC node’s Secure Enclave Processor reboots.
Intel and AMD server processors can use DRTM late launch for fast attested restart, https://www.semanticscholar.org/paper/An-Execution-Infrastru.... If future Apple Silicon processors can support late launch, then PCC nodes can reduce intermingling of data from multiple customer transactions.
> The server can't afford
What reboot frequency is affordable for PCC nodes?
I wonder what impact this will have on Mozilla's OpenSSH configuration guide[0], which currently specifies `chacha20-poly1305@openssh.com` as its primary cipher. Should that be dropped to rely solely on AES ciphers?