The share button I have uses the navigator.share API if it's available, and otherwise tries to fall back to navigator.clipboard. Unfortunately I don't think I did adequate testing of the edge cases where you have the Clipboard API but not the Share API, so I'm pretty sure _that_ path is broken.
I really just put the URL at the end of the page with a `user-select: all;` to make it easy to copy _after_ you've read the content. It's also rendered server-side, so even if I used UTM or some other tracking things, the query params automatically wouldn't be included in the server-rendered share link, just the permalink to the post.
But to the original author's point, I also find myself often just copying from the address bar and then manually deleting a bunch of the garbage at the end of URLs. Maybe that's why I thought it would be nice to have a simple "copy this link" that still works when JS is disabled.
This is absolutely an option! It gets a little tricky to avoid shifting content around, though: it's pretty typical to load styles in the head but load JavaScript either at the end of the DOM or with the defer attribute, so the JavaScript would likely run after the user has already seen the layout, and layout shifts could be clunky.
Damn I didn't know this. I don't know how my search failed to turn this up, because I was literally googling how to accomplish it with media queries lol.
Yeah, based on the .gov zone export from today, the linked document is missing about 1,000 apex domains. Even the current-full.csv on the latest commit in GitHub is short about 1,000 apex domains.
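If you want to reproduce the count, something like this works -- a rough sketch, assuming the zone export is a plain zone file and that the CSV's first column is the domain name (file names are placeholders):

```
# Domains present in today's .gov zone export but missing from the CSV.
# gov-zone.txt and current-full.csv are placeholder file names.
awk '{print tolower($1)}' gov-zone.txt | sed 's/\.$//' | sort -u > zone-domains.txt
tail -n +2 current-full.csv | cut -d, -f1 | tr '[:upper:]' '[:lower:]' | sort -u > csv-domains.txt
comm -23 zone-domains.txt csv-domains.txt | wc -l
```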
I updated the post late last night to address the security implications of the Host header. Based on my understanding of the nginx documentation and some brief testing, I don't think path traversal via the Host header is possible -- nginx throws a 400 instead of a 502, which indicates the request isn't making it to the proxy_pass at all. I think the $host variable is basically guaranteed to at least match the server_name regex block by the time it reaches the proxy_pass -- so to tighten it up further, you could allow only alphanumeric characters in your server_name regex.
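If you want to poke at it yourself, a quick probe looks something like this (the tunnel domain is a placeholder, and I'm going from memory on the exact response):

```
# Probe with a bogus Host header containing traversal characters; nginx
# rejects it with a 400 before the request ever reaches proxy_pass.
curl -si -H 'Host: ../../../etc/passwd' https://tunnel.example.com/ | head -n 1
# expected: HTTP/1.1 400 Bad Request
```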
I just checked out your solution and also learned a new trick about ssh! I didn't know that setting the port to 0 causes dynamic allocation for the tunnel. It makes sense -- I knew about that port-0 behavior for typical Linux processes, but never thought to apply it to an ssh tunnel.
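For anyone else who didn't know the trick, it looks something like this (host and local port are placeholders):

```
# Ask sshd to pick a free remote port for the reverse tunnel by
# requesting port 0; the client reports which port was allocated.
ssh -N -R 0:localhost:3000 tunnel@tunnel.example.com
# Allocated port 49152 for remote forward to localhost:3000
```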
Ohh, so I just gave this a shot, and I think the trap runs when `.ssh/rc` exits, which is immediately -- right when my bash prompt shows up. But if I want to make it non-interactive (in a really hacky way), I can have my `.ssh/rc` just sleep forever if the domain is defined. Then I killed the ssh connection via a `kill` command on the client side, and it appropriately cleaned up the socket file in /tmp.
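Roughly what I mean, as a sketch (variable and path names are illustrative, not exactly what's in my config):

```
# ~/.ssh/rc -- sketch only; TUNNEL_DOMAIN would arrive via SetEnv/AcceptEnv.
if [ -n "$TUNNEL_DOMAIN" ]; then
  sock="/tmp/$TUNNEL_DOMAIN.sock"
  # Remove the forwarded unix socket when the session is torn down.
  trap 'rm -f "$sock"; exit' HUP TERM EXIT
  # Block until the ssh connection goes away so the trap has a chance to fire.
  sleep infinity &
  wait $!
fi
```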
I combined this infinite loop in `.ssh/rc` with `-T` and a simple "echo hello" command in my client function, and now it prints out the link to visit, hangs until I close it or it gets closed, and cleans itself up.
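The client side ends up looking roughly like this (domain, paths, and the local port are placeholders, and it assumes sshd is configured to accept the env var):

```
# Sketch of a client-side tunnel function; not the exact one from the post.
tunnel() {
  local name="${1:-dev$RANDOM}"
  echo "visit: https://$name.tunnel.example.com"
  # -T skips the pty; the throwaway remote command avoids an interactive
  # shell, while the blocking .ssh/rc keeps the connection (and the
  # reverse forward) open until Ctrl-C or a kill on this ssh process.
  ssh -T \
    -o "SetEnv TUNNEL_DOMAIN=$name" \
    -R "/tmp/$name.sock:localhost:3000" \
    tunnel@tunnel.example.com 'echo hello'
}
```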
This just took the level of hackishness to new heights and I love it.
I link to awesome-tunneling in my post :) I didn't know about that particular list until after I spent my night doing this.
I didn't know about headscale; that does seem pretty cool, but I think MagicDNS specifically would also introduce a behavior I didn't particularly want -- TLS certs being issued for my individual hosts, and thus showing up in cert transparency logs and getting scanned. Ultimately this is really only a problem in the first minutes or hours after setting up a cert, though.
Honestly I would probably recommend every other solution before I recommend my own. It was just fun to figure out and it works surprisingly well for what I wanted -- short-lived development tunnels on my own infra with my own domain, without leaking the address of the tunnel automatically.
Sounds pretty cool, I've done some similar things in the past using a VPN to proxy backwards into my home network (hello fellow k8s-at-home user). In this case I wanted to basically set up my one nginx config and never have to change the web server config again, while supporting arbitrary services in the future. I've never used haproxy before, but I wonder if there could be some room for improvement (read: not using unix domain sockets) by using a web server that can dynamically detect upstreams in a particular set of ports. E.g. if all my "tunnel" ports are on localhost:8000-9000, it can dynamically pick them up, as in the rough sketch below. I guess I still wouldn't know how to answer the "pick a name for the tunnel at runtime" problem, but it's definitely something worth exploring further!
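Just to make the idea concrete (the 8000-9000 range is hypothetical, and this is only the discovery half, not the naming half):

```
# List whatever is currently listening in a hypothetical 8000-9000
# "tunnel" range; a reverse proxy could poll something like this to
# build its upstream list. Linux-only, needs iproute2's ss.
ss -ltnH 'sport >= :8000 and sport <= :9000' | awk '{print $4}'
```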
If I was doing something that I intended to have running more than an hour or two at a time, I would 100% do something more like what you're describing haha.
Oooh, I hadn't considered HUP. I tried to use a cleanup script with a bash trap on, I think, INT and KILL, but it didn't seem to work correctly (KILL can't be trapped anyway, which probably didn't help). I had also never used the trap command before, though, so there was a good chance I was doing it wrong lol. I'll give this a shot!
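Something like this is what I'll try (the socket path is just a placeholder):

```
#!/usr/bin/env bash
# Minimal trap-on-HUP sketch: clean up the tunnel socket when the
# ssh session disappears. SIGKILL can never be trapped, so it's omitted.
sock="/tmp/example-tunnel.sock"
cleanup() { rm -f "$sock"; exit; }
trap cleanup HUP INT TERM
# Background sleep + wait so the trap fires as soon as a signal arrives,
# rather than after a foreground command finishes.
sleep infinity &
wait $!
```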
I hacked together nginx, ssh, and a little bit of bash to make a simple dev tunnel service on my own domain. I thought HN readers could appreciate (and probably roast) it.
Hacking all those things together feels empowering, like building a complex construct from simple things we're already used to. This article has a very "hacky" spirit, love it!