
With zram, I can just use zram-generator[0] and it does everything for me. I don't even need to set anything up, other than installing the systemd generator, which on some distros is installed by default. Is there anything equivalent for zswap? Otherwise, I'm not surprised most people are just using zram, even if it's sub-optimal.

[0]: https://crates.io/crates/zram-generator
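For reference, the whole configuration is a few lines in /etc/systemd/zram-generator.conf (values here are illustrative, not recommendations):

```ini
# /etc/systemd/zram-generator.conf -- illustrative values
[zram0]
zram-size = ram / 2
compression-algorithm = zstd
```

With no config file at all, the generator falls back to its built-in defaults.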


Zswap is enabled by default on Arch. It won't do anything without a backing disk swap, though.

Kernel arguments are the primary method: https://wiki.archlinux.org/title/Zswap#Using_kernel_boot_par...

Snag: I had issues getting it to use zstd at boot. Not sure if it's a bug or some peculiarity with Debian. I ended up compiling my own kernel for other reasons and was finally able to get zstd by default; otherwise I'd have had to add it to a startup script.
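For the kernel-parameter route, the relevant line in /etc/default/grub looks something like this (illustrative; keep whatever options you already have). On Debian, the zstd compression module may also need to be present in the initramfs for the parameter to take effect at boot, which might explain the issue above:

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet zswap.enabled=1 zswap.compressor=zstd zswap.max_pool_percent=20"
```

Then run update-grub and reboot.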


I had the same issue with LZ4. I found a thread about it on the Linux Mint Debian Edition forum and posted my fix there: https://forums.linuxmint.com/viewtopic.php?p=2767087#p276708....

In short: add the kernel modules and update GRUB as usual, then install sysfsutils and add the following line at the end of `/etc/sysfs.conf`:

  module/zswap/parameters/compressor = lz4
  # For zstd:
  #module/zswap/parameters/compressor = zstd

Perhaps some kernel change between Linux 6.8 and 6.12 caused the old approach to no longer work.

This should've been a bash script...

It's a handy tool, but it doesn't even give you a reasonable zram size by default and doesn't touch other things like page-cluster, so "I don't even need to set anything up" applies only if you don't mind it being quite far from optimal.

  echo 1 > /sys/module/zswap/parameters/enabled

It's in TFA.

enabling != configuring. Are you saying this is all that's necessary, assuming an existing swap device exists? That should be made clearer.

Edit: To be extra clear. When I was researching this, I ended up going with zram only because:

* It is the default for Fedora.

* zramctl gives me live statistics of used and compressed size.

* The zswap doc didn't help my confusion on how backing devices work (I guess they're any swapon'd device?)


It doesn't really need any config on most distros, no.

That said, if you want it to behave at its best when OOM, it does help to tweak vm.swappiness, vm.watermark_scale_factor, vm.min_free_kbytes, vm.page-cluster and a couple of other parameters.

See e.g.

https://makedebianfunagainandlearnhowtodoothercoolstufftoo.c...

https://documentation.suse.com/sles/15-SP7/html/SLES-all/cha...
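For example, a sysctl.d fragment might look like this (values are illustrative starting points only, not recommendations; vm.min_free_kbytes in particular depends on RAM size, so it is left out here):

```
# /etc/sysctl.d/99-swap-tuning.conf -- illustrative values, tune per workload
vm.swappiness = 180              # favor moving cold pages to the compressed pool
vm.watermark_scale_factor = 125  # start background reclaim earlier
vm.page-cluster = 0              # don't read ahead extra pages on swap-in
```

Apply with `sysctl --system` or reboot.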

I don't know of any good statistics script for zswap; I use the script below as a custom waybar module:

  #!/bin/bash
  # zswap stats for a waybar custom module; reads debugfs, so it needs
  # root (or relaxed debugfs permissions)
  stored_pages="$(cat /sys/kernel/debug/zswap/stored_pages)"
  pool_total_size="$(cat /sys/kernel/debug/zswap/pool_total_size)"
  compressed_size_mib="$((pool_total_size / 1024 / 1024))"
  compressed_size_gib="$((pool_total_size / 1024 / 1024 / 1024))"
  compressed_size_mib_remainder="$((compressed_size_mib * 10 / 1024 - compressed_size_gib * 10))"
  uncompressed_size="$((stored_pages * 4096))"  # assumes 4 KiB pages
  uncompressed_size_mib="$((uncompressed_size / 1024 / 1024))"
  uncompressed_size_gib="$((uncompressed_size / 1024 / 1024 / 1024))"
  uncompressed_size_mib_remainder="$((uncompressed_size_mib * 10 / 1024 - uncompressed_size_gib * 10))"
  ratio="$((100 * uncompressed_size / (pool_total_size + 1)))"  # +1 avoids division by zero
  echo "${compressed_size_gib}.${compressed_size_mib_remainder}G / ${uncompressed_size_gib}.${uncompressed_size_mib_remainder}G (${ratio}%)"

Fedora and its kernels are built with GCC's _FORTIFY_SOURCE and I've seen modules crash for out of bounds reads.


_FORTIFY_SOURCE is much narrower in scope (it closes fewer classes of vulnerability) than -fbounds-safety.


Don't forget GE Aerospace. It gets a bit weirder too since you have joint ventures like CFM and Engine Alliance.


Good point. Isn’t there also Safran?


Yes. Their engines are mostly for military applications, though, so they are less well known to the general public. Other than that, they make up half of CFM International, but again they are not very visible.


Which Comac? I thought they all used GE (CFM for Comac 919) or Russian/Chinese sourced engines.


That's the OP's point: COMAC is using CFM LEAP-1C engines on the C919.

To be fair, they have made the effort to build the CJ-1000A engine, which is in on-wing testing, should the tangerine fellow cut them off. But it's Plan B at best.


True, it can help Microsoft SQL Server as well. In SQL Server 2022, they finally added Strict Encryption. I'm glad to see more databases removing these strange STARTTLS-like features.


I think people forget that some of this software may be relatively fast. The problem is, most corporate environments are loaded up with EDRs and other strange anti-malware software that impede quick startup or speedy library calls. I've seen a misconfigured Forcepoint EDR rule block a window for 5 seconds on copy and paste from Chrome to Word.

Another example: it takes ~2 seconds to run git on my work machine:

    (Measure-Command { git status | Out-Null }).TotalSeconds
while the same command on my personal Windows 11 virtual machine is near-instant: ~0.1 seconds. Still slower than Linux, but nowhere near as bad as my work machine.


Just be mindful that any certs you issue this way will be public information[1], so make sure the domain names don't give away any interesting facts about your infrastructure or future product ideas. I did this at my last job as well, and I can still see them renewing the certs, including an unfortunate wildcard cert (that one wasn't me).

[1] https://crt.sh/


I use https://github.com/FiloSottile/mkcert for my internal stuff.


Just use wildcard certs and internal subdomains remain internal information.


A fun tale about wildcard certificates for internal subdomains:

The browser will gladly reuse an HTTP/2 connection to a resolved IP address. If you have many subdomains pointing to a single ingress / reverse proxy that returns the same certificate for different Host headers, you can very well end up in a situation where traffic gets mixed up between services. To add to that, debugging it becomes kind of wild, as the browser will keep reusing connections between browser windows (and maybe even between different Chromium browsers).

I might be messing up technical details, as it's been a long time since I've debugged some grpc Kubernetes mess. All I wanted to say is, that having an exact certificate instead of a wildcard is also a good way to ensure your traffic goes to the correct place internally.
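As a hypothetical nginx sketch of the exact-cert approach (host names, paths, and backends are made up): with one server block and one non-wildcard certificate per internal host, the browser cannot coalesce HTTP/2 connections across hosts, because the reused connection's certificate would not cover the other name.

```nginx
# Illustrative only: one exact cert per host. Chrome only coalesces
# HTTP/2 connections onto hosts covered by the connection's certificate,
# so distinct certs prevent cross-host connection reuse.
server {
    listen 443 ssl http2;
    server_name api.something.example.com;
    ssl_certificate     /etc/nginx/certs/api.crt;
    ssl_certificate_key /etc/nginx/certs/api.key;
    location / { proxy_pass http://10.0.0.11:8080; }
}
server {
    listen 443 ssl http2;
    server_name web.something.example.com;
    ssl_certificate     /etc/nginx/certs/web.crt;
    ssl_certificate_key /etc/nginx/certs/web.key;
    location / { proxy_pass http://10.0.0.12:8080; }
}
```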


Sounds like you need better reverse proxies...? Making your site traffic RELY on the fact that you're using different certificates for different hosts sounds fragile as hell, and it's just setting yourself up for even more pain in the future.


It was the latest nginx at the time. I actually found a rather obscure issue on Github that touches on this problem, for those who are curious:

https://github.com/kubernetes/ingress-nginx/issues/1681#issu...

> We discovered a related issue where we have multiple ssl-passthrough upstreams that only use different hostnames. [...] nginx-ingress does not inspect the connection after the initial handshake - no matter if the HOST changes.

That was 5-ish years ago though. I hope there are better ways than the cert hack now.


That's a misunderstanding in your use of this ingress-controller "ssl-passthrough" feature.

> This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.

> SSL Passthrough leverages SNI and reads the virtual domain from the TLS negotiation

So if you want multiple subdomains handled by the same ip address and using the same wildcard TLS cert, and chrome re-uses the connection for a different subdomain, nginx needs to handle/parse the http, and http-proxy to the backends. In this ssl-passthrough mode it can only look at the SNI host in the initial TLS handshake, and that's it, it can't look at the contents of the traffic. This is a limitation of http/tls/tcp, not of nginx.


Thank you very much for such a clear explanation of what's happening. Yeah, I sensed it wasn't a limitation of nginx per se: it was asked not to do SSL termination, so of course it can't extract the Host header from the encrypted bytes. Since I needed gRPC through ASP.NET, it was Kestrel's requirement to do its own SSL termination that forced me into ssl-passthrough, which probably comes from a whole different can of worms.


> it is a kestrel requirement to do ssl termination

Couldn't you just pass it X-Forwarded-Proto like any other web server? Or use a separate self-signed key between nginx and Kestrel instead?
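The second option would look roughly like this nginx sketch (hypothetical names and ports, not from the thread): nginx terminates the public TLS with an exact cert, then re-encrypts to Kestrel over an internal self-signed cert, so no ssl-passthrough is needed and nginx can route per Host header.

```nginx
# Illustrative sketch: TLS termination at nginx, re-encryption to Kestrel.
server {
    listen 443 ssl http2;
    server_name grpc.something.example.com;
    ssl_certificate     /etc/nginx/certs/grpc.crt;
    ssl_certificate_key /etc/nginx/certs/grpc.key;

    location / {
        grpc_set_header X-Forwarded-Proto $scheme;
        grpc_pass grpcs://127.0.0.1:5001;  # Kestrel with its own self-signed cert
    }
}
```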


There is definitely that. There is also some sort of strange bug with Chromium based browsers where you can get a tab to entirely fail making a certain connection. It will not even realize it is not connecting properly. That tab will be broken for that website until you close that tab and open a new one to navigate to that page.

If you close that tab and bring it back with command+shift+t, it still will fail to make that connection.

I noticed sometimes it responds to Close Idle Sockets and Flush Socket Pools in chrome://net-internals/#sockets.

I believe this regression came with Chrome 40, which brought HTTP/2 support. Chrome 38 never had this issue.


There's a larger risk that if someone breaches a system with a wildcard cert, then you can end up with them being able to impersonate _every_ part of your domain, not just the one application.


I issue a wildcard cert for *.something.example.com.

All subdomains which are meant for public consumption are at the first level, like www.example.com or blog.example.com, and the ones I use internally (or even privately accessible on the internet, like xmpp.something.example.com) are not up for discovery, as no public records exist.

Everything at *.something.example.com, if it is supposed to be privately accessible on the internet, is resolved by a custom DNS server which does not respond to `ANY`-requests and logs every request. You'd need to know which subdomains exist.

something.example.com has an `NS`-record entry with the domain name which points to the IP of that custom DNS server (ns.example.com).

The intranet also has a custom DNS server which then serves the IPs of the subdomains which are only meant for internal consumption.
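As a sketch, the delegation described above boils down to two records in the public example.com zone (names and IPs illustrative):

```
; public example.com zone: delegate something.example.com to the
; custom DNS server; no records for individual subdomains exist here
something.example.com.  IN NS  ns.example.com.
ns.example.com.         IN A   203.0.113.10
```

Everything below something.example.com is then answered only by ns.example.com, which can log queries and refuse zone enumeration.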


This is the DNS setup I’d have in mind as well.

Regarding the certificates, if you don’t want to set up stuff on clients manually, the only drawback is the use of a wildcard certificate (which when compromised can be used to hijack everything under something.example.com).

An intermediate CA with name constraints (it can only sign certificates for names under something.example.com) sounds like a better solution if you deem the wildcard certificate too risky. Not sure which CA can issue one (Let's Encrypt is probably out) or how well supported it is.
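For illustration, the name constraint would be expressed in an openssl.cnf extension section roughly like this (section name is arbitrary):

```
[ v3_constrained_ca ]
basicConstraints = critical, CA:true, pathlen:0
keyUsage        = critical, keyCertSign, cRLSign
nameConstraints = critical, permitted;DNS:.something.example.com
```

Browser support for nameConstraints has improved over the years, but publicly trusted CAs generally won't issue such an intermediate, so in practice this means running your own private CA.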


I'm "ok" with that risk. It's less risky than other solutions, and there's also the issue that hijacked.something.example.com would need to be resolved by the internal DNS server.

All of this would most likely require an inside job with considerable criminal intent. At that level, you'd probably have other attack vectors to worry about anyway.


This is also my thinking.. if someone compromises your VM that is responsible for retrieving wildcard certs from let's encrypt, then you're probably busted anyway. Such a machine would usually sit at the center of infrastructure, with limited need to be connected to from other machines.


Probably most people would deem the risk negligible, but it's still worth mentioning, since you should evaluate it for yourself. Regarding the central machine: the certificate must not only be generated or fetched (which, as you said, will probably happen “at the center”) but also deployed to the individual services. If you don't use a central gateway terminating TLS early, the certificate will live on many machines, not just “at the center.”


You are absolutely right. And deployment itself can open up additional vulnerabilities and holes. But there are also many ways to make deployment quite robust (e.g. upload via push to a deploy server, distribute from there). ... and as it happens, I've written a small bash script that helps distribute SSL certificates from a centrally managed "deploy" server 8) [1].

[1]: https://github.com/Sieboldianus/ssl_get


It's the opposite: there is a risk, but not a larger one. Environment traversal is easier through a certificate transparency log; there is almost zero work to do. With a wildcard compromise, the environment is not immediately visible. It's much safer to use wildcard certs for internal hosts.


Environment visibility is easy to get. If you pwn a box which has foo.internal, you can now impersonate foo.internal. If you pwn a box which has *.internal, you can now impersonate super-secret.internal and everything else, and now you're a DNS change away from MITM across an entire estate.

Security by obscurity while making the actual security of endpoints weaker is not an argument in favour of wildcards...


Can't you have a limited wildcard?

Something like *.for-testing-only.company.com?


Yes, but then you are putting more information into the publicly logged certificate. So it is a tradeoff between the scope of the certificate and the data leak.

I guess you could use a pattern like {human name}.{random}.internal, but then you lose memorability.


I've considered building tools to manage decoy certificates, like it would register mail.example.com if you didn't have a mail server, but I couldn't justify polluting the cert transparency logs.


Made up problem, that approach is fine.


I wish there were a way to remove public information such as this, just like historical website ownership records. It may be interesting for research purposes, but there is so much stuff in public records I don't want everyone to have access to. Yes, one should have thought about that before creating public records, but you may not be aware of all the ramifications of, say, creating an SSL cert with Let's Encrypt or registering a random domain name without privacy extensions.


Isn't this a case of shrinkwrap contracts? Use (viewing) is acceptance? https://en.wikipedia.org/wiki/Shrinkwrap_(contract_law)


Except there you were shown the agreement and had to click or whatever.

For event tickets, you are not even made aware there is an “agreement”.


Honestly? Given I've seen crashes and printk messages from AMDGPU with words like "General Protection Fault," I'd say memory safety is probably the most important thing missing in these GPU drivers.


It's probably because FIPS 140-2 doesn't list it. I know that on machines booted with fips=1 and a FIPS-certified OpenSSL etc., OpenSSH won't accept ed25519 keys for key auth.

