This release includes most of the kTLS and NUMA work that I talked about at EuroBSDCon in 2019, including hw kTLS offload support for some NICs. This allows Netflix to serve over 350Gb/s of TLS encrypted video streams from a single 32-core AMD based server (at about 50% CPU).
And one more thing... A lot of the core NUMA work, especially in the VM system, was done by Jeff Roberson, not me. I did a lot of hacks and Jeff turned them into production-quality code. Without that work, my network siloing would have been useless. More people will benefit from the core work, since it's not just geared to a web workload.
And we have John Baldwin to thank for upstreaming the Netflix kTLS.
One thing I haven't quite understood is why Netflix even bothers to encrypt the video streams. It seems like a waste of effort, since Netflix isn't exactly serving content that would qualify as "secret". There's not even any porn on there, so it's not the biggest concern if someone were to find out what some people were watching. This data is being collected anyway via endpoint technologies like Samsung's Automated Content Recognition (ACR): https://www.samsung.com/us/business/samsungads/resources/tv-...
It seems like encrypting the video stream traffic at this kind of scale requires a substantial engineering effort and special hardware.
Encryption also means that shared downstream caches at ISPs can't do anything to reduce their bandwidth consumption. The only option is to contact Netflix and physically install one of their edge caching boxes.
Content tampering protection could be obtained by simply computing content hashes offline once, which the clients could verify.
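A minimal sketch of that offline-hash idea in Python (the published hash and the file contents here are made up for illustration): the hash is computed once server-side, published, and the client recomputes it over the bytes it received.

```python
import hashlib

def verify_content(data: bytes, expected_sha256_hex: str) -> bool:
    """Recompute the content hash client-side and compare to the published value."""
    return hashlib.sha256(data).hexdigest() == expected_sha256_hex

# Toy example: a hash published once for the byte string b"hello".
published = "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
print(verify_content(b"hello", published))  # True
```

Note this only gives integrity (tamper detection), not confidentiality; an eavesdropper can still see exactly what was downloaded, which is the other half of the thread's debate.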
> There's not even any porn on there, so it's not the biggest concern if someone were to find out what some people were watching.
True there's no out-and-out porn, but there is content that's close to the line, and beyond that there are themes that could be indicators that a viewer is gay, or in some other demographic that he might want to keep private.
Target can tell if you're pregnant based on your shopping; certainly there's a lot that can be inferred from the movies you watch.
> Target can tell if you're pregnant based on your shopping;
I am very skeptical about that story; it sounds too good/scary to be true, and there is only one reported incident, without proper sources. Sounds made up to me.
"This helps protect member privacy, particularly when the network is insecure — ensuring that our members are safe from eavesdropping by anyone who might want to record their viewing habits." from https://netflixtechblog.com/protecting-netflix-viewing-priva...
The parent mentioned "This data is being collected anyway via endpoint technologies like Samsung's Automated Content Recognition (ACR)". If that's true then the encryption isn't really helping.
Samsung was one of the companies caught using their apps to scrape data out of the filesystem (config files and logs from other apps, location from camera roll, etc) and using it to bypass permissions you didn't want to give them.
They are completely terrible on privacy, lol, the answer here is "if you care about privacy don't use a samsung". Or more generally "don't use android".
I'm not really sure what you are getting at. Regardless of what Samsung is doing, I'm confused by the argument that encryption is useless because Samsung might take steps to work around it when not using a Samsung device is a viable option.
Samsung has below-root-level access on their phones, their apps are fundamentally aggressive towards your privacy (as repeatedly demonstrated in practice) and impossible to dig out without a complete image replacement. Maybe not even then.
If you care about privacy, you don't buy Samsung phones (or other products like TVs). They are the tip of the spear on data collection.
They're required to let you choose whether to opt in to ACR in Europe because of the GDPR. While the prompt is terribly vague and designed to encourage "just hit yes" behaviour, I have a q60r and the setup wizard at least presented a prompt I could opt out of.
Also, while HN likes to raise the spectre of TVs connecting to open wifi or shipping with 5G radios, at the moment there is no evidence for either. Users could always use a trusted device to play back Netflix rather than the TV app, and leave the TV without internet access.
Is this actually in reach for realistic attackers?
Like let's say you're a network admin of a college with conservative religious views, and you want to see if anyone in the dorms is watching "immoral" content. You probably can just intercept an entire unencrypted session and replay it on your machine and see what it was. But you don't really have the funding or access to expertise to develop a side channel attack yourself, and there are no off-the-shelf devices that will do this for you, are there?
Encryption is likely the difference between your management saying "Show me what the kids are watching" and "This isn't worth assigning our network admin to spend half a year on effectively a cryptography research problem."
(Incidentally, encryption may also be what allows a sympathetic network admin to refuse an order from their management, which is also worth considering in your threat model.)
I think it's true that if you had either the resources of one of the richest handful of countries in the world or access to some talented grad students etc., you could do it. But if you're even a non-rich country (like one of the many small countries with moralistic governments that censor the internet) it seems harder, and if the goal is spying on what people watch, it's unlikely that people talented enough to do it will find this a problem they're happy to volunteer their time to solve.
(This is a genuine question - the attack might be much easier than I think!)
Are you attacking him because he's using TLS? Will you ever be satisfied?
TLS fixes a whole lot more than just privacy, it's also authenticating the remote end. Are we really suggesting dumping something that is trivially accelerated in hardware to do some homebrew crypto crap just for the sake of a forum thread?
Netflix's solution is fine, and the concentration of interest in TLS means it only gets cheaper over time to build Netflix-like configurations. That's especially great since we've spent the past decade or more trying to convince the entire industry that this configuration is also best practice.
I'm just pointing out that the privacy justification is nonsense. Protecting the integrity of connections? That's important. Preventing web browsers from throwing a fit about "insecure connections"? That's also important. But privacy isn't the issue here, and I've talked to enough people at Netflix to know that they know that too.
People, before you downvote, check out the username.
But yeah, you’re right. You could glean a lot of information from nothing but a collection of movies’ exact runtimes, as visible from the network stream. Although that wouldn’t tell you much about a single movie, given enough viewings you could make pretty good guesses about which movies someone is watching.
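As a toy sketch of that inference (all titles and runtimes are invented), assuming an observer can only see how long the encrypted session lasted:

```python
# Even with the payload encrypted, an observer who knows each title's
# exact runtime can match an observed session length against a catalog.
CATALOG = {
    "Movie A": 5400,   # runtime in seconds
    "Movie B": 5430,
    "Movie C": 7200,
}

def best_guess(observed_seconds: int) -> str:
    """Return the catalog title whose runtime is closest to the observed session."""
    return min(CATALOG, key=lambda t: abs(CATALOG[t] - observed_seconds))

print(best_guess(7190))  # "Movie C"
```

With runtimes only seconds apart (Movie A vs. Movie B above), a single viewing is ambiguous, which is why repeated observations sharpen the guess.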
Such as an end user device asking for 4k, but a middlebox somewhere between the customer and Netflix inspecting the http content and blocking it (or modifying the client request in flight), because "only 480p allowed here" or something like that.
Who's watching what is sensitive data, and although it may be possible to extract that from packet sizes and streaming duration, it's a lot easier to extract it from HTTP paths.
But, beyond that, the battle to use http when it makes sense has been lost. Modern platforms strongly encourage https and discourage http.
I'd imagine the poor behaviour of ISPs (especially in the US) with regards to privacy is the reason. Deriving reported speeds for `fast.com` from the video sources was a masterstroke, too!
Sure, things like ACR exist, it doesn't mean we should give up. Lots of people think "privacy mode" should be default for everything and isn't just for when you're watching porn.
Encryption for DRM needs to happen only once per media item; the encrypted file can then be saved to storage and served infinitely many times. So the per-connection TLS work is probably about something else.
I'm genuinely curious: why did you implement it in FreeBSD rather than Linux? Isn't that kind of offload something that already exists in Linux? Or is there something in the way Linux is implemented/designed that makes it impractical or impossible to implement as efficiently? Or is Linux somehow not secure enough for your use case? Or not something else enough?
I'm asking because I'm always interested to know the practical limitations of Linux, and also because, while I think it's nice that Linux has some serious competitors, I cannot help being a bit sad that contributions like this will only benefit the few people who use FreeBSD (even if, to be fair, it seems it will benefit all Netflix customers at least, which is quite a few people ;) )
>I cannot help being a bit sad that contribution like this will only benefit the few people that use FreeBSD
The same could be said the other way around. The thing is, Linux can implement it and even take the code; the other way around is rather difficult... so much for freedom.
And not sure what you mean with "few" people, the few Playstation 3/4/5 users? Juniper? OpnSense/Pfsense installations? Or the few Truenas/Freenas users?
I still expect the number to be pretty small compared to the number of Linux users, no matter how you define user (if playstation users count as freebsd users, then I guess nearly everyone is a Linux user :) )
The point about the licence is a good one. Is the kind of code drewg123 is referring to regularly copied from BSD to Linux? (I understand it should be possible in this direction.)
Thanks for the video, looks like exactly what I was looking for, I will watch it later.
Most of the changes Netflix is making aren't really directly copyable. A lot of it is related to or tied to the virtual memory model, and Linux's model is different.
However, publicizing the ideas and the results is valuable to other operating systems even without the code. If you're building something on Linux (or Windows? or ?) that could benefit from kTLS (including NIC accelerated kTLS), knowing it can work and having a roadmap is great. You would still need to do the plumbing, but you could skip a lot of the design. Being able to look at the code is nice too.
And, if you're willing to try FreeBSD + nginx, you could jump directly to that. If you're deploying something with similar performance characteristics to a CDN box, it's probably a very narrow application, and doesn't need to run on the same stack as the rest of your fleet.
Haha exactly :) my own definition of a user of an operating system is someone who knows what operating system they are using. So a playstation user and a Netflix user would not count as FreeBSD users (they are not using FreeBSD, they are using a playstation). Netflix engineers are the FreeBSD users. Pfsense and freenas users certainly count ;)
At work I've only ever used various distributions of Linux, but at home I've started using OpenBSD and FreeBSD. I'm currently running FreeBSD 12 on my long in the tooth HP Microserver N54L, with a ZFS mirror + NFS/Samba serving up music/photos to my home network. It's been rock solid for the past few years, and I love in particular how 'boring' it is. It has great tech like ZFS but it's very predictable.
As a nice winter Sunday activity, I installed the beta on my X230. Now everything seems to work quite nicely; I have encrypted ZFS, Wayland, Sway, and all my development tools installed and working. It took about two hours, maybe a bit less, to get here. My first time running FreeBSD on a laptop; previously I mostly went with OpenBSD or Arch.
The installation was much easier compared to Arch; almost all the packages I wanted I just got with `pkg`. Using sway and wayland, the experience is really fast and snappy even on this old laptop. At least compared to OpenBSD and xorg, the desktop experience here just flies.
up.bsd.lv is a proof-of-concept of binary updates using freebsd-update(8) for FreeBSD 13.0-CURRENT and 12-STABLE, to facilitate exhaustive testing of FreeBSD, the bhyve hypervisor, and OpenZFS 2.0, and to help elevate ARM64 to Tier 1 status. Updates are based on the SVN revisions of official FreeBSD Release Engineering weekly snapshots.
Features:
- Kernels are built with the GENERIC-NODEBUG configuration file
- /etc/freebsd-update.conf is modified to use up.bsd.lv
- /usr/sbin/freebsd-update is modified to "xargs" parallel phttpget (Thank you Allan Jude)
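For context, pointing freebsd-update(8) at an alternate server is a one-line change; a hypothetical snippet (the exact hostname is an assumption on my part, check the project's instructions):

```
# /etc/freebsd-update.conf -- fetch updates from the proof-of-concept
# server instead of update.FreeBSD.org
ServerName up.bsd.lv
```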
I tried adding the wireguard module to my loader.conf this morning and my VM bombed horribly on reboot. I'm not sure if it's some ordering issue or something else, but use some caution when doing this.
There's not a ton of reason to load it in loader.conf (which is the bootloader configuration), by the way. The recommended location would be rc.conf kld_list, which is after userspace starts. loader.conf is only really needed for hardware drivers needed to load the operating system from storage devices (and most such drivers are already built in to the kernel).
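For example, a sketch of the rc.conf approach (assuming `if_wg` is the wireguard module name on your release):

```
# /etc/rc.conf -- load the module after userspace starts,
# instead of from the bootloader via loader.conf
kld_list="if_wg"
```

This avoids the bootloader entirely, so a broken module can't prevent the kernel from coming up.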
Is FreeBSD still used to as great an extent in US Government / Military / Intelligence as it used to be?
There seemed to be some talk of Linux I noticed in past years, and I've been thinking about trying it out again, but I'd feel better if I knew that it was still well-used.
I'd hoped that aarch64 (ARM) would now be a tier 1 architecture in 13 but that's not to be the case (https://www.freebsd.org/platforms/) and the FreeBSD wiki page on what's required to make aarch64 a Tier 1 architecture was flagged as being out of date ~18 months ago (https://wiki.freebsd.org/ARMTier1).
For info, a key thing about Tier 1 architectures is that "Binary updates and source patches for Security Advisories and Errata Notices will be provided for supported releases" (https://docs.freebsd.org/en_US.ISO8859-1/articles/committers...), which would make maintaining ARM-based SBCs much easier (no need for setting up cross-compilation toolchains or for very slow compilation on the SBCs) and could make FreeBSD an even better OS for IoT stuff.
It is enormously disappointing to have to wait yet another cycle for officially supported updates on aarch64. For a relatively well funded project, FreeBSD should be able to meet this kind of goal and they should be asking what went wrong here and how they can improve.
Probably not enough developers motivated enough to work on aarch64. Maybe if Nvidia and/or another Arm64 hardware vendor were a sponsor, things would be different.
I set up FreeBSD on my rpi 4 a while back and had to jump through some hoops to get it to recognize all the available RAM [1]; does anyone know if this version will do that out of the box?
I wish FreeBSD booted faster. I recall it took nearly a minute from turning my machine on (with an SSD and a recent Intel i5 processor) before I was prompted for my login credentials. It's little things like that which make the system feel a bit dated, at least to someone like me who is an outsider to FreeBSD but has used Linux for a decade. I also wished FreeBSD needed less configuration to use as a desktop (see https://www.c0ffee.net/blog/freebsd-on-a-laptop/ for what I mean). Also, does Zoom work on FreeBSD?
I made some significant speedups a couple years ago, but I'm sure there's still room for further improvements, especially if you have hardware which I didn't test with.
One key impetus for Linux moving to systemd was faster boot time, parallelizing init tasks. Is FreeBSD looking at moving beyond traditional rc script boot?
There's no need to abandon rc scripts to get parallelism. It looks like there's some work in progress on this [1]. From what I can tell, the change to rcorder to generate parallel start info is there in 13.0, but the change in /etc/rc to request it and to use it is not. You would need to patch that in manually for now.
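The general idea behind generating parallel start info from rcorder-style dependency metadata can be sketched in Python (service names and the "waves" representation here are illustrative, not rcorder's actual output format):

```python
# Group services into waves by their REQUIRE-style dependencies;
# everything within one wave can be started in parallel.
DEPS = {
    "netif":   [],
    "routing": ["netif"],
    "sshd":    ["netif", "routing"],
    "ntpd":    ["netif"],
}

def start_waves(deps):
    """Return lists of services; each list has no unmet dependencies."""
    remaining, waves = dict(deps), []
    while remaining:
        wave = sorted(s for s, d in remaining.items()
                      if all(dep not in remaining for dep in d))
        if not wave:
            raise ValueError("dependency cycle")
        waves.append(wave)
        for s in wave:
            del remaining[s]
    return waves

print(start_waves(DEPS))  # [['netif'], ['ntpd', 'routing'], ['sshd']]
```

The total ordering rcorder already produces is just a flattening of these waves, which is why parallelism can be layered on without abandoning rc scripts.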
As somebody who fled Linux to FreeBSD in order to avoid systemd, the prospect of that happening again makes me a little sick to my stomach. Running out of places to go that aren't controlled by people who are openly hostile to POSIX standards.
Wouldn’t have to be systemd, macOS has launchd for init, FreeBSD could come up with its own thing. This is a good talk I saw recently (by a FreeBSD contributor) on why these sorts of systems have become more common, and why they seem so sweeping in scope. https://youtu.be/o_AIw9bGogo
(In fact systemd and launchd are so tailored to their respective operating systems that FreeBSD likely would have to do their own init replacement if they cared to.)
That is not a good talk unless you already advocate for abandoning standards, and celebrate Linux's growing power to dictate how the shrinking number of remaining OSes design things. You might easily miss it if you're already on board with systemd, but in his point-by-point address of systemd's shortcomings he repeatedly handwaves and redirects.
Systemd is a mess, which is an amazing accomplishment considering the fact that it is supposed to be this great unifying/simplifying layer. So awful that the DoD held out for the longest time in formalizing any kind of baseline security audit procedures that address it. Compare that to something like Solaris SMF. I've never had a problem with the rc system that made me blame the underlying design, but if I did - systemd would be at the bottom of a very long list of my potential solutions, right under 'Set the machine on fire and pickup a copy of "Industrial Society and Its Future"'.
I really like the guy running the OpenZFS project, but it is painfully obvious that it is only going to increasingly cater to Linux - and coincidentally a lot of the really annoying aspects of the project are a result of that. For example: that Rube Goldberg build system, with massive amounts of code duplication... that wouldn't be there if not for autotools and gnu export symbol games.
lol, the Internet Explorer school of thought. Sure, form a work group for an open init standard with systemd as the basis - chaired by people both inside and outside the systemd project. But before that, rent a home in Malibu and wire it for reality TV, because that would be the most entertaining WG ever.
One of these init systems is run by a guy that is openly hostile to POSIX, the other can be found on several different operating systems and has been around forever. hmmm...
Only some Linux distros use SysV init, and they're not any of the big players (Fedora/RH was the distro that created systemd); Ubuntu adopted systemd as the default 5 years ago, SUSE did the same 6 years ago, Debian 5 years ago. Almost everything else is a hobbyist OS (I've never seen Gentoo or Slackware in production anywhere other than at small companies where the server was on the critical path for maybe several tens of thousands of dollars at best, which is peanuts on a global scale) or a niche OS (Alpine is used 95% as a container distro, and who freaking cares about the init system in a container? :-) ). Basically anyone who "puts their money where their mouth is" uses systemd.
SysV init was never adopted by BSD.
I think Solaris used to use it but it's been using Service Management Facility for the past 16 years.
HP/UX and AIX use SysV init, I think, but they're so vanishingly rare that 99% of developers and sysadmins out there could have the most successful careers in history and both HP/UX and AIX could be nuked from history for all they care.
And my point was that nobody even bothered to standardize SysV officially, though it was supposed to be a standard. That's how little everyone cared.
Poettering might be annoying, but he cares, he's opinionated and is actually knowledgeable. I've been using his software for a while and it's quite solid and his vision makes sense. POSIX isn't enough for a modern operating system.
So what are you arguing? Because I see a list of disparate platforms sharing the same init system, and I see the homogeneity that is systemd/linux. What is that a counterpoint to? Or is this a celebration of network effect?
> POSIX isn't enough for a modern operating system.
So there should be no standard? I can think of only one time I ran into a shortcoming that can be blamed on POSIX, and a few more where the OS just did a bad job in implementing the standard. Can you imagine how awful things would be right now if the DoD hadn't forced the concepts of interface commonality and multi-source procurement? You'd likely be reading this on a Honeywell glass TTY hooked up to an IBM timeshare. Because without standards the network effect acts as an insurmountable barrier to entry, and incumbents fully control the height of that barrier with needless complexity and arbitrary breaking changes. Hell, not that long ago there was talk about wielding GPL export symbols as a weapon to punish nvidia... I'm no fan of nvidia, but I like the idea of an API inspired by spite a lot less. Also, in a world where your vision prevailed, there would be no AMD - because Intel already set the industry "standard", so why worry about second-source availability?
Are you intentionally missing that the point is the fact that any one of them can use the portable init system that isn't systemd?
> No, they should turn systemd into a real standard. ECMAinitd :-p
Well, systemd can only work on Linux, by design. So every standards-compliant init system would require that it package a Linux kernel. That is not only moronic but, with history as our guide, would directly result in never-patched, network-active embedded software.
Honestly, what do we need POSIX for in 2020? As discussed in the talk, it traces to the age of the Unix wars, when there was a panoply of processors and Unix variants. POSIX is great if you’re concerned with recompiling some C so it works on SPARC and Alpha and POWER, on Solaris, Tru64, and AIX. That world is gone.
Yes, I’m for abandoning pointless standards. There are concrete benefits to systemd - faster boot, comprehensive service management, power savings, memory savings, a consistent interface to changing system state. What is the benefit to sticking with an rc script architecture designed to run a handful of processes on a pdp?
You've never written portable software, have you? You can't write portable software without a common interface. Have you ever looked at what autotools vomits out? You'll see shell scripts containing the likes of "echo $1 | sed 's/^x//'" in order to just get things to where they have a chance of sharing enough commonality to compile. Imagine how much worse it would be without POSIX. Nothing would run on anything that the developers didn't account for, this includes OS (plus version), environment, and hardware architecture. That is the world you are asking for, not the one we presently live in.
> What is the benefit to sticking with an rc script architecture designed to run a handful of processes on a pdp?
Freedom, for not only the end user but all the way up to distro packager. Systemd is designed to be non-portable, and force a network-effect pressure. I was similarly suspicious about WSL designing toward a linux API instead of POSIX... an effort to reduce the potential for alternatives emerging in the future, where your choices are windows or linux.
> That is the world you are asking for, not the one we presently live in.
So, like mobile? Which is 70%+ of computing these days? :-)
> Freedom, for not only the end user but all the way up to distro packager. Systemd is designed to be non-portable, and force a network-effect pressure. I was similarly suspicious about WSL designing toward a linux API instead of POSIX... an effort to reduce the potential for alternatives emerging in the future, where your choices are windows or linux.
Freedom to do what? Init systems should be a solved problem, done and done. Innovation should happen higher up the stack, the init system should be uniform, hopefully flexible and universally adopted, and ideally standardized fully (as in ECMA & co.).
> So, like mobile? Which is 70%+ of computing these days? :-)
Media consumption spy devices + smiley emoticon. Yuck.
> ...and ideally standardized fully...
uh, you know that you are advocating for the direct opposite of that - right? Systemd is designed to be impossible to use outside of linux. So what, everyone should adopt the perpetually changing linux ABI as their point of commonality? You know that would be insanely stupid, right?
My point is that systemd is intentionally designed to make that impossible to do without also pulling linux (ABI, environment, whatever) within the same standard.
There are a few Linux distros that use OpenRC with decent success. I run lots of FreeBSD servers, but I also run some Alpine servers, where openrc works well for me (with some s6 thrown in for a few services).
That was one of the notional reasons, but I (and others) haven't noticed much of a speedup. If anything, it's slightly slower in my experience (though this obviously depends upon the specific configuration).
It was also not unique in parallelising startup tasks. Even sysvinit can be configured to do that with insserv+startpar.
I would question whether boot time is such a meaningful target. It's trivial to leave a system up for days or weeks, relying on hibernation or suspend instead. This seems especially true given that FreeBSD is even more likely than Linux to see use as a server instead of a desktop.
Besides, boot time on any init system with a fast SSD seems to be pretty damn quick. Saving 10 seconds every month will take a long time to pay off.
Desktops can sleep as well. Boot time is important for embedded computers, e.g. in-vehicle systems, but otherwise I don't see why would this be such an issue.
However, the internet is seemingly full of complaints that machines drain fast during sleep, so your experience may not be unusual, but it might pay to check if something is wrong or at least sub-optimal.
For example GPU not actually turning off, usb device keeping it from sleeping, hardware or software waking up your machine.
The default bootloader delay is 10 seconds; you can probably tune that to one or zero, with autoboot_delay="1" in /boot/loader.conf
Otherwise, I'm not aware of any easy wins (unless you're running 10.1 or older, in which case set hw.memtest.tests="0" to disable the boot time memory test which is very time consuming if you have a lot of ram and doesn't seem to be very useful; it was disabled in 10.2 and later).
If you can see what steps are particularly slow, some of them might be influenced to be a bit faster (eg, if it's waiting on usb scan to mount disks, you can make that time out sooner if it's not necessary on your system; you may be able to do background dhcp instead of synchronous dhcp, etc). But I think someone could spend some real time making it faster to the benefit of those who boot often.
Edit: following in the footsteps of cperciva's work; especially the profiling setup.
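Putting the suggestions from this subthread together, a sketch of the two cheap wins (variable names per loader.conf(5) and rc.conf(5); verify against your release):

```
# /boot/loader.conf -- shorten the bootloader menu delay
autoboot_delay="1"

# /etc/rc.conf -- acquire DHCP leases in the background
# instead of blocking the boot on them
background_dhclient="YES"
```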
As others have mentioned, the comparatively slow boot is because (for simplicity's sake) the process isn't parallelized. You can install alternate init systems from ports if you want to speed this up.
I'm running FreeBSD on a T480, and after I configured the audio backend in Firefox (one variable in about:config) and the webcamd driver (2 lines in rc.conf) I can do video calls just fine. Both Jitsi and Zoom work as expected.
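For anyone wanting to replicate this, a guess at what those "2 lines" look like (assumes the multimedia/webcamd port is installed; the webcamd_enable and cuse names are from memory, so double-check rc.conf(5)):

```
# /etc/rc.conf -- webcam support via webcamd
webcamd_enable="YES"
kld_list="cuse"
```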
I used Zoom on FreeBSD. By use I mean I was able to be in a meeting, see screen share, share my screen, but I had to dial in for audio and never tried webcam.