
I've always found it weird that CNAMEs get resolved and lumped into the answer section in the first place. While helpful, this is not what you asked for, and it makes much more sense to me to stick that in the additional section instead.
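To illustrate (hypothetical names): you ask for an A record, and the resolver chases the CNAME for you and lumps the whole chain into the answer section:

    $ dig www.example.org A

    ;; ANSWER SECTION:
    www.example.org.  300  IN  CNAME  cdn.example.net.
    cdn.example.net.   60  IN  A      203.0.113.7

The CNAME is how the server got to the answer, not the answer itself, which is exactly the kind of supporting data the additional section is for.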

As an aside, I am super annoyed at Cloudflare for calling their proxy records "CNAME" in their UI. Those are nothing like CNAMEs and have caused endless confusion.


I've been a Linux admin for 25 years, but up until a few months ago my personal computer has been Windows (gaming desktop) or Mac (laptop).

I decided to give desktop Linux another shot and I'm glad I did. I was prepared for a lot of jankiness but figured I have enough experience to fix whatever needs fixing. Surprisingly, this has not been the case at all: the PC has not only been as stable as Windows or Mac, it also performs better and is much more comfortable and intuitive to use. I never really want to "work on" my personal computer; I want it to just be there for me reliably. I've always had a soft spot for free software, but I just couldn't justify the effort until now.

So I guess this is my love letter to all the devs that have made the modern Linux desktop possible. Even compared to just a few years ago, the difference is immense. Keep up the good work.


I've been running a Linux desktop for about 13 years. There are still "moments" where you have to work on it and it can be more opaque than Windows/Mac. But you have the control to do what you need to do, which is one huge factor for me in Linux's favor.

I moved my immediate and mostly non-tech family to all run Linux including an aging relative who needed a locked-down Firefox install to keep her from falling victim to predatory sites and extensions. Pretty easy to script the entirety of the OS install and lockdown so that it was documented and repeatable. Can't do that without techie roots but I love that it's possible and mostly straightforward from a scripting perspective. It's almost exclusively get the right file with the right config in the right place and restart a service.
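To give a flavor of the scripting involved (a simplified sketch; exact paths vary by distro, and the Firefox policy file is just one example of "right file, right place"):

    #!/bin/sh
    # Lockdown sketch: copy known-good configs into place, restart services.
    set -eu

    # Lock down Firefox via an enterprise policy file (path varies by distro)
    install -D -m 644 files/policies.json /etc/firefox/policies/policies.json

    # Drop in our SSH config and restart the service so it takes effect
    install -D -m 644 files/sshd_config /etc/ssh/sshd_config
    systemctl restart sshd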

The only major day-to-day downside IMO is battery life on Linux laptops. Can't compare to current generation of Macs but that's true for Windows too.


I have been using desktop Linux for about the same amount of time, and the way I see it now, even on the occasions when I have to troubleshoot something weird (which has happened maybe once or twice in the past few years), it doesn't sound any different from the issues people are having with Windows and Mac these days, and at least I can fix it!


Yes, exactly. When I had a Mac for work, I had to tinker with that thing just as much if not more than I do with Linux. To Windows' credit, it was the best of the three when it came to not having to tinker to get what I want, but configuring it in a way that was comfortable and preferable was more limited and difficult, so there were annoyances I had to just live with. The point at which they started injecting ads into my desktop experience was a dark day, and the day I said goodbye.


Oh god, I had a Mac for work recently and had to spend 3 weeks becoming an expert in Mac External Displays And Thunderbolt just to get my HP Thunderbolt 4 dock (officially compatible with Macs!) to use a dual monitor setup with it. Finally I got it working, but every configuration I tried Just Worked(tm) on Linux. Jeez...


This sounds more like a problem with HP's dock than with the Mac. Just because they said it is officially compatible with Macs doesn't mean it is. Also, compatible with which Mac: Intel or M series? I use three different docks on two Mac Minis (M4 Pro) and they all worked out of the box. I did my research before buying them by watching YouTube reviews.


So the Mac doesn't support DisplayPort MST like everyone else does (Windows and Linux have supported this STANDARD for years), because they are assholes and don't care about their users. And the fact that multi-monitor support differs between Intel Macs, certain M1s (which cannot use more than one external monitor at all!), and the rest of the Apple Silicon lineup (other M1s, M2+) is insane.

I eventually got it working on this Intel Mac by using one HDMI and one specific DisplayPort output on the dock so it wouldn't try to multistream it internally in the dock or whatever (can't remember what exactly it was doing). It might have involved an HDMI to DP converter. I honestly tried to purge my memory of it once I got it working.

Note that all setups worked fine with Linux without modifications. Would have likely worked fine on Windows, too, since it supports MST. Only one specific setup worked with Mac.

So no, it's not a problem with the dock, it's a problem with Apple refusing to support a standard so they can make people buy the expensive $400 docks they hawk in Apple stores. Or because they're lazy and figure that if they don't care, their users shouldn't either.

You will find many people complaining about the Mac's multi-monitor support (or lack thereof) online. Apple is choosing to ignore user feedback.


This is exactly the double standard, or bias, in how people talk about operating systems.

When it's macOS/Windows, it's someone else's fault; when it's Linux, it's Linux's fault.

When you have to tinker on macOS/Windows that's just what has to be done, no biggie; when you have to tinker on Linux, it's a burden nobody should be subjected to.

People are blind to the work they've grown accustomed to. There are many things that are much, much easier on Linux than macOS or Windows.


Indeed, and especially the double standard regarding "oh, but on Linux you have to carefully check if the hardware is supported," while a Mac will supposedly "just work." In GP's comment they had to do the same hardware-compat research a Linux user does, but that's never listed as a downside for the Mac.


This is a bizarre complaint. There are more Mac users than Linux users, but still far fewer than Windows users. As such, there are plenty of examples of hardware and software that are incompatible with the Mac. Our IT dude keeps telling me to switch to Windows because of better support from third-party vendors.

My comment was specifically about the HP dock. I have nothing for or against Linux, as I have never used it, don't know anybody who uses it, and have no plans to use it. I am simply not qualified to comment on Linux.


Don't take it personally. This wasn't about you specifically, but about a sentiment frequently observed. No one would have blamed HP if we were talking about Linux. The need for tinkering, or hardware considerations, is frequently brought up against Linux; it's never brought up for macOS. On the contrary, on macOS everything "just works", even when it doesn't.

On Windows, for years, you had (and have) to search the web for obscure software and drivers, then download them from shady third-party websites, repeating the process for updates. On Linux you could always install and update almost everything, signed and shipped from trusted sources, through the package manager, long before app stores; yet apparently adding a line to some config file is an unbearable inconvenience. Somehow people are oblivious to the limitations (sometimes unfixable) and troubleshooting in Windows and macOS, but hyper-vigilant when it comes to Linux.

I have been running this Fedora installation for a few years now. No clean install in between, just super stable and pleasant upgrades. Everything just works for me. Zero tinkering. If there is a bug, it's usually tracked and gone in a few weeks, at most by the next release six months out. If an HP dock doesn't work, it's HP's fault for not using open standards, certainly not a problem with Linux.


> and at least I can fix it!

100% this!

I wrote this in another thread: https://news.ycombinator.com/context?id=46120975

> Openbox does everything I need it to. I don’t want Mac or Windows, they both suck in ways I can’t change. Sure, Linux can be rougher, but at least I’m not helpless here. I can make the changes I need, and the software is generally less broken IME


In my experience, the remaining difficulties with Linux tend to revolve around managing ownership and permissions of files and directories.

I recently plugged my external hard drive into my Linux PC and it just wouldn't read it: "You do not have permission to access this drive" or something like that. The solution, after googling, ended up being (for some reason) some combination of sudo chown -R user /dev/sda1 and unplugging and reconnecting the drive.

No way to do that from the GUI (on KDE at least), and I'm not sure how I'd even solve that problem if I didn't know the superuser password.

Still glad to be using Linux, of course, but sometimes these problems still pop up.


Don't make block devices directly readable/writable by your user unless you want every user process to have raw disk access to them.

Your distro should have been set up so that you can mount USB drives indirectly through the options your DE exposes.
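For reference, the unprivileged path (assuming udisks2, which is what most DEs call into under the hood) looks like:

    # Mount as a regular user via udisks; no chown of the device node required
    udisksctl mount -b /dev/sda1
    # ...and when done:
    udisksctl unmount -b /dev/sda1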


This shouldn't happen with external disks formatted as NTFS, exFAT, or UDF, since those carry no Unix ownership. If the external disk is ext4 or something like that, things get hazier...


Whether it should or shouldn't have, it did. But I think the issue is less that it happened, and more that the user interface doesn't respond to the "no permission" error by offering up a button you can click to attempt to grant yourself permission. If it can be done through the terminal, there should be a novice-friendly way as well.

(For that matter, a novice user shouldn't even have to know how their external hard drive is formatted! It might not even be their drive; it could be a family member attempting to share photos with them. If they're just plugging it in for the first time and seeing errors, they'd be pretty hesitant to mess around in the terminal typing commands they don't understand.)


Sorry, I didn't mean to imply this isn't an important problem that needs to be addressed. I mostly agree with what you say, and I bet the right way to deal with this is to mount the drive with a special userspace filesystem (FUSE) that wraps the permissions so they always look correct for the user who mounted it, but I guess no one has taken on that task so far...
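Edit: now that I think of it, bindfs gets close to this already (a FUSE filesystem; sketch, option names from memory):

    # Re-present an already-mounted disk with everything appearing to be
    # owned by the current user, without touching the underlying permissions
    bindfs --force-user=$USER /media/disk ~/disk-view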


Can't it just do what I _mean_ if it's a desktop install, and mount it like it does NTFS, UDF, etc.?


No? A filesystem is the format the data on the disk is stored in. If you mount an ext4 disk as NTFS, it won't load properly. It's not just the interface for loading the data; it's how the data is actually stored.
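E.g. the type you pass at mount time has to match what's actually on the disk:

    # Works only if sdb1 really is ext4
    mount -t ext4 /dev/sdb1 /mnt
    # Claiming an ext4 partition is NTFS just fails, roughly:
    #   "wrong fs type, bad option, bad superblock on /dev/sdb1 ..."
    mount -t ntfs /dev/sdb1 /mnt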


What I mean is that it should ignore permissions on external ext4 drives by default on desktops.


There's no concept of "external". What would it be: "USB", or anything mounted under /mnt or /media? What if it's the root OS drive of another computer you're trying to fix, connected through a USB-SATA adapter? Should any program running with minimized privileges get to overwrite even root files on that OS drive?

I think that it's a pretty good heuristic that if permissions exist in the filesystem, they matter and shouldn't be ignored.


They shouldn't be ignored, but they can be ignored; that's the problem. File permissions are not encryption or security: if I can't read a file on this machine because I'm not root, I'll just move the drive to a different machine where I am root.

But I agree with you: they do have a use, they matter for some use cases, and we shouldn't arbitrarily decide to ignore them.


I don't doubt you had that problem. But it, and the solution you want, sound a bit strange. You want a button that gives your user access to everything despite its access settings... then log in and work as root.

I mean, it's hard to tell what really happened. But a different user could have created these files with access rights only for himself, on purpose. That's something one can do with NTFS on Windows too. It also could have been a distro bug.

> but sometimes these problems still pop up.

I'm a 90% Windows, 9.5% Linux, 0.5% Mac admin at my day job: don't tell me Windows has no problems popping up. ;-)


Yes. Another user could have restricted access rights on purpose, maybe? But I can still apparently seize them for myself by typing an arcane command into the terminal. Why shouldn't the UI give me a way to do this more easily?

If it requires typing in an admin password to solve, so be it, but at least the UI could lead me to the answer while offering a password prompt.

And yes, I wasn't telling you that Windows has no problems. In fact, Windows probably caused this problem -- this drive worked just fine with Linux the night before; then I transferred some files into it from Windows, plugged it back into my Linux computer, and suddenly this happened. I have no doubt that Windows was responsible for messing up the drive state. But to a non-technical user, it's not a question of who is to blame: Windows reads the drive fine whereas Linux gives an error that has no obvious solution. And it can't be solved by right-clicking the drive in the file manager and selecting "take ownership and mount" or something like that; it requires typing an unfamiliar command into the terminal. And that's basically the case with most file-permission errors that I encounter on Linux systems.


>Windows reads the drive fine whereas Linux gives an error that has no obvious solution. And it can't be solved by right-clicking the drive in the file manager and selecting "take ownership and mount" or something like that; it requires typing an unfamiliar command into the terminal. And that's basically the case with most file-permission errors that I encounter on Linux systems.

That definitely seems like a feature that could/should be added to some (most? all?) Linux file managers. In fact, it doesn't even sound that difficult to implement with standard system calls[0].

It's not really an issue for me (I prefer the command line -- heck, I still use octal when setting permissions instead of 'rwx'), but it sounds like it bugs you a lot.
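For anyone following along, the two notations are interchangeable:

    # Octal 754 = owner rwx (7), group r-x (5), others r-- (4)
    chmod 754 somefile
    # ...the same as the symbolic form:
    chmod u=rwx,g=rx,o=r somefile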

You don't mention which Desktop Environment (DE) you're using, but I imagine the file manager in your DE is open source. As such, I'm sure you could make yourself, and the many others who'd like to modify file/dir/filesystem permissions and ownership via their GUI file manager, much happier.

Try doing that with Windows Explorer or Finder. I think not.

Good luck!

[0] https://www.tutorialpedia.org/blog/how-to-change-show-permis...

Edit: Clarified prose.


Hm, I'm a KDE user. I just tested what happens when I try to open a folder I don't have access rights to. The standard file browser, Dolphin, says authentication is required: "Act as administrator." If clicked, a warning appears and I can enter my password. Then it shows the contents.

https://i.postimg.cc/VLgkWpy7/image.png

This feature has existed since 2022.

https://kde.haraldsitter.eu/posts/kio-admin/


Good! That's exactly what I would like to have happen! I think the error was more that it didn't have permission to mount the drive. I logged the message at the time, but I don't have access to that computer this week, so I'm going from memory.


I've been on Mint for nearly 4 years now, having migrated from Windows.

The only hiccup I had was a botched update once, after which the OS would error during boot.

The fix was easy: boot to a terminal, fiddle with Timeshift to restore to the point prior to the update, then apply the updates carefully with a few reboots in between.

Now, was that easy? For someone well versed in the technicalities, yes. For a layman, probably not.

Now, that said, it was the only problem I had in 4 years. It has been very smooth sailing besides that.

My experience with Windows prior to that was always horrible. Yearly clean installs because after a while the computer felt extremely sluggish. Random blue screens for god knows what reason.


For a layman, that's a catastrophic, entire-OS loss, right? Especially if the issue is somewhat novel or stack-specific. *Most people* (not us) just lost their only desktop computer and are now trying to debug by googling random OS words and browsing Reddit and forums on their mobile, trying to find out what went wrong with a seemingly benign update.

---

Now, AI makes this *WAY* easier, since you have a practically omniscient distro debugger with infinite patience and you don't have to wait on responses. So this barrier is probably coming down soon. But I want to stress that "the only problem I had in 4 years" is loosely the same as "I bought a new car and the only problem I had was a catastrophic transmission failure. I just had to rebuild the transmission from scratch using specialized tools and knowledge, and it was okay."


I managed to get to around 7W idle on a 2024 dGPU/iGPU laptop, with room to optimize further. From my limited casual checks (nowhere near a proper benchmark), it's better than Windows.

But yes, it's an area that still requires tweaking, which is a cost I don't want to incur. Also, just within this year I got a regression (later fixed) because of a bug in the nvidia-open driver, so the GPU stopped going into its low-power state, giving me a toaster on the go. These issues are still very obscure to root-cause and fix.


Current Intel chips get 20h of regular laptop usage. For real: https://www.notebookcheck.net/Intel-empire-strikes-back-with...

The upcoming Intel and Qualcomm CPUs are even better. They have really caught up with Apple.


Not 20h of regular laptop usage:

> The ThinkPad T14s Gen 6 Intel lasted for more than 21 hours in our Wi-Fi test (150 cd/m² brightness). This device will easily last more than ten hours in everyday use.

Also, tested on Windows not Linux. Still, if I could get 10 hours of regular usage on Linux, I'd be ecstatic.


If you add a MacBook to the comparison on that website, you'll see they last basically the same under the same usage. Qualcomm can actually get even more hours, if I remember correctly.

In any case, I don't think battery life is an issue for anyone with 2025+ devices.


> Pretty easy to script the entirety of the OS install and lockdown so that it was documented and repeatable.

What distro? It's a niche enough use case. Have you considered releasing the code?


I've been running desktop Linux for about eighteen years, though I did take a break and run a Macbook for about four years.

It's a little upsetting that Windows has gotten so terrible, because I think in a lot of ways the NT kernel is a better piece of software than the Linux kernel. Drivers are simply easier to install, generally don't require a reboot, and don't require messing with kernel modules; IO is non-blocking by default; and there are a bunch of other things that are cool and arguably better than Linux.

The problem is that, while the kernel is an important part of an operating system, it's not the only part. Even if the NT kernel were the objectively best piece of software ever written by humans, that still doesn't change the fact that Windows has become a pretty awful mess. They have loaded the OS with so much crap (and ads now!), the Windows Update tool routinely breaks your computer, their recovery/repair tools simply do not work, their filesystem is geriatric and has been left behind by ZFS, btrfs, and APFS, and they don't really seem determined to fix any of it.

Even if the Linux kernel were slightly worse, it's still good enough. Even if you do have to muck with kernel modules, it's not that hard now with DKMS. Even if IO is blocking by default, epoll has been around for decades and works fine.

So at that point, if the kernel is good enough, and if we can get userland decent enough, then desktop Linux is better than Windows. Linux is good enough, without ads, with recovery tools that actually work, and it performs comparably to or better than Windows.


It's been like that for 15 years or more.

The fact that you now need an account for almost any piece of hardware, including computers, phones, etc., is a major drawback that arrived with the internet era. Linux has been able to avoid that temptation.


Let's not get ahead of ourselves here. 15 years ago I was still looking up installation and driver procedures and workarounds to install Linux on my devices. I failed to install Arch in college because I didn't have a driver for my SATA drive, for example.

Today, though? Yeah, totally easy. Especially if you get one of the many machines with Linux support. Smooth sailing all around.


Facetiously: well actually, you didn't need a driver for the SATA drive but for the SATA controller.

Something that was also true for Windows, and such a common problem that many BIOSes offered an IDE compatibility mode one could switch to.

26 years ago I installed SUSE and it just worked on my self-built PC. Smooth sailing all around. Then I tried Debian and couldn't for the life of me get X11 to work.

So yeah, the distro and hardware lottery is still a problem.


Windows has also needed external drivers installed at times, since the DOS days. It's the nature of obscure, new, or advanced hardware.


The difference was the device came with a disk containing the driver for DOS and Windows.


I don't see how that is Linux's fault.


I didn't say it was. This discussion is about the relative difficulty of setting things up. It is, objectively, more difficult when you need to download a driver for new hardware and the NIC on your laptop needs a driver your distro didn't come with.


Not for a very long time.


I don't buy a lot of hardware, but the last thing I bought (a Vocaster One) came with the driver installer on a small USB mass-storage volume that appears when the device is plugged in.


That's Arch for you.

I've been on Linux since Ubuntu 8.04 or earlier, and I literally never had OS install problems with any of it. Except for Arch, but what did you expect? Hands-on is the point of Arch Linux.

On the other hand, I remember installing a lot of drivers by hand on Windows. Most people never (re)installed Windows or macOS to begin with.

Probably depends on your hardware, but that's a matter of vendor support, not the operating system. It's not like Microsoft or Apple write all those drivers. With a ThinkPad or a Brother printer, you likely haven't had problems with Linux in the last 20 years. People don't complain that they can't install macOS on a Chromebook, and Windows has been the absolute OS monopoly everyone had to support. Yet with some tinkering, you can install Linux on a washing machine.

What has drastically changed is the userspace software side. For one, FOSS alternatives like Blender, OBS, and Krita are becoming equal to or better than the competition, and Valve has basically solved gaming on Linux now. Virtualization and software development are, and always have been, better.

To be fair, Linux also shines now due to the enshittification of everything else.


My experience, as a software developer, is that both Windows and Linux desktop are great. The biggest advantage Windows has is better support for desktop applications that are used by a lot of people, which is just the nature of Windows being more popular for desktop users, and is why I use it. With Linux, it's more likely you'll have to be a bit more savvy with occasional issues.

To note, with official Linux support on Windows, it's trivial for me to get everything I want as a developer on Windows, so that's never been a hard blocker for me.


> To note, with official Linux support on Windows, it's trivial for me to get everything I want as a developer on Windows, so that's never been a hard blocker for me.

Maybe not as a developer, but as a user I still think WSL is only superficially a solution. You're still stuck with an update process that happens automatically and can brick your computer, and recovery tools that, as far as I can tell, have never actually worked for anyone in history. You're still stuck with NTFS, which was a perfectly fine filesystem thirty years ago but is now missing basic features like competent snapshotting/backups; instead you have to rely on System Restore, which again doesn't actually work.

I mean, yeah, you can do `sudo apt install neovim`, and that's kind of cool I guess, but the problems with Windows, to me are far deeper and cannot be solved with a virtualization layer on top.


I dunno; for local development, I've never run into any issues with this. And for stuff that really matters, I'm running it in a container anyway, which makes it irrelevant which OS my computer runs.


I've been using Linux as a desktop for that entire time, and actually, it was better before. The hardware was simpler, more compatible, and relied less on firmware blobs, so making Linux drivers was way easier. And the software was simpler because GUI makers weren't trying to be fancy. The peak of Linux desktop stability and ease of use was in 2002. It's been downhill from there.


The only reason I haven't gone over to Linux is gaming with my RTX card. Interested to know your gaming setup and distro. Any stability/compatibility issues?


Not the OP, but I've been gaming on Linux for over 10 years, I think. I have an RTX 2080 and run Arch Linux; Nvidia support has gotten a lot better.

Steam performs exceptionally well. Initially there were issues, but I haven't faced any for a really long time now.

I don't play multiplayer games, though, so I can't say much about that part.


Very useful info; much appreciated!


Disagree. If you have an error that NEEDS fixing, your program should exit. Error level logs for operation level errors are fine.


Personally, I find just using nftables.conf straightforward enough that I don't really understand the need for anything additional. With iptables it was painful, but iptables has been deprecated for a while now.


Same here. I'm surprised most Linux users I know like to install firewalld, UFW, or some other overlay firewall rather than just editing the nftables config directly. It's not very difficult, although I've never really dug deep into the weeds of it. I suspect many people who used iptables long ago assume nftables is similar and avoid interacting with it directly out of habit.


With nftables you need to learn a lot before you can be even partially sure of what you're doing.

With the UFW GUI you need a single checkbox: block incoming connections.


Not sure what you find difficult about it; I just took the "workstation" config from the Gentoo wiki and used it on my laptop.

Perhaps if you're doing more complicated things like bridging interfaces or rerouting traffic it would be harder than the alternatives, but for a simple whitelist it's extremely easy to configure and modify.
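For the curious, a simple whitelist along those lines looks something like this (paraphrased from memory, not the exact wiki config):

    #!/usr/sbin/nft -f
    flush ruleset
    table inet filter {
        chain input {
            type filter hook input priority 0; policy drop;
            ct state established,related accept   # replies to our own traffic
            iif "lo" accept                       # loopback
            meta l4proto { icmp, ipv6-icmp } accept
            tcp dport 22 accept                   # drop this if you don't run sshd
        }
    }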


It's an open protocol, you don't need to use any of the vendors. My Yubikey is a "passkey", so is my Flipper Zero. Keepass provides passkey support.

For the general public, they already rely on either Google or Apple for pretty much all of their digital life. Nothing wrong with extending this to passkeys, it's convenient and makes sense for them.


> It's an open protocol, you don't need to use any of the vendors. My Yubikey is a "passkey", so is my Flipper Zero. Keepass provides passkey support.

I don't want to use a Yubikey. It's a pain in the butt. I just want to use my Mac, with no more damn dongles.

Keepass is a vendor, and one who doesn't even have a Safari extension.

> Nothing wrong with extending this to passkeys, it's convenient and makes sense for them.

I didn't say there was anything wrong with extending this to passkeys. The problem is the lock-in: e.g., Safari requires iCloud keychain for passkeys, but not for passwords. And there is no plaintext export/import, unlike with passwords.

Nobody can convince me that passkeys are good when I buy a Mac, use the built-in Safari, and can't even use passkeys to log in to websites unless I hand my passkeys to a cloud sync service or install some third-party "solution" (for a problem that should not exist in the first place). That experience is so much worse than passwords.


So don't use software that forces lock-in (Safari)? Sounds like a you problem.


> So don't use software that forces lock-in (Safari)? Sounds like a you problem.

No, this is a passkeys problem. Safari does not force lock-in of passwords.

Why in the world would I want to ditch my web browser just to use passkeys? I'd rather ditch passkeys.


Again, this is a Safari problem, not a passkeys problem. You are literally complaining about missing features in Safari.


> Safari requires iCloud keychain for passkeys

Repeating this doesn’t make it true. https://developer.apple.com/documentation/authenticationserv...

All of the 3rd party credential managers I’ve used that support passkeys work with safari, and through the APIs that Apple offers the credential managers you can even pick your default CM and never think about iCloud again…


> All of the 3rd party credential managers I’ve used that support passkeys work with safari

I've already addressed this pedantry: https://news.ycombinator.com/item?id=46304137


Rather ironic to complain about lock-in as an Apple user; there is no such problem on Linux. The problem isn't passkeys but Apple.


Presentation and context are important to understand the meaning of a text.


Is the message deep and important or was the article attempting to manipulate you into thinking it is?


So everyone has wanted "the year of the Linux desktop" for a while. This year, with Microsoft having declared open season on its own feet and Valve taking a break from swimming in its money pool to make sure absolutely any piece of software ever written can run on Linux, it looks like it might actually be happening. I am seeing a massive influx of new users, driven by distros like CachyOS, Nobara, and Bazzite. A lot of them have no previous Linux experience and are generally not the most technically savvy users.

This absolutely terrifies me. Linux desktop security is, to put it politely, nonexistent. And the culture that comes with Linux desktop users just makes things worse: there's still a lot of BOFH gatekeeping going on, laughing at new users when they inevitably mess something up, and, worst of all, a complete refusal to admit that the Linux desktop has security issues. Whenever a new user asks what antivirus they should run, they are usually met with derision and ridicule, because the (old-school) Linux users genuinely think their computers are somehow immune and can never be hacked.

The first cybercriminals to put some development effort into Linux ransomware/stealers are going to wreak havoc, and a lot of people are going to be in for a rude awakening. The D-Bus issue with secrets in the article is just one of many, many ways in which Linux desktops are insecure by design.
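To make the D-Bus point concrete: once the keyring is unlocked, any process running as your user can simply ask for stored secrets over the session bus, e.g. with libsecret's CLI (sketch; what comes back depends on which apps stored what):

    # Enumerate matching secrets from the keyring; no prompt, no privileges needed
    secret-tool search --all --unlock xdg:schema org.freedesktop.Secret.Generic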

There are of course distros out there that take security seriously, but we are not really seeing new users migrating to Qubes en masse.

Edit: I'm not calling out the distros above in particular; all three are doing very good work and are not really any worse on security than most other distros.


Any Windows program you download can steal all your secrets too. The only operating systems that isolate programs by default are on phones (and Chromebooks).


Unless you give it admin permissions, it really can't (admittedly, a lot of Windows users do run their computers with their admin account by default). Also, Windows users generally have at least some kind of anti-malware running, which, while not perfect, does work well against most spray-and-pray malware out there.

Edit: I did some research and must correct myself: stealers have indeed evolved, so admin permissions are not required for most credentials on Windows either.

However, should "strictly speaking, not really worse than Windows" be the security target we aim for in Linux?


All your data is owned by your user. If you run a program, it has access to all your data; admin or not is irrelevant here.

The keyring is pretty open on Windows: if you know the key, you can request anything, even if it was stored by another app. There is a way to lock a secret to a specific app, but it's not properly enforced in most versions of Windows.

The only user data that would require admin privileges is that of sandboxed Windows Store applications, where even the owner can't access it directly from outside the program without being admin.


The main problems with these kinds of in-repo vault solutions:

- Sharing the encryption key among all team members. You need to be able to add/remove people with access, and the only way is to rotate the key and let only the current set of people know the new one.

- Version control is pointless: you just see that the vault changed, with no hint as to what was actually updated in the vault.

- Unless you are really careful, forgetting to encrypt the vault just once when committing changes means you need to rotate all your secrets.


Agreed with 1 and 3; just a tip re 2, though: sops encodes JSON and YAML semantically, so the key names of objects are preserved. In other words, you can see which key changed.

Whether that is a feature or a metadata leak is up to the beholder :)
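An encrypted sops YAML still reads like this, with the structure in the clear and only the values opaque (illustrative snippet):

    # Keys and nesting are plaintext; values are ciphertext
    db:
        host: ENC[AES256_GCM,data:...,iv:...,tag:...,type:str]
        password: ENC[AES256_GCM,data:...,iv:...,tag:...,type:str]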


git-crypt solves all 3 (mostly)

> Sharing the encryption key among all team members

you're enrolling a particular user's public key and encrypting a symmetric key with it, not generating a single encryption key that you distribute. You can roll the underlying encryption key at any time, and git-crypt works transparently for all users, since they get the new symmetric key when they pull (encrypted with their asymmetric key).

> Version control is pointless

git-crypt solves this for local diff operations. For anything web-based like git{hub,lab,tea,coffee} it still sucks.

> - Unless you are really careful, forgetting to encrypt the vault just once when committing changes means you need to rotate all your secrets.

With git-crypt, if you have .gitattributes set correctly (to include a file) and git-crypt is not working or can't encrypt things, the commit will fail, so no risk there.

You can, of course, put secrets in files you didn't choose to encrypt. That is, I suppose, a risk of any solution, in-repo or out-of-repo.
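For anyone who hasn't used it, the setup is roughly (from memory; check the docs):

    cd myrepo && git-crypt init
    # Tell git which paths get encrypted at commit time
    echo 'secrets/** filter=git-crypt diff=git-crypt' >> .gitattributes
    # Enroll a teammate via their GPG key; they can decrypt after pulling
    git-crypt add-gpg-user alice@example.com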


For 1), it seems like you could do a proxy encryption solution.

Edit: wrong way to phrase it, I think. What I mean to say is: have a message key to encrypt the body, then rotate it when team membership changes, and "let them know" by updating a header that holds the new message key encrypted under each current member's public key.


Re 2: you can implement a custom Git diff driver, and so (with the encryption key) see what's changed, straight from `git diff`.
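Roughly like this, e.g. with sops as the decryptor (sketch):

    # .gitattributes -- route the vault file through a custom diff driver
    vault.yaml diff=sops

    # .git/config -- decrypt for diffing only; committed data stays encrypted
    [diff "sops"]
        textconv = sops -d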


Here's another one:

- Using a third-party tool to read and store credentials is an attack vector in itself.


Cloudflare is widely used because it's the easiest way to run a website for free or to expose local services to the internet. I think for most Cloudflare users, the DDoS protection is not the main reason they use it.


I am using cloudflare because the origin servers are IPv6 only.


Cloudflare hosts websites for free?


Yup, the free plan is quite generous.


Yes, they have free plans.

