
As an Electron maintainer, I'll reiterate a warning I've given many people before: Your auto-updater and the underlying code-signing and notarization mechanisms are sacred. The recovery mechanisms for the entire system are extremely painful and often require embarrassing emails to customers. A compromised code-signing certificate is close to the top of my personal nightmares.

Dave and toDesktop have built a product that serves many people really well, but I'd encourage everyone building desktop software (no matter how, with or without toDesktop!) to really understand everything involved in compiling, signing, and releasing your builds. In my projects, I often make an argument against too much abstraction and long dependency chains in those processes.

If you're an Electron developer (like the apps mentioned), I recommend:

* Build with Electron Forge, which is maintained by Electron and uses @electron/windows-sign and @electron/osx-sign directly. No magic. (See the sketch after this list.)

* For Windows signing, use Azure Trusted Signing, which signs just-in-time. That's relatively new and offers some additional recovery mechanisms in the worst case.

* You probably want to rotate your certificates if you ever gave anyone else access.

* Lastly, you should probably be the only one with the keys to your update server.
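
To make the Forge and signing points concrete, here's a minimal sketch of the relevant config; the identity, env vars, and Trusted Signing parameters are placeholders, so check the @electron/osx-sign, @electron/notarize, and @electron/windows-sign docs for your setup:

  // forge.config.js - a sketch, not a drop-in config
  module.exports = {
    packagerConfig: {
      // macOS: signed via @electron/osx-sign, notarized via @electron/notarize
      osxSign: {
        identity: 'Developer ID Application: Example Corp (TEAMID1234)',
      },
      osxNotarize: {
        appleId: process.env.APPLE_ID,
        appleIdPassword: process.env.APPLE_APP_SPECIFIC_PASSWORD,
        teamId: process.env.APPLE_TEAM_ID,
      },
      // Windows: signed via @electron/windows-sign; these parameters are the
      // just-in-time Azure Trusted Signing route mentioned above
      windowsSign: {
        signWithParams:
          '/v /fd SHA256 /tr http://timestamp.acs.microsoft.com /td SHA256 ' +
          '/dlib Azure.CodeSigning.Dlib.dll /dmdf metadata.json',
      },
    },
  };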



How about we don't build an auto-updater? Maybe some apps require an extremely tight coupling with a server, but we should try our best to release complete software to users that will work as close to forever as possible. Touching files on a user's system should be treated as a rare special occurrence. If a server is involved with the app, build a stable interface and think long and hard about every change. Meticulously version and maintain everything. It is completely unacceptable for a server-side change to break an existing user's local application unless it is impossible to avoid - it should be seen as an absolute last resort, with an apology to affected customers (agree with OP on this one).

It is your duty to make sure _all_ of your users are able to continue using the same software they installed in exactly the same way for the reasonable lifetime of their contract, the package, or the underlying system (and that lifetime is measured in years or decades, with the goal of forever where possible, not months).

You can, if you must, include an update notification, but this absolutely cannot disrupt the user's experience; no popups, do not require action, include an "ignore forever" button. If you have a good product with genuinely good feature improvements, users will voluntarily upgrade to a new package. If they don't, that is why you have a sales team.

Additionally, more broadly, it is not your app's job to handle updates. That is the job of your operating system and its package manager. But I understand that Windows is behind in this regard, so it is acceptable to compromise there.

We go a step further at my company. Any customer is able to request any previous version of their package at any time, and we provide them an Internet download page or overnight ship them a CD free of charge (and now USB too).


> Maybe some apps require an extremely tight coupling with a server, but we should try our best to release complete software to users that will work as close to forever as possible.

That sounds like a good idea. Unless you’re the vendor, and instead of 1000 support requests for version N, you’re now facing 100 support requests for version N, 100 for N−1, 100 for N−2, …, and 100 for N−9.


You're allowed to have a support matrix. You can refuse to support versions that are too old, but you can also just... let people keep using programs on their own computers.


Yep.

And anyone who does will find a percentage of users figure it out and then just get back to work.


Have been there, done that.

The answer is a support window. If they are in bounds and have active maintenance contracts, support them.

If not, give them an option to get on support, or wish them luck.

Then the other answer is to really think releases through.

None of it is cheap. But it can be managed.


Besides what others said, realistically, the effort to support N versions is not O(N). I think it's something like O(log N), because code will largely be shared between versions - you're not doing a rewrite every release.


Sounds like you come from the B2B, consultancyware or 6+ figure/year license world.

For the vast realm of <$300/year products, the ones that actually use updaters, all your suggestions are completely unviable.


And it's not like B2B doesn't get whacked by bad software or bad actors regularly. The idea of software updating itself is vastly more beneficial than harmful in the very long term. There are so many old machines running outdated software in gated corporate networks; they will get owned immediately once a single one of them is compromised in any way. They are literally trading minor inconveniences for a massive time bomb with a random timer.


The two sides of your thought are going head to head. "Gated corporate networks" don't benefit from software that "updates itself" (unless we're talking about pure SaaS). It's exactly where auto-updating is completely useless because any company with a functioning IT will go out of its way to not delegate the decisions of when to update or what features are forced in out to the developer and their product manager.

Auto-updates mostly ever practically happen for software used at home or SMB which might not have a functioning IT. If security is the concern why not use auto-updates only for security updates? Why am I gaining features I explicitly did not want, or losing the ones which were the reason I bought the software in the first place? Why does the dev think I am not capable of deciding for myself if or when to update? I have a solid theory of why and it involves an MBA-type person thinking anyone using <$300 software just can't think for themselves and if this line of thought cuts some costs or generates some revenue all the better.


You're not thinking of it long term. In the short term you might be better off deciding when to update yourself, but in the long term you will be infinitely worse off because the reality of business practice is to delay updates until something catastrophic happens just to save a few bucks in the IT department. This approach merely means your system will run smoother over short time scales, while it becomes a complete clusterfuck over long time scales.


The reality of auto-updates is that you get your workflow broken during critical project phases.


True, but only rarely and with foreseeable and preventable damage. The alternative leaves you open to basically infinite losses at an exponentially increasing risk over time. That tradeoff is simply not worth it if you want your company to exist long term.


This mentality is how we get incidents like CrowdStrike. Relying on auto-updates for security is a crutch that allows insecure designs to spread.


CrowdStrike was primarily an issue of running third-party software in the kernel. If you're fine with that approach as a company, you'll always be at the mercy of other people not screwing up in the slightest. Auto-update issues are actually one of the nicer things you can run into there.


This is how consumer programs used to work before everyone got fast broadband.


There weren't many consumer programs to exploit in the days of dialup.

Sure, viruses have been with us since the early '80s, but they mostly targeted the OS, and there were no rapid security-patch release cycles back then. You just had 'prevention' and mostly cleanup.


> How about we don't build an auto-updater?

Sure. I’d rather have it be provided by the platform. It’s a lot of work to maintain for 5 OSs (3 desktop, 2 mobile).

> we should try our best to release complete software to users that will work as close to forever as possible

This isn’t feasible. Last I tried to support old systems on my app, the vendor (Apple) had stopped supporting and didn’t even provide free VMs. Windows 10 is scheduled for non-support this year (afaik). On Linux glibc or gtk will mess with any GUI app after a few years. If Microsoft, Google and Apple can’t, why the hell should I as a solo app developer? Plus, I have 5 platforms to worry about, they only have their own.

> Touching files on a user's system should be treated as a rare special occurrence.

Huh? That’s why I built an app and not a website in the first place. My app is networked both p2p and to api and does file transfers. And I’m supposed to not touch files?

> If a server is involved with the app, build a stable interface and think long and hard about every change.

Believe me, I do. These changes are as scary as database migrations. But like those, you can't avoid them forever. And for those cases, you need at the very least to let the user know what’s happening. That’s half of the update infrastructure.

Big picture, I can agree with the sentiment that ship fast culture has gone too far with apps and also we rely on cloud way too much. That’s what the local first movement is about.

At the same time, I disagree with the generalization seemingly based on a narrow stereotype of an app. For most non-tech users, non-disruptive background updates are ideal. This is what iOS does overnight when charging and on WiFi.

I have nothing against disabling auto updates for those who like to update their own software, but as a default it would lead to massive amounts of stale non-working software.


> file transfers. And I’m supposed to not touch files?

I'm pretty sure you know what I meant; it's obvious from context. System program files. The files that are managed by your user's package manager (and by extension their IT department).


There isn’t a package manager in many cases: windows store requires a MS account. macOS app store nerfs apps by sandbox restrictions. Linux has so many flavors of package managers it’s death by 1000 paper cuts. None of the major bundlers like flutter, electron and tauri support all these package managers and/or app stores. Let alone running the infrastructure for it.

Which leaves you with self-updaters. I definitely agree that ideally it shouldn't be the application's job to update itself. But we don't live in that world atm. At the very least you need update checks and EOL circuit breakers for apps that aren't forever-local-only. Which is not a niche use case even if local-first infra were mature and widely adopted, which it very much isn't.

Anyway, my app works without internet, pulls no business logic at runtime (live updates), and it uses e2ee for privacy. That's way more than the average ad-funded bait-and-switch ware that plagues the majority of commercial software today. I wish I didn't have to worry about updates, but the path to fewer worries and a healthy ecosystem is not to build bug-free forever-software on top of a constantly moving substrate provided largely by corporations with multiple orders of magnitude more funding than the average software development company.


> windows store requires a MS account

They avoid mentioning it, but the Microsoft-managed package format (MSIX) works just fine without the Microsoft Store. Create an App Installer manifest, stick it on a website, and get semver'd differential updates across multiple architectures for free: https://learn.microsoft.com/en-us/windows/msix/app-installer...

Microsoft has woefully underinvested in the ecosystem and docs, though. I wish they'd fund me or others to contribute on the OSS side; Electron could be far simpler and more secure with batteries-included MSIX.
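
For reference, the manifest is a small XML file along these lines (names, versions, and URLs are placeholders):

  <?xml version="1.0" encoding="utf-8"?>
  <AppInstaller
      xmlns="http://schemas.microsoft.com/appx/appinstaller/2018"
      Version="1.2.3.0"
      Uri="https://example.com/MyApp.appinstaller">
    <MainPackage
        Name="ExampleCorp.MyApp"
        Publisher="CN=Example Corp"
        Version="1.2.3.0"
        Uri="https://example.com/MyApp-1.2.3.msix" />
    <UpdateSettings>
      <!-- check for a newer manifest on launch, at most every 12 hours -->
      <OnLaunch HoursBetweenUpdateChecks="12" />
    </UpdateSettings>
  </AppInstaller>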


That's interesting and unexpected. How does the update check, notification & install process work?

EDIT: I think your link answered some of these questions. I'm on .msi myself so I can't benefit from it yet anyway... basically these things need to be managed by the app bundlers like Electron & Tauri, otherwise we're asking for trouble. I think...


> This isn’t feasible. Last I tried to support old systems on my app, the vendor (Apple) had stopped supporting and didn’t even provide free VMs.

Why do you need "free VMs" as a professional software company? A couple of legacy machines is pocket change compared to the salary of even a single developer.

> Windows 10 is scheduled for non-support this year (afaik).

So? People are still releasing new software for XP. It's not that hard.

> On Linux glibc or gtk will mess with any GUI app after a few years.

glibc provides extreme long-term backwards compatibility so isn't a problem as long as you build against the oldest version you want to support.

gtk is a problem but also doesn't change as often as you are implying - we are only at version 4 now. And depending on what software you are building you can also avoid it.
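
For illustration, the usual trick is to build inside a container of the oldest distro you support, then check which glibc symbol versions the binary actually requires (the image tag and binary name here are made up):

  # build inside an old distro so the binary links against its older glibc
  docker run --rm -v "$PWD":/src -w /src debian:10 \
      sh -c "apt-get update && apt-get install -y build-essential && make"

  # print the newest glibc symbol version the resulting binary depends on
  objdump -T ./myapp | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -n 1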

> If Microsoft, Google and Apple can’t, why the hell should I as a solo app developer? Plus, I have 5 platforms to worry about, they only have their own.

So that your users have a reason to choose you over Microsoft, Google and Apple.


For me, Windows is a thousand years ahead in this regard. I download software and run it. And it works 99.9% of the time. Yes, I have a chance of getting a virus. That happened one time in the 30 years I've been using Windows. (More in DOS times.)

Linux, I got burned again yesterday. The Proxmox distribution has no package I need in its repository.

I try to use the Ubuntu package - it does not work.

I try Debian's - too old a version.

How do I solve this? By learning some details of how Linux distributions and repositories work, struggling some more, and finding a custom-built .deb. Okay, I can do it, kinda, but what about a non-IT person?

Software without dependencies is awesome. So Docker is something I respect a lot, because it allows the same model (kinda).


Windows Store and winget. Developers are the ones behind the times.


> How about we don't build an auto-updater?

Auto-updaters are the most practical and efficient way of pushing updates in today's world. As pointed out by others, the alternative would be to go through an app store's update mechanism, if the app is distributed via an app store in the first place, and many people avoid the Microsoft Store/macOS App Store whenever possible. And no developer likes that process.


I do agree with you, but I think that unfortunately you are wrong about whose job updates are. You have an idealistic vision that I share, but it remains idealistic.

Apart from, maybe, Linux distros, neither Apple nor Microsoft provides anything to handle updates that isn't a proprietary store with shitty rules.

For sure the rules are broken on desktop OSs, but in the meantime, you still have to distribute and update your software. Should the update be automatic? No. Should you provide an easy way to update? I'd say that in the end it depends on whether you think it's important to provide updates to your users. But should you expect your users or their OSs to somehow update your app by themselves? Nope.


This is actually precisely how package management works in Linux today... you release new versions, package maintainers package and release them, while ensuring they actually work. This is a solved problem; it's just that nobody writing JavaScript is old enough to realize it's an option.


And that's why I said "apart from Linux". Where are the package maintainers on the OSes everyone uses? (And don't think that's sarcasm; I'm writing this comment on my Linux desktop.)


Homebrew and chocolatey?


My exact thought as well: simply point the user to a well-established and proper channel for auto-updates, and then the dev simply needs to upload/release to said repos when a new version is put out. As an aside: Chocolatey is currently the only stable/solid way to consistently keep things up to date on the Win platform, in my book.


I have clients who have been running old versions for more than 10 years, for diverse reasons. I designed a layer of backward compatibility into our APIs to keep updating optional. It works well.


This one is right.

Have a shoe-box key: a key which is copied 2N times (for redundancy), with N copies stored in each of 2 shoe-boxes. It can be on tape, or optical, or silicon, or paper. This key always stays offline. This is your rootiest of root keys for your products, and almost nothing is signed by it. The next key down, which the shoe-box key signs (ideally, the only thing it ever signs), is for all intents and purposes your acting "root certificate authority" key, running hot in whatever highly secure signing enclave you'd design for any other ordinary root CA setup. Then continue from there.

Your hot and running root CA could get totally pwned, and as long as you had come to Jesus with your shoe-box key and religiously never ever interacted with it or put it online in any way, you can sign a new acting root CA key with it and sign a revocation for the old one. Then put the shoe-box away.
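
A sketch of the mechanics with plain openssl, assuming a simple two-level hierarchy (a real ceremony would add proper CA extensions, HSMs, and witnesses; all names are placeholders):

  # the shoe-box key: generated on an air-gapped machine, then stored offline
  openssl genrsa -out shoebox-root.key 4096
  openssl req -x509 -new -key shoebox-root.key -sha256 -days 7300 \
      -subj "/CN=Example Offline Root" -out shoebox-root.crt

  # the acting root CA key: the only thing the shoe-box key ever signs
  openssl genrsa -out acting-ca.key 4096
  openssl req -new -key acting-ca.key -subj "/CN=Example Acting CA" \
      -out acting-ca.csr
  openssl x509 -req -in acting-ca.csr -CA shoebox-root.crt \
      -CAkey shoebox-root.key -CAcreateserial -days 1825 -out acting-ca.crt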


Signing a revocation doesn't magically inform all affected devices. In practice this is equivalent to pushing an update that replaces the root key.


I mean, sure, but is that possible for OS builds? Generally you generate a private key, get a cert for it, give it to Apple so they sign it with their key, and then you use the private key to sign your build. I have never seen a guide do a two-level process, and I am not convinced it is allowed.


> It can be on tape, or optical, or silicon, or paper.

You can pick up a hardware security module for a few thousand bucks. No excuse not to.


I see a good excuse right there: the few thousand bucks.

I'd rather use one of the most reliable and cheapest hardware security modules we know of: paper.

Print a bunch of QR/datamatrix codes with your key. Keep one in a fireproof safe in your house, and another one elsewhere.

Total cost: ~$0.10 (+ the multipurpose safe, if needed)
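
Something like this, assuming the common qrencode and zbar tools (base64 because QR codes handle text far more reliably than raw binary):

  # key -> paper: print the resulting PNG, twice
  base64 < signing.key | qrencode -o key-backup.png
  # paper -> key: after scanning the printout back in
  zbarimg --quiet --raw key-backup.png | base64 -d > signing.key.restored
  # note: one QR code tops out around 3 KB, so large keys may need splitting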


Printers often have hard drives with cached pages


That's why you buy a printer, then destroy it with a baseball bat after you print.

It is a bit expensive when it gets to 5-10 printers but still cheaper than the thousands.


Put the printer in the safe with the paper?


Yubico will sell you one for $650

https://www.yubico.com/store/


Question.

I've noticed a lot of websites import from other sites, instead of local.

<script src="https://scriptscdn.com/libv1.3"></script>

I almost never see a hash in there. Is this as dangerous as it looks? Why don't people just use a hash?


1. Yes

2. Because that requires you to know how to find the hash and add it.

Truthfully the burden should be on the third party that's serving the script (where did you copy that HTML from in the first place?), but they aren't incentivized to have other sites use a hash.
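
For what it's worth, computing one is a one-liner (using the hypothetical CDN URL from upthread):

  curl -s https://scriptscdn.com/libv1.3 \
      | openssl dgst -sha384 -binary | openssl base64 -A

  # then paste the output into the tag:
  # <script src="https://scriptscdn.com/libv1.3"
  #         integrity="sha384-<output>" crossorigin="anonymous"></script>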


Well, to be honest, the browsers could super easily solve that. In dev mode, just issue a warning "loaded script that has hash X but isn't statically defined. This is a huge security risk. Read more here" and that's it. Then you can just add the script, run the site, check the logs and add the hash, done.


You can define a CSP header to only execute third-party scripts with known hashes.
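
Roughly like this (the hash value is a placeholder; note that under CSP3, applying a hash-source to an external script also requires the script tag to carry a matching integrity attribute):

  Content-Security-Policy: script-src 'self' 'sha384-<base64-hash>'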


But that doesn't make it easy to integrate a new script from an author who doesn't provide the hash already.


Vendor your dependencies. It’s better for you as a maintainer anyway, since caching only works[0] with first party domains with any reliability.

And once you vendor your dependencies you can calculate the hash yourself

[0]: there are caveats to this


How would third-party distribution like a CDN affect hashing?

I think https and integrity hashes address two orthogonal attack vectors.


Because if you're not getting the real benefit (improved response times due to caching), you can stop worrying about hashing it properly or not, and simply serve a copy you know to be good (or at least known, and probably version controlled). Now you don't need to hash, or know which hash is correct, or worry about the user getting served the wrong file because someone else got hacked.


Not sure I follow; how does hashing mean you lose improved response times or caching?

Do you mean that hashing the file takes time? I guess that can be significant, but it's probably 2 or 3 cycles per byte, and the average JS size is like 10kB tops. 30k cycles doesn't look like much; it's about ten millionths of a second.


No, the hashing, whether generating or checking, is very fast, like you said. Hashing itself isn't the culprit; the issue is the battle between browsers and those fingerprinting users.

Originally the point of using a shared CDN like this was that if others used it too, the file would already be cached on the user's computer, making it even faster. But this feature was used for fingerprinting users by checking which files from other websites were cached, and browsers have isolated the caches in response, which makes it impossible to get the speed benefits from before.

So if you're not getting that speed benefit, and only really getting a tiny bandwidth reduction, the risks of serving the file from a 3rd party (which could be mitigated by the hashes) aren't worth it compared to simply vendoring the file and serving it yourself.

So it's not that hashing prevents caching or lowers response times, but that the risk it mitigates isn't worth the effort. Just err on the side of serving the file yourself.


It’s about control foremost. If you vendor your dependencies you know what you’re serving and can calculate the hash and use it in your CSP, and provides more stability for versioning.

Plus, as mentioned, only first-party origins enjoy the benefits of caching content for faster load times, so you get an additional benefit.


But you can... get the hash yourself?

wget url; sha256 file


Of course you can. But when it comes to security, one thing is very important in practice: making the secure way of doing things as easy as possible.

So, why did you not actually post the correct shell script? Apparently that would have been more effort to get right and verify, right? And it would also have to work for every OS. And there you have it: if someone first has to figure out which script to run, some percentage will give up here. And that's my point: the browser should make it as easy as possible to avoid that from happening.


Cool. Now your site will break if the upstream pushes a fix while you're on vacation.


> In dev mode, just issue a warning "loaded script that has hash X but isn't statically defined.

How does the browser know which files to warn you about? What about scripts that are generated dynamically and have no static hash? There's plenty of reasons why you wouldn't want this.


It just warns about all embedded non-same-origin scripts that don't have a hash.

> What about scripts that are generated dynamically and have no static hash?

Well, then the warning is still valid because this is a security risk. I guess it'd be fine to be able to suppress the warning explicitly in those cases.

> There's plenty of reasons why you wouldn't want this.

For example? Honestly curious where you would not want a warning by default.


I wish popular browsers would get together and release an update that says:

- After version X we are displaying a prominent popup if a script isn't loaded with a hash

- After version Y we are blocking scripts loaded without hashes

They could solve this problem in a year or so, and if devs are too lazy to specify a hash when loading scripts then their site will break.


Literally every website that uses JSONP will stop working if that happened. This would break the web in fundamental ways. If we're going to break the web in fundamental ways, resource integrity is hardly among the things that I'd be interested in changing.
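
For anyone who hasn't run into it: a JSONP response is a script generated per request (your JSON wrapped in a call to a callback you name), so there is no stable file to hash. A sketch, with a made-up endpoint:

  <script>
    function handleData(data) { console.log(data); }
  </script>
  <!-- the server responds with: handleData({...}) -->
  <script src="https://api.example.com/data?callback=handleData"></script>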


Why would people choose JSONP over CORS?

There are security risks with JSONP (a hack to bypass the same-origin policy), and its successor (CORS) has been around since 2009, so phasing it out may be a good thing.

https://dev.to/benregenspan/the-state-of-jsonp-and-jsonp-vul...


Good. JSONP requests to a domain you don't control are a security nightmare.


I don't think it's just laziness. There are use cases where the libraries are designed to be updated automatically.

Also, some of the tracking scripts aren't strictly static content, I think; maybe their strategy to fingerprint browsers involves sending different shit to different users.


If you are the one serving the website, then you are the one generating the hash. If you want to serve different stuff then you could dynamically generate the hash for that different stuff rather than hard code it statically.

Specifying a script hash says that you as the owner of that site agree to load the content only if it matches the hash. Presumably you trust the content enough to serve it to your users.
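
A sketch in Node of what "dynamically generate the hash" could look like (the helper name is made up):

  const crypto = require('crypto');

  // compute the SRI value for whatever script body you're about to serve
  function sriFor(scriptBody) {
    const digest = crypto.createHash('sha384').update(scriptBody).digest('base64');
    return 'sha384-' + digest;
  }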


Yes it is. Hashes must absolutely be used in that case.


It should just not be done at all. But the main browser vendor loves tracking so they won't forbid this.


Are you saying Chrome should block all script includes that don't have hashes? That'll break tons of sites. See "Don't break the web"[1].

Disclosure: I work at Google, but not on Chrome.

[1] https://flbrack.com/posts/2023-02-15-dont-break-the-web/


Also expired certificates break a lot of websites… should we disable checking?


Certificate expiration isn't an unanticipated regression. You know when you get a certificate when it will expire.


I don't mean to be pedantic, but not always--see the recent DigiCert delayed revocation issues. I will admit it is rare though and more often than not, you (should) know when your certs are going to expire.


Those websites set up the expiring certificate themselves.


Maybe, but just from a security point of view it's totally fine.


Getting tracked is less secure than not getting tracked.


Getting hacked is less secure than getting tracked.


Very clever. But getting tracked doesn't in any way protect you from getting hacked. It just exposes you to more risks, including getting hacked.


Fair enough.


Hi. I'm an electron app developer. I use electron builder paired with AWS S3 for auto update.

I have always put Windows signing on hold due to the cost of a commercial certificate.

Is the Azure Trusted Signing significantly cheaper than obtaining a commercial certificate? Can I run it on my CI as part of my build pipeline?


Azure Trusted Signing is one of the best things Microsoft has done for app developers in the last year; I'm really happy with it. It's $9.99/month and open both to companies and individuals who can verify their identity (it used to be companies only). You really just call signtool.exe with a custom DLL.

I wrote @electron/windows-sign specifically to cover it: https://github.com/electron/windows-sign

Reference implementation: https://github.com/felixrieseberg/windows95/blob/master/forg...
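
For the curious, the invocation ends up looking roughly like this; the DLL and metadata paths come from the Trusted Signing client tools and are placeholders here:

  signtool sign /v /fd SHA256 /tr http://timestamp.acs.microsoft.com /td SHA256 ^
      /dlib "Azure.CodeSigning.Dlib.dll" /dmdf metadata.json MyApp.exe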


The big limitation with Azure Trusted Signing is that your organization needs to be at least 3 years old. Seems to be a weird case where developers that could benefit from this solution are pushed towards doing something else, with no big reason to switch back later.


That limitation should go away when Trusted Signing graduates from preview to GA. The current limitation is because the CA rules say you must perform identity validation of the requester for orgs younger than 3 years old, which Microsoft isn't set up for yet.


This is not true. Or maybe it is but they missed me? I signed up with a brand new company without issue.


Hi. This is very helpful. Thanks for sharing!


> No magic.

There's plenty of magic. I think Electron Forge does too many things, like trying to be the bundler. Is it possible to set up a custom build system / bundling with it, or are you forced to use Vite? I guess that even if you can, you pull in all those dependencies when you install it, and naturally you can't opt out of that. Those dev dependencies involved in the build process are higher impact than some production dependencies that run in a sandboxed tab process (because a tiny malicious dependency could insert any code into the app's fully privileged process). I have not shipped my app yet, but I am betting on esbuild (because it's just one Go binary) and Electron Builder (electron.build).


Code signing is a really excellent place to look at ponying up the money for one of those hardware security modules that trigger sticker shock. The ones on their own PCI card with potted chips and optional Byzantine Generals access cards and consultants wearing ties. It's cheaper than blowing six months of developer time trying to fake it (remember, it will always take you twice as long as you think it will).

I built one code signing system after being the “rubber duck” for a gentleman who built another, and both used HSM cards and not cheap ones. Not those shitty little USB ones. One protected cellphones, the other protected commercial aviation.


> For Windows signing, use Azure Trusted Signing

I recently checked it out as an alternative to renewing our signing cert, but it doesn't support issuing EV certs.

I've understood it as: an EV code signing cert on Windows is required for drivers, but it somehow also gives you better SmartScreen reputation, making it useful even for user-space apps in enterprisey settings?

Not sure if this is FUD spread by the EV CAs or not, though.


I'm not sure if they're technically considered EV, but mine is linked to my corporation and I get no virus warnings at all during install.


You know, there's this nice little thing called the App Store on the Mac, and it can auto-update.


All apps on the Mac App Store have to be sandboxed, which is great for the end user, but a pain in the neck for the run-of-the-mill Electron app dev.


And yet, tons of developers install GitHub apps that ask for full permissions to control all repos, and those apps can therefore do the same things to every dev using those services.

GitHub should be ashamed this possibility even exists, and doubly ashamed that their permission system and UX are so poorly conceived that they lead apps to ask for all the permissions.

IMO, GitHub should spend significant effort so that the default is to present the user with a list of repos they want some GitHub integration to have permissions for, and then, for each repo, the specific permissions needed. It should be designed so that minimal permissions are encouraged.

As it is, the path of least resistance for app devs is "give me root" and for users to say "ok, sure"


Why spend that effort when any code you run on your machine (such as dependency post-install scripts, or the dependencies themselves!) can just run `gh auth token` and grab a token for all the code you push up?

By design, the gh cli wants write access to everything on github you can access.


I will note that, at least for our GitHub Enterprise setup, permissions are all granular; tokens are managed by the org and require an approval process.

I’m not sure how much of this is “standard” for an org though.


I personally haven't worked with many of the GitHub apps that you seem to refer to, but the few that I've used are limited to the specific repositories I grant, and within those repositories their access is scoped as well. I figured this is all stuff that can be controlled on GitHub's side. Am I mistaken?


Yeah, turns out "modern" software development has more holes than Swiss cheese. What else is new?


Question that I hope you can help me with. I'm working on an Electron app that works offline. I plan to sell it cheap, like a $5 one-time payment.

It won't have licenses or anything, so if somebody wants to distribute it outside my website, they will be able to.

If I just want to point to an exe file link in S3, without auto-updates, is just compiling and uploading enough?



