Hacker News | ysleepy's comments

I've used it for some time; it feels very much like it's in maintenance mode.

You manage a PKI and have to distribute the keys yourself, no auth/login etc.

It's much better than WireGuard, not requiring O(N) config changes to add a node, and allowing proxy nodes etc.

IIRC key revocation and so on are not easy.


Nebula just had a major release that added IPv6 support for overlay networks. Hardly maintenance mode.

The main company working on it now seems to be adding all the fancy easy-to-use features as a layer on top of Nebula that they are selling. I personally appreciate getting to use the simple core of Nebula as open source. It seems very Unix-y to me: a simple tool that does one thing and does it well.


Nebula does not require O(n) config changes for adding a node.

O(n) is only required for:

- active revocation of a certificate (requires adding the certificate's fingerprint to the blocklist in each node's config file; see the sketch after this list)

- adding/removing a lighthouse (hub for publishing IPs for p2p) or a relay (for traffic that can't go p2p)

- CA rotation
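
For the revocation case, a rough sketch of what that O(n) push amounts to, assuming the current pki.blocklist config key and PyYAML; the fingerprint value is a made-up placeholder:

    # Revoking one host cert means adding its fingerprint to pki.blocklist
    # in every node's config.yml (fingerprint obtainable via nebula-cert print).
    import yaml  # pip install pyyaml

    revoked_fingerprint = "37d9eb605e8e..."  # placeholder, not a real fingerprint

    with open("config.yml") as f:
        cfg = yaml.safe_load(f)

    # Append to pki.blocklist, creating the keys if they don't exist yet.
    cfg.setdefault("pki", {}).setdefault("blocklist", []).append(revoked_fingerprint)

    with open("config.yml", "w") as f:
        yaml.safe_dump(cfg, f)

    # ...then this updated config has to reach all N nodes and be reloaded.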


AFAICT you and 'ysleepy are in agreement.

We are; WireGuard needs O(N) updates to add a new node to every other node's config.
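
To make the O(N) point concrete, a small illustrative sketch with hypothetical peer names, keys and IPs, assuming plain wg-quick style configs:

    # Adding one WireGuard peer means touching every existing node's config.
    peers = {
        "node-a": {"pubkey": "AAAA...", "ip": "10.0.0.1/32"},
        "node-b": {"pubkey": "BBBB...", "ip": "10.0.0.2/32"},
        "node-c": {"pubkey": "CCCC...", "ip": "10.0.0.3/32"},
    }
    new_peer = {"pubkey": "DDDD...", "ip": "10.0.0.4/32"}

    def peer_stanza(p):
        # Minimal [Peer] section as it would appear in wg0.conf.
        return f"[Peer]\nPublicKey = {p['pubkey']}\nAllowedIPs = {p['ip']}\n"

    # O(N): every existing node gets the new [Peer] section appended...
    for name in peers:
        print(f"--- append to {name}'s wg0.conf ---")
        print(peer_stanza(new_peer))

    # ...and the new node needs a [Peer] section for every existing node.
    print(f"--- the new node's wg0.conf needs {len(peers)} [Peer] sections ---")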

This problem has been brought up in the OpenZiti community many times. I like Nebula, but it's not 'truly open source'.

What do you mean?

Referring to the previous person's comment: you need to manage a PKI and distribute the keys yourself, with no auth/login etc.

How does that make it not "truly open source"?

I made a shell script that does most of that for my needs.
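
The gist of it, sketched in Python rather than shell for illustration, assuming the stock nebula-cert CLI (with ca.key/ca.crt in the working directory) and scp for distribution; the host name, IP and target are made up:

    import subprocess

    def enroll(host, overlay_ip, target):
        # Sign a host certificate against the local CA.
        subprocess.run(
            ["nebula-cert", "sign", "-name", host, "-ip", overlay_ip],
            check=True,
        )
        # Copy cert, key and CA cert to the new node; distribution stays manual.
        for f in (f"{host}.crt", f"{host}.key", "ca.crt"):
            subprocess.run(["scp", f, f"{target}:/etc/nebula/"], check=True)

    enroll("node-d", "192.168.100.4/24", "root@node-d.example.org")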


Fair, I was being loose with my language. What I should have said is that it does not come fully featured as open source; you need to do a certain amount of rolling your own.

The same could be said for a web server, a RADIUS server, etc. I mean, SSH "requires" a network to be remotely useful :)

Edit, since I sadly can't reply:

You're right, that was a bad example.

I can probably list at least a few dozen things that all require certificates though, which was really my point. Everything has dependencies.

Also if you just... don't trust big tech, run your own CA.


Right, but if certificates are a fundamental part of your design, you should include the functional mechanisms to manage them imho (i.e., key distribution, auth/login). The developers created those mechanisms, but keep them in the commercial product. Other overlays which use PKI include those functions in the FOSS.

Nah, I don't buy that. A network is not a functional requirement of SSH etc. in your use case.


Oddly, the notification brief in Germany specifies false-high glucose readings, which would explain the urgency of the problem much better.

For high glucose you inject insulin, but if you don't really have high glucose you end up with dangerously low levels leading to coma or death.

https://www.bfarm.de/SharedDocs/Kundeninfos/DE/10/2025/42777...


That sounds more sensible. I was thinking maybe the error caused DKA due to pumps suspending all insulin overnight or something.


How about no.

Cloudflare is a cancer inserting itself into all sorts of communication I'd rather have directly with the other party, like my bank, email, blogs, health providers etc.

Gatekeeping the broader internet from people in poorer countries, people using VPNs etc.

I predict they will be the first pushing DRM blobs instead of html/js and killing the open web.


+1

Obligatory resource: https://0xacab.org/dCF/deCloudflare

Any single US entity trying to MITM such large swaths of global internet traffic is inherently dangerous to global freedom. They're a single point of failure for national security letters and secret gag orders that can compel them to perform targeted censorship, backdoor all sorts of software via HTTP distribution channels, assist in US disinformation operations by rewriting third-party content, etc. They could be logging literally every plaintext HTTP request and response passing through their servers and leaving it wide open in some NoSQL database for hackers to go steal from someday - users have no way to trust that Cloudflare is even competently qualified to protect what they collect, and there's nothing stopping Cloudflare from blatantly lying about what they collect. This wouldn't be as big of an issue if they weren't collecting your social security / national insurance number, name, age, date of birth, address, contact information, credit card details, usernames, passwords, and every other piece of data under the sun on sites that sit behind CF, including government websites and websites that function more or less as public utilities.

Cloudflare poses an impossible-to-overstate threat to your right to privacy, your right to freedom of speech, to democracy itself, to say nothing of the threat they pose to the free and open web. They are very nearly as large of a stain on what was arguably one of the crowning accomplishments of the human race (the internet) as the largest evil corporations on the planet - Microsoft, Alphabet (Google), Amazon, Meta (Facebook), etc.


Still, why endorse and practically make everyone implement an algorithm that only the NSA wants, when a superset is already standardised?

This is about the NSA, a known bad actor, forcing through their own special version of a crypto building block that they might downgrade-attack me down to.

I pay something like 1% overhead to also do ECC, and renegotiating down to the non-hybrid suite costs roughly 2x plus an extra round trip. This makes no sense apart from downgrade attacks.

If it turns out ECC is completely broken, we can add the PQ-only suite then.
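
For a rough sense of the numbers, a back-of-the-envelope sketch assuming the usual sizes (32-byte X25519 public keys, 1184-byte ML-KEM-768 encapsulation keys, 1088-byte ciphertexts) and ignoring handshake framing and CPU cost:

    # Key-share bytes on the wire, pure ML-KEM-768 vs the X25519 hybrid.
    X25519_PUB  = 32    # X25519 public key
    MLKEM768_EK = 1184  # ML-KEM-768 encapsulation key (client -> server)
    MLKEM768_CT = 1088  # ML-KEM-768 ciphertext (server -> client)

    pure_pq = MLKEM768_EK + MLKEM768_CT
    hybrid  = (MLKEM768_EK + X25519_PUB) + (MLKEM768_CT + X25519_PUB)

    extra = hybrid - pure_pq
    print(f"pure ML-KEM-768: {pure_pq} B, hybrid: {hybrid} B")
    print(f"hybrid adds {extra} B, ~{100 * extra / pure_pq:.1f}% more key-share bytes")
    # Guessing the wrong key-share group triggers a HelloRetryRequest in TLS 1.3,
    # i.e. a resent key share and an extra round trip.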


Nobody has to implement the algorithm only the NSA wants! That's not how RFCs work.


Tell yourself what you want, but this sort of AI-positive proclamation will make your project seem less trustworthy to many people.

I choose not to use a vibe-coded password manager, rigorous review or not, to protect my entire digital existence, monetary assets and reputation.

It's the pinnacle of safety requirements: a memory-unsafe language, cryptography, incredibly high stakes.

I have the distinct displeasure of having to review LLM output in pull requests, and unfailingly it contains code the submitter doesn't fully understand.


It's very easy to do this with LXC containers in Proxmox now that passing devices to a container is possible from the UI.


With containers, making backups seemed to become impractical for large libraries, since it seems to copy files individually?

I had to switch to a VM because of that, passing through the GPU.


Just as easy with VMs; you just have to pass the device to the VM.


The only downside is that you essentially lock the GPU to one VM, which there is nothing wrong with doing. With LXC, at least, you can share the device across multiple containers.


They're fine, but they're incompatible with building fat-jars to have single-file deployment, and dead to me because of that. Spring does some ugly jar-in-jar custom classloader stuff, which I hate on principle because it's Spring.

Oracle hates that people build fat-jars and refuses to address the huge benefit of single-file deployables.


Nice to see so many researchers calling out this monstrosity.

Thank you.


Wouldn't it be more useful to measure the number of rows the model can process while still hitting 100% accuracy?

