
Exactly, it doesn't work very well in practice.

Even when working alone, the complexity gradually creeps up on you.

Because it's all made to work together: start pulling anywhere, and before you know it you're using another feature, and another, and so on.

And many features interact in exotic and hard to predict ways, so hard that entire careers have been spent on trying and failing to master the language.


Like I recently wrote in response to someone here who was fascinated that mailing lists were “still a thing in 2025”:

Please, inform us of an alternative which is:

• Non-proprietary

• Federated

• Archivable

• Accessible

• Not dependent on a specific company

— <https://news.ycombinator.com/item?id=43972038>


Hello. I didn't invent Protocol Buffers, but I did write version 2 and was responsible for open sourcing it. I believe I am the author of the "manifesto" entitled "required considered harmful" mentioned in the footnote. Note that I mostly haven't touched Protobufs since I left Google in early 2013, but I have created Cap'n Proto since then, which I imagine this guy would criticize in similar ways.

This article appears to be written by a programming language design theorist who, unfortunately, does not understand (or, perhaps, does not value) practical software engineering. Type theory is a lot of fun to think about, but being simple and elegant from a type theory perspective does not necessarily translate to real value in real systems. Protobuf has undoubtedly, empirically proven its real value in real systems, despite its admittedly large number of warts.

The main thing that the author of this article does not seem to understand -- and, indeed, many PL theorists seem to miss -- is that the main challenge in real-world software engineering is not writing code but changing code once it is written and deployed. In general, type systems can be both helpful and harmful when it comes to changing code -- type systems are invaluable for detecting problems introduced by a change, but an overly-rigid type system can be a hindrance if it means common types of changes are difficult to make.

This is especially true when it comes to protocols, because in a distributed system, you cannot update both sides of a protocol simultaneously. I have found that type theorists tend to promote "version negotiation" schemes where the two sides agree on one rigid protocol to follow, but this is extremely painful in practice: you end up needing to maintain parallel code paths, leading to ugly and hard-to-test code. Inevitably, developers are pushed towards hacks in order to avoid protocol changes, which makes things worse.

I don't have time to address all the author's points, so let me choose a few that I think are representative of the misunderstanding.

> Make all fields in a message required. This makes messages product types.

> Promote oneof fields to instead be standalone data types. These are coproduct types.

This seems to miss the point of optional fields. Optional fields are not primarily about nullability but about compatibility. Protobuf's single most important feature is the ability to add new fields over time while maintaining compatibility. This has proven -- in real practice, not in theory -- to be an extremely powerful way to allow protocol evolution. It allows developers to build new features with minimal work.

Real-world practice has also shown that quite often, fields that originally seemed to be "required" turn out to be optional over time, hence the "required considered harmful" manifesto. In practice, you want to declare all fields optional to give yourself maximum flexibility for change.
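As a hypothetical sketch of the kind of evolution being described (message and field names are illustrative, not from any real schema; the "V2" suffix just shows before/after — in practice you'd edit the same message in place):

```proto
syntax = "proto2";

// Version already deployed everywhere:
message UserProfile {
  optional string name = 1;
}

// Later version: adding a field under a fresh tag number is safe.
// Old readers simply skip field 2; old messages leave it unset.
message UserProfileV2 {
  optional string name  = 1;
  optional string email = 2;  // new, hypothetical field
}
```

Because every field is optional, old and new binaries can exchange messages freely during a rolling deployment; had `name` been `required`, removing it later would break every deployed reader.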

The author dismisses this later on:

> What protobuffers are is permissive. They manage to not shit the bed when receiving messages from the past or from the future because they make absolutely no promises about what your data will look like. Everything is optional! But if you need it anyway, protobuffers will happily cook up and serve you something that typechecks, regardless of whether or not it's meaningful.

In real world practice, the permissiveness of Protocol Buffers has proven to be a powerful way to allow for protocols to change over time.

Maybe there's an amazing type system idea out there that would be even better, but I don't know what it is. Certainly the usual proposals I see seem like steps backwards. I'd love to be proven wrong, but not on the basis of perceived elegance and simplicity, but rather in real-world use.

> oneof fields can't be repeated.

(background: A "oneof" is essentially a tagged union -- a "sum type" for type theorists. A "repeated field" is an array.)

Two things:

1. It's that way because the "oneof" pattern long-predates the "oneof" language construct. A "oneof" is actually syntax sugar for a bunch of "optional" fields where exactly one is expected to be filled in. Lots of protocols used this pattern before I added "oneof" to the language, and I wanted those protocols to be able to upgrade to the new construct without breaking compatibility.

You might argue that this is a side-effect of a system evolving over time rather than being designed, and you'd be right. However, there is no such thing as a successful system which was designed perfectly upfront. All successful systems become successful by evolving, and thus you will always see this kind of wart in anything that works well. You should want a system that thinks about its existing users when creating new features, because once you adopt it, you'll be an existing user.

2. You actually do not want a oneof field to be repeated!

Here's the problem: Say you have your repeated "oneof" representing an array of values where each value can be one of 10 different types. For a concrete example, let's say you're writing a parser and they represent tokens (number, identifier, string, operator, etc.).

Now, at some point later on, you realize there's some additional piece of data you want to attach to every element. In our example, it could be that you now want to record the original source location (line and column number) where the token appeared.

How do you make this change without breaking compatibility? Now you wish that you had defined your array as an array of messages, each containing a oneof, so that you could add a new field to that message. But because you didn't, you're probably stuck creating a parallel array to store your new field. That sucks.

In every single case where you might want a repeated oneof, you always want to wrap it in a message (product type), and then repeat that. That's exactly what you can do with the existing design.

The author's complaints about several other features have similar stories.

> One possible argument here is that protobuffers will hold onto any information present in a message that they don't understand. In principle this means that it's nondestructive to route a message through an intermediary that doesn't understand this version of its schema. Surely that's a win, isn't it?

> Granted, on paper it's a cool feature. But I've never once seen an application that will actually preserve that property.

OK, well, I've worked on lots of systems -- across three different companies -- where this feature is essential.


I'm curious: how often throughout history has a statement like "that guy is an idiot" referred to an unnamed person who was so easily and widely recognized?

We all know who you are talking about, just like everyone knows what "guy" I'm talking about.


Given how close that election turned out to be, this smear campaign likely changed the presidency, and given George WMD Bush's actions, changed the course of the world for the worse in many ways. (For those who were too young or not yet born at the time, these jokes were MASSIVE, to the extent that they became largely what Al Gore was known for, for years after. So it's not much of an exaggeration to say they had a material impact on his perception and hence the votes.)

Al Gore understood technology, the internet, was a champion for the environment, and it's unbelievable today that he came that close to presidency (and then lost). When people say "we live in the bad timeline", one of the closest good timelines is probably one where this election went differently.


>Will you believe them in the future?

An underdiscussed, frustrating aspect of this whole era is that there is never any true retrospection. There is no adjustment in the credibility of the people who predicted exactly how things would play out, or of the people whose predictions ended up being incredibly wrong. If there are no consequences for being wrong, there end up being no consequences for maliciously lying in the moment, even knowing it's only a matter of time until the lie is exposed: by the time that day comes, they have already moved on to some other lie, and the cycle continues.


I'm not sure if I agree with the premise.

Microsoft had a lot of big successes in the "dead" period -- Xbox, for example, is pretty much absent from this account, as is Surface. They also had a lot of non-successes that were revolutionary in their own right -- basically inventing the first major music streaming service.

Also, most of the key changes at Microsoft were underway before Nadella - most importantly getting business customers to agree to subscription service models and cloud migration.


The rules are more like guidelines, you see.

The universe has had a lot of opportunities to come up with wacky stuff.


If this really was a mistake, the easiest way to deal with it would be to release people from their non-disparagement agreements, which were only signed by departing employees under the duress of losing their vested equity.

It's really easy to make people whole for this, so whether that happens or not is the difference between the apologies being real and just backpedaling because employees got upset.

Edit: Looks like they're doing the right thing here:

> Altman’s initial statement was criticized for doing too little to make things right for former employees, but in an emailed statement, OpenAI told me that “we are identifying and reaching out to former employees who signed a standard exit agreement to make it clear that OpenAI has not and will not cancel their vested equity and releases them from nondisparagement obligations” — which goes much further toward fixing their mistake.


Why in a browser if it's local-first?

Solvespace has the benefit of being a single download/executable.

It also has a constraint solver which has been used in a couple of projects: CADsketcher as you noted, and Dune 3D: https://github.com/dune3d/dune3d where the author noted:

>I ended up directly using solvespace's solver instead of the suggested wrapper code since it didn't expose all of the features I needed. I also had to patch the solver to make it sufficiently fast for the kinds of equations I was generating by symbolically solving equations where applicable.

Any relation to: https://github.com/jay3sh/cadmium ?

Also, for CAD kernels, Manifold was not mentioned: https://github.com/elalish/manifold/wiki/Manifold-Library --- while I understand it to have many of the same disadvantages as OpenCASCADE, it does seem worth mentioning.

Interestingly the kernel was previously discussed here:

https://news.ycombinator.com/item?id=35071317

It seems really interesting/promising, esp. the complete history and editability. I'd love to see that history listed in a pane which could be opened/closed --- add a series of disclosure triangles which would allow hiding finished elements so that one could focus on the current task, and it would be a dream come true for me --- if I can puzzle out the 3D stuff. So far I've crashed and burned on all the apps I've tried (BRL-CAD, FreeCAD, Solvespace, Alibre Atom...) --- the only thing I've been successful w/ is OpenSCAD and similar coding tools.


It always seemed like a reasonable, but still big, assumption that antimatter behaved the same way under gravity. Anti-particles have opposite charge, so maybe it could have made sense that they have opposite "gravitational charge"? But also gravity doesn't have "charge".

So yeah, agreed. A good thing to confirm, even if (especially if) they expected the result to be unexciting.


You can do this:

    pdf2ps a.pdf    # convert to postscript "a.ps"
    vim a.ps        # edit postscript by hand
    ps2pdf a.ps     # convert back to pdf
Some complex PDFs (with embedded JavaScript, animations, etc.) fail to work correctly after this round trip. Yet for "plain" documents this works alright. You can easily remove watermarks, change some words and numbers, etc. Spacing is harder to modify. Of course, you need to know some PostScript.

I hadn't seen this new Libreboot policy. This is fantastic!

The FSF's criteria have become quite calcified and unprincipled at this point. Specifically I'm talking about how blobs loaded from flash are given a pass, while blobs on isolated coprocessors are verboten.

Principle requires that binary blobs in flash (or even ROM) are put in the same class as every other binary blob. And pragmatism for the modern world requires that we incorporate security relationships into our analysis of user freedom.


Upvote for mentioning Subversion.

Back in 2016 I briefly worked at an old, large corporation that was still using CVS.

During that time, management decided that company policy was now to be hip with the younger generation. One very senior manager actually used the expression "we must strive to be the Uber of [our industry]".

This also led to a mandate that we should switch to Git because it was new and shiny.

You can just imagine how steep the learning curve was for all the old hands, especially as some of them had joined the company in the previous century as COBOL programmers.

We were also pulled in to team building sessions where we were encouraged to "think outside of the box" and be "innovative". During one of those sessions I suggested that "you know, we should just use SVN instead" and was promptly shot down.


In the US, I have never seen a severance agreement that did not include confidentiality provisions. However (IMO) the inclusion of non-compete provisions has become more common across all levels of employment and all job categories. The vast majority of US workers will sign these agreements. That cultural trend makes it (a) easier for employers' legal counsel to manage any issues raised by the minority and (b) more difficult, if not impossible, for the minority to negotiate.

We have this kind of censorship in India as well, even in weirdly innocuous places. In James Bond movies, and I think Gone Girl as well, scenes were censored by zooming into characters' faces or with just straight cuts.

This is probably the only reason I maintain a US iTunes account (I used to have to buy gift cards from sketchy sites online to keep this going, but I recently discovered that my Indian Amex card works fine with a US address).

Also trivia for those who are wondering how cuts are made, at least for cinema content: all video and audio assets are usually sent to theatres in full, but there's an XML file called the CPL (composition playlist) that specifies which file is played from which to which frame / timestamp in what sequence. Pure cuts or audio censorship can be handled by just adding an entry to skip the relevant frames or timestamp, or by specifying a censor beep as the audio track for a particular time range.
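As a rough, schematic sketch of the idea (element names approximate the SMPTE CPL structure and the frame counts are invented; this is not spec-accurate XML): each reel entry plays a slice of an asset via an entry point and a duration, so a cut is just two entries that straddle the skipped frames.

```xml
<CompositionPlaylist>
  <ReelList>
    <Reel>
      <AssetList>
        <MainPicture>
          <EntryPoint>0</EntryPoint>      <!-- first frame to play -->
          <Duration>4312</Duration>       <!-- stop just before the cut -->
        </MainPicture>
      </AssetList>
    </Reel>
    <Reel>
      <AssetList>
        <MainPicture>
          <EntryPoint>4440</EntryPoint>   <!-- resume after the skipped frames -->
          <Duration>120000</Duration>
        </MainPicture>
      </AssetList>
    </Reel>
  </ReelList>
</CompositionPlaylist>
```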

https://cinepedia.com/packaging/composition/


You’re so disconnected from reality. Ask yourself how many people are unable to work remotely. If you can’t think of enough examples to justify roads, go outside more.

Hope this asshole never needs to work again. He’s just cost all of his prior employers many, many thousands of dollars in legal review of every hiring committee he’s ever been a part of, and potentially millions of dollars in discrimination settlements, so he’s going to be radioactive to any company but those explicitly on the ideological far right.

So, disclosure policy is kinda an active discussion within the security community, but there is a general move away from coordinated disclosure (aka responsible disclosure), where the vendor and reporter coordinate on disclosing the vulnerability, not publicly disclosing until the vendor okays it.

Coordinated disclosure puts a lot of power in the vendor's hands to simply ignore or delay fixing issues, and frankly may not actually be the "responsible" course of action. Full disclosure, where the first warning anyone has about the issue is when all the details are dropped on the public, _may_ result in a faster patch time, but it also increases the risk of the issue being weaponized during that in-between period. There is the chance it was already being used in the wild without being known, but releasing the information increases the risk of those in-the-wild attacks by reducing the overhead necessary to carry them out.

All that said, there is a newer option that has been pretty steadily gaining popularity over the last seven or so years: deadline-based disclosure. This did exist before, but really gained traction in recent years as Google's Project Zero adopted it as their disclosure policy. This is the idea that reporting a vulnerability to the vendor starts a countdown to public disclosure (90 days is fairly common, but I've seen 30, 60, and 180 reasonably often as well).

I think this deadline-based disclosure option strikes a good balance between the benefits of coordinated, and full disclosures.

Fwiw, there was a good talk from Ben Hawkes about Project Zero's disclosure philosophy at FIRST 2020: https://www.youtube.com/watch?v=9x0ix6Zz4Iw

