Hacker News | mindfulmark's comments

Anecdotal, but I'm extremely disappointed with my Hyundai Tucson purchase. It's the first car I've owned. The drive train is gone on it and the mechanic says it's a common issue. Only 140k on it, 2019. It's hard to believe I paid so much for it and got so little use.


140,000 miles is a lot of use in 7 years. The "expected" design life of cars can be 10 years/100,000 miles. Sorry to hear about your drivetrain issues. I don't know about the Tucson in particular, but many manufacturers are lying about transmissions having "lifetime" fluid when the transmission manufacturers themselves recommend fluid changes around 60-80k. If you don't change it, yes, technically the car will make it to 100k, but then no shop will touch the transmission fluid for fear of wrecking it.


More like we are all equally unprincipled when it comes to survival


I’ve reviewed stacked PRs a couple of times and found it pretty terrible. The only one that ends up making any sense is the first. Better off with either just one single big PR, or don’t ask anyone to look at the next PR until the first one is merged.


Disagree. Gukesh was constantly putting pressure on Ding to find defensive moves and Ding finally made a mistake. The fact that it happened when it did just makes it even more dramatic. We know from the other matches that Ding is capable of finding them, and the fact that he didn't just highlights that they're both human, both under extreme pressure and that it's not just mindless computation.


I'm not sure we disagree at all. Gukesh's strategy throughout the match was to constantly ask difficult questions and the surprise really was that Ding didn't fold earlier.


I guess I was just disagreeing with your opening sentence, the rest was spot on.


So why call it a horrible finish?


Because as a chess fan and just as a human being my heart goes out to Ding Liren who seems like a genuinely likeable and nice human being who has been open about the tremendous struggle he has had with mental health etc since winning the world championships. To pull himself out of a hole that deep and play really great chess for 13 and 9/10s matches and then lose it with a blunder at the last second is awful.

And I say that as 100% someone who wanted Gukesh to win from the beginning, which is a result I think is great for chess and I think is “objectively correct” in the sense that he has played better chess and has been (apart from Magnus Carlsen and his compatriot Arjun Erigaisi who is also a complete monster) the story of the chess world for the last year.


Because the ending was pretty meh. All this excitement, and then Ding just flubs an endgame that most super GMs should be able to draw against Stockfish.

The best finales are often when two players at their best duke it out, and one comes out on top. This was simply not Ding's best.


Can a bear be blamed for murder? Somewhere in between the two is where AI models currently are, and they’re going to continue getting closer to the bear scenario.


Second this. Libraries like Material UI are battle tested and cover the vast majority of components you’re going to need. You can extend or theme them to match your needs, and they often include a built-in design system. The likelihood of building something better from scratch is low.


It’s interesting to read comment threads of people that are dead set against Typescript. It’s a tool that has very few downsides and that improves nearly every single line of code you write. Either they’re scared to learn something new, not willing to take the time, or misunderstanding how useful it is. For anyone reading these comments and agreeing with Typescript naysayers, I would think more about why the commenter and yourself feel that way. You’re putting yourself at a big disadvantage.


As with anything: "it depends"™. I did not notice "every single line of code" getting better at all. Yes, it makes things easier on a large team where people do not have time to do codebase discovery - or where people are moved around to be highly interchangeable - on big codebases. Yes, static verification can help those teams and those codebases.

But it also introduces a lot of extra work "just to appease the type system". It rarely improves performance (if ever). Because TS has no runtime inference/validation, working with larger libraries or the browser can be a chore, because half of your code is type signatures or casts.

So - not necessarily a naysayer, but I do believe that TS is oversold and with smaller teams/projects it might be slowing you down as opposed to helping.


I manage a relatively junior developer who has been using ts-ignore statements a couple of times. I have told him that every time he feels inclined to use ts-ignore or do type coercion, he should call me first.

Every single time, it is a reasoning flaw: a solution that is subpar and bug-riddled. Had they just let the types guide them, they would have become better developers and not broken the application.

I am curious though. Can you provide a snippet where types would be a detriment?


The worst I had to deal with was converting anything browser-native into data structures that would satisfy the type checker (dealing with native references), and the whole "struct or class instance" dichotomy. Specifically: when there is a lot of DOM-native input (like drag&drop events, their targets, and the targets of their targets) that has to be "repackaged" into a TS tree (ending up with properties which would be a JS version of void*).

An example of what I call "ceremony" would be

  interface BlockIndex {
    [key: string]: UploaderBlock;
  }
  const perServerId = {} as BlockIndex;
  uploaderFiles.map((fe) => fe.blocks.map((b) => b.serverId && (perServerId[b.serverId] = b)));
While somewhat useful, this is in internal code which never gets used directly, and there are 4 lines of ceremony for 1 line of actual code.


The ceremony is caused not by TypeScript but by your misuse of map. You don’t need to create perServerId as an object first. Instead you could flatten fe.blocks, then filter by b.serverId, then map to key/value pairs and use Object.fromEntries to turn the result into a keyed object.

Something like:

    const perServerId = Object.fromEntries(uploaderFiles.flatMap(fe => fe.blocks).filter(b => Boolean(b.serverId)).map(b => [b.serverId,b]))
And typescript infers the types correctly. But I still wouldn’t write it as one line, and I’d use lodash instead.


For most frameworks these typings are built in (e.g. React).

My expectation is that there are packages/DOM typings so you don't need to write them?

Regardless, your point stands: typing external dependencies is a pain.


Duck typing can lead to a false sense of security when you /think/ you have Foo when in reality you have Bar with the same shape.

Also Typescript sucks at keeping track of type changes in a single scope. While in Rust I can assign string to foo and then update it with int, I can't in Typescript. This leads to worse types or worse code for the same operation. Combined with typescript's lack of statements as values, conditionally initializing a value is pretty obtuse.

Those are the issues that come to mind right now.


> Duck typing can lead to a false sense of security when you /think/ you have Foo when in reality you have Bar with the same shape.

This is literally always your problem with JavaScript; it's only sometimes your problem with TypeScript. It's a weird argument.

> Also Typescript sucks at keeping track of type changes in a single scope.

Isn't this considered a very bad practice? Also, Rust does not allow this; it only allows shadowing.

> Combined with typescript's lack of statements as values, conditionally initializing a value is pretty obtuse.

Can you give an example?


For the first one: It's not an issue in JavaScript because there isn't some compiler telling me yeah that's fine, I have to confirm myself.

For the second one: I know it is shadowing; what I mean is that I commonly find I'd like to have it in TypeScript as well. In JavaScript it's not necessary, since I can just use the same variable.

For the third one: If I have some string variable that needs to be created from either one set of instructions or another, in Rust I do exactly that:

let foo = if x { ... } else { ... }

In ts your options are making it mutable undefined and mutate it inside the if else, using a very weird unreadable ternary, using an IIFE that returns into the constant, or creating extra functions to move the logic out. None of these are even close in readability, locality, or soundness to the rust example.
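(For readers unfamiliar with the options listed above, here's a rough sketch of the two most common ones; the flag `x` and the variable names are made up:)

```typescript
// Two common TypeScript workarounds for the missing if-as-expression,
// using a made-up boolean flag `x`.
const x: boolean = Math.random() < 2; // always true, but typed as boolean

// Option 1: declare first, assign inside the branches.
let viaMutation: string;
if (x) {
  viaMutation = "first set of instructions";
} else {
  viaMutation = "second set of instructions";
}

// Option 2: an IIFE that returns into a constant.
const viaIife = (() => {
  if (x) return "first set of instructions";
  return "second set of instructions";
})();
```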

I find the _combination_ of those things that make it harder to write ts than js.


1: You are similarly able to confirm it yourself when you duck-type in TS. Regardless: once you've duck-typed once in TS, you are then at least helped by the compiler. Again, this is really not a good argument at all.

2: This is a programming practice I never see and would seriously question whether it's ever necessary, let alone "commonly". I think you may have picked up bad practices from writing in dynamic languages. Please see this for a few example arguments against this practice: https://softwareengineering.stackexchange.com/questions/1873...

3: You are now debating that Rust has better typing than TS, which makes sense because Rust is made from the ground up to have extremely well done static type checking, whereas typescript has to comply with dynamic typing originating from JS. It follows trivially that Rust has the better design because it has more freedom to do what it wants. JS < TS < Rust


I am curious about any example where changing a type within a scope is more performant or more readable.

It is not really an argument against TypeScript that JavaScript is so bad that you need to spend time tracking your changes.


> While in Rust I can assign string to foo and then update it with int

When do you need to do that? Can you give an example?


I suspect they're talking about shadowing. You can't change the type of an existing variable, but you can create a new variable with the same name but a different type.


You can use branded types for the first case.
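(A minimal sketch of that pattern, with made-up names - the brand is a phantom property that makes structurally identical shapes incompatible:)

```typescript
// Branded types: Foo and Bar share the same runtime shape, but the
// compile-time "brand" keeps them from being used interchangeably.
type Foo = { id: string; readonly __brand: "Foo" };
type Bar = { id: string; readonly __brand: "Bar" };

function makeFoo(id: string): Foo {
  return { id, __brand: "Foo" };
}

function describeFoo(foo: Foo): string {
  return `Foo:${foo.id}`;
}

const foo = makeFoo("42");
const label = describeFoo(foo); // ok
// const bar: Bar = { id: "42", __brand: "Bar" };
// describeFoo(bar); // compile error: "Bar" is not assignable to "Foo"
```

In production code the brand is often a `unique symbol` that is declared but never created, so there's no runtime cost; the plain string property above is just the simplest version.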


Even as a one person developer, you inevitably need to come back to old code and understand what's happening. Types help with that. The size of the team or codebase is irrelevant.


Small projects have a habit of getting bigger, and small teams have a habit of growing too - usually to deal with the mess of the small project that is now bigger.


Is that a bad thing? If you're building an MVP, do you really worry about how 2k developers are going to work on this 10 years from now?

Requirements change and the code base needs to adjust with those requirements; that's gonna happen no matter what. I've met a lot of people trying to predict future requirements, deciding to overengineer today for a brighter future. I have very rarely seen anyone guess the future requirements accurately.


Small projects become bigger projects much faster than that! I’m not suggesting that anybody should think too far ahead when it comes to building mvps, but if it’s a choice between a typed language and a dynamic one like JavaScript, baking a poor decision in early is going to hurt later. And later is much sooner than you think.

That’s not going to negatively affect your initial velocity, if it does, the team isn’t strong enough.

If the project is just a one off website or something genuinely small, sure, who cares? Otherwise it's worth realising that you'll be dealing with the fallout of poor early decisions pretty quickly.


> That’s not going to negatively affect your initial velocity, if it does, the team isn’t strong enough.

This. Note also that "a poor decision" might as well be "have developers fight the type system instead of delivering UI and pivoting if users don't like it".


There are also cases where small teams that have grown grow disproportionately to the size of the product, and while the product is set up in a fairly sane way (and there is little wrong with it!), having 20 fresh people swarm into it destroys both the architecture and the execution. With a small team, enforcing cohesion on both of these is much easier! So a small project might as well stay small, but keeping it that way should be somewhat of a priority.

Mythical man month in action.


Some people also hate parking between the lines and returning shopping carts at the grocery store. Those are similar in that they have negative value to the individual but help the community around them.

TS often can interrupt an individual's flow, so feels like a negative value. It's only when the whole team is using it on a bigger codebase with lots of changes that the benefits start to manifest.


Not just with teams, going back to a solo project after some time is so much more of a hassle if you don't have any types to guide you.


A million times this. Many a time I have done something "clever" to elegantly solve a problem in Javascript, only to come back to it a year later and not understand what the hell I did. The context for the problem wasn't fresh, so I didn't understand why I was doing that "cleverness", nor what restrictions there were on that solution, etc.

I rewrote one of those projects in Typescript a while back, and came across a similar "clever" solution (mainly having to do with dates having potentially multiple sources, so being in potentially multiple formats), and it made the code _infinitely_ easier to understand. So much so that when I came back to it recently, one quick glance at the types for that section of code gave me all the information I needed to confidently extend that code without worrying about bizarre runtime errors.
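(The multiple-date-sources situation described above might be sketched as a tagged union - entirely illustrative names, not the parent's actual code:)

```typescript
// Hypothetical sketch: dates arriving from multiple sources, each in a
// different format, made explicit in the type instead of in "clever" code.
type DateInput =
  | { source: "api"; value: string }      // ISO 8601 string
  | { source: "form"; value: string }     // "MM/DD/YYYY" from a form field
  | { source: "storage"; value: number }; // epoch milliseconds

function toDate(input: DateInput): Date {
  switch (input.source) {
    case "api":
      return new Date(input.value);
    case "form": {
      const [mm, dd, yyyy] = input.value.split("/").map(Number);
      return new Date(yyyy, mm - 1, dd);
    }
    case "storage":
      return new Date(input.value);
  }
}
```

One glance at `DateInput` tells a future reader exactly which formats can arrive and from where, which is the point being made above.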

People forget that even in single-person teams, you're actually working with many different "people" over the lifetime of the project, given how different you and your understanding of the context of your code will be over time.


Imagine you come from a small town where there are no parking lines at all, and everyone efficiently parks on unmarked blacktop in a respectful way.

Now imagine you go to a big city where they have a bunch of lines in the parking lot and people only half use them correctly, parking over the lines, diagonal, etc.

The existence of lines doesn't guarantee good behavior. The absence of lines doesn't guarantee bad behavior.

This is the argument I see from javascript-only folks who don't necessarily enjoy using "the world's most bloated javascript linter".

For the record, I am a Typescript enjoyer and I use it in my personal projects as well as professionally, but even I can admit that it's not automatically superior to javascript and it has a number of really frustrating and time-consuming downsides.

It's very easy to type the args and returns of a function and protect callers, but it's much more challenging to work with types between libraries and APIs all together. Lots of `as unknown as Type` or even the dreaded `any` to try and cobble the stack together.


100%. Great to have type consistency. Terrible to deal with similar conflicting extended types in the enterprise codebase that make minor changes because someone couldn't figure out a compiler issue 5 years ago.

For the record I don't like the syntax either. Combining ES type spreading with TS type annotation makes for difficult reading in my opinion. Why settle for this bastardized language and not just compile something made to be strongly typed into js?


That's apples and oranges, though. If you have a dev team that "parks via the shuffle algorithm", sure, painting lines isn't going to help.

But if you have a dev team that is taking the time to efficiently park in a respectful way, if you paint lines, _you're going to make that parking job a hell of a lot easier to do!_ And THAT's the big win of Typescript.


> the dreaded `any`

You're dreading javascript


This kind of aggressive "there is something wrong with you if you don't have the same preferences and priorities as me" is such a turn-off.

I don't use it because the compiler is just too slow; waiting 2.5 seconds for even simple files is a massive pain. I want the old "CoffeeScript experience" where you compile stuff on-demand in development and output errors in the webpage or stderr. It works very well, is low complexity, and requires almost no effort. But you can't as it's just too slow.

esbuild doesn't typecheck so it's not an option. And hugely complex background builders are, well, hugely complex, it's not an option.

TypeScript-the-language may be nice, but TypeScript-the-tooling is not particularly great.

And even if this was solved: any build step will add complexity. The ability to "just" fetch /script.js and "just" edit it is quite nice. This is also why I've avoided CSS pre-processors since forever (needed a bit less now that variables are widely supported).

Of course different projects are different and for some projects these downsides are less pronounced. There is no one perfect solution. But there are definitely downsides to using TypeScript.


I don't think anti-TS people are 'scared to learn something new'. I'm sure most of those people write TS on a daily basis, because it's an industry standard right now.


I'm not against TypeScript, but I don't really see the massive advantage. I rarely see problems that are due to typing, and the downside is usually limited as I keep my JS on the frontend, not the backend. Regular JS/ES6 just flows better.


>I rarely see problems that are due to typing

This is a fallacy similar to the Blub paradox: if your language has a weak[1] type system, then it isn't capable of recognizing many problems as "type error". But stronger type systems can express stronger invariants. So something that isn't a type error in one language will be a type error in another. This changes how the programmer conceives of problems.

Example: missing a case in a switch statement isn't a "type error" in C or Java, but it is a "type error" in languages like Rust or ML, because they have sum types with exhaustiveness checking. Other examples: array bounds checks can be eliminated with dependent types; lifecycle bugs like use-after-free and double-free can be eliminated with substructural types.

[1] "weak" in an informal sense of "not very expressive"
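(TypeScript can actually get close to the sum-type example with discriminated unions and a `never` check - a sketch, assuming nothing beyond stock TS:)

```typescript
// Exhaustiveness checking via a discriminated union. Adding a new `kind`
// without handling it below turns the missed case into a compile error.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2;
    case "square":
      return s.side ** 2;
    default: {
      const unreachable: never = s; // fails to typecheck if a case is missed
      return unreachable;
    }
  }
}
```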


The real Blub paradox to me is that the most powerful and expressive language is best characterized by minimalism at the language level.


Correct me if I'm wrong, but doesn't the Blub Paradox imply that languages dedicated to Code Golfing are at the pinnacle of expressiveness, and look down on everyone else's languages (for their extreme verbosity, compared to the golfing languages)?


No I don't think it's about compactness of expression, but rather what it's possible to express at all.


Yeah or even simple typos, or mixing up the order of arguments, are things that are hard to catch in regular JS (except at runtime) but trivial in TS.

I suspect a lot of people might have had bad experiences with codebases which overuse complex types or trying to type things like Redux which is messy. When I use TS for personal stuff I’ll typically be a bit loose about things like any in places where I don’t care (for now) and I feel it doesn’t add much overhead, but I have been using it for a long time so it’s become second nature.


This is a very fair comment, and you seem open to understanding why types are useful.

"problems that are due to typing" is a very difficult thing to unpack because types can mean _so_ many things.

Static types are absolutely useless (and, really, a net negative) if you're not using them well.

Types don't help if you don't spend the time modeling with the type system. You can use the type system to your advantage to prevent invalid states from being represented _at all_.

As an example, consider a music player that keeps track of the current song and the current position in the song.

If you model this naively you might do something like: https://gist.github.com/shepherdjerred/d0f57c99bfd69cf9eada4...

In the example above you _are_ using types. It might not be obvious that some of these issues can be solved with stronger types, that is, you might say that "You rarely see problems that are due to typing".

Here's an example where the type system can give you a lot more safety: https://gist.github.com/shepherdjerred/0976bc9d86f0a19a75757...

You'll notice that this kind of safety is pretty limited. If you're going to write a music app, you'll probably need API calls, local storage, URL routes, etc.

TypeScript's typechecking ends at the "boundaries" of the type system; e.g. it cannot automatically verify that your fetch or localStorage calls return the correct types. If you're casting, you're bypassing the type system and making it worthless. Runtime type checking libraries like Zod [0] can take care of this for you and are able to typecheck at the boundaries of your app so that the type system can work _extremely_ well.

[0]: https://zod.dev/ note: I mentioned Zod because I like it. There are _many_ similar libraries.
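(To make the "typecheck at the boundaries" idea concrete, here's a hand-rolled sketch of what such libraries automate - illustrative names; Zod replaces the manual guard with a schema and an inferred type:)

```typescript
// Validate untrusted data once at the boundary; past this point the
// static types are trustworthy and no casting is needed.
type Song = { title: string; durationSeconds: number };

function isSong(value: unknown): value is Song {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.title === "string" && typeof v.durationSeconds === "number";
}

function loadSong(raw: unknown): Song {
  if (!isSong(raw)) throw new Error("invalid Song payload");
  return raw; // narrowed to Song by the type predicate
}

const song = loadSong(JSON.parse('{"title":"Intro","durationSeconds":210}'));
```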


Do you ever see null or undefined access errors? As a TypeScript developer I haven’t seen one for many years.

Also, when you have types it changes how you code itself. When I change a schema or refactor some function, I don’t need to think at all to make sure I’ve updated all the code that depended on the old schema or API; just fire the TypeScript compiler and it tells me everything that needs to be updated.

I’ve also not seen any issues for a long while where I’ve missed some conditional case, because I use discriminated unions with switch statements more, something that looks weird in normal JS but is very useful with types, since it tells me if I missed a case automatically.

Add that I’m managing a team of engineers, and so I can easily make sure they’re also not missing cases either, by setting the convention and having them see the light.

Putting aside other things like for instance always knowing that we’ve validated inputs for API endpoints since unvalidated inputs are the unknown type and therefore effectively unusable; or always knowing we’ve parsed and serialized dates correctly since we use branded string types to distinguish them from any other string with 0 runtime impact; the list goes on.

So yeah, it might just be the case that you haven’t actually internalized what coding with types even means, so you’re unable to imagine how it can help you.


I always feel like those comments are written by people working on 2-person projects who have never worked in a 50+ person shared codebase, not understanding that a world different from theirs exists and what challenges it brings.


Please don't sneer, including at the rest of the community.

https://news.ycombinator.com/newsguidelines.html


It's not that it's bad. But sometimes the project and the team are not big enough for its qualities to matter. And you lose a little bit of readability with type-intensive code.

And nicely written TypeScript looks awesome, but badly written TypeScript can be a huge mess, as it can with any language, but TypeScript purists sometimes forget that the language is just a part of a nicely written and designed system.


I would describe typed code as more readable, not less. I take “readability” to mean ease of understanding, not how much my code sounds like written english. Not knowing what the type of something is makes understanding harder.


Inferred types seem to be an indication that even the most type-safe languages (e.g. Rust) recognize that types hinder readability, at least in some way.


>You’re putting yourself at a big disadvantage.

Why not just appreciate the diversity of opinion and move on, rather than lecture people?


It's the Great Typing War all over again.

Some people feel more comfortable with JavaScript, Common Lisp, Lua, etc.

Some people feel more comfortable with TypeScript, Typed Racket, Luau, etc.

And that's okay.


> It’s a tool that has very few downsides and that improves nearly every single line of code you write.

Sometimes I just don't feel like dealing with those very few downsides though, but I can accept it's mostly personal preference.

At my age, sometimes I just don't want to deal with:

1. Yet another configuration file (tsconfig.json in this case). When something breaks, having one more place to look at is not something I want. The more extra files like this are needed for the development environment to even work (as in, something undesirable happens if you remove them), the less confidence I have in the project's long term reliability/stability.

2. That same configuration has misleading naming. The `"strict": true` setting should be called `"recommended": true`, or at least `"preset": "recommended"`, because it's not even strict. I would expect this `strict` flag to enable everything to the most restrictive way possible, and let devs disable checks (if) they don't want them. In its current state it doesn't enable strict checks like `noFallthroughCasesInSwitch`, `noImplicitOverride`, `noImplicitReturns`, `noUncheckedIndexedAccess`, `noUnusedLocals`, `noUnusedParameters` (I might be missing more).

3. Related to previous point: Inconsistencies between projects. So I work on one project with strict settings, tsc properly mentions possibly undefined accesses, etc; and then I move to a different project, and if I forget to context switch ("TypeScript config is different here"), I could be accidentally trusting the compiler to keep undefined accesses (and other stuff) in check, when it's not actually doing so.

4. Last time I checked, I couldn't just have a git repo "foolib" that is 100% TypeScript (100% .ts files, zero .js files), and `npm install` that repo on a separate project, and have it Just Work™. There's always extra steps that need to be done if you want to use .ts files from a separate package (usually compile to .js and install that; or using a bundler (read first point again)).

5. Why does the "!" operator even exist (or at least, why isn't there a flag to forbid it (for example the strict flag)). In my experience, using it is just developer laziness, where someone just doesn't want to write proper checks because "it's noise".

---

Those 5 points came off the top of my head so I'm almost certainly forgetting stuff.

It's mostly "death by a thousand cuts" kind of stuff, so sometimes I might not mind, but other times I might not be in the mood to deal with it, and that heavily influences my decision to go with TypeScript (keeping it approachable to as many people as possible) or a different language/ecosystem altogether.

Yes, I could "just" write a package that I can just npm install and it autoconfigures TypeScript and other stuff for me (and I have done so, for my own sanity). But I shouldn't need to do that, and it's too brittle for my taste.


Don’t let perfection be the enemy of the good


[flagged]


Please don't cross into personal attack, no matter how wrong someone is or you feel they are.

https://news.ycombinator.com/newsguidelines.html


People think TypeScript is really the silver bullet. If a developer writes crappy code, using tool X will not make him write great code, and the tool may not improve the project either. It's really tiring to work with people like you who are morbidly in love with a tool.


Seems like the gases are getting compressed either way and it's just different ways of wording the same effect. As for it being reversible or not, is it not just a matter of whether the energy was actually transferred somewhere? Like you could technically undo the shock the same as you could depressurise air in a pump no? I don't really know what I'm talking about though, fyi.


Gas is being compressed, but that doesn't mean the heating is from compression.

There's a gas heater on the market that works by using rapidly moving vanes to induce shock waves in the gas. The outflow has nearly the same pressure as the inflow, but the gas has been heated, potentially to a temperature higher than could be achieved by resistive heating elements. EDIT: I misstated this; see below for link.

Consider also that once the shock heated air around the reentry vehicle has expanded back to ambient pressure, it will be hotter than it initially was.


resistive heating elements made of carborundum can heat air to 1625° https://www.kanthal.com/en/products/furnace-products/electri... and molybdenum disilicide to 1800° https://zircarceramics.com/wp-content/uploads/2017/02/Design.... while graphite heating elements can go to 2200° https://apps.dtic.mil/sti/tr/pdf/ADA329681.pdf but not in air. what are these vanes made of?


The point is that the shock isn't the air hitting the vanes, it's the air hitting other air.

Similar to re-entry heating: the specific kinetic energy of the returning capsule is many times greater than would be required to melt and vaporize any material. So why do things survive re-entry? Because most of the energy is dissipated in the bow shock, a significant distance away from the capsule, where air gets heated to temperatures higher than the surface of the sun when other air slams into it. The purpose of the heatshield is to protect from radiative heating from the bow shock, not convective heating. Ablative heatshields work not because ablation consumes energy which removes heat (again, there is sufficient energy going around to ablate the entire craft), but because they place a shade (made of ablated carbon particles) between the bow shock and the craft, which shields it from the radiative heating.
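(A back-of-envelope check on the "hotter than the surface of the sun" claim, assuming a calorically perfect gas - a big simplification, since dissociation and ionization of real air absorb much of that energy and keep actual shock-layer temperatures far lower:)

```typescript
// Ideal stagnation-temperature rise across the shock layer: dT ~ v^2 / (2 * cp).
const v = 7800;  // m/s, roughly orbital entry speed
const cp = 1005; // J/(kg*K), specific heat of air at constant pressure

const deltaT = v ** 2 / (2 * cp);
console.log(Math.round(deltaT)); // ~30,000 K, vs ~5,800 K for the Sun's surface
```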


> The purpose of the heatshield is to protect from radiative heating from the bow shock, not convective heating.

In this case the entry regime was such that convective heating far outweighed radiative heating.


i'm interested to hear more about these heaters. do you know what they are called or what they are made of?


See link in a comment below.



hmm, that's a good point; so if you run coolant through the vanes they can operate without damage while producing temperatures that would vaporize them?


Do you have a link or company/product name? Sounds fascinating.


https://coolbrook.com/wp-content/uploads/2023/06/REPRINT-202...

I misstated slightly: the gas is accelerated to supersonic speed, then slowed in a diffuser, where shock waves heat it.


thank you, this is great! it sounds like they're only targeting 1700°, though, which is a temperature that exotic resistive heating elements can reach. it's too bad they didn't include any kind of diagram


The gasses get so hot they give off a lot of black-body radiation. The heat shield is mostly beaten up with infrared.

When the gases decompress they'll be a lot cooler, just like your AC.


I believe for entry from LEO, and particularly for small RVs like this one, convective heating is orders of magnitude higher than radiative heating.

https://ntrs.nasa.gov/api/citations/20140012475/downloads/20... (see slide 7)


From Slide 7: "Radiation dominates convection at ~11.5km/s for 1m radius • Radiation dominates convection at ~10km/s for 5m radius"


Since this is reentering from LEO, it's maybe 8km/s (and dropping). And the radius of curvature on the nose of that shield was maybe 20 cm?


There are some things in your comment that give me the impression that your opinions are too strong for your experience. 99% of codebases are bad. It’s the baseline condition. It’s our job every day to slowly make them better. I’m extremely picky about what comments are allowed to make it into the codebase, since the majority of comments I see are wrong, outdated, obvious, or riddled with typos. I think it’s very easy to complain about bad code when really the best thing to do is just suck it up and fix things.


It took me 15+ years in the business to realize this. If someone gave you a time machine to go into the past and give me your exact comment AND punch me in the face, it still wouldn't have dawned on me because I was in pursuit of some weird perfection that doesn't exist.

Thanks for spreading sanity and insight -- I hope someone else digests your message sooner than I did.


> It’s our job everyday to slowly make them better.

Why? If 99% of code bases are bad and it's the baseline condition, why should I try to move things in a better direction?

OP proposed better coding practices and got ignored. OP proposed adding some automated lint checking, etc, and got mocked for it. OP is trying to make things better, and will suffer for it. OP comes to HN and doesn't get any better reception for his ideas either.

I think OP should hold on to his opinions and make sure his resume is up to date.


> Why? If 99% of code bases are bad and it's the baseline condition, why should I try to move things in a better direction?

Code is like entropy: its natural tendency is to become disordered and to stop suiting the current problem domain.

In order to fight entropy, you need to perform the unnatural act of going back and reordering the code you wrote, based on your current understanding of the current state of your problem domain.

Also keep in mind that developers continuously gain experience, gather insight into what works and what doesn't, and refine their expertise over time. Yesterday's code is worse than the code you'd write today. By definition, this means that when you look at all the code you wrote, it all needs improving.

> OP proposed better coding practices and got ignored.

He didn't. He proposed wasting time adding comments and dump onto someone else the responsibility of cleaning up his mess. That is not helpful and should be laughed out of the room.


Realizing that external dependencies are regular codebases just like the one you're working on. That you can open them up in VSCode, look around and figure out any bugs or issues you're having and even open pull requests to improve them.

At that point, you lose the feeling that there are magic things out there that you will never understand and that for the most part everything is just regular old code that regular people wrote.
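A concrete way to see this in Python (a small stdlib-only sketch; interpreted languages generally ship their libraries as plain source you can read):

```python
import inspect
import json

# A stdlib or third-party package is just code on disk; you can open it
# in your editor like any file in your own project.
print(inspect.getsourcefile(json))

# You can even read the implementation of a specific function:
first_line = inspect.getsource(json.dumps).splitlines()[0]
print(first_line)
```

From there, editing the file (or a vendored copy) to add a debug print is often the fastest way to understand a bug in a dependency.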


This is indeed a superpower.

I don't really remember when I felt that external dependencies were magic, but thinking about this, it explains a lot of the behavior I see on some developers who are very negative about the more challenging parts of the job.

Some of them don't really believe the research stuff we do at work is even possible. They're constantly surprised when other devs finish those tasks. Some don't believe that other devs can code in C++ or Rust, or write parsers, database modules, implement IQueryable in C#, or develop novel algorithms for novel applications.

To them, if a package exists it must just work, and that package comes from another breed of developer that can't coexist with them. I see a similar thinking with AI: now with ChatGPT and GPT-4, there's a hubbub about there being "no reason for our AI team to exist anymore".

I'm not a big fan of working with those developers.


> I'm not a big fan of working with those developers

I agree. And it ties into something I often see that puts me on edge: programmers not taking responsibility for the code they put into their projects.

What I mean is that when you incorporate any code, from any source (library, framework, copypaste, etc), then you are responsible for that code and its proper behavior as much as for the code you actually wrote. So you're well-advised to understand it.

That's one of the reasons why I won't include code that I don't have the source code to. I need to understand it and be able to fix it.


Good point.

The "out of sight, out of mind" approach doesn't really work for code you're actually responsible for.


I've worked with some of those folks too. It seems to me like they haven't really learned programming the way I understand it; instead, they've learned various incantations that can be strung together, and are just at a loss when they don't work as expected/documented.


I’ve noticed this as well - they don’t view software building as a form of engineering that can be learned from basic principles of computer science, but instead as magic.

So the go-to for every solution is to find a third-party library from a "real" witch or wizard and follow its basic tutorial. Maybe try to customize it a bit at most.

If something breaks, just start randomly moving things around or copying and pasting more code from forums until it works.

I can’t live like that. I need to know why something’s not working, AND why it IS working. I like stepping through my code with a debugger just to make sure things look right, even when they’re working.

I think the craziest part, though, is just how much people with this "software is magic" mindset can actually get done just by brute-force cobbling things together.


Oh, that's definitely true.

To me the issue is them assuming everyone around them is like this. I'm totally fine with a lack of experience or knowledge, but a co-worker constantly underestimating their peers is not alright.


The senior developer above me is like this with me, and it's rough because it shakes my self-esteem AND often leads to me having to support and extend a fragile, constantly changing 3rd-party library to solve a rather specific problem that would be better solved with custom code.

Where it really stings is when I do something of my own initiative (like, for example, create a transparent API over memoizing and caching some expensive calculation, or refactor some of our common client customizations into their own set of classes so it’s easier to extend) and he ignores it or scoffs at it.

Only to then find a third party library that implements something similarly. THEN it’s presented to me as a “brilliant idea” that we can take advantage of.

At that point, when I say “we” already do this, he usually rephrases what his brilliant 3rd party library does, as though he can’t fathom that I would be capable of doing something like that myself, and clearly I’m just not understanding what he’s telling me.

I think it’s a defensive mechanism for his ego against one of his peers or employees being more capable than he is.

But it’s also not like he can’t learn this stuff himself, he just doesn’t ever put in the time or effort.


With many exceptions (the left-pad debacle comes to mind), it’s generally much better to use a third party library instead of supporting your own implementation.


I know this is the mantra, but my experience has been highly mixed, and it mostly generalizes by how low in the stack the dependency sits.

For, say, security implementations for authorization and access controls, or even low level HTTP request routing? Absolutely. The goal there is to adhere to something standard and battle tested by experts, and the third party libraries tend to be fewer in number, and of higher quality, with longer term support and clearly defined upgrade paths.

But that’s the lower level stuff, where your special custom needs are superseded by the primary goal of just “doing the one right thing”, or “adhere to the commonly agreed upon standard”.

For all the other things that make an app unique - things like CSS frameworks, UI components (beyond basic, accessibility-minded building blocks), chart drawing, report generation and caching - my experience has taught me otherwise, the hard way.

Being stuck using a 3rd party library that doesn’t do what the client or business needs it to do, having to juggle our own internal patches and bug fixes with updates to the library itself, all only to have the library abandoned or deprecated in favor of the author’s next pet project, really sucks and often comes with a high opportunity cost and a high development cost.

I now consider third party implementations of higher level features (and especially anything front-end) to be something that needs to be evaluated as equally costly as an internal implementation by default, and not favored just because somebody else wrote it.

Maybe I’ve just been unlucky in my experience, though. I also suspect ecosystem makes a difference. The PHP and JS ecosystems are full of poor libraries with snake oil sales pitches. I suspect this is different with, say, Rust.


I think we mostly agree with each other. As I said, there’s many exceptions.

I’ve mostly worked with Python and JVM languages, which probably explains why I’m less passionate about the counter-argument than you are. Ecosystem definitely matters a lot. VanillaJS is the only good JS framework IMO.


I’ve never met a developer with that attitude, but I’ve met too many managers with it. You’ll be even less of a fan of working for those developers, I guarantee it.

That mediocre dev is a straight shooter with middle management written all over them…


I think it depends on the industry and the location. In my neck of the woods I'm seeing an uptrend toward more technical managers, or sometimes a mix of lead developer and manager, who can actually get things done.

It's a matter of preference but IMO/IME it works better when a manager is actually good at it.

A manager who is supposed to be technical but only wants/knows how to do the management part, and can't build more than the basic stuff, won't really fly. I've seen a couple of those get fired during the probation period.

On the other hand, the "bad developer" that thinks packages are magical will eventually settle as an expert beginner in a low-expectation environment. Which is fine.


Adding to this, the decompiler built into many IDEs really upped my game in understanding underlying libs: how they work, what methods to call, etc. Very helpful! As much as people trash Java, this is a really nice feature. I'm sure other languages have decompilers as well, but I've never seen anything close for C#, for example.


>I'm sure other languages have decompilers as well, but I've never seen anything close for c# for example.

dotPeek is integrated into Rider and is a world-class decompiler. It also integrates into Visual Studio, either standalone or with ReSharper.

You can also integrate external source symbol servers into your IDE of choice as well that will let you debug into libraries seamlessly.


>I've never seen anything close for c# for example

If you highlight a method you call from external code and hit CTRL + F12, Visual Studio will automatically decompile it for you.


In the case of Java at least, what helps is that the IDE can decompile the code or, which is often even more helpful, download the source code and allow you to step through it while debugging, at least if a source JAR was published (which is pretty often the case).

In the case of non-compiled languages, of course you don't even need this step since all your libs exist in source form already, so it was pretty simple for me to step through Ruby library code with a simple debugger and no fancy IDE.

I have a habit of sometimes debugging even horribly abstract framework (e.g. Spring) code when I don't understand what it's doing. That's maybe not the most efficient method, but it does usually make me understand why thing X is not working the way I expected it to work.


In Python, pdb's 'breakpoint()' is great.
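A tiny sketch of the idea: you can drop breakpoint() anywhere, even inside a dependency's source while exploring it, and setting PYTHONBREAKPOINT=0 disables every call without editing anything (per PEP 553, sys.breakpointhook consults the variable on each call):

```python
import os

# Disable pdb for this non-interactive run; the hook re-reads the
# PYTHONBREAKPOINT environment variable every time breakpoint() is called.
os.environ["PYTHONBREAKPOINT"] = "0"

def total(xs):
    acc = 0
    for x in xs:
        breakpoint()  # when enabled, drops into pdb here with acc and x in scope
        acc += x
    return acc

print(total([1, 2, 3]))
```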


Funny thing was I never even thought to do this until I was working on a very strange bug, and a senior engineer at my company suggested I look at the source code for one of our dependencies. Sometimes really obvious and basic advice can be a big step for people.


Yeah I think it's helpful to recognize that this isn't always obvious to people, even folks who seem like they'd instinctively do so–I had a similar experience a year or two into being a professional programmer, despite being someone whose first experience with dependencies years beforehand was downloading Perl files and directly editing them.


Had a similar experience during my Bachelor's thesis. I was adding new functionality to an existing code-inspection framework and was also supposed to add a graphical interface with GTK (this was 2014). At some point I identified a performance bottleneck within a GTK component. My advisor suggested fixing it, and I just couldn't understand how I, a lowly student, was supposed to tackle anything in this big behemoth. In the end I didn't do it, but it made me think, and I jumped into various big open source projects in the following years. And you get used to navigating these surprisingly fast.


> That you can open them up in VSCode, look around and figure out any bugs or issues you're having and even open pull requests to improve them.

I think the real skill, then, is learning to navigate big codebases in a few days rather than spending a few weeks and ending up dejected by the time spent while still being unsure.

I often feel ambitious about such endeavors, but navigating big codebases takes time. Any tips?


It helps if you can get your IDE's indexer configured, so that you find references to functions and variables reliably.

More important is to use an IDE with a good, fast global search function and get comfortable with it. At least for me, 99% of navigating a large codebase is global search.
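When no indexer is handy, even a naive script covers a surprising amount of this (a minimal Python sketch with a made-up helper name; real tools like grep or ripgrep are far faster):

```python
from pathlib import Path

def find_references(root, symbol, pattern="*.py"):
    """Naive global search: yield (file, line number, line) for every textual match."""
    for path in sorted(Path(root).rglob(pattern)):
        for lineno, line in enumerate(
            path.read_text(errors="ignore").splitlines(), start=1
        ):
            if symbol in line:
                yield str(path), lineno, line.strip()
```

Most of what an IDE's "find usages" adds on top of this is filtering matches by the language's actual scoping rules.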


Or use an IDE that doesn't need an indexer configured and just works.


Is there a way to get VSCode to this level?


For C++, I use clangd (works fine for GCC projects). The only config it needs is the path to compile_commands.json, which can be automatically generated by CMake and some other build systems. For TypeScript, no config is needed. For Java, there is the Red Hat Java plugin in VS Code, which provides good indexing.


Not using VSCode currently but I think pretty much yes.

You can goto or peek definition in VSCode. And the find tool has the option to find all occurrences throughout your project.


My favourite language for this is Go. The standard library is totally exposed and easy to jump into, right there for you to learn from and to make sense not only of the library but how to actually write Go in the first place.


CGO_ENABLED=0 for new projects though :)


Just calling out that if it weren’t for open source this would be much harder


Imagine any other profession needing to inspect, rebuild, and fix the tools that they are given to do their job.


That's actually, um... common? I just had to repair an electric jackhammer last week. I worked in a machine shop for a large well-drilling company not long ago, and not only did we create/repair tools for the company, but we obviously had to keep our mills, lathes, cranes, etc. in good working condition.


It's not common in the same sense. First of all, tools are very different from software products. And there is never the same level of analogy that one has to do in software.

Imagine buying a hammer, that hammer not working, the hammer's design being so complicated that it's impossible to understand or mend, and then having to design and build your own hammer, and then putting up with that situation over and over again and accepting that as the status quo. That would be the correct analogy.


I'll agree wholeheartedly that the analogy needs some work. Tools are different - we have our literal physical tools that we don't generally dive into (keyboards, mice), we have tools that are maybe more battle tested and rarely examined (cat, grep, find).

We do have tools like the hammer - there is one design, everyone more or less agrees on it. There is still high quality and low quality, but it has one job. We have tools like a bulldozer - complex, numerous parts, requires constant maintenance, closed source.

As the parent said - it is not uncommon to have to maintain old equipment, as well as design new tools as new requirements pop up.

Sure, our rust is a little bit different - time wears on software in a different way. Use wears on software differently. (Changing product requirements leading to a new tool is probably common.)

The maintenance may be trickier - but I'm sure swapping components on a tool when a certain component is no longer available is not easy either; that's where the shim layer comes from!


Have you met farmers? They will readily tear apart their equipment to fix an issue or modify the tool to make them more ergonomic. This trend of massive multi-million dollar John Deere combine harvesters with DRM widgets that you need to take to a specialized tech to get fixed, is a relatively modern one, and one detested by virtually every farmer.

This was and is quite common in any blue-collar field where workers don't always have the money or time for a brand new jawn every time something goes wrong.

I love estate/yard/garage/barn sales and going through the tools and reading the tales they tell from their wear patterns, field repairs, revisions, and hacks from their owner.

Case in point, here's a tool which has gotten multiple leases on life through modifications. https://youtube.com/watch?v=oTA513ttrbQ


I worked on a farm. I think the things you're bringing up are still subtly different. Modifying something you can see and touch for some unintended purpose versus modifying some piece of software because it doesn't work are miles apart from each other.

For example, I have a graphics project where screen tearing suddenly started appearing although my code didn't change, and the tearing wasn't there before. Is the issue in Skia, OpenGL, the graphics driver, the Intel or NVidia GPU, or the OS? Or is it some latent issue in my code that started showing up because of a change in these dependencies? Or is it some other complex interaction between multiple dependencies? I couldn't possibly know, and I actually don't think there is anybody that actually knows. And there is zero chance I could ever figure it out. I mean, it at least appears it was a driver issue as after some updates and a reboot it just went away, but there is zero insight into why or what actually made it go away.

If I modify a plow because some part as originally designed was flaky and constantly broke, you usually know to a pretty good degree why your fix works.

In software, the abstractions are such that it is practically impossible at times to understand.


That to me sounds like a difference of degree or heap problem, rather than a categorically different condition.

Sure, welding back on a hardpoint that broke off a plow is concrete and obvious (maybe that's analogous to fixing a broken dependency that got renamed), but there are other fixes that definitely fall under "I don't know why it works but it works". I've had electrical gremlins and fixed them by grounding something that looked like it ought to be grounded, and was grounded when I tested for continuity, but nonetheless was acting floaty. It was probably a loose connection elsewhere in the system, but I didn't know that for sure, nor do I even know that it was the problem in the first place.

Software definitely reaches insane depths of complexity, but again, if you can dig down and understand the abstraction well enough to attempt a fix, and the problem goes away, isn't that good enough? The real difference is mechanical systems typically only have a few layers of abstraction, while software has dozens to thousands of layers of abstraction.


I feel like this is quite common, but it depends on the optics of the situation.

A mine site usually fixes its own tools used to do the job; they fix things they've purchased that break or arrive broken. Is that not the same?


I highly doubt that a mining operation finds it acceptable that tools that they have purchased show up broken, undocumented, and unsupported.


Your job as a developer is to write code and piece code together to make your life easier. That includes taking the work that others have done and make it work to suit your code base.

On a mine site, the job is to mine and process minerals. That includes engineering a mine using products that eventually fail over time and need fixing. As part of that process an automotive electrician may need to fix the electronics on a vehicle that have become faulty or damaged.

Mechanics and engineers need to inspect, rebuild, and fix the tools they are given to do their job. The difference is that yours may not work up front, whereas these tools need maintenance over time.


When I was doing enterprise work with closed-source libraries, any defect required them fixing it. We've found some issues with AWS and they fixed it too.

The difference between most professions and programming is that we have open source. The equivalent of that in the real world is getting blueprints to build our tools and then building them ourselves, with no guarantees.


You'd be surprised how common it is.


I've worked in many industries, and none of them put up with the poor state of tools and treat it as a given like the software industry does.


Hmm, again I think miners are pretty good at making do with the poor tools they have.

Very different situation

I've also found that the software industry obsesses over tools more than any other industry


"I was an ordinary person who studied hard. There are no miracle people. It happens they get interested in this thing and they learn all this stuff, but they're just people." - Richard Feynman


Not all external dependencies are non-compiled. Also, library code is often different from your regular codebase, especially in a language like C.


This. My favorite language for this is Clojure. There's usually not much library code to sift through because it's so terse.


Relatedly, understand the frameworks that you build upon.

For React devs, this means learning how React actually works under the hood.


Whenever I have this situation, it's always with a library too big for my smooth little brain to comprehend.


The trick is to dig deeper into those big library's dependencies as well. It's turtles all the way down.

The other thing I find is that big libraries are either mostly dependency bloat (as implied above) or dealing with a hard domain problem. If it's the latter, what you're really struggling with is not the library, but the domain it's trying to represent.


"If it's the latter, what you're really struggling with is not the library, but the domain it's trying to represent."

But this doesn't change anything about the programmer's problem. If I stumble on a bug in a physics library I'm using for a game, I cannot just jump right in and fix things. I mean, I can start doing it, but at the cost of not getting anything else done for quite some time.

There are lots of hard domains in programming. Cryptography is hard. Networking is hard. Fast rendering is hard. Efficient DBs are hard. OS are hard. Drivers are hard.

You can maybe fix a trivial error in such libraries, but everything else is usually a (big) project of its own.


Absolutely correct, but I think it’s still valuable to be able to recognize if you’re struggling against the code or the domain.

Additionally, once you recognize that you can also recognize if you’re dealing with incidental complexity (e.g. poorly thought out/designed code) and inherent complexity (e.g. physics calculations). The former can be fixed, the latter cannot. Knowing the difference saves much pain.

