> And I'm guessing that the reason macOS doesn't give more details is because macOS is likely not involved in the step that fails

And I guess because of the wide variety of third-party hardware macOS has to support, it's not practical to write a pre-flight check into the update process either.


I've never tried it myself, but it's oft-repeated folk wisdom in Apple circles that enabling filesystem case-sensitivity breaks all manner of third-party software that has only ever been tested on the case-insensitive default.


Are you hosted on cloud platforms that are SOC2 compliant? Or have you achieved and been audited for SOC2 compliance yourself? I'm going to have to assume it's the former, because if it were the latter you would say so directly. To me that kind of sleight-of-hand inspires distrust, which is fatal to any prospect of me evaluating the product.

Beyond that, a key risk that has been brought into focus more and more lately is data portability and vendor lock-in. At this point I do not deploy a new vendor without documenting the exit strategy.

The best exit strategy you can offer is an open source, self-hostable version of the product with a simple migration plan. Some of the other existing competitors in the enterprise chat space already offer this. Even if no-one uses it, by offering it you keep your priorities aligned with your customers.


The point is that no indie dev was able to plan around the surprise release of Silksong, precisely because it was a surprise.


Ah, so then just an agreement with "Matters way less than people think" in the end?


No.

A lot of devs delayed their launches:

https://www.gamespot.com/articles/silksong-release-date-has-...

Those that didn't or couldn't delay think it hurt them pretty badly:

https://www.dexerto.com/gaming/hell-is-us-boss-slams-silkson...

In general I think you are probably right. But there are definitely exceptions and this is one of them.


I have not been as aggressive as GP in trying new AI tools. But over the last few months I have been trying more and more of them, and I'm just not seeing it.

On one recent project I took a test-driven approach: I built out the test suite while asking the AI to do the actual implementation. This was one of my more successful attempts, and it may have saved me 20-30% time overall - but I still had to throw out 80% of what it built, because the agent simply refused to implement the architecture I was describing.

It's at its most useful if I'm trying to bootstrap something new on a stack I barely know, OR if I decide I just don't care about the quality of the output.

I have tried various CLI tools and IDE tools. Overall I've had the best success with Claude Code, but I'm open to trying new things.

Do you have any good resources you would recommend for getting LLMs to perform better, or for staying up-to-date on the field in general?


If you haven't yet, check Claude Code's plan mode:

https://claudelog.com/mechanics/plan-mode/


What's your point, though? Let's assume your hypothesis and 5 years from now everyone has access to an LLM that's as good as a typical staff engineer. Is it now acceptable for a junior engineer to submit LLM-generated PRs without having tested them?

> It was thought impossible for a computer to reach the point of being able to beat a grandmaster at chess.

This is oft-cited but it takes only some cursory research to show that it has never been close to a universally-held view.


In the scenario I'm hypothesizing, why would anyone need to "check" or "test" its work? What chess players are checking to make sure Stockfish made the "right" move? What determines whether or not it's "right" is if Stockfish made it.


Your post sent me down a rabbit hole reading about the history of computers playing chess. Notable to me is that AI advocates were claiming that a computer would be able to beat the best human chess players within 10 years as far back as the 1950s. It was so long ago they had to clarify they were talking about digital computers.

Today I learned that AI advocates being overly optimistic about its trajectory is actually not a new phenomenon - it's been happening for more than twice my lifetime.


There are clear win conditions in chess. There are not for most software engineering tasks. If you don't get this, it's probably a safe bet that you're not an engineer.


Right, which is why Deep Blue won in the late '90s, and now, years later, AI is moving on to far more complicated tasks, like engineering software.

The fact that you gave me the "you just don't understand, you're not a chess grandmaster" emotional response helps indicate that I'm pretty much right on target with this one.

FWIW I have been engineering software for over 15 years.


It's hard to imagine now, but the code won't matter. We will have other methods of validating the product, I think - like before tech. There are many ways to validate something; validation is an easier problem than creation (which these AI models are somewhat solving right now).

All very demoralizing but I can see the trend. In the end all "creative" parts of the job will disappear; AI gets to do the fun stuff.

We invented something that devalues the human craft and contribution. If you weren't skilled in that craft and/or saw it as a barrier, you win and are excited by this (CEO types, sales/ideas people, influencers, etc.). If you put in the hard yards and did the work to build hard skills and product, you lose.

Be very clear: AI devalues intelligence and puts more value on what is still scarce (political capital, connections, nepotism, physical work, etc). It mostly destroys meritocracy.


It's crucial to be able to do some processing locally to filter out sensitive/noisy logging sources.
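
For example, a minimal Python sketch of the kind of local scrubbing I mean (the pattern and names are illustrative, not from any particular product):

    import logging
    import re

    class RedactSecrets(logging.Filter):
        # Scrub obvious secrets from records before they leave the host.
        PATTERN = re.compile(r"(api[_-]?key=)\S+", re.IGNORECASE)

        def filter(self, record: logging.LogRecord) -> bool:
            record.msg = self.PATTERN.sub(r"\1[REDACTED]", str(record.msg))
            return True  # keep the record, now scrubbed

    logging.basicConfig(level=logging.INFO)
    logging.getLogger().addFilter(RedactSecrets())
    logging.warning("request failed, api_key=abc123")  # ships as api_key=[REDACTED]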


Let's have AI generate the same vulnerable code across hundreds of projects, most of which will remain vulnerable forever, instead of having those projects all depend on a central copy of that code that can be fixed and distributed once the issue gets discovered. Great plan!



You're attacking a straw man. No one said not to use dependencies.


At one stage in my career the startup I was working at was being acquired, and I was conscripted into the due-diligence effort. An external auditor had run a scanning tool over all of our repos and the team I was on was tasked with going through thousands of snippets across ~100 services and doing something about them.

In many cases I was able to replace tens of lines of code with a single function call to a dependency the project already had. In very few cases did I have to add a new dependency.

But directly relevant to this discussion is the story of the most-copied code snippet on Stack Overflow of all time [1]. Turns out, it was buggy. And we had more than one copy of it. If it hadn't been for the due diligence effort, I'm 100% certain they would still be there.
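
(For context: that snippet, IIRC, converts byte counts into human-readable strings. A correct version of the idea is only a few lines - a quick Python sketch using binary units:)

    def human_size(n: float) -> str:
        # A plain loop sidesteps the boundary/overflow edge cases
        # that bit the original snippet.
        for unit in ("B", "KiB", "MiB", "GiB", "TiB", "PiB"):
            if abs(n) < 1024:
                return f"{n:.1f} {unit}"
            n /= 1024
        return f"{n:.1f} EiB"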

[1]: https://news.ycombinator.com/item?id=37674139


Sure, but that doesn't contradict the case for conservatism in adding new dependencies. A maximally liberal approach is just as bad as the inverse. For example:

* Introducing a library with two GitHub stars from an unknown developer

* Introducing a library that was last updated a decade ago

* Introducing a library with a list of aging unresolved CVEs

* Pulling in a million lines of code that you're reasonably confident you'll never have a use for 99% of

* Relying on an insufficiently stable API relative to the team's budget, which risks eventually becoming an obstacle to applying future security updates (if you're stuck on version 11.22.63 of a library with a current release of 20.2.5, you have a problem)

Each line of code included is a liability, regardless of whether that code is first-party or third-party. Each dependency in and of itself is also a liability and ongoing cost center.

Using AI doesn't magically make all first-party code insecure. Writing good code and following best practices around reviewing and testing is important regardless of whether you use AI. The point is that AI reduces the upfront cost of first-party code, thus diluting the incentive to make short-sighted dependency management choices.


> Introducing a library with two GitHub stars from an unknown developer

I'd still rather have the original than the AI's un-attributed regurgitation. Of course the fewer users something has, the more scrutiny it requires, and below a certain threshold I will be sure to specify an exact version and leave a comment for the person bumping deps in the future to take care with these.
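
In requirements.txt terms, something like this (library name hypothetical):

    # tiny-lib: ~2 stars, single maintainer. Pinned exactly;
    # review the upstream diff before bumping.
    tiny-lib==1.4.2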

> Introducing a library that was last updated a decade ago

Here I'm mostly with you, if only because I will likely want to apply whatever modernisations were not possible in the language a decade ago. On the other hand, if it has been working without updates for a decade, and people are STILL using it, that sounds pretty damn battle-hardened by this point.

> Introducing a library with a list of aging unresolved CVEs

How common is this in practice? I don't think I've ever gone library hunting and found myself with a choice between "use a thing with unsolved CVEs" and "rewrite it myself". Normally the way projects end up depending on libraries with lists of unresolved CVEs is by adopting a library that subsequently becomes unmaintained. Obviously this is a painful situation to be in, but I'm not sure it's worse than if you had replicated the code instead.

> Pulling in a million lines of code that you're reasonably confident you'll never have a use for 99% of

It very much depends - not all imported-and-unused code is equal. Like yeah, if you have Flask for your web framework, SQLAlchemy for your ORM, Jinja for your templates, well you probably shouldn't pull in Django for your authentication system. On the other hand, I would be shocked if I had ever used more than 5% of the standard library in the languages I work with regularly. I am definitely NOT about to start writing my rust as no_std though.

> Relying on an insufficiently stable API relative to the team's budget, which risks eventually becoming an obstacle to applying future security updates (if you're stuck on version 11.22.63 of a library with a current release of 20.2.5, you have a problem)

If a team does not have the resources to keep up to date with their maintenance work, that's a problem. A problem that is far too common, and a situation that is unlikely to be improved by that team replicating the parts of the library they need into their own codebase. In my experience, "this dependency has a CVE and the security team is forcing us to update" can be one of the few ways to get leadership to care about maintenance work at all for teams in this situation.

> Each line of code included is a liability, regardless of whether that code is first-party or third-party. Each dependency in and of itself is also a liability and ongoing cost center.

First-party code is an individual liability. Third-party code can be a shared one.


> I'd still rather have the original than the AI's un-attributed regurgitation.

If what you need happens to be exactly what the library provides — nothing more, less, or different — then I see where you're coming from. The drawback is that the dependency itself remains a liability. With such an obscure library, you'll have fewer eyes watching for supply chain attacks.

The other issues are that 1) an obscure library is more likely to suddenly become unmaintained; and 2) someone on the team has to remember to include it in scope of internal code audits, since it may be receiving little or no other such attention.

> On the other hand, I would be shocked if I had ever used more than 5% of the standard library in the languages I work with regularly.

Hence "non-core". A robust stdlib or framework is in line with what I'm suggesting, not a counterexample. I'm not anti-dependency, just being practical.

My point is that AI gives developers more freedom to implement more optimal dependency management strategies, and that's a good thing.

> unlikely to be improved by that team replicating the parts of the library they need into their own codebase

At no point have I advised copying code from libraries instead of importing them.

If you can implement a UI component that does exactly what you want and looks exactly how you want it to look in 200 lines of JSX with no dependencies, and you can generate and review the code in less than five minutes, why would you prefer to install a sprawling UI framework with one component that does something kind of similar that you'll still need to heavily customize? The latter won't even save you upfront time anymore, and in exchange you're signing up for years of breaking changes and occasional regressions. That's the best case scenario; worst case scenario it's suddenly deprecated or abandoned and you're no longer getting security updates.

It seems like you're taking a very black-and-white view in favor of outsourcing to dependencies. As with everything, there are tradeoffs that should be weighed on a case-by-case basis.


> A robust stdlib or framework is in line with what I'm suggesting, not a counterexample.

Maybe I didn't argue this well, but my point is that it's a spectrum. What about libraries in the Java ecosystem like Google's Guava and Apache Commons? These are not stdlibs, but they almost might as well be. Every non-trivial Java codebase I've worked in has pulled in Guava and at least some of the Apache Commons libraries. Unless you have some other mitigating factor or requirement, I think it'd be silly not to pull these in as dependencies the first time you encounter something they solve. They're still large codebases you're not using 99% of, though.

I don't feel like my position on this is black-and-white. It is not always correct to solve a problem by adding a new dependency, and in the situation you describe, adding a sprawling UI framework would be a mistake. Maybe the situation is different in front-end land, but I don't see how AI really shifts that balance. My colleagues were not doing the bad or wrong thing by copying that incorrect code - tasked with displaying a human-readable file size, I would probably either write out the boundaries by hand or copy-paste the first reasonable-looking result from Stack Overflow without much thought, too.

> At no point have I advised copying code from libraries instead of importing them.

I didn't say copying, though. I said replicating. If you ask AI to implement something that appears in its training data, there is a high probability it will produce something that looks very similar and even a non-zero possibility it will replicate it exactly. Setting aside value judgements, this is functionally the same as a copy, even if what was done to produce it was not copying.


Sure, by all means use whatever is the best tool for the job. I never said not to; I've consistently said the opposite of that.

My position is that where a developer might have historically said "ideally I'd do X, but given my current timeline and resource constraints, doing Y with some new dependency Z would be the better short-term option", today that tradeoff would be influenced by the lower and decreasing cost of ideal solution X.

Maybe you understood my initial comment differently. If you are saying you disagree with that, then either you believe that X is never ideal — with X being any given solution to a problem that doesn't involve installing a new dependency — which is a black-and-white position; or you disagree that AI is ever capable of actually reducing the cost of X, in which case I can tell you from experience that you would be incorrect.

> If you ask AI to implement something that appears in its training data

This qualifier undermines everything that comes after. Based on what are you assuming that an exact implementation of X would always appear in the training data? It's a hypothetical unspecified widget; it could be anything.

> Maybe the situation is different in front-end land

Frontend definitely has more obvious examples of X. There are many scenarios where it wouldn't be that complicated to implement an isolated UI component that does exactly what you need without any clear vulnerabilities, where in the past it would have saved time to build on top of a third-party subset or variation of that UI even when it wasn't the optimal long-term solution.

It's not a frontend-specific comment, but maybe frontend better illustrates the principle. While backend examples might be more niche and system-specific, the same tradeoff logic applies there too; e.g. in areas like custom middleware or data processing utilities.

Ultimately, the crux of what I'm saying has nothing to do with what those X and Y scenarios are. Continuing to bring up scenarios where dependencies are useful is a non sequitur to my original comment, which was that AI gives us a lot more optionality on this front.


I've seen this argument made frequently. It's clearly a popular sentiment, but I can't help feeling that it's one of those things that sounds nice in theory if you don't think about it too hard. (Also, cards on the table: I personally really like being able to pull in a tried-and-tested implementation of code that solves a common problem and is used by, in some cases, literally millions of other projects. I dislike having to re-solve a problem I have already solved elsewhere.)

Can you cite an example of a moderately-widely-used open source project or library that is pulling in code as a dependency that you feel it should have replicated itself?

What are some examples of "everything libraries" that you view as problematic?


Anything that pulled in chalk. You need a very good reason to emit escape sequences. The whole npm (and Rust, Python, ...) ecosystem assumes that if it's a tty, then it's a full-blown xterm-256color terminal. And then you need to pipe to cat or less to get sensible output.

So if you’re adding chalk, that generally means you don’t know jack about terminals.


In the Python world, people often enough use Rich so that they can put codes like [red] into a string that are translated into the corresponding ANSI. The end user pays several megabytes for this by default, as Rich will also pull in Pygments, which is basically a collection of lexers for various programming languages to enable syntax highlighting. They also pay for a rather large database of emoji names, a Markdown parser, logic for table generation and column formatting etc. all of which might go unused by someone who just doesn't want to remember \e[31m (or re-create the lookup table and substitution code).
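
A quick sketch of the tradeoff - both lines below print the same red text:

    from rich import print as rprint  # pulls in Pygments, emoji tables, etc.

    rprint("[red]error:[/red] out of disk")     # Rich markup
    print("\033[31merror:\033[0m out of disk")  # the raw ANSI it stands in for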


Exactly! ANSI escape codes are old and well defined for all the basic purposes.

Pulling in a huge library just to set some colors is like hiring a team of electrical contractors to plug in a single toaster.


Some people appreciate it when terminal output is easier to read.

If chalk emits sequences that aren't supported by your terminal, then that's a deficiency in chalk, not the programs that wanted to produce colored output. It's easier to fix chalk than to fix 50,000 separate would-be dependents of chalk.


I appreciate your frustration but this isn't an answer to the question. The question is about implementing the same feature in two different ways, dependency or internal code. Whether a feature should be added is a different question.


Chalk appears to be a great example.

I wonder how many devs are pulling in a whole library just to add colors. ANSI escape sequences are as old as dirt and very simple.

Just make some consts for each sequence that you intend to use. That's what I do, and it typically only adds a dozen or so lines of code.
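
Something like this (Python for illustration) covers most needs, including disabling colors when output is redirected:

    import sys

    # Basic SGR escape codes; emit nothing when stdout isn't a terminal,
    # so piping to a file or another program stays clean.
    if sys.stdout.isatty():
        RED, GREEN, BOLD, RESET = "\033[31m", "\033[32m", "\033[1m", "\033[0m"
    else:
        RED = GREEN = BOLD = RESET = ""

    print(f"{RED}error:{RESET} something went wrong")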


The problem isn't the implementation of what I want to do. It's all the implementations of things I never cared about doing - and an implementation of what I do want that is so much more complex than it needs to be that I could easily have written it myself in less time.

The problem is also less about the implementation I want; it's about the 10,000 dependencies on things I don't really want. All of those add up to an attack surface much larger than some simple function.


No-one's forcing you to use crates published on the crates.io registry - cargo is perfectly happy to pull dependencies from a different public or private registry, from elsewhere in the same repo, from somewhere else on the filesystem, pinned to a git hash, and I think a few other ways.
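
A sketch of what that looks like in Cargo.toml (crate names and URLs are placeholders):

    [dependencies]
    # from an alternate registry configured in .cargo/config.toml
    foo = { version = "1.0", registry = "internal" }
    # pinned to a specific git commit
    bar = { git = "https://example.com/bar.git", rev = "abc1234" }
    # from elsewhere on the filesystem
    baz = { path = "../baz" }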

"We shouldn't use the thing that has memory safety built in because it also has a thriving ecosystem of open source dependencies available" is a very weird argument.


> "We shouldn't use the thing that has memory safety built in because it also has a thriving ecosystem of open source dependencies available" is a very weird argument.

I don't see anyone anywhere in this thread saying that we shouldn't use rust, or C for that matter.

