straydusk's comments

> Put in whatever level of rigor matches your project needs, personal interest, and schedule!

This is the most refreshing, grounded response I've gotten in a while <3


Well, when you clickbait/lie about your own premise you can’t really expect a decent conversation lol

?? What

You claim you don’t read the code. People believe you. Later you reveal that actually you do read the code, as well as metrics about the code. You just don’t read line by line and scrutinize them individually. Then you want to say their opinions weren’t grounded, but all that happened is you misrepresented your own argument

In certain, extenuating circumstances, I will read the code. It is not in my common / critical path. It's not how I'd describe my workflow.

All I’m saying is that

by ‘I don’t read code,’ I mean: I don’t do line-by-line review as my primary verification method for most product code. I do read specs, tests, diffs selectively, and production signals - and I would advocate escalating to code-reading for specific classes of risk.

Is not at all what people consider “not reading the code” to be


Last week I wrote a post that was ostensibly about the direction of IDEs and AI-assisted coding... and one specific sentence (reasonably) generated a lot of discussion.

The line that got the discussion going was, “I don’t read the code anymore.”

I thought a lot about these arguments, and I still don’t read the code. Here, I defend that. Have at it!


Here's the previous post that generated the hubbub: https://news.ycombinator.com/item?id=46891131

> Checkpoints are a new primitive that automatically captures agent context as first-class, versioned data in Git. When you commit code generated by an agent, Checkpoints capture the full session alongside the commit: the transcript, prompts, files touched, token usage, tool calls and more.

This thread is extremely negative - if you can't see the value in this, I don't know what to tell you.


What kind of barrier/moat/network effects/etc would prevent someone with a Claude Code subscription from replicating whatever "innovation" is so uniquely valuable here?

It's somewhat strange to regularly read HN threads confidently asserting that the cost of software is trending towards zero and software engineering as a profession is dead, but also that an AI dev tool that basically hooks onto Git/Claude Code/terminal session history is worth multiples of $60+ million.


There’s a difference between “this concept has value” and “a company can capture that value”.

I do see value in this, but like you I think it’s too trivial to implement to capture the value unless they can get some kind of lead on a model that can consume these artifacts more effectively. It feels like something Anthropic will have in Claude Code in a month.


GitHub doesn't have a "moat" either besides network effect. Just like most SaaS.

And it was sold to Microsoft at $7B.


Mostly because of the "Microsoft <3 FOSS" phase, and what better maneuver than owning GitHub and dumping CodePlex?

Look at Xamarin, almost everything that they had is now gone in modern .NET.


Well that was in the era of free money for one. And the primary value was in all the human made content for AI training.

I’m sure there’d be some value to extract from the agent produced code in this thing, but I doubt it’s anywhere near as much.


If they had wanted a moat for this part of their offering, they wouldn’t have open-sourced it.

This is not their offering, this is a tool to raise interest.


There's no way this company is just a few git and claude hooks with a CLI. They're definitely working on a SaaS - something else that isn't open source that this primitive is the basis of. Like a GitHub for agent code

Impressive, seeing as last week we heard that AI had killed SaaS.

haha

github for agent code is dropbox final_final2.zip


> What kind of barrier/moat/network effects/etc would prevent someone with a Claude Code subscription from replicating whatever "innovation" is so uniquely valuable here?

You are correct, that isn't the moat. Writing the software is the easy part


The same moat that git had over svn: a better mental paradigm over the same fundamental system, more suited to how SWE changed over a decade.

git didn't succeed based on the mental model. It got a foot in the door with better tooling and developer experience, then blew the door open when GitHub found a way to productize it.

Git doesn't have a moat. Git isn't commercial software, and doesn't need to strong arm you into accepting bad license terms.

I wouldn’t characterize it as a moat exactly. svn/cvs just had a braindead data model. Linus started git with a fundamentally better one.

I definitely see the potential of AI-native version control, it will take a bit more to convince me this is a similar step-level improvement though.


> HN threads confidently asserting

I have never seen any thread that unanimously asserts this. Even if they do, treating HN/reddit assertions as evidence is the wrong way to look at things.


> if you can't see the value in this, I don't know what to tell you

Okay, but I'm legitimately unclear on the argument for $60M - $300M value here, given it isn't articulated at all.

HN is full of AI agent hype posts. I have yet to see legitimate and functional agent orchestration solving real problems, whether at scale or for velocity.

You're confused. If you hype AI here you lose karma.

The top comment here is one by straydusk. Their profile says: data expert, AI explorer.

HN is full of anti hype posts as well. If I were to estimate there are more posts of anti hype than of hype.

This comment feels word-for-word the legendary DropBox critique on HN.

It was only legendary because DropBox hit it out of the park. In hindsight it is easy to see this. And it's the default HN response to anything.

A broken forum is right twice a day

I sort of agree with you. But the sentiment reminds me of the hacker news dropbox launch response. Which was pretty much

"pfft! I could set all this up myself with a NAS xyz".

https://news.ycombinator.com/item?id=8863


[dead]


> what's the value of paying someone for a product like this vs just building it myself?

Same thing it’s always been. Convenience. Including maintenance costs. AI tools have lowered the bar significantly on some things, so SaaS offerings will need to be better, but I don’t want to reinvent every wheel. I want to build the thing I want to build, not the things to build that.

Just like I go to restaurants instead of making every meal myself.


Right, but paying for food at a restaurant doesn't get me any closer to owning a restaurant. If promises are delivered on for these agentic coding platforms (which I do believe in), it seems the most reasonable path forward is to use those platforms to build your own platform.

You cannot test your software without Claude Code?

The software collects the network traffic of distributed Claude code instances.

I currently develop small utilities with the help of AI, but am far from vibe coding or using agents. I review every single suggestion and do some refactoring at each step, before any commit (sometimes heavy refactoring; sometimes reorganizing everything).

In my experience LLMs tend to touch everything all of the time and don't naturally think about simplification, centralization and separation of concerns. They don't care about structure; they're all over the place. One needs to breathe down their necks to produce anything organized.

Maybe there's a way to give them more autonomy by writing the whole program in pseudo-code with just function signatures and let them flesh it out. I haven't tried that yet but it may be interesting.
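As a rough illustration of that idea, such a skeleton might look like the following (all function names, signatures, and docstrings are hypothetical, invented purely to show the shape):

```python
# A hypothetical skeleton handed to the model: the human fixes the
# signatures and contracts; the bodies are left for the LLM to flesh out.

def parse_config(path: str) -> dict:
    """Load and validate the config file at `path`; raise on bad input."""
    ...  # LLM fills this in

def build_report(cfg: dict, rows: list) -> str:
    """Render `rows` into the report format described by `cfg`."""
    ...  # LLM fills this in

def main(path: str) -> int:
    """Wire the pieces together; return a process exit code."""
    ...  # LLM fills this in
```

The constraint does the work here: the model can only "flesh out" bodies, so the overall structure stays the one the human chose.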


Yours matches my own experience and work habits.

My mental model is that LLMs are obedient but lazy. The laziness shows in the output matching the letter of the prompt but with as high "code entropy" as possible.

What I mean by "code entropy" is, for example, that copy-paste-tweak (high entropy) is always easier (in the short term) for LLMs (and humans) to output than defining a function to hold concepts common across the pastes, with the "tweak" represented by function arguments.

LLMs will produce high entropy output unless constrained to produce lower entropy ("better") code.
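A toy example of the two entropy levels (the report functions here are invented for illustration):

```python
# High entropy: copy-paste-tweak. Each report duplicates the same logic
# with one literal changed.
def monthly_report(rows):
    total = sum(r["amount"] for r in rows if r["period"] == "month")
    return f"monthly: {total}"

def yearly_report(rows):
    total = sum(r["amount"] for r in rows if r["period"] == "year")
    return f"yearly: {total}"

# Low entropy: the shared concept is named once; the "tweak" becomes
# a function argument.
def report(rows, period):
    total = sum(r["amount"] for r in rows if r["period"] == period)
    return f"{period}ly: {total}"
```

Both versions behave the same; the difference only shows up later, when the logic needs to change in one place instead of N.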

Until/unless LLMs are trained to actually apply craft learned by experienced humans, we must be explicit in our prompts.

For example, I get good results from, say, Claude Sonnet when my instructions include:

- Statements of specific file, class, function names to use.

- Explicit design patterns to apply. ("loop over the outer product of lists of choices for each category")

- Implementation hints ("use itertools.product() to iterate over the combinations")

- And, "ask questions if you are uncertain" helps trigger an iteration to quickly clarify something instead of fixing the resulting code.

This specificity makes prompting a lot more work but it pays off. I only go this far when I care about the resulting code. And, I still often "retouch" as you also describe.
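To make the "design pattern" and "implementation hint" items above concrete, the requested shape is roughly this (the categories and choices are invented for illustration):

```python
import itertools

# "Loop over the outer product of lists of choices for each category,"
# implemented with itertools.product() as the hint suggests.
sizes = ["small", "large"]
colors = ["red", "blue"]

combos = [f"{size}/{color}" for size, color in itertools.product(sizes, colors)]
print(combos)  # ['small/red', 'small/blue', 'large/red', 'large/blue']
```

Naming the pattern and the library call in the prompt removes two degrees of freedom the model would otherwise fill with whatever is easiest.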

OTOH, when I'm vibing I'll just give end goals and let the slop flow.


Sure... you `git add` the context text generated by AI and `git commit` it, could be useful. Is that worth 60 million?

It’s good to know that a few decades later the same generic Dropbox-weekend take can be made.

99% of projects the take applies to are massive flops. The Dropbox weekend take is almost always correct.

Sorry for my ignorance, but what's the Dropbox weekend take?

Many, many years ago, when ~persia came ashore~ Dropbox was announced on HN. [0] The top comment was quick to point out: "For a Linux user, you can already build such a system yourself quite trivially."

[0] https://news.ycombinator.com/item?id=8863


Yeah I guess why would anyone build anything, 99% of projects are flops.

I mean, a dev tool that's seemingly failing to convince developers why they would pay for it is a pretty good way to tell whether it's going to fall in the 1%.

The Dropbox take was wrong because they didn't understand the market for the product. This time the people commenting are the target audience. You even get the secondary way this product will lose even if it turns out to be a good idea: existing git forges won't want to lose users and so will standardize and support attaching metadata to commits.


> This time the people commenting are the target audience.

Nah. People post about k8s on here all the time, but that doesn't mean I'm the target audience. Just because _someone_ on HN has a bad take doesn't mean they're the person who needs this. Nor does it mean they even understand it.


Survivorship bias. How many failed and commenters were right?

Predicting that a startup will fail is... well, you've got a ton of probability on your side there, so it isn't a particularly impressive thing to be right about.

Unimpressive doesn't mean incorrect, sometimes it's good to take the side of the most probable. And yet at the same time I am reminded of this quote:

> The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man. - George Bernard Shaw


Sometimes adapting oneself is, in fact, progress.

I'm not disagreeing, just soliciting. Does anyone have examples of products that failed in the early stages because their implementation was too trivial?

How exactly are we supposed to hear about something that failed in the early stages?

There are a number of ways. Obviously Dropbox would be one case of "early and didn't fail" that could have been "early and failed", and we would have heard about it.

By listening to your friends and circle

People keep saying that, but it's hardly the same thing. We're talking about developer workflow here. It's like someone coming up with Brancher. It's a git branch manager. Use `brancher foo` to replace `git checkout -b foo`. "Remember that comment about rsync and dropbox? Brancher is to git, what dropbox is to rsync"

How is LangChain doing? How about OpenAI's Swarm or their Agent SDK or whatever they called it? AWS' agent-orchestrator? The crap ton of Agent Frameworks that came out 8-12 months ago? Anyone using any of these things today? Some poor souls built stuff on it, and the smart ones moved away, and some are stuck figuring out how to do complex sub-agent orchestration and handoffs when all you need apparently is a bunch of markdown files.



Just saw a Discord-weekend take on reddit! Haha. Guy was saying he could create it in a day and then self-host it on his servers so that he doesn't have to put Nitro ads on top of it

> It’s good to know that a few decades later the same generic Dropbox-weekend take can be made.

The dropbox-weekend take wasn't made by the intended target for the product.

This is.


It's funny how HN'ers frequently judge ideas based on complexity of implementation, not value.

I still remember the reaction when Dropbox was created: "It's just file sharing; I can build my own with FTP. What value could it possibly create".


It's a common trope. (Some) artists will often convey the same message: art should be judged on how hard it was to create. Hence why some artists despise abstract art or anything "simplistic".

We forget that human consumption doesn't increase with manufacturing complexity (it can be correlated, but not cause and effect). At the end of day, it's about human connection, which is dependent on emotion, usefulness, and availability.


I mean, that's the beauty of a forum full of engineers.

Dropbox's value was instantly recognizable, but I feel I have zero use for Entire.

I mean, I CAN see the value in pushing the context summary to git. We already have git blame to answer "who", but there is no git interrogate to answer the "why". This is clearly an attempt to make that a verb git can keep track of. It's a valuable idea.

I've also seen examples of it before. I've got opencode running right now and it has a share-session feature. That whole idea is a spinoff of the same parent concept that led to this one.
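Incidentally, stock git already has a primitive in this direction: `git notes` attaches after-the-fact annotations to commits without rewriting history. A minimal sketch (the note text is invented):

```shell
# Attach a "why" annotation to the latest commit; notes live under
# refs/notes/commits and don't change the commit hash.
git notes add -m "Why: worked around upstream rate limit" HEAD

# Read it back later (notes also appear in plain `git log` output).
git notes show HEAD
```

It's nowhere near a full session transcript, but it is the closest thing git ships to a "git interrogate".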


Isn't "why" what commit message bodies are for?

That's actually not a bad idea. Idk about any tools that do that tho

They raised 60 million. The investors think it’s worth 600M+

It's the valuation that is wild to me (I think the idea itself has merit). But these are the new economics. I can only say "that's wild" enough before it is in fact no longer wild.

These aren't new economics, it's just VC funds trying to boost their holdings by saying it's worth X because they said so. Frankly the FTC should make it illegal.

That's not how it works at all. Why stop at $300M, why didn't they just say $1BN out the gate?

Of course it's how it works, how else do you justify a company that is making negative profit into somehow being worth $300M? Like that's just the game, IDK why people accept it. It's not democratic and all it does is prime the population for fraud and abuse.

Yeah, that's not the game at all. Have you ever invested in startups?

I don't need to "play the game" to realize private valuations are just marketing fluff not based in reality. It's literally "the greater fool" theory in action. When the incentives are to just lie and not put out something with some actual scrutiny like a 409A, it's quite clear what game is being played.

But yes, I would totally love to invest in startups with people's pension funds. It seems like the perfect scam where the only losers are the public that allows such actions.


Almost all of the world's most valuable companies were once VC investments that I'm sure you would have dismissed as just greater fool nonsense.

But more to the point, I was talking about how you apparently think VCs just make up super high valuations of the companies they invest in to justify those investments? Are you not aware that VC is a competitive market?


That is where I'm shocked, being in a position of raising for a startup myself: what was in their pitch deck/data room that convinced them of this valuation? Or is it due to the founders' reputation and not the substance?

It's like GitHub - with the word AI. <end>

I LOVE THIS FOUNDER - I am a 10 out of 10 - YES!!!

Take my (investors) money


That's not impressive. That's an incredible amount of capital concentrated in the hands of a few, looking for a place to live. It has to end up somewhere. Some of it goes everywhere.

Discord is not prized because you can send a message to a chatroom, or any of the hooks and functions.

It's because of everybody there.

Currently no one is on Entire - the investor are betting they will be.


I think Discord became popular in the first place because it was so much better than the alternatives, at least for the gaming/hanging-out-with-friends use case. Discord was initially competing with a bunch of self-hosted stuff (Vent/Mumble etc.) with a higher barrier to entry and fewer features, and with Skype, which was terrible.

Discord really became big because it had 0 obstacle onboarding. In an age of Skype, Ventrilo, Teamspeak and Mumble, all "installation" software with "server addresses" and "setup your user config", Discord shows up, says "press this link", and done, you're ready to go. Install link? No, it's in the browser. Account? No, you literally got a temp account made for you. You just talked. Yes, with a button in the corner that says "Claim this account" which just wants an email and a name, but point is, you didn't even have to do that much. This is why the comparison to it is IRC despite the two being so far apart; IRC was the only other chat software with this small of a barrier to entry.

Everything else about the featureset was copy pasted from Slack. No one cares about that part.


We have had this for ages now... I just don't have access to the sort of people willing to pass me $60M for that. I never thought it to be worth anything really; it was a trivial-to-implement afterthought.

Well, a famous name is attached; this could be the start of the product that replaces GitHub. Building GitHub 2 would give the opportunity to fix mistakes that are too entrenched to change at GitHub, and who better to try? I'm uncharacteristically optimistic on this one; I'd give it a try!

I love this one so much! The arbitrary decision to cherry-pick and critique a particular product to this degree, when it's something that could be said about 99% of the stuff SV churns out, including in all likelihood anything you've ever worked on.

Good thing the comment you're replying to does not lionise 99% of the stuff SV churns out, including in all likelihood anything they've ever worked on. I guess we should just not critique anything out of SV because it's all shit?

That is their first feature.

If it were also their last, I would be inclined to agree.


The unannounced web collaboration platform in-progress might be.

Couldn't we capture this value with a git hook?

300 million, apparently.

The most active HNers are just extremely negative on AI. I understand the impulse (you spend years honing your craft, and then something comes along and automates major portions of it) but it's driven by emotion and ego-defense and those engaged in it simply don't recognize what's motivating them. Their ego-defense is actually self-fulfilling, because they don't even try to properly learn how to leverage LLMs for coding so they give it a huge task they want it to fail on, don't properly break it into tasks, and then say "i told you it sucks" when it fails to one shot it.

Even this response shows why the most active ones are outwardly negative on AI.

I use AI a ton, but there are just way too many grifters right now, and their favorite refrain is to dismiss any amount of negativity with "oh you're just mad/scared/jealous/etc. it replaces you".

But people who actually build things don't talk like that, grifters do. You ask them what they've built before and after the current LLM takeoff and it's crickets or slop. Like the Inglourious Basterds fingers meme.

To them, there's no way that someone complaining about coding agents not being there yet could simultaneously be someone looking forward to a day they could just will things into existence, because it's not actually about what AI might build for them: it's about "line will go up and I've attached myself to the line like a barnacle, so I must proselytize everyone into joining me in pushing the line ever higher up"

These people have no understanding of what's happening, but they invent one completely divorced from any reality other than the one they and their ilk have projected into thin air via clout.

It looks like mental illness and hoarding Mac Minis and it's distasteful to people who know better, especially since their nonsense is so overwhelmingly loud and noisy and starts to drown out any actual signal.


The negativity is driven by outrageous claims about how AIs will replace programmers, or how English is the PL of the future.

> if you can't see the value in this, I don't know what to tell you.

You could perhaps start by telling what value you see in this? And what this company does that someone can't easily do themselves while committing to GH?


I know about "the entire developer world has been refactored" and all, but what exactly does this thing do?

Runs git checkpoint every time an agent makes changes?


100% agree because there’s a lot of value in understanding how and why past code was written. It can be used to make better decisions faster around code to write in the future.

E.g., if you’ve ever wondered why code was written in a particular way X instead of Y then you’ll have the context to understand whether X is still relevant or if Y can be adopted.

E.g., easier to prompt AI to write the next commit when it knows all the context behind the current/previous commit’s development process.


But that's not what is in the whole context. The whole context contains a lot of noise and false "thoughts". What the AI needs to do is to document the software project in an efficient manner without duplication. That's not what this tool is doing. I question the value in storing all the crap in git.

I wonder how often that context will actually be valuable vs just more bloat filling up future API calls and burning tokens.

For the last three or four months, what I've been doing is anytime I have Claude write a comment on an issue, it just adds a session ID, file path and the VM it is on. That way, whenever some stuff comes up, we just search through issues and then we can also retrace the session that produced the work, and it's all traceable.

In general, I just work through gitea issues and sometimes beads. I couldn't stand having all these MD files in my repo because I was just drowning in documentation, so having it in issues has been working really nicely, and agents know how to work with issues.

I did have it write a gitea utility and they are pretty happy using/abusing it. Anytime I see that they call it in some way that generates errors, I just have them improve the utility. And by this point, it pretty much always works. It's been really nice.

A year ago I added memory to my Emacs helper [0]. It was just lines in org-mode. I thought it was so stupid. It worked though. Sort of.

That's how a trillion dollar company also does it, turns out.

0: https://github.com/karthink/gptel


How does this differ from what Github Copilot does when writing its .github/copilot-instructions.md? That doesn't keep the transcript or prompts, but it does keep quite a bit of the context and a declarative state of the decisions/design considerations made so another AI bot can pickup and have enough context to understand the rationale. I'm also not really convinced that any AI agent wouldn't still parse the code to understand more about the context vs. just using the checkpoint.

Wow, read through the comments and you weren't joking. I attribute this to the crossroads of "this release is v0.1 of what we are building" and the HN crowd, who have been scrolling past 120 AI frameworks and hot takes daily and have no patience for anything that isn't immediately 100% useful to them in the moment.

I find the framing of the problem to be very accurate, which is very encouraging. People saying "I can roll my own in a weekend" might be right, but they don't have $60M in the bank, which makes all the difference.

My take is this product is getting released right now because they need the data to build on. The raw data is the thing, then they can crunch numbers and build some analysis to produce dynamic context, possibly using shared patterns across repos.

Despite what HN thinks, $60M doesn't just fall in your lap without a clear plan. The moat is the trust people will have to upload their data, not the code that runs it. I expect to see some interesting things from this in the coming months.


Didn’t Juicero get more than a $100M? Do you think they had a clear plan? How much did Rome get? Did they have a clear plan?

I haven't read the article yet but this conversation reminds me of Docker. Lots of people "didn't get it." I told them at the time: if you don't get it you aren't ready for it yet so don't worry about it. When you do need it, you'll get it and then you'll use it and never look back. Look at where we are with containers now.

And look where Docker Inc is now (which is one of the points some critics are making)

Sure, but long term business success aside, I’m sure most of the folks working at this company would die for a fraction of the adoption curve docker had.

I'm trying it out now. If it works, I think it'd be great for my agentic workflows where I need to figure out why something was done a specific way.

I have a lot of concurrent agents working on things at the same time, so I'm not always sure why a piece of code is the way it is months later.


I've used it for a couple of hours. A few observations:

- It's nice to see conversation context alongside the change itself.

- I wasn't able to see Claude Code utilise past commit context in understanding code.

- It's a tad unclear (and possibly unreliable) in what is called 'checkpointing'.

- It mucked up my commit messages by replacing the first line with a sort of AI request title or similar.

Sadly, because of the last point (we use semantic release and git-cz) I've had to uninstall it.


This sounds like you're using the auto-commit strategy instead of the default manual-commit strategy. manual-commit does not automatically commit. It just adds a trailer to the git commit message to link the checkpoint to the commit.
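A trailer is just a `Key: value` line in the final paragraph of the commit message, which git can parse back out as structured metadata. For instance (the `Checkpoint` key and value here are made up, not necessarily what the tool actually writes):

```shell
# Commit with a trailer in the final message paragraph. git treats
# trailing "Key: value" lines as structured metadata.
git commit --allow-empty -m "Fix retry logic" -m "Checkpoint: 3f2a91c"

# Extract the trailer value from the latest commit.
git log -1 --format='%(trailers:key=Checkpoint,valueonly)'
```

Because it's only a message line, it survives pushes, clones, and rebases that preserve the message, with no extra refs required.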

I think this is neat; in fact, I orchestrated my AI agents to do something similar (keep a log of files touched and why). And I have agents refer to the work log as well when they are unclear on why something exists.

It's not 1:1 with checkpoints, but I find such things to be useful.


I've found immense value in this, am already doing it with Pi (https://github.com/badlogic/pi-mono), and it's very easy to replicate.

Do you mean the value in this specific tool, or in the concept? You don't need a dedicated tool to store agent session transcripts and link them to commits. This can be accomplished by a 10-line bash script.

I built out the same thing in my own custom software forge. Every single part of the collaborative development process is memoized.

And how are you using it now? Have you seen real value weeks or months on?

It's in active development in my free time. I've built various agent orchestration systems over the years for different reasons, ever since the GPT-3 API. I can tell you utility has continually risen, the models are just getting better, and late 2025 was an inflection point, which is why we're seeing all of these orchestration solutions pop up now.

I still have kinks to work out in mine but it's already usable for building software. Once I get to v1 I think it will provide enough value to be useful for me in particular. I don't have enough data to speak about months on yet, but if I think the experiment is a success then I will do a Show HN or something.

The gist is you can clone a repo or start a project from scratch, each engineering agent gets a worktree, you work with the manager agent and it dispatches and manages other agents. there are playbooks which agents contextually turn into specific tasks, each of which is tracked much like CI/CD. You can see all the tool calls, and all of the communication between both agents and humans.

The application model is ticket-based. Everything revolves around the all-holy ticket. It's like a prompt, but it becomes a foundation for tying together every bit of work related to the process of developing the feature. So you can see the progress of the ticket through the organization kanban style, or watch from a dashboard, or look at specific tickets.

There are multiple review steps where human review and intervention are required. Agents are able to escalate to humans whenever they think they need to. There is a permission system, where agents have to seek permissions from other agents or humans in a chain of command in order to do certain tasks. Everything is audited and memoized, allowing for extreme accountability and process refinement stages.

Additionally, every agent "belongs" to either another agent or a human, so there is always a human somewhere in the chain of command who is responsible and accountable for the actions of his agent team. This team includes the manager agent, engineering agents, test agents, QA agents, etc, each loaded with different context, motivations and tools to keep them on track and attempt to minimize the common failure modes I experience while working closely with these tools all day.


> This thread is extremely negative - if you can't see the value in this, I don't know what to tell you.

This sounds a lot like that line from Microsoft's AI CEO "not understanding the negativity towards AI". And Satya instructing us to not use the term "slop" any more. Yes we don't see value in taking a git primitive like "commit" and renaming it to "checkpoint". I wonder whether the branches going to be renamed to something like "parallel history" :)


This is literally what claude code already does minus the commit attachment. It’s just very fancy marketing speak for the exact same thing.

I'm happy to believe maybe they'll make something useful with $60M (quite a lot for a seed round though), but maybe not get all lyrical about what they have now.


Claude Code captures this locally, not in version control alongside commits.

I wonder how difficult it would be for Claude Code to have such a feature in a future release.

> This thread is extremely negative - if you can't see the value in this, I don't know what to tell you.

It's almost a meme: whenever a commercial product is criticized on HN, a prominent thread is started with a classic tone-policing "why are you guys so negative".

(Well, we explained why: their moat is trivial to replicate.)


ehhhh is it really that useful though? Sounds way more noisy than anything, and a great way to burn through tokens. It's like founding a startup to solve the problem of people squashing their commits. Also, it sounds like something Claude Code/Codex/etc could quickly add an extension for.

How would this use any extra tokens? Just seems like it's serializing the existing context

I see the utility in this as an extension to git / source control. But how do VCs make money off it?

Is that sarcasm? Dump a bunch of JSON from an LLM proxy and commit it? Sounds like billion-dollar secret sauce to me.

Maybe use critical thinking instead of a mindless dismissal?

The fact that you haven't offered a single counterargument to any other posters' points and have to resort to pearl-clutching is pretty good proof that you can't actually respond to any points and are just emotionally lashing out.


> if you can't see the value in this, I don't know what to tell you.

"I can't articulate why this is valuable."


Please don't use quotation marks to make it look like you're quoting someone when you aren't. That's an internet snark trope and we're trying to avoid those on HN.

https://news.ycombinator.com/newsguidelines.html


Look it’s obvious at this point to anyone who is actually using the tools.

We can articulate it, but why should we bother when it's so obvious?

We are at an inflection point where discussion about this, even on HN, is useless until the people in the conversation are on a similar level again. Until then we have a very large gap in a bimodal distribution, and it’s fruitless to talk to the other population.


Not really, because those details aren't actually relevant to code archaeology.

You could have someone collect and analyze a bunch of them, to look for patterns and try to improve your shared .md files, but that's about it


[flagged]


I think if you add some more emotional vitriolic language to your reply you’ll finally, finally get your point across. /s

[flagged]


I will never use this platform. I didn't even click into it. Pathetically, I did click to view the comments.

But I think commenting on someone's bio is the kind of harshness you only do in the moment. The kind of thing I'd approach differently in hindsight (one that isn't an attempt to be cruel).


Harsher than the OP who is ridiculing everybody who criticizes the product being presented here in this thread?

Yes, much harsher

> This thread is extremely negative - if you can't see the value in this, I don't know what to tell you.

Just an opinion and not ridiculing or attacking someone specifically


amongst all the threads, it appears they are quite outnumbered lol

Have you considered that betting against the models and ecosystem improving might be a bad bet, and you might be the one who is in for a rude awakening?

My favorite open source projects still have zillions of open bugs. Lots of projects are shutting down accepting external contributions because the PRs are so terrible. Wipro still isn't bankrupt.

Cheerleading is nice. Seeing the bug counts from my favorite projects significantly decrease would be evidence.


I'm not betting against them, I use them every day (but I don't "vibe code"—there's more intent). I'm just not treating them as a deity or other prayer-candle worthy entity. They're business tools. It's just a chat bot bro.

I agree. We've been assured by these skeptics that models are stochastic parrots, that progress in developing them was stalling, and that skills parity with senior developers was impossible - as well as having to listen to a kind of self-indulgent relish in daydreaming about the eventual catastrophes companies adopting them would face. And perhaps eventually these skeptics will turn out to be right. Who knows at this stage. But at this stage, what we're seeing is just the opposite: significant progress in model development last year, patterns for use being explored by almost every development team without widespread calamity and the first well-functioning automated workflows appearing for replacing entire teams. At this stage, I'd bet on the skeptics being the camp eventually forced to make the hard adjustments.

Pray tell, how has the world benefited from a flood of all these superhuman developers? Where is the groundbreaking software that is making our lives better?

Is this reply meant to me? Because what I wrote was:

> But at this stage, what we're seeing is just the opposite: significant progress in model development last year, patterns for use being explored by almost every development team without widespread calamity and the first well-functioning automated workflows appearing for replacing entire teams.


I mean, if it ever gets good, eh, I suppose I'll use it? Pre-emptively using it in case it one day works properly seems rather perverse, tho.

I've written code since 2012, I just didn't put it online. It was a lot harder, so all my code was written internally, at work.

But sure, go with the ad hominem.


You're completely right and I wish I had in retrospect... I was honestly just talking mostly in broad terms, but people really (maybe rightly) focused on the "not reading code" snippet.

I'm mostly developing my own apps and working with startups.


It's nano banana - I actually noticed the same thing. I didn't prompt it as such.

Here's the prompt I used, actually:

Create a vibrant, visually dynamic horizontal infographic showing the spectrum of AI developer tools, titled "The Shift Left"

Layout: 5 distinct zones flowing RIGHT TO LEFT as a journey/progression. Use creative visual metaphors — perhaps a road, river, pipeline, or abstract flowing shapes connecting the stages. Each zone should feel like its own world but connected to the others.

Zones (LEFT to RIGHT):

1. "Specs" (leftmost) - Kiro logo, VibeScaffold logo, GitHub Spec Kit logo

   Label: "Requirements → Design → Tasks"


2. "Multi-Agent Orchestration" - Claude Code logo, Codex CLI logo, Codex App logo, Conductor logo

   Label: "Parallel agents, fire & forget"


3. "Agentic IDE" - Cursor logo, Windsurf logo

   Label: "Autonomous multi-file edits"


4. "Code + AI" - GitHub Copilot logo

   Label: "Inline suggestions"


5. "Code" (rightmost) - VS Code logo

   Label: "Read & write files"


Visual style: Fun, energetic, modern. Think illustrated tech landscape or isometric world. NOT a boring corporate chart. Use warm off-white background (#faf8f5) with amber/orange (#b45309) as the primary accent color throughout. Add visual flair — icons, small illustrations, depth, texture, but don't make it visually overloaded.

Aspect ratio: 16:9 landscape


I'm glad you wrote this comment because I completely agree with it. I'm not denying that there's a need for software engineers who deeply consider architecture; who can fully understand the truly critical systems that exist at most software companies; who can help dream up the harness capabilities that make these agents work better.

I just am describing what I'm doing now, and what I'm seeing at the leading edge of using these tools. It's a different approach - but I think it'll become the most common way of producing software.


I actually think this is fair to wonder about.

My overall stance on this is that it's better to lean into the models & the tools around them improving. Even in the last 3-4 months, the tools have come an incredible distance.

I bet some AI-generated code will need to be thrown away. But that's true of all code. The real questions to me are: are the velocity gains worth it? Will the models be so much better in a year that they can fix those problems themselves, or rewrite it?

I feel like time will validate that.


When I talk with people in the space, go to meetups, present my work & toolset, I am usually one of the more advanced people in the conversation / group, though usually not THE most. I'm not saying I'm some sort of genius, I'm just saying I'm relatively near the leading edge of how to use these tools. I feel like it's true.
