Hacker News | ozten's comments

Luckily, the author will face no frustrations with Ubuntu or whatever Linux OS they migrate to. Flawless UX. Zero compromises.

It's really gotten better in the last few years. Try a spin of Fedora sometime to see the latest polish.

I have been an active user for almost 30 years. You can tell by my comment.

Oh, sorry, I misread your comment as sarcasm.

But you can onshore pests... wait, Nutria pest control and generate demand by ... introducing Nutria to untapped markets!

William Burroughs on 1959 HN: I wanted to write Naked Lunch, so I took a pest control job.

Raining, misting. The plane may have been up in cloud cover when they looked and decided to enter the runway. Hard to judge the distance and trajectory of planes. Once the firetruck started moving, they may have seen it without time to gun it or back up.

> It would have prevented this.

Wow.


Look at the video. The plane was already on the ground.

They should mirror on GitHub for marketing purposes

How would they do that if they don't use git for version control? Does GitHub support forms of version control other than git?

SQLite does it despite using Fossil - their mirror is at https://github.com/sqlite/sqlite

Git is so established now that it's sensible for an alternative VCS to have a mode where it can imitate the Git protocol - or even without that, you can still check out the latest version of your repo and git push it on a periodic basis.
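The periodic-push fallback needs nothing from the other VCS beyond a checked-out working tree. A rough sketch of one mirror cycle (paths, remote, and the mirror_commands helper are placeholders, not any real tool's API; the checkout step from the source VCS is assumed to happen separately):

```python
# Hypothetical periodic git-mirror cycle: snapshot whatever is in the
# working tree and push it. Run on a timer or from cron.
import subprocess

def mirror_commands(workdir: str, remote: str, branch: str = "main") -> list[list[str]]:
    """Build the git command sequence for one mirror cycle (pure, testable)."""
    return [
        ["git", "-C", workdir, "add", "-A"],
        # --allow-empty keeps the cycle idempotent when nothing changed
        ["git", "-C", workdir, "commit", "-m", "periodic mirror snapshot", "--allow-empty"],
        ["git", "-C", workdir, "push", remote, branch],
    ]

def run_cycle(workdir: str, remote: str) -> None:
    """Execute one snapshot-and-push cycle."""
    for cmd in mirror_commands(workdir, remote):
        subprocess.run(cmd, check=False)
```

This loses the source VCS's history granularity, of course - each push is one flattened snapshot - which is why a real protocol-imitation mode is the nicer option.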


Git is not a protocol; it is a data format. That only makes sense when your VCS is similar enough to git to allow easy conversion between the two representations.

I mean things like git-svn, hg-git, git-p4, git-remote-fossil, git-tfs, jj.

Every single one of those is following variations on the exact same data structure, or is actually git in a trenchcoat.

Similarly, CUE uses Gerrit and has two-way sync. If you are building a VCS today, git interop is a must.

What if the whole point of your VCS is that its core data structure is nothing like git's at all?

As a user, why do I care how the internals work?

What I do care about is an easy path to progressive adoption and migration. Without that, I cannot convince my team / org to force everyone over.


It solves problems that you don’t encounter if you are asking that question. I’ve lost a literal year or more of my life, in aggregate, to rebasing changes against upstream that could have been handled automatically by a sufficiently smart VCS.

An alternative explanation is that I already have a tool that helps me with these situations. The question was a bit rhetorical, because the vast majority of devs don't care what language many of their tools are written in or what algos are used.

A different example: Go's MVS algorithm can be considered much better for dependency management. What are your thoughts on the SAT solver being replaced in your preferred language's tooling? It would mean the end of lock files.
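For readers unfamiliar with Minimal Version Selection, here is a toy sketch of the idea (module names and version data are invented for illustration): each module resolves to the maximum of the minimum versions anyone requires. No search, no backtracking, so the result is deterministic without a lock file.

```python
# Toy Minimal Version Selection (MVS). Versions are int tuples for simplicity.
# (module, version) -> list of (module, minimum_version) it requires.
REQUIREMENTS = {
    ("app", (1, 0)): [("lib", (1, 2)), ("util", (1, 0))],
    ("lib", (1, 2)): [("util", (1, 1))],
    ("util", (1, 0)): [],
    ("util", (1, 1)): [],
}

def mvs(root):
    """Walk the requirement graph, keeping the MAXIMUM of the MINIMUM
    versions anyone asks for -- no solver, no backtracking."""
    selected = {}
    stack = [root]
    while stack:
        mod, ver = stack.pop()
        if selected.get(mod, (0,)) >= ver:
            continue  # already selected this version or newer
        selected[mod] = ver
        # re-walk the requirements of the newly selected version
        stack.extend(REQUIREMENTS.get((mod, ver), []))
    return selected

print(mvs(("app", (1, 0))))  # {'app': (1, 0), 'lib': (1, 2), 'util': (1, 1)}
```

Note that util resolves to (1, 1), not the newest version that exists - MVS picks the oldest version that satisfies everyone, which is what makes builds reproducible from the requirement list alone.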


If you have a tool for better rebasing, I’d love to hear it.

```
pijul clone https://nest.pijul.org/pijul/pijul
pijul log --hash-only > all_changes.txt
pijul unrecord --all
git init

for HASH in $(cat all_changes.txt); do
  pijul apply "$HASH"
  pijul reset   # sync working copy to channel state
  git add -A
  git commit -m "pijul change: $HASH"
done

git remote add origin git@github.com:you/pijul-mirror.git
git push -u origin main
```


For me, Bun’s dramatic entrance and the lack of any Deno response that reached my attention effectively evaporated any interest I would have had in switching my runtime. I’m already set with my tooling and hosting.

I’m building this as a pay-as-you-go, agent-first go-to-market engine, but I wonder if I should offer fractional GTM services.


Hey HN — I'm Austin in Seattle, a solo technical founder with 20+ years of engineering experience and 4 years of biz/marketing. I've shipped a lot of software. Selling it has always been the hard part.

I see a lot of Indie Hacker / bootstrap founders follow the same arc: build something you're proud of, Google "how to find your first customers," get a wall of generic advice, context-switch between six tabs of ChatGPT conversations that forget everything by tomorrow, and eventually just go back to writing code because at least that feels productive.

I built Cantrip to fix this. It's a go-to-market engine — not a chatbot, not a consultant replacement, not an all-in-one platform that wants to own your stack.

What it actually does: You describe your product (paste a README, explain it in plain English, whatever). Cantrip builds a Context Graph — a persistent, structured map of your business: ideal customer profiles, pain points, value propositions, channels where your customers hang out, competitors, and experiments to validate your assumptions.

The system runs continuous gap analysis. No ICP defined? That's a gap. Competitors not analyzed? Gap. Channels identified but untested? Gap. Each gap becomes a prioritized opportunity with three options: let Cantrip research it with AI, get a context-rich prompt to use in any LLM, or fill in what you already know.
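In spirit, the gap analysis is just rules evaluated over the graph. A toy sketch (the field names and rules here are simplified illustrations, not the production code or schema):

```python
# Simplified illustration of a gap-analysis pass over a context graph.
# Field names (icp, competitors, channels) are illustrative only.

def find_gaps(graph: dict) -> list[str]:
    """Return human-readable gaps found in the graph."""
    gaps = []
    if not graph.get("icp"):
        gaps.append("No ICP defined")
    if not graph.get("competitors"):
        gaps.append("Competitors not analyzed")
    untested = [c for c in graph.get("channels", []) if not c.get("tested")]
    if untested:
        gaps.append(f"{len(untested)} channel(s) identified but untested")
    return gaps

graph = {"icp": None, "competitors": ["X"], "channels": [{"name": "HN", "tested": False}]}
print(find_gaps(graph))  # ['No ICP defined', '1 channel(s) identified but untested']
```

The real system attaches a priority and the three resolution options to each gap; the point of the sketch is just that gaps are derived from graph state, not from a chat transcript.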

Everything AI-generated enters a review queue. Nothing is treated as ground truth until you accept it. The graph deepens over time — each cycle builds on the last.

What's technically interesting: Cantrip is agent-native from day one. Every capability is exposed as one of 17 MCP tools — cantrip_snapshot, cantrip_next, cantrip_review, etc. Any MCP-compatible agent (Claude Code, OpenClaw, Cursor, custom workflows) can read and write to your Context Graph programmatically.

This means your coding agent can finish a feature, call cantrip_snapshot, discover no value proposition covers the new capability, call cantrip_next to find the gap, and surface it in your review queue — without you leaving your IDE. The Context Graph becomes shared state for multi-agent coordination, with human review as the trust boundary.

The daemon handles identity, auth, and credit gating transparently. Agents read .cantrip.json from the working directory — no project IDs as parameters, no auth ceremony. One API surface powers the MCP server, dashboard, and CLI with zero drift between interfaces.

Business model: Credits, not subscriptions. GTM work is bursty — intense before a launch, quiet for weeks. Credits match that reality. Every feature available from day one; credits unlock depth, not access. Reads are always free.

This is super rough around the edges, but if you want to kick the tires on the MCP server or CLI, join the Discord and I'll give you some free credits if paying is a barrier.

I've seen some "Autonomous company" launches; this isn't that. Check out the FAQ...

I'd genuinely love feedback on the architecture, the MCP tool design, or the overall approach. What am I missing? What would make this useful to you?


Gas Town Wasteland is an elaborate growth hack for DoltDB


This is really great and important progress, but Lean is still an island floating in space. Too hard to get actual work done building any real world system.

