It may indeed be the case that the candidate promised one thing and the voters, acting irrationally (or correctly assuming he's a liar), voted expecting him to do the exact opposite. The GP, however, didn't say anything about voting. He was talking specifically about the mismatch between campaign promises and actions taken once in office.
I am working on a P2P VPN app that lets you use a friend abroad as your VPN provider with no special setup: https://spora.to
It's mainly for censorship evasion (should be much harder to block than the regular centralized VPNs), but also for expats to access geo-blocked domestic services.
It's at the MVP stage and honestly it evoked much less interest in people than I hoped it would, but I'm still going on despite my better judgement.
Same at 42. I've been making software for 30 years and the gap between what I can envision and what I can code in a single day is so huge that it takes all the steam out of me. With agentic coding I can move at a pace that feels right again.
If every server has its own IP then the privacy leak comes from merely connecting to it. It makes ECH useless, but that's not the same thing as making it unnecessary.
Why would anyone want to become a Chinese citizen? How's everyone discussing linguistics while completely ignoring the authoritarian elephant in the room?
Because we HN people are used to reducing the world to a set of technical parameters. I am not intending to blame or shame anyone here, but to take it more broadly, the discussion around Doge showcased many such problems that arise from unawareness of the limits of our approach: context blindness, taking narratives at face value, narrow focus on technicalities, no consideration for ethics, etc.
Tech people need to reduce complexity to make it computable; that's our job. Our strong points are our weak points too. Again: no blame or shame. Just wanted to point out we are susceptible in these matters.
How is it that the form of government comes up so often when discussing the decisions of ordinary people?
I would think for most people, you care about whether you can fit economically before you consider something that is unlikely to matter.
Obviously don't go and try to immigrate to China if you are planning to be a political commentator.
But for most people in most places, what will you notice? Are there jobs, how is tax, are the streets clean, are there homeless people, can I see a doctor, is there a lot of paperwork? Will I find friends?
Comparatively few people live under worse authoritarianism than the one in China. Definitely not enough to form a talent pool that would make any dent in whatever China already has. Especially when you factor in education quality.
It's not exactly a linguistics discussion, it's a discussion about attracting talent to live/work somewhere. I'm not saying whether it's good or bad on China's part, that's a separate issue. I'm saying that the possibility of integration is harder than just learning the language.
Weird title, but worthy of discussion. From the little info available so far this appears to be little more than political posturing. If you want to fight censorship, an "online portal" to access all the censored content is the wrongest possible way to go about it. But we'll see.
Why is it depressing? Personally, unless the alternative is literally starving, I wouldn't want to do a job that a robot could do instead just so that I could be kept busy. That sounds like an insult to human dignity tbh.
You know what is an insult? The supermarket on my street putting up sloppy ads showing a ramen bowl with chopsticks of three different thicknesses and cartoon characters with scrambled faces. Now that is an insult, because there used to be a human being doing that job, and I am sure there was a great "productivity boost" related to that change.
I am a heavy AI user myself, and sure as hell I am not setting foot in that place again.
Yeah, but it's the job of elected governments to build and maintain housing, education, social and welfare systems that keep up with the challenges of the times; it's not the responsibility of the private sector to hold back progress and preserve inefficiency just so more people can stay employed even when they're not needed anymore.
The governments, however, have been and continue to be ill-prepared for the rise of globalisation, labor offshoring, and automation.
There was a news article yesterday in my EU country about a 50-year-old laid-off CEO of a small company who is still unemployed after a year because nobody will hire him anymore, so he lives off welfare and odd jobs, and the government unemployment office has no solution.
What happens in the future when AI and offshoring cull more white collar jobs and there are thousands or tens of thousands of unemployable 50-year-old managers with outdated skills that nobody will want to hire or re-train for various reasons, but who still need to keep working somehow till their 70s to qualify for retirement? Sure, you can then retrain to become a licensed plumber or electrician, but who will hire you to gain experience when they can hire the 20-something fresher rather than the 50-year-old with bad knees?
> but it's the job of the elected governments to build and maintain housing, education, social and welfare systems for their population that keep up with the challenges of the times
I'd say those things are the job of the population itself, via a wide range of pluralistic institutions. The job of governments, which are just specific organizations within a much larger society, is primarily to maintain public order.
>I'd say those things are the job of the population itself, via a wide range of pluralistic institutions.
I'd agree ONLY IF I'd pay no taxes to the government. But since most middle class people pay 40%+ of their income to the state, then the state now has the responsibility to handle those challenges for us.
But if the state wants me to handle it, then sure I'd do it gladly, they just need to reimburse 90% of my tax payments so I'd have the financial resources to proactively invest in my future security.
But right now we have the worst of both worlds in the west: a huge tax burden on the middle class funding an incompetent state that takes your money, spends it like drunken sailors on bullshit, and when the shit hits the fan, just tells you it's your fault when you fall down, instead of having used your money for societal wide preemptive solutions.
> I'd agree ONLY IF I'd pay no taxes to the government. But since most middle class people pay 40%+ of their income to the state, then the state now has the responsibility to handle those challenges.
Well, no, we pay taxes for the government to fund the things government is supposed to do and is competent at. Paying the government doesn't make it responsible for, or competent to handle, every problem arising anywhere in society, any more than paying for a Netflix subscription makes Netflix responsible for or capable of handling those problems.
This is really important, because political institutions aren't just bad at handling complex social problems, but when made responsible for them, often get in the way of other individuals, communities, and institutions trying to solve those problems with much better approaches.
> But if the state wants me to handle it, then sure I'd do it gladly, they just need to reimburse all my tax payments so I'd have the financial resources to invest in my future.
Agreed. We should drastically lower taxes, and ensure that most of the resources necessary to improve society are left in the hands of society itself, and not monopolized by a single institution that's subject to perverse incentives.
But if we assume that we're stuck paying the same level of taxes for the time being, and treat those taxes merely as losses, the question reduces to whether we want a monopolistic organization run by people with ulterior motives exercising a controlling influence over our lives and livelihoods -- and often failing to solve those complex problems in the first place -- or whether we would still prefer to solve those problems for ourselves with the resources we have left. And to my mind, the latter is still preferable, even if unhelpful strangers are stealing a good chunk of my resources.
> But right now we have the worst of both worlds: a huge tax burden on the middle class funding an incompetent state that takes your money, spends it like drunken sailors on bullshit, and when the shit hits the fan just tells you it's your fault when you fall down, instead of having used your money for societal wide preemptive solutions.
Yes, that's all true. But to my point above, the only way out of this is not to expect that the incompetent grifters will somehow start behaving like competent philanthropists, but rather to contain them and minimize the grift -- either way, it's still on us to solve our own problems.
>Well, no, we pay taxes for the government to fund the things government is supposed to do and is competent at.
Which also includes the education system training you for the labor market. How is the state good at that if what they're training you for is now useless? Also includes the welfare safety net which is now failing to catch everyone falling.
>This is really important, because political institutions aren't just bad at handling complex social problems, but when made responsible for them, often get in the way of other individuals, communities, and institutions trying to solve those problems with much better approaches.
If we know they're bad at this and often responsible for the issues we have, why are we funding them so much?
Norway has its sovereign fund as a preemptive solution in case the country hits a rough patch in the future.
>but rather to contain them and minimize the grift
And this can only be done peacefully by defunding the incompetent state apparatus.
>either way, it's still on us to solve our own problems
Yeah but you need money for that. And we don't have money because the state is taking half of it.
> Which also includes the education system training you for the labor market.
Does it? That's an assumption many people make, but I'm not sure that this was either the original intent -- public schooling was driven largely as a tool for "liberal arts" education and to assimilate immigrants -- or something that public schooling has ever proven to be particularly good at.
> If we know they're bad at this and often responsible for the issues we have, why are we funding them so much?
Well, most people's main incentive for paying taxes is the threat of being punished for failing to do so.
> And this can only be done peacefully by defunding the incompetent state apparatus.
Agreed entirely.
> Yeah but you need money for that. And we don't have money because the state is taking half of it.
Agreed entirely, and doing away with confiscatory taxation is an important goal. But whether or not the state takes our money is not directly relevant to the question of whether the state is sufficiently trustworthy and competent to assign monopolistic control of critical aspects of our lives to.
And my position on that is that even if we can't roll back taxation, we still shouldn't trust the state with unilateral control over key aspects of our lives and livelihoods, and we'd be better off making do with the resources we retain despite taxation to provide those things for ourselves via other forms of organization or community.
Is it an insult to human dignity? Let’s go through the thought process.
Commodities are used in an enterprise. Some of the commodities are labor. That labor commodity does work. Involving automation. Eventually (so we are told) those labor commodities manage to automate some forms of labor. Making those other labor commodities redundant.
The labor commodities are discarded. Because why (sigh) use a cart when you now have a car? And you don’t even own a horse.
All of the above is presumably not an insult to human dignity. No. The insult to human dignity is being “kept busy” instead of letting billionaires hoard automation made through human labor.
Of course the real solution is not busywork. But the part about busywork was not on the top of my mind with regards to dignity in this context.
> Personally, unless the alternative is literally starving,
Assuming large-scale automation[1]: workers have in aggregate automated themselves. It takes labor to automate. And yet those former workers are now a “burden”? We’re assuming automation, so was the making of the food stuff, the transportation of the food stuff, the automation of the infrastructure maintenance... was that done or not? Where is the burden being felt?
You’re gonna call the people that built everything a burden?
Either we are talking in terms of propagandistic guilt assignment, or we're talking realpolitik. Either:
1. we can trivially support the “burden” because of automation (no burden); or
2. billionaire resource hoarders (a burden?) do not need the vast majority of their underlings (maybe just a few for Epstein 2.0) and can let them fend for themselves or die off. (It’s literally not even a question of whether they have a big red Automation Button that would sustain the “burden” indefinitely. What incentive do they have to press it?)
More jobless people are a burden in a capitalism-based social security system.
It has nothing to do with whether they built something useful or not. Capitalism doesn't care.
In the end the upper 0.1% get the profit, and those who still have jobs have to finance the social security systems. More jobless and fewer working people means the jobless become a burden, and in the long run the system will fail.
So you either need to tax automation or the rich.
Guess if that will happen.
I know you are speaking in real terms about how the system works. But I don’t need to describe that system in such utterly system-serving (for lack of words) terms.
> In the end the upper 0.1% get the profit, and those who still have jobs have to finance the social security systems. More jobless and fewer working people means the jobless become a burden, and in the long run the system will fail.
Emacs is an interesting analogy. I switched from IDEs to Emacs at some point in my career, and inertia obviously wasn't the reason. Then another 15 years later I went back to using IDEs (inspired by Carmack's interview). Two years in I realized it destroyed my ability to generate and maintain mental maps of the codebase and generally remember things about it, although I think it's still a net gain so I stick with it. Agentic coding poses the exact same problem and the exact same tradeoff. And I think the jury is still out on whether it's worth it. At the very minimum you need to take proactive measures lest you end up with a codebase no one can maintain (other than maybe a future AGI).
> I deeply appreciate hand-tool carpentry and mastery of the art, but people need houses and framing teams should obviously have skillsaws.
Where are all the new houses? I admit I am not a bleeding edge seeker when it comes to software consumption, but surely a 10x increase in the industry output would be noticeable to anyone?
This weekend I tried what I'd call a medium-scale agentic coding project[0], following what Anthropic demonstrated last week autonomously building a C compiler [1]. Bottom line is, it's possible to make demos that look good, but it really doesn't work well enough to build software you would actually use. This naturally lends itself to the "everybody is talking about how great it is but nobody is building anything real with it" situation we're in right now. It is great, but also not really useful.
Related, this reminds me of the time Cursor spent millions of dollars worth of tokens to write a new browser with LLMs and ended up with a non-functioning wrapper of existing browser libraries.
Org processes have not changed. Lots of the devs I know are enjoying the speedup on mundane work, consuming it as a temporary lifestyle surplus until everything else catches up.
You can't saw faster than the wood arrives. Also the layout of the whole job site is now wrong and the council approvals were the actual bottleneck to how many houses could be built in the first place... :/
Basically this. My last several tickets were HITL coding with AI for several hours and then waiting 1-2 days while the code worked its way through PR and CI/CD process.
Coding speed was never really a bottleneck anywhere I have worked - it’s all the processes around it that take the most time and AI doesn’t help that much there.
True story: I wanted to make a tiny update to our CI/CD to upload copies of some artifacts to S3. It took 1 min for the LLM to remind me of the correct aws cli syntax to do the upload and the syntax to plug it into our GitHub Actions. It then took me the next 3 hours to figure out which IAMs needed to be updated to allow the upload, before it was revealed that actually uploading to S3 requires company IT to adjust bucket policies, and this requires filing a ticket with IT, waiting 1-5 business days for a response, then potentially having a call with them to discuss the change and justify why we need it. So now it's four days later and I still can't push to S3.
AI reduced this from a 5-day process to a 4.9-day process
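For reference, the one-minute part was essentially a single workflow step; a hedged sketch (the bucket name, paths, region, and secret names below are illustrative placeholders, not our actual config):

```yaml
# Hypothetical GitHub Actions step: copy build artifacts to S3.
# Bucket, region, paths, and secret names are placeholders.
- name: Upload artifacts to S3
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    AWS_DEFAULT_REGION: eu-central-1
  run: |
    aws s3 cp ./artifacts/ s3://example-ci-artifacts/builds/${{ github.sha }}/ --recursive
```

Everything after that -- the IAM permissions and the bucket policy -- was the four-day part.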
I’m seeing it slightly differently. So much of our new slowdown is rework because we’ve seen a bunch more API and contract churn. The project I’m on has had more rework than I care to contemplate and most of it stems from everyone’s coding agents failing to stay synced up with each other on the details and their human handlers not noticing the discrepancies until we’re well into systems integration work.
If I may hijack your analogy, it would be like if all the construction crews got really fast at their work, so much so that the city decided to go for an “iterative construction” strategy because, in isolation, the cost of one team trying different designs on-site until they hit on one they liked became very small compared to the cost of getting city planners and civil engineers involved up-front. But what wasn’t considered was the rework multiplier effect that comes into play when the people building the water, sewage, electricity, telephones, roads, etc. are all repeatedly tweaking designs with minimal coordination amongst each other. So then those tweaks keep inducing additional design tweaks and rework on adjacent contractors because none of these design changes happen in a vacuum. Next thing you know all the houses are built but now need to be rewired because the electricity panel is designed for a different mains voltage from the drop and also it’s in the wrong part of the house because of a late change from overhead lines in the alleys to underground lines below the street.
Many have observed that coding agents lack object permanence, so keeping them on a coherent plan requires giving them a thoroughly documented plan up front. It actually has me wondering if optimal coding agent usage at scale resembles something of a return to waterfall (probably in more of a Royce sense than the bogeyman agile evangelists derived from the original idea), where the humans on the team mostly spend their time banging out systems specifications and testing protocols, and iteration on the spec becomes somewhat more removed from implementing it than it is in typical practice nowadays.
To me the hard problem isn’t building things, it’s knowing what to build (finding the things that provide value) and how to build it (e.g. finding novel approaches to doing something that makes something possible that wasn’t possible before).
I don’t see AI helping with knowing what to build at all and I also don’t see AI finding novel approaches to anything.
Sure, I do think there is some unrealized potential somewhere in terms of relatively low value things nobody built before because it just wasn’t worth the time investment – but those things are necessarily relatively low value (or else it would have been worth it to build it) and as such also relatively limited.
Software has amazing economies of scale. So I don't think the builder/tool analogy works at all. The economics don't map. Since you only have to build software once and then it doesn't matter how often you use it (yeah, a simplification), even pretty low-value things have always been worth building. In other words: there is tons of software out there. That's not the issue. The issue is: what is the right software and can it solve my problems?
> To me the hard problem isn’t building things, it’s knowing what to build (finding the things that provide value) and how to build it (e.g. finding novel approaches to doing something that makes something possible that wasn’t possible before).
The problem with this is that after doing this hard work, someone can easily copy your hard work and UI/UX taste. I think distribution will be very important in the future.
We might end up with what you already have in social media, where influencers copy someone's post/video without giving credit to the original author.
>The problem with this that after doing this hard work someone can just copy easily your hard work and UI/UX taste.
Or indeed, somebody might steal and launder your work by scooping them up into a training set for their model and letting it spit out sloppy versions of your thing.
I agree. It’s much easier to build low-impact tools for personal use.
I managed to produce tools I would never have had time to build, and I use them every day. But I will never sell them because they’re tailored to my needs, and it makes no sense to open source anything nowadays.
For work it’s different, product teams still need to decide what to build and what is helpful to the clients. Our bugs are not self-fixed by AI yet.
I think Anthropic saying 100% of their code is AI generated is a marketing stunt. They have all reasons to say that to sell their tool that generates code. It sends a strong signal to the industry that if they can do it, it could be easier for smaller companies.
We are not there yet from a client perspective, where a client asks for a feature and the new feature is shipped to prod 2 days later without human interaction.
I wonder what happened to the old adage of "only 10% of the time is actually spent coding; the rest is figuring out what is needed".
At the same time I see people claiming 100x increases and how they produce 15k lines of code each day thanks to AI, but all I can wonder is how these people managed to find 100x work that needed to be done.
For me, I'm demotivated to work on many ideas, thinking that anyone can easily copy them or OpenClaw/Nanobot will easily replicate 90% of the functionality.
So now I need to think of different kinds of ideas, something along the lines of games that may take multiple iterations to perfect.
I mean this is how it's always been throughout history.
Creating something new is hard; copying something, in terms of energy spent, is far easier. This is true for software and for physical objects that don't require massive amounts of expensive technology to reproduce.
At my $work this manifests as more backlog items being ticked off, more one-off internal tooling, features (and tools) getting more bells-and-whistles and much more elaborate UI. Also some long-standing bugs being fixed by claude code.
Headline features aren't much faster. You still need to gather requirements, design a good architecture, talk with stakeholders, test your implementation, gather feedback, etc. Speeding up the actual coding can only move the needle so much.
I feel like we work at the same place. IT Husbandry/Debt Paying/KTLO, whatever you call it, is being ground into dust. Especially repetitive stuff that I originally would've needed a week to automate and never could get to the top of the once-quarterly DevOps sprint... bam. GitHub Action workflow runs weekly to pull in the latest OS images, update and roll over a smoke test VM, monitor, roll over the rest or roll back, and ping me in Slack. Done in half a day.
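For the curious, that kind of rollover job is conceptually just a scheduled workflow. A minimal sketch, assuming the heavy lifting lives in repo scripts (all script names, the Slack step, and the cron schedule here are hypothetical placeholders, not what I actually run):

```yaml
# Hypothetical scheduled workflow: refresh base images, smoke test, roll out.
# Every script name below stands in for internal tooling.
name: weekly-image-rollover
on:
  schedule:
    - cron: "0 6 * * 1"  # Monday mornings
jobs:
  rollover:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Pull latest OS images and roll over the smoke-test VM
        run: ./scripts/update-smoke-vm.sh
      - name: Monitor smoke tests, then roll over the rest (or roll back)
        run: ./scripts/roll-or-rollback.sh
      - name: Ping Slack with the result
        if: always()
        run: ./scripts/notify-slack.sh "${{ job.status }}"
```

The point being: once the scripts exist, the orchestration around them is boilerplate an agent churns out in minutes.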
I've got a couple Claude Code skills set up where I just copy/paste a Slack link into it and it links people relevant docs, gives them relevant troubleshooting from our logs, and a hook on the slack tools appends a Claude signature to make sure they know they weren't worth my time.
That said, there's this weird quicksand people around me get in where they just spend weeks and weeks on their AI tools and don't actually do much of anything? Like bro you burned your 5 hour CC Enterprise limit all week and committed...nothing?
I'm sure there's plenty of new software being released and built by agents, but the same problem as handcrafted software remains - finding an audience. The easier and quicker it is to build software, or the more developers build software, the more stuff is thrown at a wall to see what sticks, but I don't think there's more capacity for sticktivity, if my analogy hasn't broken down by now.
According to SteamDB (and Reddit), 2024 and 2025 both saw about 19,000 games released on Steam - there's a big jump between '23 and '24 of about 5,000 games, but oddly it plateaued then.
It's ironic to me because I'm the Luddite who refuses to adopt agentic AI and still uses only the chat interface with Codex and Claude inside the VS Code extensions to help me with both work projects and personal projects. And I've had amazing results with only this. "Look at this codebase and tell me the best ways to integrate some new feature", "look at this source code file and tell me what's wrong with it", "show me how to implement this thing I want". Then I copy and adapt the results as needed and integrate them with the rest of my work. This has worked great and I've shipped a ton of projects much faster and easier. Clearly the AI could have written a lot of it itself, but I'm not sure I'm really missing out on any benefits with this method. So this makes the whole agentic push seem like an especially overhyped gimmick.
Quite a few - and I know I am only speaking for myself - live on my different computers. I created a few CLI tools that make my life and that of my agent smoother sailing for information retrieval. I created, inspired by a blog post, a digital personal assistant, that really enables me to better juggle different work contexts as well as different projects within these work contexts.
I created a platform for a virtual pub quiz for my team at my day job, built multiple landing pages for events, debugged darktable to recognize my new camera (it was too new to be included in the cameras.xml file, but the specs were known). I debugged quite a few parts of a legacy shitshow of an application, did a lot of infrastructure optimization, and I also created a massive ton of content as a centaur in dialog with Claude Code.
But I don't do "Show HN" posts. And I don't advertise my builds - because other than those named, most are one off things, that I throw away after this one problem was solved.
To me code became way more ephemeral.
But YMMV - and that is a good thing. I also believe that way less people than the hype bubble implies are actually really into hard core usage like Pete Steinberger or Armin Ronacher and the likes.
> Quite a few - and I know I am only speaking for myself - live on my different computers
I use AI/agents in quite similar ways, and even rekindled multiple personal projects that had stalled. However, to borrow OPs parlance, these are not "houses" - more like sheds and tree-houses. They are fun and useful, but not moving the needle on housing stock supply, so to speak.
People haven't noticed because the software industry was already mostly unoriginal slop, even prior to LLMs, and people are good at ignoring unoriginal slop.
The real outcome is mostly a change in workflow and a reasonable increase in throughput. There might be a 10x or even 100x increase in creation of tiny tools or apps (yay to another 1000 budget assistant/egg timer/etc. apps on the app/play store), but hardly something one would notice.
To be honest, I think the surrounding paragraph lumps together all anti-AI sentiments.
For example, there is a big difference between "all AI output is slop" (which is objectively false) and "AI enables sloppy people to do sloppy work" (which is objectively true), and there's a whole spectrum.
What bugs me personally is not at all my own usage of these tools, but the increase in workload caused by other people using these tools to drown me in nonsensical garbage. In recent months, the extra workload has far exceeded my own productivity gains.
For the non-technical, imagine a hypochondriac using chatgpt to generate hundreds of pages of "health analysis" that they then hand to their doctor and expect a thorough read and opinion of, vs. the doctor using chatgpt for sparring on a particular issue.