> I still haven't found one to equal Railroad Tycoon 3, which has this kind of neat reactive diffusion field pricing engine.
My feelings exactly. I still play it regularly for this reason alone. A really amazing game under the hood.
The Rhodes Unfinished scenario is probably my favorite. On the highest-difficulty level, it starts out as a hard map, then becomes an insatiable resource grab. By the end you're building vanity suspension bridges over chasms and digging tunnels the width of Kilimanjaro.
I think AI "slop" will improve medical diagnoses dramatically. Let's assume for a second that the first specialist did not graduate at the top of their class.
The year is 2030, when LLMs are more pervasive. The first specialist now asks you to wait, heads into the other room, and double-checks their ET (essential thrombocythemia) diagnosis with AI. Doing so has become standard practice to avoid malpractice suits. The model persuades them to diagnose PV (polycythemia vera) instead, avoiding a Type-II error (a missed PV).
But let's say the model gets it wrong too. You eventually visit the second specialist, who did graduate at the top of their class. The model says ET, but the specialist is smart enough to tell that the model is wrong. There is some risk that the second specialist takes the CYA route, but I'd expect them not to. They diagnose PV, again avoiding a Type-II error.
How about personal canisters of chaff that get fired off whenever I enter a room? Before long, folks will get so annoyed with all of the metal fibers I leave behind that I simply won’t be invited anywhere, and my anonymity will have been protected.
People focused on the flaws are missing the picture. Opus wasn't even trained to be "a member of a team of engineers," it was adapted to the task by one person with a shell script loop. Specific training for this mode of operation is inevitable. And model "IQ" is increasing with every generation. If human IQ is increasing at all, it's only because the engineer pool is shrinking more at one end than the other.
This is a five-alarm fire if you're a SWE and not retiring in the next couple years.
> This is a five-alarm fire if you're a SWE and not retiring in the next couple years.
I’m sorry, but this is such a hype beast take. In my opinion this is equivalent to telling people not to learn to drive five years ago because of self driving from Tesla. How is that going?
Every single line of code produced is a liability. This idea that you’re going to have “gas town” like agents running and building apps without humans in the loop at any point to generate liability free revenue is insane to me.
Are humans infallible? Obviously not. But if you are telling me that ‘magic probability machines’ are creating safe, secure, and compliant software with no need for engineers to participate in the output, then first I’d like to see a citation, and second, I have a bridge to sell you.
> In my opinion this is equivalent to telling people not to learn to drive five years ago because of self driving
Self-driving has different economics. We're reading tea leaves, true, but it's also true that software has zero marginal cost and that $20K pays for an engineer-month in SF.
> Every single line of code produced is a liability.
Do you have a hard spec and rock-solid test cases? If you do, you have two options to a working prototype: 2-6 engineer-years, or $20K. The second option will greatly increase in quality and likely decrease in price over the next few years.
What if the spec and the test cases are the new software? Assembly programmers used to make an argument against compiled code that's somewhat parallel to yours: every instruction is a (performance) liability.
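To make the spec-and-tests-as-the-new-software idea concrete, here's a minimal, hypothetical sketch in Python (all names, `spec_discount` and `apply_discount`, are invented for illustration): the human-maintained artifact is the executable spec, and any implementation, human- or machine-written, is acceptable if and only if it passes.

```python
# Hypothetical executable spec for a percentage-discount function.
# The spec, not the implementation, is the artifact humans maintain:
# any implementation that passes is interchangeable with any other.

def spec_discount(impl):
    """Check a candidate implementation of apply_discount(price, percent)."""
    assert impl(100.0, 0) == 100.0      # zero discount is a no-op
    assert impl(100.0, 25) == 75.0      # basic case
    assert impl(80.0, 100) == 0.0       # full discount
    for price in (0.0, 19.99, 1e6):     # results stay within [0, price]
        for pct in (0, 10, 50, 100):
            assert 0.0 <= impl(price, pct) <= price
    return True

# One candidate implementation -- imagine this part is machine-generated.
def apply_discount(price, percent):
    return price * (1 - percent / 100)

print(spec_discount(apply_discount))  # -> True
```

Under this model, swapping in a cheaper or better implementation costs nothing as long as the spec still passes, which is exactly the compiled-code-versus-assembly trade all over again.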
> without humans in the loop
There will be humans, just fewer and fewer. The spec and test cases are AI-eligible too.
> safe, secure, and compliant software
I'm not sure humans' advantage here is safe, if it even exists still.
So let’s say you fund a single engineer for an open‑source project with $20k. The outcome will be a prototype with some interesting ideas. And yes, with a few hundred bucks' worth of AI assistance that single engineer might get much further than without (but not using any of the techniques presented in this blog). People can coalesce around the project as contributors. A seed was planted and watered a bit.
In this case, the $20k has been burned and produced zero value. Just look at the repo issues: looks like someone trying to get attention by spamming the issue tracker and opening hundreds of PRs. As an open source project, it’s a dead end.
So it doesn’t matter that this will “likely decrease in price over the next few years”? The value is zero, so even if superintelligence can produce this in an instant at zero cost in six months, the outcome is still worth zero.
You’re assuming a kind of inverse relationship between production cost and value.
In terms of quality, to anyone using those coding agents, it should be clear by now that letting them run autonomously and in parallel is a bad idea. That’s not going to change unless you believe LLMs will turn into something entirely different over time.
Note that what works with humans—social interaction creating some emergent properties like innovation—doesn’t translate to LLM agents for a simple reason: they don’t have agency, shared goals, or accountability, so the social dynamics that generate innovation can’t form.
I agree that there's not a lot of value in your example, but it's the wrong example. AI writing code and humans refining it and maintaining it is probably an inferior proposition, more so if the project is FOSS.
The model I'm referring to is: "if it walks like software and quacks like software, it's software." Its writers and maintainers are AI. It has a commercial purpose. Its value comes from fulfilling its requirements.
There will be human handlers, including some who will occasionally have to dig through the dung and fix AI-idiosyncratic bugs. Fewer Ferrari designers, more Cuban 1956 Buick mechanics. It's an ugly approach, but the conjecture that, economically _or_ technically, there must be something fundamentally broken with it is very hand-wavy and dubious.
I agree that there will be less code-level innovation overall, just like artistic value production took a big hit when we went from portraits to photographs.
> its value comes from fulfilling its requirements.
The requirements will have to come from somewhere, and they will have to be quite precise, although probably higher-level than the code written today. You're talking about just a new kind of software engineer. The kind of stuff described at https://martin.kleppmann.com/2025/12/08/ai-formal-verificati... (note: "the challenge will move to correctly defining the specification")
Unless what you have in mind is some sort of Moltbook add-on that the bots would write for themselves.
Can a hacked phone (such as one that was not in Lockdown Mode at one point in time) persist in a hacked state?
Obviously, the theoretical answer is yes, given an advanced-enough exploit. But let's say Apple is unaware of a specific rootkit. If each OS update is a wave, is the installed exploit more like a rowboat or a frigate? Will it likely be defeated accidentally by minor OS changes, or is it likely to endure?
This answer is actionable. If exploits are rowboats, installing developer OS betas might be security-enhancing: the exploit might break before the exploiters have a chance to update it.
Forget OS updates. The biggest obstacle to exploit persistence: a good old hard system reboot.
Modern iOS has an incredibly tight, secure chain-of-trust bootloader. If you shut your device down to a known-off state (using the hardware key sequence), then on power-on you can be 99.999% certain that only Apple-signed code will run, all the way from secureROM to iOS userland. The exception would be a compromised secureROM, but exploiting it requires hardware access at boot time, so I don't buy a purely remote scenario.
So, on a fresh boot, you are almost definitely running authentic Apple code. The easiest path to a form of persistence is reusing whatever vector initially pwned you (malicious attachment, website, etc) and being clever in placing it somewhere iOS will attempt to read it again on boot (and so automatically get pwned again).
But honestly, exploiting modern iOS is already difficult enough (exploits go for tens of millions of USD); persistence is an order of magnitude more difficult.
That's how you get off such charges: "I'll work for you if you drop the charges." There was a Reddit post I can't find, from when EMPRESS had one of her episodes, where she was asked whether she'd take that kind of job. It's happened in the cracking scene before.
> The jailbreaking community is fractured, with many of its former members having joined private security firms or Apple itself. The few people still doing it privately are able to hold out for big payouts for finding iPhone vulnerabilities. And users themselves have stopped demanding jailbreaks, because Apple simply took jailbreakers’ best ideas and implemented them into iOS.
Re: reboots – TFA states that recent iPhones reboot every 3 days when inactive for the same reasons. Of course, now that we know that it's linked to inactivity, black hatters will know how to avoid it...
You should read up on iOS internals before commenting things like this. Your answer is wrong: rootkits have been effectively dead on most OSes for years, and especially on iOS. Not every OS is like Linux, where security comes second.
Even a cursory glance at the boot chain shows that this kind of persistence is all but impossible on iOS.