
It's bizarre to me how much attention this site pays to ~influencers~ self-promoters who aren't even AI researchers. Anything substantial on this topic is likely going to be presented at siggraph, or written by someone who does actual research in the field. We're acting like "all-round web tech enthusiasts" are real authorities and letting them suck all the air out of the room in this constant barrage of AI hype.

Well said, I fully agree.

I respect Simon's work, and have no desire to disparage him. He's an accomplished engineer who has made great contributions to the industry.

But he's not an authority on this subject. And even if he were, this community has a strange obsession with authority figures. Appeals to authority are frequently thrown around to back up an argument. Opinions made by dang, pg, tptacek, sama, et al, are held in higher regard than those made by random accounts. It's often not about the value of what is being said, but about who says it, which is detrimental to open discourse.

I would argue that opinions from authority figures should be challenged more than anyone else's. They are as fallible as everyone, but they're also in a position to influence and mislead a lot of people. So many of humanity's problems were caused by deranged leaders wielding their power and influence over masses. I think this is especially important in the case of this technology, where many people have a lot to gain from certain narratives, and many more have a lot to lose depending on how all this plays out.


I’ve been thinking the same thing lately. It’s hard to tell if I’m just old and want everyone off my lawn, but I really feel like IT is a dead end. “Vintage” electronics are often nicer to use than modern equivalents: dials and buttons vs. touch screens. Most of my electronics that have LCDs feel snappy; you sort of forget you’re using them and just do what you were trying to do. I’m not necessarily a Luddite. I know tech _could_ theoretically be better, but it’s distressing that, for whatever reason, things apparently can’t actually be different. Economically, culturally? I don’t know.

Cloudflare’s ubiquity makes bootstrapping a search index via crawler virtually impossible, but what about data sources like Common Crawl?
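Common Crawl publishes a CDX-style index you can query over HTTP before pulling page bodies out of the WARC archives with range requests. A rough sketch in Python; the crawl ID below is an assumption, so check index.commoncrawl.org for the current list:

    # Sketch: look up a domain in Common Crawl's CDX index, then fetch one
    # captured page from the WARC archive via an HTTP range request.
    import gzip, io, json
    import requests

    CRAWL = "CC-MAIN-2024-33"  # hypothetical crawl ID, verify before use
    INDEX = f"https://index.commoncrawl.org/{CRAWL}-index"

    resp = requests.get(INDEX, params={"url": "example.com/*", "output": "json"})
    records = [json.loads(line) for line in resp.text.splitlines() if line.strip()]

    rec = records[0]
    start = int(rec["offset"])
    end = start + int(rec["length"]) - 1
    warc = requests.get(
        f"https://data.commoncrawl.org/{rec['filename']}",
        headers={"Range": f"bytes={start}-{end}"},
    )

    # Each slice is an independently gzipped WARC record.
    with gzip.GzipFile(fileobj=io.BytesIO(warc.content)) as f:
        print(f.read()[:500])

That gets you raw pages without crawling anything yourself, though freshness and coverage are the obvious trade-offs.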

E2E encryption probably isn’t enough to protect activists trying to organize. Unless you do onion routing, where you pre-compute some nodes in the network that the message MUST transit before delivery, with each node peeling off one layer of encryption until it reaches the recipient (like Tor), you still leak who’s talking to whom.
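Roughly, the sender wraps the message in one layer per hop, and each relay can remove only its own layer. A minimal sketch of that layering idea in Python, using Fernet keys purely as stand-ins for the per-hop session keys a real circuit would negotiate (real onion routing also adds circuit setup, padding, and fixed-size cells):

    # Onion-layered encryption, illustration only (not real Tor).
    # Requires: pip install cryptography
    from cryptography.fernet import Fernet

    # One key per relay on the pre-computed path: entry -> middle -> exit.
    path_keys = [Fernet.generate_key() for _ in range(3)]

    def wrap(message: bytes, keys) -> bytes:
        # Encrypt for the exit node first, then keep wrapping outward
        # toward the entry node, so each hop can only peel its own layer.
        blob = message
        for key in reversed(keys):
            blob = Fernet(key).encrypt(blob)
        return blob

    def peel(blob: bytes, key: bytes) -> bytes:
        # Each relay removes exactly one layer; only the exit sees plaintext.
        return Fernet(key).decrypt(blob)

    onion = wrap(b"meet at the usual place", path_keys)
    for key in path_keys:  # entry, middle, exit, in transit order
        onion = peel(onion, key)
    print(onion)  # b'meet at the usual place'

No single relay sees both the sender and the recipient, which is what hides who’s talking to whom from any one observer.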

Neither E2EE nor Tor is enough to protect someone being targeted by state-level actors. They're helpful, but if you're a high enough value target, they only slow down your adversary. If you're relying on algorithms on your computer to protect you, you should be prepared to meet the hacking wrench. [1]

[1] https://xkcd.com/538/


If the political environment gets bad enough, you may expect to die anyway, and the extra TTL that obfuscation buys you can be the difference between making some small improvement before the inevitable, or not.

Part of me feels the same way, and ~2015 me was full on SPA believer, but nowadays I sigh a little sigh of relief when I land on a site with the aesthetic markers of PHP and jQuery and not whatever Facebook Marketplace is made out of. Not saying I’d personally want to code in either of them, but I appreciate that they work (or fail) predictably, and usually don’t grind my browser tab to a halt. Maybe it’s because sites that used jQuery and survived, survived because they didn’t exceed a very low threshold of complexity.

Facebook is PHP ironically.

It was once upon a time, hence them moving to HHVM to interpret it, but it’s been all but replaced with a PHP spinoff named Hacklang now.

I think in 2026 Facebook is a conglomeration of a bunch of things... Definitely not just PHP anymore.

That doesn’t really address much of the criticism in this thread. No one is shocked that it’s not as good as production web browsers. It’s that it was billed as “from scratch” but upon deeper inspection it looks like it’s just gluing together Servo and some other dependencies, so it’s not really as impressive or interesting because the “agents” didn’t really create a browser engine.


Upon deeper inspection? Someone checked the Cargo file and proclaimed it was just Servo and QuickJS glued together, without actually bothering to check whether those dependencies are even being used.

In reality, while the project does indeed have Servo in its dependencies, it only uses it for HTML tokenization, CSS selector matching, and some low-level structures. The JavaScript parsing and execution, the DOM implementation, and the layout engine were written from scratch, with one exception: Flexbox and Grid layouts are implemented using Taffy, a Rust layout library.

So while “from scratch” is debatable, it is still immensely impressive to me that AI was able to produce something that even just “kinda works” at this scale.


> So while “from scratch” is debatable, it is still immensely impressive to me that AI was able to produce something that even just “kinda works” at this scale.

“From scratch” is inarguably wrong given how much third-party code it depends on. There’s a reasonable debate about how much original content there is, but if I were a principal at a company whose valuation hinges on the ability to genuinely deliver “from scratch,” I would be worried about an investor suing for material misrepresentation of the product if they bought now and the value later went down.


Thanks for the feedback. I agree that for some parts that use dependencies, the agent could have implemented them itself. I've begun the process of removing many of these and developing them within the project alongside the browser. A reasonable goal for "from scratch" may be "if other major browsers use a dependency, it's fine to do so too". For example: OpenSSL, libpng, HarfBuzz, Skia.

I'd push back on the idea that all the agents did was glue dependencies together — the JS VM, DOM, CSS cascade, inline/block/table layouts, paint systems, text pipeline, chrome, and more are all being developed by agents as part of this project. There are real complex systems being engineered towards the goal of a browser engine, even if not fully there yet.


> generate a bunch of code that seems mostly correct and then gradually tweak it until it's closer and closer to compiling/working

The diffusion model of software engineering


This is great. If you packaged it as a docker-compose YAML and maybe added a periodic task to poll automatically, I’d drop it into Container Station on my NAS today.


They could throw some small portion of their billions of dollars into proper quality control and reproduce it themselves if they wanted to. It’s an industry-wide malaise, but it isn’t inevitable. It’s amazing that every year it becomes more and more economically nonviable for basic shit to meet the most modest standards of usability, yet we can use the power consumption of a small country to have Copilot in Notepad.


The way I see it, money can’t buy one of the most important ingredients: the motivation to do the best work of your life. No matter how much cash you throw at a problem, you’re likely just going to get people who want to "do their job" from 9 to 5. Those are exactly the kind of workers that companies like the Apple of 2026 are looking for. It’s a big ship, and it needs to stay steady and predictable. People who want to achieve something "insanely great" or "make a dent in the universe" are just a distraction.

In my experience, shipping a product as polished as Mac OS X 10.6.8 Snow Leopard requires a painful level of dedication from everyone involved, not just Quality Assurance.

As long as neither the New York Times nor the Wall Street Journal writes about how bad Apple’s software has gotten, there’s even no reason for them to think about changing their approach.

The drama surrounding Apple’s software quality isn’t showing up in their earnings. And at the end of the day, those earnings are the "high order bit," no matter what marketing tries to tell us.


Well, if there's one thing history has shown us (including the history of Apple's own insurgency against the PC), it's that complacency and stagnation make the incumbent a target for every newcomer who does have the drive to make a dent in the universe. And there are always a lot of people with that drive. This is how we keep ending up in the cycle of chaos > new paradigm > perfect software that probably should not be improved upon > collapse under weight of new features > chaos > new paradigm... repeat.


> They could throw some small portion of their billions of dollars into proper quality control and reproduce it themselves if they wanted to.

How?

How do you reproduce something when you have no idea of the cause and it's not happening on any of your machines?

And remember they don't have just this one unreproducible bug reported. They have thousands.

If you have experience writing software, you're going to end up with a lot of unreproducible bug reports. They're a genuine challenge for everyone, not just Apple.


It’s an interesting thread for sure, but while reading through it I couldn’t help thinking that the point of these ideas is for a person to read and consider them deeply. What is the point of having a machine do this “thinking” for us? The thinking is the point.


And that’s the problem with a lot of chatbot usage in the wild: it’s saving you from having to think about things where thinking about them is the point. E.g. hobby writing, homework, and personal correspondence. That’s obviously not the only usage, but it’s certainly the basis for some of the more common use cases, and I find that depressing as hell.


So consider them deeply. Why does the value diminish if they’re discovered by a machine, as long as the value is in the thinking?

