Web browsers are a moving target, just like operating systems. Anyone can 'build their own', but I'd also say that it is close to impossible to build a secure and viable competing web browser that implements the specification more correctly than Chrome.

The two most important keywords in Drew's blog post are *serious* and *security*. Neither of those words is mentioned once in this blog post; hence Drew's points still stand unchallenged.

You can try, but so did Servo, which was a 'serious attempt', and not even the Rust hype could convince the masses that it was better than Chrome.



> Web browsers are a moving target

It moves slower than people assume; I installed Opera 12 last year for the craic – the last version built on their Presto engine, released almost ten years ago – and it works surprisingly well with many sites. I did have to use mitmproxy to rewrite some trivial stuff, like some CSS prefixes and s/(let|const)/var/ in the JavaScript (roughly as sketched below). Flexbox is supported, but grid isn't, so that failed on some sites.
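Something along these lines works as a mitmproxy addon – the response hook is mitmproxy's actual addon API, but the file name and regexes here are simplified illustrations rather than the exact script I ran:

    # rewrite.py – run with: mitmdump -s rewrite.py
    import re
    from mitmproxy import http

    def response(flow: http.HTTPFlow) -> None:
        ctype = flow.response.headers.get("content-type", "")
        if "javascript" in ctype:
            # Presto predates let/const; downgrade both to var.
            flow.response.text = re.sub(r"\b(let|const)\b", "var",
                                        flow.response.text)
        elif "css" in ctype:
            # Duplicate a few unprefixed properties with Opera's -o- prefix
            # (illustrative; a real rewrite would cover more properties).
            flow.response.text = re.sub(
                r"\b(transform|transition|animation)\s*:\s*([^;}]+)",
                r"\1: \2; -o-\1: \2",
                flow.response.text)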

> Drew's points still stand unchallenged

His points are based on a faulty assumption to start with: he counts all sorts of documents, but that count is spectacularly wrong, as it includes many things it shouldn't. I mentioned this at the time: https://news.ycombinator.com/item?id=22617721

Is it a large project? Sure, as many software projects are. But "impossible" and "comparable to the Manhattan project"? Certainly not; it's just that there's not a whole lot of money to be made with a new browser engine, nor any other broadly shared motivation to build one.


> I mentioned this at the time

And Drew himself answered you at the time, replying to the points in your comment: things you said shouldn't be included that actually should be, and others you said were incorrectly included that were in fact excluded in the first place.


Those replies are handwavy and offer no convincing defence at all. In just a few minutes I was able to reduce the 1,217 URLs to 434 simply by excluding outdated or non-applicable material, along the lines of the sketch below. That leaves about a third, the exclusions include some pretty large documents, and that's just from a quick check. The list is unambiguously, categorically wrong, and anyone who seriously looks at it and comes to a different conclusion is suffering from serious confirmation bias.
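To give an idea of what such a quick pass looks like – the exclusion patterns below are just examples of obviously superseded specs, not my actual list, and urls.txt stands in for Drew's list of URLs:

    # filter_specs.py – rough sketch of the spot-check described above
    import re

    # Obviously superseded or non-applicable specs (example patterns only).
    EXCLUDE = re.compile(r"/TR/(html4|html32|xhtml\d|html5\d?/)", re.IGNORECASE)

    with open("urls.txt") as f:  # one spec URL per line
        urls = [line.strip() for line in f if line.strip()]

    kept = [u for u in urls if not EXCLUDE.search(u)]
    print(f"{len(urls)} URLs in total, {len(kept)} left after exclusions")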

Whether the web is "too complex" is a different matter and open to interpretation, as "too complex" is subjective. But the data is very wrong, and therefore the article is wrong. A "correct" conclusion from faulty arguments is just as worthless as an incorrect conclusion: any possible solution depends on a correct understanding of the situation. "Global warming happens because of pornography, therefore we must ban pornography" is just as useless as "global warming is a hoax", even though the conclusion of the first is correct.


> I was able to reduce the 1,217 URLs to 434

Why not post this list for the rest of us to see (and check), then? Besides, that "outdated" stuff (such as your HTML 3.1 example) may still be applicable, and excluding it means your approach is incorrect right off the bat.


The HTML 3.1 specification is essentially irrelevant for implementing a modern browser; and you certainly don't need HTML 5.0, 5.1, 5.2, and 5.3, plus HTML 4.0, 4.01, 3.2, XHTML 1.0, XHTML 1.1, and XHTML 2. These are large documents; possibly the largest in the set.

It takes a minute to spot-check, and some specific examples were provided in the previous thread. I don't have the list any more and can't be bothered to recreate it; what value would that have when you can just check Drew's list yourself – which is really not that hard? I also have no idea exactly how correct my number is; I suspect the actual figure would be even lower still.


Sure, keeping up with all the new fancy Chrome standards is hard.

Even Mozilla can't do it.


"Chrome standards"? Those are not standards. Those are things Chrome builds on their own.


Unfortunately, given its market share, they become standards.


Sadly, this is true. And some are eventually drafted into the standard.


Isn't that the real problem here? Nobody cares if your browser fails to render a page because you strictly adhere to the standard, while Chrome just happily deals with broken documents (or worse: requires documents to be slightly broken).

Chrome is the sole benchmark. If it works in Chrome, it's fine; if it doesn't, the site is broken. Standards never enter the discussion.
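That lenient behaviour is itself specified these days: the HTML5 parsing algorithm defines error recovery for broken markup. A toy illustration – html5lib is a third-party Python package implementing that algorithm, and the snippet is mine:

    # tag_soup.py – strict parsing vs. browser-style error recovery
    import xml.etree.ElementTree as ET

    import html5lib  # pip install html5lib

    broken = "<p>unclosed <b>tags"

    try:
        ET.fromstring(broken)  # a strict XML parser rejects tag soup
    except ET.ParseError as e:
        print("strict parser:", e)

    doc = html5lib.parse(broken)  # HTML5 error recovery parses it anyway
    print("html5lib:", doc)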


Maybe for you. A lot of people use Safari – every iPhone owner, for example.



