JustHTML https://github.com/EmilStenstrom/justhtml is a neat new Python library - it implements a compliant HTML5 parser in ~3,000 lines of code that passes all 9,200 tests in the existing HTML5 conformance suite.
Emil Stenström wrote it with a variety of coding agent tools over the course of a couple of months. It's a really interesting case study in using coding agents to take on a very challenging project, taking advantage of their ability to iterate against existing tests.
Thanks for sharing, Simon! Writing a parser is a really good job for a coding agent, because there's a clear right/wrong answer. In this case, the path there is the challenging part. The hours I've spent trying to convince agents to implement the adoption agency algorithm well... :)
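For anyone who hasn't run into it: the adoption agency algorithm is the part of the HTML5 spec that repairs misnested formatting tags. A quick way to see it in action, sketched here with html5lib since its API is well known (JustHTML should build the same tree for this input):

    import html5lib
    from xml.etree import ElementTree

    # </b> arrives while <i> is still open; the parser pops both,
    # then reconstructs a fresh <i> around the "4".
    root = html5lib.parse("<p>1<b>2<i>3</b>4</i>5</p>",
                          namespaceHTMLElements=False)
    print(ElementTree.tostring(root, encoding="unicode"))
    # prints <p>1<b>2<i>3</i></b><i>4</i>5</p> inside the usual
    # html/head/body wrapper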
Depending on your perspective, you can take away either of two points.
The first iteration of the project created a library from scratch, iterating against the tests all the way to a 100% pass rate. So even without the second iteration, it's still possible to create something new.
In an attempt to speed it up, I (with a coding agent) rewrote it again based on html5ever's code structure. It's far from a clean port, because html5ever is heavily optimized Rust code that isn't possible to port directly to Python (Rust macros). And it still depended on a lot of iteration and rerunning tests to get it anywhere.
I'm not pushing any agenda here, you're free to take what you want from it!
Thank you for the clarification, that was not entirely clear to me from the post.
You also mention that the current "optimised" version is "good enough" for everyday use (I use `bs4` for working with HTML). Was the first iteration also usable in that way? Did you look at `html5ever` because the LLM hit a wall trying to speed it up?
It was usable! Yeah, the handler-based architecture that I had built on was very dependent on object lookups and method calls, and my hunch was that I had hit a wall trying to optimize for speed. It was still slower than html5lib, so I decided to go with another code architecture (html5ever's) that was closer to the metal. That worked out, getting me ~60% faster than html5lib.
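To make that concrete, here's an illustrative sketch of the two dispatch styles (hypothetical names, not actual code from either library):

    # The handler style pays a dict lookup, an attribute lookup and a
    # method call per token; the inline style, closer to html5ever's
    # structure, pays only a string comparison.
    class StartTagHandler:
        def handle(self, token):
            return ("start", token["name"])

    HANDLERS = {"StartTag": StartTagHandler()}

    def dispatch_via_handlers(token):
        return HANDLERS[token["type"]].handle(token)

    def dispatch_inline(token):
        if token["type"] == "StartTag":
            return ("start", token["name"])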
As for bs4, if you don't change the default, you get the stdlib html.parser, which doesn't implement HTML5. It only works well for valid HTML.
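For example, picking the backend explicitly (html5lib needs to be installed for the second call to work):

    from bs4 import BeautifulSoup

    broken = "<p>1<b>2<i>3</b>4</i>5"
    # stdlib parser: not an HTML5 parser, repairs markup its own way
    print(BeautifulSoup(broken, "html.parser"))
    # html5lib backend: spec-compliant, repairs like a browser would
    print(BeautifulSoup(broken, "html5lib"))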
It seems the parser is creating errors even when none are expected:
=== INCOMING HTML ===
<math><mi></mi></math>
=== EXPECTED ERRORS ===
(none)
=== ACTUAL ERRORS ===
(1,12): unexpected-null-character
(1,1): expected-doctype-but-got-start-tag
(1,11): invalid-codepoint
This "passes" because the output tree still matches the expected output, but it is clearly not correct.
The test suite also doesn't seem to be checking errors for large swaths of the HTML5 test suite even with --check-errors, so it's hard to say how many would pass if those were checked.
Thanks for flagging this. Found multiple errors that are now fixed:
- The quoted test comes from justhtml-tests, a custom test suite added to make sure all parts of the algorithm are tested. It is not part of html5lib-tests.
- html5lib-tests does not support control characters in tests, which is why some of the tests in justhtml-tests exist in the first place. In my test suite, I have added that ability to the test runner to make sure we handle control characters correctly.
- In the INCOMING HTML block above, we are not printing control characters; they get filtered away in the terminal.
- Both the treebuilder and the tokenizer are outputting errors for the found control character. None of them are in the right location (reported at flush instead of where the character was found), and they are also duplicated.
- This being my own test suite, I haven't specified the correct errors. I should. expected-doctype-but-got-start-tag is reasonable in this case.
All of the above bugs are now fixed, and the test suite is in a better shape. Thanks again!
Hi! The expected errors are not standardized enough for it to make sense to enable --check-errors by default. If you look at the readme, you'll see that the only thing it checks is that the _number of errors_ is correct.
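In other words, something like this (a hypothetical sketch; the real logic lives in run_tests.py):

    # Error messages differ between parsers, so only compare how many
    # errors were reported, not what they say.
    def errors_match(expected, actual):
        return len(expected) == len(actual)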
run_tests.py does not appear to be checking the number of errors or the errors themselves for the tokenizer, encoding or serializer tests from html5lib-tests - which represent the majority of tests.
There's also something off about your benchmark comparison. If one runs pytest on html5lib, which uses html5lib-tests plus its own unit tests and does check whether errors match exactly, the pass rate appears to be much higher than 86%.
These numbers are inflated because html5lib-tests/tree-construction tests are run multiple times in different configurations. Many of the expected failures appear to be script tests similar to the ones JustHTML skips.
I've checked the numbers for html5lib, and they are correct. They are skipping a load of tests for many different reasons, one being that namespacing of svg/math fragments is not implemented. The 88% number listed is correct.
Is it really too much to do a little more editing of the LLM output for the blog post? There are 17 numbered and titled section headings, all of which are linkable with anchors, and most of which have two sentences each.
Hi! Yes, the headers were LLM-generated and the text was not. I didn't want the blog post to go on for ages, so I just wrote a few lines under each heading. Any ideas how to make it better, while not being too long?
I'd start by deleting all the numbered section headings and adding either a transition word (then, so) or a transition sentence (why you went from step n to step n+1, or after how much time, or whatnot).
New iteration up. I kept the headings because they make the text easier to scan, but made them more descriptive. Added some transition words. Slight improvement I think.
If it isn't too much to ask, since you are already insanely familiar with HTML parser semantics, can you write a Postgres extension that can parse HTML inside Postgres? Use case: cleaning RSS feed items while storing them.
I wrote a bit more about it here: https://simonwillison.net/2025/Dec/14/justhtml/