Sadly, it seems like nobody is considering the best optimization: make DOM operations fast. I think if you could batch DOM operations together you could avoid a lot of wasted relayout and duplicate calculations.
> Document fragments are like transactions for the DOM.
About the same way innerHTML is, which is to say not helpfully: during reconciliation you have to copy, update, and swap back in the entire subtree that contains all the update points, which is almost certainly far more than you need to touch.
You also likely need to reconcile document state (e.g. focus) by hand.
You are correct, but if you think about it, you're talking about parsing and tokenizing before the operation can even occur. That's really heavy. I still think it could do better than round-tripping through .innerHTML.
My feeling is that the browser already does this, in that it treats all DOM API calls within a single 16ms frame (between requestAnimationFrame ticks?) as a single transaction.
The trouble for browsers is when certain DOM APIs depend on the layout of another element. My naive and unvalidated understanding:
// Good: These DOM calls in a single frame will trigger layout-paint-composite (1 loop)
- e.style.backgroundColor = "red";
- e.style.width = "20px";
- e.style.transform = "translateX(10px)";
// Bad: These DOM calls in a single frame will trigger layout-?-layout-paint-composite (2 loops)
- ...
- e.style.height = otherElement.offsetWidth + 200 + "px"
- ...
The reason is that without knowing the width of "otherElement", there's no way for the JS runtime to execute the "e.style.height" line, so execution has to pause while a layout pass occurs.
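A rough way to see the cost: the sketch below fakes a layout engine with a counter (makeEngine is illustrative, not a browser API) so you can count how many layout passes each ordering of reads and writes would force.

```javascript
// Hypothetical model: writes dirty the layout, reads of layout-dependent
// properties (like offsetWidth) force a synchronous layout pass if dirty.
function makeEngine() {
  let layouts = 0;
  let dirty = false;
  return {
    write() { dirty = true; },                              // e.style.x = ...
    read()  { if (dirty) { layouts++; dirty = false; } },   // otherElement.offsetWidth
    frame() { if (dirty) { layouts++; dirty = false; } return layouts; },
  };
}

// Interleaved write/read/write: the mid-frame read forces an extra layout.
const a = makeEngine();
a.write();               // e.style.width = "20px"
a.read();                // otherElement.offsetWidth -> forced layout #1
a.write();               // e.style.height = ...
console.log(a.frame());  // 2 layout passes in one frame

// Read first, then write: everything settles in the one end-of-frame layout.
const b = makeEngine();
b.read();                // measure while layout is still clean
b.write();
b.write();
console.log(b.frame());  // 1 layout pass
```

The ordering of the calls, not their number, is what determines how many passes you pay for.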
If you're looking for a transactional syntax (similar to what you've proposed) that also addresses this though, fastdom looks like a good option:
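The core idea behind fastdom is small enough to sketch: queue reads and writes separately and flush all reads before any writes each frame. This is an illustrative miniature (measure/mutate match fastdom's public names; the internals here are simplified, and flush is called by hand instead of inside requestAnimationFrame).

```javascript
// Minimal fastdom-style read/write scheduler.
const reads = [];
const writes = [];

function measure(fn) { reads.push(fn); }  // schedule a layout read
function mutate(fn)  { writes.push(fn); } // schedule a layout write

// In a browser this would run in a requestAnimationFrame callback.
function flush() {
  reads.splice(0).forEach(fn => fn());   // all reads first: layout is clean
  writes.splice(0).forEach(fn => fn());  // then all writes: dirty it once
}

// Even though the calls arrive write-then-read, execution is read-then-write.
const order = [];
mutate(() => order.push("write height"));
measure(() => order.push("read offsetWidth"));
flush();
console.log(order); // ["read offsetWidth", "write height"]
```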
I actually think the former example is clearer. It's a bit verbose, but every part is simple. The second example is very "magic": it takes a lot of thought to understand.
I don't think that's what OP is saying. They're saying that (e.g.) calculations are made on each appendChild() call, when it would be more efficient (when you know you're going to be inserting a ton of nodes) to suspend all calculation, insert 1000 nodes, then resume calculations. Something akin to setNeedsLayout() on iOS:
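The shape of that suspend/insert/resume pattern can be sketched without a browser. Below, makeContainer is a stand-in for a live DOM node, and the "layout" counter is just a model of per-insertion invalidation work; in a real page the fragment half would be document.createDocumentFragment() appended once.

```javascript
// Simulated live container: each append into the "live tree" costs work.
function makeContainer() {
  let layouts = 0;
  return {
    children: [],
    append(node) { this.children.push(node); layouts++; }, // live insert
    layoutCount() { return layouts; },
  };
}

// Naive: 1000 live appendChild calls, 1000 chances to do layout work.
const naive = makeContainer();
for (let i = 0; i < 1000; i++) naive.append({ i });
console.log(naive.layoutCount()); // 1000

// Fragment-style: build offscreen (no invalidation), attach once.
const batched = makeContainer();
const fragment = [];
for (let i = 0; i < 1000; i++) fragment.push({ i }); // offscreen, free
batched.append(fragment);
console.log(batched.layoutCount()); // 1
```

Real browsers defer most layout anyway, so the win in practice is mostly in avoided invalidation bookkeeping, but the shape of the API argument is the same: one bulk operation the engine can see, instead of a thousand it has to treat individually.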
A good parallel might be database development: the difference between taking a cursor and looping through rows to make changes versus a set-based operation that specifies all the needed changes at once.
My point is that changing the DOM API itself is the win. Right now it's just individual property updates, so the browser can't know when to delay a computation. So it's definitely not part of the web today, but it seems worth considering.