I was hoping this would be a guide on managing architectures and common traps in handling things like scalability and state. Instead it's just a long-winded way of recommending an opinionated front-end tech stack, with resources to learn those stacks. Not exactly what I was hoping for.
Yep, this is just rehashing the same stuff you can find on any one of 1000 other blog posts, articles or pieces of documentation.
The current literature on front-end development in React is woefully short on real-world case studies. It seems like everyone is building 30k LOC throwaways with a 40-library boilerplate and slathering the internet with masturbatory praise for all the tooling and architectural patterns involved. If you try to Google for any actual details about said libraries, they're buried under mounds of beginner tutorials and faux praise.
For example, we just made the decision to use ImmutableJS in a project. Everyone else on the team seems to be happy enough just reading a few blog posts about how great it is and then throwing it into the mix. I have a bunch of questions, though: where are all the benchmarks? The docs, and everyone else, claim massive speedups from using immutable data structures, but that's never accompanied by any actual metrics, at least as far as my Google-fu gets me. All of our data lists are paginated server side, we're not doing any mass inserts on large data structures anywhere, and I've never noticed a performance problem on anything else I've written with similar requirements, so I find the claim that ImmutableJS is somehow going to speed up our app to be dubious at best. Moreover, every article under the sun craps on about how great it is to "enforce immutability", but nobody seems to be able to tell me when they last encountered a bug due to an inadvertent mutable update, how much time it cost them, or why they're hiring people who make such elementary mistakes in the first place; or if they do, it's some airy parable with no code example of the bug or the ImmutableJS code that could have fixed it.
You don't need a library. Just have a policy that everything is immutable by default, and only switch to mutable objects when optimizing, for example in for-loops. The benefit of immutability is not performance (it's actually slower) but manageability and debuggability. It's easier to reason about code when the variables don't change, and you get all the variable states when debugging.
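As a rough illustration of that policy in plain JavaScript, with no library involved (the helper names here are hypothetical, just for the sketch): updates copy-and-replace instead of mutating, and `Object.freeze` can optionally catch accidental mutation during development.

```javascript
// Treat state as read-only: produce new objects instead of mutating.
function addTodo(state, todo) {
  // Spread copies the top level only; nested objects are shared by
  // reference, so they too must be replaced (not mutated) if they change.
  return { ...state, todos: [...state.todos, todo] };
}

// Optionally freeze in development to surface accidental mutation early.
function deepFreeze(obj) {
  Object.getOwnPropertyNames(obj).forEach((name) => {
    const value = obj[name];
    if (value && typeof value === "object") deepFreeze(value);
  });
  return Object.freeze(obj);
}

const state = deepFreeze({ todos: [{ text: "write docs" }] });
const next = addTodo(state, { text: "review PR" });

console.log(next.todos.length);  // 2
console.log(state.todos.length); // 1 — the original is untouched
console.log(next !== state);     // true — a new reference signals "changed"
```

The discipline is entirely convention; nothing stops a teammate from writing `state.todos.push(...)` except the freeze (which only throws in strict mode) and code review.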
Example:
The "Performance" section of Facebook's own React docs has this line at the end:
"Immutable data structures provide you with a cheap way to track changes on objects, which is all we need to implement shouldComponentUpdate. This can often provide you with a nice performance boost."
What I don't really get is, if you've implemented PureComponent and shouldComponentUpdate wherever possible, how can adding Immutable possibly improve performance? I.e. if your components are ASSUMING their inputs are immutable, how does throwing errors on mutation attempts actually speed things up?
What I don't really get is, if you've implemented PureComponent and shouldComponentUpdate wherever possible, how can adding Immutable possibly improve performance?
The argument is that if you prevent mutation then you can rely on only a shallow comparison of the object references as a reliable test for whether any of the data has changed. Your shouldComponentUpdate implementation becomes a one-line equality test for each prop that might change, or the equivalent. This may be significantly faster (and potentially easier to maintain) than any more detailed comparison of props to decide whether anything significant has changed in shouldComponentUpdate.
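In plain JavaScript, the shallow check described above might look something like this sketch — `shallowEqual` here is a hand-rolled stand-in for what React's `PureComponent` does internally, not React's actual code:

```javascript
// Shallow comparison: one Object.is per prop, no deep traversal.
// This is only a reliable "did anything change?" test if data is
// never mutated in place.
function shallowEqual(prevProps, nextProps) {
  const prevKeys = Object.keys(prevProps);
  const nextKeys = Object.keys(nextProps);
  if (prevKeys.length !== nextKeys.length) return false;
  return prevKeys.every((key) => Object.is(prevProps[key], nextProps[key]));
}

const todos = [{ text: "a" }];

// Same references: would skip the re-render.
shallowEqual({ todos }, { todos });             // true

// Immutable update produced a new array reference: would re-render.
shallowEqual({ todos }, { todos: [...todos] }); // false

// The catch: an in-place mutation leaves the reference unchanged,
// so a shallow check would wrongly skip the update (stale UI).
todos.push({ text: "b" });
shallowEqual({ todos }, { todos });             // still true
```

A `shouldComponentUpdate` built on this is O(number of props) regardless of how deep the data is, which is the whole performance argument.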
Of course, this doesn't address the performance implications of maintaining your state in some sort of immutable data structures rather than just mutating it. Nor does it address the performance implications of using a library like React that declaratively renders your content and does the whole vDOM diff algorithm thing instead of just poking the DOM in exactly the required places. Both of those strategies can be orders of magnitude slower than the alternatives, and both of them can cause architectural and maintenance headaches of their own, and so the questions in those cases are whether the performance is still good enough and whether the benefits in other respects outweigh those costs.
With immutable objects, you can compare them (obj1 === obj2) and assume that if they're the same, none of the properties have changed. You don't have that sort of guarantee with normal JavaScript objects, so you need to do a deep comparison.
If your component's state and props are immutable objects, then shouldComponentUpdate becomes a much easier problem to solve.
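To make the contrast concrete, here's a small sketch — `deepEqual` is a simplified illustration, not a production comparer — of why mutable data forces a full traversal while immutable data lets a single reference check answer the question:

```javascript
// Recursive deep comparison: visits every key at every level.
function deepEqual(a, b) {
  if (Object.is(a, b)) return true;
  if (typeof a !== "object" || typeof b !== "object" ||
      a === null || b === null) {
    return false;
  }
  const aKeys = Object.keys(a);
  const bKeys = Object.keys(b);
  if (aKeys.length !== bKeys.length) return false;
  return aKeys.every((k) => deepEqual(a[k], b[k]));
}

const v1 = { user: { name: "Ada" }, items: [1, 2, 3] };
const copy = JSON.parse(JSON.stringify(v1));

// Mutable style: structurally equal data can live at different
// references, so only the (slow) deep comparison detects "no change".
console.log(v1 === copy);         // false — distinct objects
console.log(deepEqual(v1, copy)); // true — same contents, full traversal

// Immutable style: unchanged data keeps its reference, so
// `obj1 === obj2` answers "nothing changed" in O(1).
```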
The standard, non-library way of handling state in React is to use plain old mutable JavaScript objects and just pretend they're immutable (i.e. never perform any mutating operations on them). The popular claim is that Immutable "enforces" immutability, leading to fewer "accidental mutation" bugs. (A claim which is bogus IMO. 0 !< 0)
No, I'd say it's a _fairly_ valid claim. You have to interact with Immutable.js objects using its API, and every update API call returns a new instance. So, it _does_ generally enforce immutability. As far as I know, the only way to accidentally mutate stuff with Immutable.js is if you insert plain JS objects inside an Immutable.js object, and possibly also use one of its "update this using a callback function" methods and misuse things inside the callback.
Yeah, adding Immutable.js will not magically give you a speed boost.
Immutable.js gives you three primary benefits:
- Its object API largely prevents accidental mutations (although I think it's still possible to make mistakes if you use some of the updater callback methods)
- Internally, it uses specialized data structures that allow "structural sharing" of values. That means that creating a new object based on an existing one doesn't have to copy every single key/value pair onto the new object. This does show perf improvements for copying very large objects (thousands or tens of thousands of keys).
- React perf optimization generally relies on implementing `shouldComponentUpdate` to skip unneeded re-rendering. The standard approach to implementing `sCU` is a shallow equality check that relies on you having handled your data immutably so it can just compare references. React's built-in PureComponent class also implements that approach. As mentioned in other comments, you don't _have_ to use Immutable.js to handle data immutably, but it is one way to do it.
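The "structural sharing" point above can be illustrated with a toy persistent linked list — this is just the idea, not Immutable.js's actual implementation, which uses hash array mapped tries rather than linked lists:

```javascript
// A persistent (immutable) singly-linked list: each node is frozen,
// and a "new version" reuses — shares — the unchanged tail of the old one.
function cons(head, tail) {
  return Object.freeze({ head, tail });
}

const v1 = cons(3, cons(2, cons(1, null)));

// "Prepending" builds exactly one new node; v1's nodes are not copied.
const v2 = cons(4, v1); // O(1) regardless of v1's length

console.log(v2.head);        // 4
console.log(v2.tail === v1); // true — v1 is shared, not duplicated
```

Because old versions are never mutated, sharing them between versions is safe, and a list of N elements can be "updated" without an O(N) copy. That's the mechanism behind the large-object copying speedups claimed for immutable collections.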
My links list does have links to several discussions of Immutable.js perf [0], and there's specifically one excellent article I've seen that does benchmarks of Immutable.js usage [1] [2]. I also wrote a Reddit comment a while back discussing the reasons why I generally advise against using Immutable.js [3].
As for your comments and questions about app architecture... based on your app description, I don't think Immutable.js would provide any particular speed benefit for you in terms of copying/updating objects. "Enforced immutability", though, _is_ key if you're using Redux, as it's a prerequisite for proper time travel debugging and correct React-Redux UI updates (per my blog post "The Tao of Redux, Part 1 - Implementation and Intent" [4]).
Other than that, my links list does have pointers to many articles about practical usage and lessons learned from real-world React and Redux applications [5] [6].
Your complaints are a bit on the general side, but as always, I'm happy to try to answer any specific questions you might have regarding React and Redux usage.
Anyone know of a good article/series dealing with this? Tooling is fine and well, but I just got bumped to a lead role, with no one more experienced than me around, and could desperately use some more guidance.
This guide is more for beginners who have just joined, or want to join, big tech companies. But I can see that your suggestions would make a great "power-up for beginners" guide.