These are some good tips, but I've not hit these performance issues. I work on smaller-scale applications; one has been in production since 2012 and the database still performs very well. I guess I need to get out and work for bigger companies to experience this.
One doesn't need to work for bigger companies to have lots of data these days. That has been true for many years already.
In our Postgres DB we have more than 4 TB of data, which I don't think is too big. We didn't need any special sauce or vendors chiming in, only a better understanding of the technology than the average Joe.
On the big company part - I have yet to employ more than five developers.
This may be irrational, but it's something that worries me about using Postgres in production. Sure, as a developer I love all the features, but the fact that the query planner can suddenly decide on a radically different (and incredibly inefficient) query plan makes it hard to trust. In some ways, a dumber query planner that needs coercing into the right plan is more reassuring: you know it will keep using that plan unless explicitly told otherwise.
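To make that concrete, here's a minimal sketch of how you can inspect a plan and temporarily coerce the planner while diagnosing a regression (the orders table and customer_id column are hypothetical). Postgres has no query hints, but the enable_* settings let you steer the planner away from a plan type for one session:

    -- see which plan the planner currently picks, with actual timings
    EXPLAIN ANALYZE
    SELECT * FROM orders WHERE customer_id = 42;

    -- discourage sequential scans for this session only, then compare
    SET enable_seqscan = off;
    EXPLAIN ANALYZE
    SELECT * FROM orders WHERE customer_id = 42;
    RESET enable_seqscan;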
But that dumber query planner will bite you when your data changes. If the relative sizes of multiple tables change, the query might have to change to remain efficient. Postgres's query planner handles that just fine. I've used Postgres for years on multi-TB databases and I've never experienced a problem with Postgres suddenly changing plans.
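As a rough sketch of what "handles that just fine" means in practice (table names again hypothetical): the planner bases its choices on statistics, so after a bulk data change a refreshed ANALYZE is usually all it takes for the same query to get an appropriate new plan. Autovacuum normally keeps these statistics fresh for you:

    -- refresh the row-count and value-distribution statistics
    ANALYZE orders;

    -- the same query can now legitimately get a different plan,
    -- e.g. switching between an index scan and a seq scan as the
    -- relative table sizes shift; compare before/after with EXPLAIN
    EXPLAIN
    SELECT o.*
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE c.region = 'EU';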