Hacker News

These are some good tips, but I've not hit these performance issues. I work on smaller-scale applications; one has been in production since 2012 and the database performs very well. I guess I need to get out and work for bigger companies to experience this.


One doesn’t need to work for bigger companies to have lots of data these days; that has been true for many years already.

In our Postgres DB we have more than 4 TB of data, which I don’t think is too big. We didn’t need any special sauce, no vendors chiming in, only a better understanding of the technology than the average Joe’s.

On the big company part - I have yet to employ more than five developers.


Yes, there's nothing quite like the query planner deciding to try something new and suddenly 100 application servers are DDoSing your primary :)


This may be irrational, but it's something that worries me about using Postgres in production. Sure, as a developer I love all the features, but the fact that the query planner can suddenly decide on a radically different (and incredibly inefficient) query plan makes it hard to trust. In some ways, a dumber query planner that needs coercing into the right query plan is more reassuring, in that you know it'll keep using that plan unless explicitly told otherwise.
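For what it's worth, core Postgres deliberately ships no hint syntax (the pg_hint_plan extension adds one), so the "coercing" style described above isn't directly available there. A minimal sketch of that dumber-but-predictable style, using SQLite's `INDEXED BY` clause since it's self-contained; the table and index names here are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id INTEGER, total REAL)")
conn.execute("CREATE INDEX idx_customer ON orders(customer_id)")

# INDEXED BY pins the query to a specific index: SQLite raises an error
# if it cannot use that index, rather than silently picking another plan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM orders INDEXED BY idx_customer WHERE customer_id = 7"
).fetchall()

for row in plan:
    print(row[3])  # human-readable plan detail, names the pinned index
```

The trade-off is exactly the one debated in this thread: the pinned plan never surprises you, but it also never adapts when the data changes.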


But that dumber query planner will bite you when your data changes. If the relative sizes of your tables change, the query plan might have to change to stay efficient, and the Postgres query planner handles that just fine. I've used Postgres for years with multi-TB databases and I've experienced a problem with Postgres suddenly changing plans.


Is there a “never” missing from your last sentence?


Yes :( And it is too late to edit it.


haha, this example is actually from Mongo, but it's pretty rare.



