Hacker News

Indexing in Postgres is legitimately painful, I don’t think “get moar ram” is a good response to that particular critique.


Nor is it something that the people currently maintaining Postgres are experienced in doing. Companies should be reluctant to manage their own vector indexes until this becomes a more mainstream skillset.

This excellent blog post[1] demonstrates the complexity of scaling HNSW indexes and shows that beyond a certain scale you need to switch to IVF-PQ, which has vastly different performance and accuracy characteristics.

https://aws.amazon.com/blogs/big-data/choose-the-k-nn-algori...
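A rough back-of-envelope calculation shows why that switch becomes necessary. The sketch below is my own approximation, not pgvector's or OpenSearch's exact on-disk format: it assumes float4 vector components, roughly 2*m neighbor links per vector at the HNSW base layer at ~10 bytes each, and a product quantizer that compresses each vector to a fixed number of one-byte codes. The function names and constants are illustrative.

```python
def hnsw_index_size_bytes(n_vectors, dim, m=16,
                          bytes_per_component=4, bytes_per_link=10):
    """Rough HNSW memory estimate (assumed layout, not an exact format):
    full-precision vectors plus ~2*m neighbor links per vector."""
    vector_bytes = dim * bytes_per_component
    link_bytes = 2 * m * bytes_per_link
    return n_vectors * (vector_bytes + link_bytes)


def ivfpq_size_bytes(n_vectors, dim, pq_codes=64,
                     bytes_per_code=1, id_bytes=8):
    """Rough IVF-PQ memory estimate (assumed layout): each vector is
    compressed to pq_codes small codes plus a row id; the original
    dim is irrelevant to the compressed size."""
    return n_vectors * (pq_codes * bytes_per_code + id_bytes)


# 10M 1536-dim embeddings: HNSW wants the full vectors resident,
# IVF-PQ stores only compressed codes.
hnsw_gb = hnsw_index_size_bytes(10_000_000, 1536) / 1e9   # ~64.6 GB
ivfpq_gb = ivfpq_size_bytes(10_000_000, 1536) / 1e9       # ~0.72 GB
```

Under these assumptions the HNSW graph needs on the order of 90x the memory of an IVF-PQ index for the same corpus, which is why the AWS post ends up recommending the switch at scale, at the cost of recall.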


Author here.

> I don’t think “get moar ram” is a good response to that particular critique.

I do not think the blog post suggested "get more RAM" as a response, but happy to clarify if you could share more details!

> Indexing in Postgres is legitimately painful

Lantern is here to make the process seamless and remove most of the pain for people building LLM/AI applications. Examples:

1. We build tools to remove the guesswork of HNSW index sizing. E.g. https://lantern.dev/blog/calculator

2. We analyze typical patterns people use when building LLM apps and suggest better practices. E.g. https://lantern.dev/blog/async-embedding-tables

3. We build alerts and triggers into our cloud database that automate the discovery of many issues via heuristics.


Compared to what?



