Without the purchase price, it is unclear whether this deserves congratulations or condolences.
Two years in the LLM race will definitely have depleted their $4m seed raise from 2023, and with no news of additional funding, it's more than likely this was a fire sale.
I'm pretty sure it was not a fire sale. Langfuse has been consistently growing; they publish stats on SDK usage etc., so you can look that up.
They also say in the announcement that they had a term sheet for a good Series A.
I think the team just took the chance to exit early before the LLM hype crashes down. There's also a question of how big this market really is: they mostly do observability for chatbots, but there are only so many of those, and with other players like OpenAI's tracing, Pydantic Logfire, PostHog, etc., they become more a feature than a product of their own. Without a great distribution system they would eventually fall behind, I think.
2 years to a decent exit (probably a ~$100m cash-out, with a good chunk being ClickHouse shares) seems like a better idea than betting on that story continuing forever.
I don’t know about that. I looked at them a couple of months back for prompt management and they were pretty behind in terms of features. Went with PromptLayer.
Ohh. Reply from Shawn. I love your work.
As far as I recall, we were not looking for a set of features but specifically a git-like API that could manage and version the prompts: metadata tagging, Jinja2, release labels, and easy rollback. Add REST, TypeScript, and Python support on top and it worked pretty well. Langfuse seemed way better at tracing, though.
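To make the "git-like prompt management" requirement concrete, here's a hypothetical sketch of the pattern being described — versioned prompts with metadata, release labels, and rollback. This is an illustrative in-memory toy, not PromptLayer's (or Langfuse's) actual API; all names are made up.

```typescript
// Hypothetical git-like prompt store: each commit is an immutable version,
// and release labels (e.g. "production") are movable pointers to versions.
type PromptVersion = {
  version: number;
  template: string; // e.g. a Jinja2-style template string
  metadata: Record<string, string>;
};

class PromptStore {
  private versions: PromptVersion[] = [];
  private labels = new Map<string, number>(); // label -> version number

  // Commit a new immutable version, like a git commit.
  commit(template: string, metadata: Record<string, string> = {}): number {
    const version = this.versions.length + 1;
    this.versions.push({ version, template, metadata });
    return version;
  }

  // Point a release label at a specific version.
  setLabel(label: string, version: number): void {
    if (version < 1 || version > this.versions.length) {
      throw new Error(`unknown version ${version}`);
    }
    this.labels.set(label, version);
  }

  // Fetch whatever version a label currently points at.
  get(label: string): PromptVersion {
    const version = this.labels.get(label);
    if (version === undefined) throw new Error(`unknown label ${label}`);
    return this.versions[version - 1];
  }

  // Rollback is just re-pointing the label at an earlier version.
  rollback(label: string, version: number): void {
    this.setLabel(label, version);
  }
}

const store = new PromptStore();
const v1 = store.commit('Hello {{ name }}', { team: 'support' });
const v2 = store.commit('Hi {{ name }}, how can I help?');
store.setLabel('production', v2);
store.rollback('production', v1); // instant, no redeploy
console.log(store.get('production').template); // "Hello {{ name }}"
```

The appeal over checking prompts into git directly is that re-pointing a label is a runtime operation, not a code deploy.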
hi! :) thanks for supporting my work. ok yea that makes sense - i was a langfuse only user but sounds like i might want to check out promptlayer (tbh i was until recently of the opinion that you should always check your prompts into git... flipped recently)
Anecdotally, from the AI startup scene in London, I do not know folks who swear by Langfuse. Honestly, evals platforms are still only just starting to catch on. I haven't used any tracing/monitoring tools for LLMs that made me feel like, say, Honeycomb does.
I'd say out of the many generative AI observability platforms, LangSmith and Weave (Weights & Biases) are probably the ones most enterprises use, but there's definitely space for Langfuse, Modelmetry, Arize AI, and other players.
While I get what you’re saying, “most enterprises” barely use gen AI in any meaningful sense, and AI observability is an even smaller niche technology.
Agreed. Sentry, PostHog, and many more are all doing the exact same thing now; I'd be surprised if this was a good deal for Langfuse. I personally migrated away from it to Sentry — their software was honestly not that great.
Also, Longbridge, who seem to be using this GPUI component library for their Longbridge Pro [1] app, look to me like a regular online brokerage company. What is your issue with that?
I recently experimented with using PGlite for API integration tests in one of my side projects. It worked pretty well and has much better DX than using something like testcontainers[0] to spin up Postgres in Docker.
I see unit and integration testing as a massive opportunity for PGlite. I know of projects (such as Drizzle ORM) that are already making good use of it.
The team at Supabase have built pg-gateway (https://github.com/supabase-community/pg-gateway) that lets you connect to PGlite from any Postgres client. That's absolutely a way you could use it for CI.
One thing I would love to explore (or maybe commenting on it here will inspire someone else ;-) ) is a copy-on-write, page-level VFS for PGlite. I could imagine starting a PGlite instance, loading it with a dataset, and then instantly forking the database for each test. No complexities with devcontainers or Postgres template databases — sub-ms forks of a database for each test.
I would also love to use it this way in Go. Currently we dump an SQLite-compatible schema to use for SQLite-powered integration tests, even though we use Postgres in production. It gives us zero-dependency, sub-second integration tests against a real database. Being able to use Postgres would be even better, letting us test things that require PG-specific functionality like full-text search.