bezbac's comments | Hacker News

Congratulations to everyone involved, quite remarkable considering Langfuse was only founded as part of YC 23.


Without the purchase price, it is unclear whether this deserves congratulations or condolences.

Two years in the LLM race will definitely have depleted their $4M seed raise from 2023, and with no news of additional funds raised, it's more than likely this was a fire sale.


I'm pretty sure it was not a fire sale. Langfuse has been consistently growing; they publish stats about SDK usage etc., so you can look that up.

They also say in the announcement that they had a term sheet for a good Series A.

I think the team just took the chance to exit early before the LLM hype crashes down. There is also a question of how big this market really is: they mostly do observability for chatbots, but there are only so many of those, and with other players like OpenAI's tracing, Pydantic Logfire, PostHog, etc., they become more a feature than a product of their own. Without a great distribution advantage, I think they would eventually fall behind.

Two years to a decent exit (probably a $100M cash-out or so, with a good chunk being ClickHouse shares) seems like a good call rather than betting on that story continuing forever.


I don’t know about that. I looked at them a couple of months back for prompt management, and they were pretty behind in terms of features. Went with PromptLayer instead.


say more about what you were looking for that PromptLayer had and Langfuse didn't?


Ohh, a reply from Shawn. I love your work. As far as I recall, we were not looking for a set of features but specifically a git-like API that could manage and version the prompts: metadata tagging, Jinja2, release labels, and easy rollback. Add REST, TypeScript, and Python support on top of that, and it worked pretty well. Langfuse seemed way better at tracing, though.
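
Roughly the shape of API I mean, as a sketch (the names here are made up for illustration, not PromptLayer's or Langfuse's actual SDK):

    // Hypothetical prompt-registry client; names are illustrative only,
    // not the real PromptLayer or Langfuse API.
    interface PromptVersion {
      name: string;                      // e.g. "support-triage"
      version: number;                   // monotonically increasing, like a commit
      template: string;                  // Jinja2 template body
      metadata: Record<string, string>;  // arbitrary tags
      labels: string[];                  // release labels, e.g. ["prod", "canary"]
    }

    class PromptRegistry {
      private versions = new Map<string, PromptVersion[]>();

      // "Commit" a new version of a prompt.
      publish(name: string, template: string, metadata: Record<string, string> = {}): PromptVersion {
        const history = this.versions.get(name) ?? [];
        const next: PromptVersion = { name, version: history.length + 1, template, metadata, labels: [] };
        this.versions.set(name, [...history, next]);
        return next;
      }

      // Move a release label (e.g. "prod") to a specific version; rollback is
      // just pointing the label back at an older version.
      setLabel(name: string, version: number, label: string): void {
        for (const v of this.versions.get(name) ?? []) {
          v.labels = v.labels.filter((l) => l !== label);
          if (v.version === version) v.labels.push(label);
        }
      }

      // Resolve whatever version currently carries a label.
      get(name: string, label = "prod"): PromptVersion | undefined {
        return (this.versions.get(name) ?? []).find((v) => v.labels.includes(label));
      }
    }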


hi! :) thanks for supporting my work. ok yea that makes sense - i was a langfuse-only user but sounds like i might want to check out promptlayer (tbh i was until recently of the opinion that you should always check your prompts into git... flipped recently)


Anecdotally, from the AI startup scene in London, I do not know folks who swear by Langfuse. Honestly, evals platforms are still only just starting to catch on. I haven't used any tracing/monitoring tools for LLMs that made me feel like, say, Honeycomb does.


I'd say that out of the many generative-AI observability platforms, LangSmith and Weave (Weights & Biases) are probably the ones most enterprises use, but there's definitely space for Langfuse, Modelmetry, Arize AI, and other players.


While I get what you’re saying, “most enterprises” barely use gen AI in any meaningful sense, and AI observability is an even smaller niche technology.


I love Langfuse, it's my go-to.


Agreed. Sentry, PostHog, and many more are all doing the exact same thing now; I'd be surprised if this was a good deal for Langfuse. I personally migrated away from it to Sentry; their software was honestly not that great.


The fact that all metrics are relative doesn't suggest they got an amazing deal.


GPUI itself is maintained by the folks at https://zed.dev.

Also, Longbridge, who seem to be using this GPUI component library for their Longbridge Pro [1] app, look to me like a regular online brokerage company. What is your issue with that?

1: https://longbridge.com/desktop/


zed looks nice, but I am going to wait until the American port to use it.


May I ask what you mean by this? As far as I know, Zed Industries Inc. is incorporated in the US and funded by US venture capital.

BTW, I am not associated with Zed in any way.


It's a stupid joke. Americans say Z == Zee; the rest of the Anglosphere says Z == Zed.


Thanks for the explanation :)


thanks for explaining, i was really lost on what this person meant by that


https://anytype.io is not open source but source-available, and it even calls itself the "Everything App".


This reads like AI slop.


I hope this one was written by a human; it would be terrible to read such a critique if the author didn't read them.


Well, technically an LLM could *read* (have in context) these books before criticizing them.

I wonder now if *unbiased* LLM-based reviews could have a place for something like this.

Most reviewers are just stating what they experienced and their taste; there's no real objectivity in reviewing.


I've read that the results improve if you ask them to write a program that creates the desired ASCII art. Haven't tried it myself yet.


I can second this. visx will be a little more effort, but it will let you build anything you could build with d3.
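
For a sense of what that extra effort looks like, a minimal visx bar chart is roughly this (React/TypeScript; the data and sizes are made up):

    import React from "react";
    import { Group } from "@visx/group";
    import { Bar } from "@visx/shape";
    import { scaleBand, scaleLinear } from "@visx/scale";

    // Made-up sample data for illustration
    const data = [
      { label: "A", value: 4 },
      { label: "B", value: 7 },
      { label: "C", value: 2 },
    ];

    const width = 400;
    const height = 300;
    const margin = { top: 20, left: 20 };

    export function SimpleBarChart() {
      const xMax = width - margin.left;
      const yMax = height - margin.top;

      // visx scales are thin wrappers around d3-scale
      const xScale = scaleBand<string>({
        range: [0, xMax],
        domain: data.map((d) => d.label),
        padding: 0.3,
      });
      const yScale = scaleLinear<number>({
        range: [yMax, 0],
        domain: [0, Math.max(...data.map((d) => d.value))],
      });

      return (
        <svg width={width} height={height}>
          <Group left={margin.left} top={margin.top}>
            {data.map((d) => {
              const barY = yScale(d.value) ?? 0; // pixel y of the bar top
              return (
                <Bar
                  key={d.label}
                  x={xScale(d.label)}
                  y={barY}
                  width={xScale.bandwidth()}
                  height={yMax - barY}
                  fill="#4f46e5"
                />
              );
            })}
          </Group>
        </svg>
      );
    }

The extra effort mostly comes from keeping React in charge of rendering and only using d3's math via the scales, which is the visx design.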


If anything, please let Apple buy Raycast instead of Alfred.


There is also https://httpie.io


I've been using it for a while. Coming from Postman, it's exactly what I was looking for: just the features I need, no bloat.


I recently experimented with using PGlite for API integration tests in one of my side projects. It worked pretty well and has much better DX than using something like testcontainers[0] to spin up Postgres in Docker.

[0]: https://testcontainers.com
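
A minimal sketch of that kind of test (the Vitest wiring is just an assumption; the PGlite calls are from @electric-sql/pglite):

    import { PGlite } from "@electric-sql/pglite";
    import { beforeEach, expect, test } from "vitest"; // test runner choice is an assumption

    let db: PGlite;

    beforeEach(async () => {
      // Fresh in-memory Postgres per test: no Docker, no external service
      db = new PGlite();
      await db.exec(`
        CREATE TABLE users (
          id serial PRIMARY KEY,
          email text NOT NULL UNIQUE
        );
      `);
    });

    test("inserts and reads back a user", async () => {
      await db.query("INSERT INTO users (email) VALUES ($1)", ["a@example.com"]);
      const result = await db.query<{ email: string }>("SELECT email FROM users");
      expect(result.rows[0].email).toBe("a@example.com");
    });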


I see unit and integration testing as a massive opportunity for PGlite. I know of projects (such as Drizzle ORM) that are already making good use of it.

The team at Supabase have built pg-gateway (https://github.com/supabase-community/pg-gateway) that lets you connect to PGlite from any Postgres client. That's absolutely a way you could use it for CI.

One thing I would love to explore (or maybe commenting on it here will inspire someone else ;-) ) is a copy-on-write, page-level VFS for PGlite. I could imagine starting and loading a PGlite instance with a dataset and then instantly forking a database for each test. No complexities with devcontainers or Postgres template databases; sub-ms forks of a database for each test.
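
To make that concrete, the API I'm imagining would look something like this (entirely hypothetical; nothing like fork() exists in PGlite today):

    import { PGlite } from "@electric-sql/pglite";

    // Real API: load one in-memory instance with the shared test dataset.
    const base = new PGlite();
    await base.exec(`
      CREATE TABLE accounts (id int PRIMARY KEY, balance int);
      INSERT INTO accounts VALUES (1, 100), (2, 200);
    `);

    // Hypothetical from here on: with a copy-on-write, page-level VFS, each fork
    // would share pages with the base until a test writes to them.
    //
    // const forkA = await base.fork(); // sub-ms, nothing copied up front
    // const forkB = await base.fork();
    // await forkA.query("UPDATE accounts SET balance = 0 WHERE id = 1");
    // // forkB (and base) still see balance = 100 for id 1.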


I would also love to use it this way in Go. Currently we dump a SQLite-compatible schema to use for SQLite-powered integration tests, even though we run Postgres in production. It gives us zero-dependency, sub-second integration tests against a real database. Being able to use Postgres would be even better, since it would let us test things that need PG-specific features like full-text search.


I'm bothering the PGlite team a lot to help us enable this :-)

We have different options like embedded-postgres or IntegreSQL, but none match the simplicity of PGlite. I hope this wish comes true soon.

https://github.com/fergusstrange/embedded-postgres/tree/mast...

https://github.com/allaboutapps/integresql



I do exactly the same thing. I’m even running SQLite in Wasm so I don’t have any C dependencies. Switching out for Postgres would be awesome.


AFAIK, Ollama supports most of these models locally and will expose a REST API [0].

[0]: https://github.com/ollama/ollama/blob/main/docs/api.md
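
A minimal call against that API looks roughly like this (the model name is just an example and has to be pulled first with ollama pull):

    // Assumes Ollama is running locally on its default port (11434)
    const res = await fetch("http://localhost:11434/api/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "llama3.2",        // example model; use whatever you've pulled
        prompt: "Why is the sky blue?",
        stream: false,            // return a single JSON object instead of a stream
      }),
    });

    const data = await res.json();
    console.log(data.response);   // the generated text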

