Ask HN: Why are LLM UIs so slow?
1 point by rishikeshs 9 months ago | hide | past | favorite | 4 comments
I love Claude Sonnet, but the user interface is super slow. After chatting for a while, it becomes sluggish and it's even hard to scroll.

To get around this, I tried OpenRouter's chat interface, but that is painfully slow too. I'm now trying Gemini 2.5 in Google AI Studio, and it is also slow.

What is the underlying reason for this? I understand the backend takes a lot of computation, but why is the frontend slow?



It's the big bummer with reasoning models, although they are improving a lot. I experimented with various reasoning models for my AI security scanner product but found the performance to just be far too slow.


But is it a front-end issue?

I can understand the processing taking time, but the whole tab becomes unusable after a while!


Yeah, sounds like a lack of 'virtual scrolling'. Behind the scenes, they are often creating a tremendous number of markdown renderer components, which are expensive and not meant to have hundreds of instances on a page. It's a tricky problem imo, and involves tradeoffs with scrollbar ergonomics.
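To illustrate: virtual scrolling means only mounting the messages that intersect the viewport instead of rendering every markdown component in the conversation. A minimal sketch of the window calculation, assuming fixed-height rows (real chat UIs have variable-height messages, which is part of what makes it tricky):

```typescript
// Given the scroll position, compute which row indices should actually
// be mounted. Everything outside [start, end) stays unrendered.
function visibleRange(
  scrollTop: number,      // pixels scrolled from the top of the list
  viewportHeight: number, // height of the scroll container in pixels
  rowHeight: number,      // assumed fixed height per message
  totalRows: number,      // total messages in the conversation
  overscan = 3            // extra rows above/below as a scroll buffer
): { start: number; end: number } {
  const first = Math.floor(scrollTop / rowHeight);
  const count = Math.ceil(viewportHeight / rowHeight);
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(totalRows, first + count + overscan),
  };
}
```

With a 600px viewport and 100px rows, a 1000-message chat mounts roughly 9 renderers instead of 1000; the rest are replaced by spacer elements so the scrollbar geometry stays correct.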


I think it's both. If you look at the response coming back from Claude, you can get an idea of how the front-end library probably works. (When updating an artifact, Claude's response is inefficient. Perhaps this is more reliable at scale, but... why not use a diff format similar to git?)
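A sketch of what the commenter is suggesting: instead of resending the whole artifact, the model could emit line-based patch operations, git-hunk style. The `PatchOp` shape and `applyPatch` helper here are hypothetical illustrations, not Claude's actual wire format:

```typescript
// Hypothetical line-based patch operations against an artifact.
type PatchOp =
  | { op: "replace"; start: number; end: number; lines: string[] } // replace lines [start, end)
  | { op: "insert"; at: number; lines: string[] };                 // insert before line `at`

function applyPatch(doc: string[], ops: PatchOp[]): string[] {
  // Apply ops bottom-up so earlier line indices stay valid.
  const sorted = [...ops].sort(
    (a, b) => ("start" in b ? b.start : b.at) - ("start" in a ? a.start : a.at)
  );
  const out = [...doc];
  for (const p of sorted) {
    if (p.op === "replace") out.splice(p.start, p.end - p.start, ...p.lines);
    else out.splice(p.at, 0, ...p.lines);
  }
  return out;
}
```

The tradeoff the parenthetical hints at: a full rewrite is trivially robust (no line-number drift, no malformed hunks), while a diff is far cheaper to transmit and re-render but fails badly if the model's line numbers are off by one.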



