lomlobon's comments | Hacker News

How's the latency? I've had to keep using xterm even though it kind of sucks just because it's got the lowest latency by quite a bit.

According to this (at least 11 months old) benchmark, Ghostty has the worst input latency across all contenders: https://github.com/moktavizen/terminal-benchmark?tab=readme-...

More benchmarks from 4 months ago: https://news.ycombinator.com/item?id=45253927

14 months old discussion of input latency in Ghostty with comments from the author: https://github.com/ghostty-org/ghostty/discussions/4837


I'm generally very sensitive to input latency and there's no way Ghostty has 41ms. I've only been using it for a couple of months though, so I guess it's fixed now.

Edit: just saw your second link from 4 months ago, and yes, it now averages 13ms, which feels about right to me. Not perfect but acceptable. So what's even the point of sharing the old benchmark?


The first link is a proper end-to-end benchmark with an external camera (kudos to the author for making those); the second link is a more error-prone software estimate.

I don’t type more than 100 characters a second, so I’ve never run into its limits.

If you're typing just one character per second you'll still feel the difference. Latency is stress-inducing.

I have been using computers and terminals for a long time, and this kind of comment makes me think I must have missed a whole bunch of things that can be done with a terminal.

It feels amazing on Linux. I find it very noticeably better than Konsole.

Foot/Alacritty also feel amazing but they don't have some features of Ghostty.

My benchmark is opening Neovim and scrolling with the mouse wheel, so I'm not sure how representative that is, though.


Since people are mentioning latency I’ll mention throughput. Basically the idea is that you accidentally cat a large file to your terminal and we are measuring how much time it takes for the terminal to finish displaying it. This test generally favors GPU-accelerated terminals.

Ghostty performs very well in this regard, in the same league as Alacritty and Ptyxis.
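A crude version of that test can be sketched in a few lines of Python. To be clear, this is just an illustration: it times when write()/flush() return, which only loosely tracks when the terminal actually finishes painting (a proper benchmark would use an external camera or the terminal's own frame timing):

```python
import sys
import time

def measure_throughput(lines: int = 100_000, width: int = 80) -> float:
    """Time how long the terminal takes to swallow a wall of text.

    Caveat: measures when write()/flush() return, which is only a
    rough proxy for when the terminal finishes displaying it.
    """
    payload = ("x" * width + "\n") * lines
    start = time.perf_counter()
    sys.stdout.write(payload)
    sys.stdout.flush()
    elapsed = time.perf_counter() - start
    return len(payload) / elapsed  # bytes per second

if __name__ == "__main__":
    rate = measure_throughput()
    print(f"{rate / 1e6:.1f} MB/s", file=sys.stderr)
```

Run it inside each terminal you want to compare; GPU-accelerated terminals tend to come out ahead on this kind of test.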


Rather, what will win is a terminal that internally builds an efficient, symbolic representation of what is on the display, rather than a pixel representation with all the font glyphs, and which efficiently synchronizes that symbolic representation to the graphical canvas, skipping intermediate updates when the abstract display is changing too fast.
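As a toy illustration only (not how any particular terminal actually implements it), the symbolic representation could be a grid of cells with dirty flags, where only changed cells get synchronized to the canvas:

```python
from dataclasses import dataclass

@dataclass
class Cell:
    ch: str = " "
    dirty: bool = True  # needs re-upload to the graphical canvas

class Grid:
    """Toy symbolic screen model: the terminal mutates cells, and the
    renderer later uploads only the cells marked dirty."""

    def __init__(self, rows: int, cols: int):
        self.cells = [[Cell() for _ in range(cols)] for _ in range(rows)]

    def put(self, row: int, col: int, ch: str) -> None:
        cell = self.cells[row][col]
        if cell.ch != ch:      # writing the same char costs nothing
            cell.ch = ch
            cell.dirty = True

    def flush(self, upload) -> int:
        """Synchronize dirty cells to the canvas; returns upload count."""
        uploads = 0
        for row in self.cells:
            for cell in row:
                if cell.dirty:
                    upload(cell)
                    cell.dirty = False
                    uploads += 1
        return uploads
```

If a cell is overwritten ten times between two flushes, only its final contents ever reach the canvas, which is exactly the "skip intermediate updates" behavior described above.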

That’s already happening, I think. Newer terminals redraw at a fixed rate equal to the display refresh rate, usually 60Hz. If the screen contents change more than 60 times per second, some of those intermediate states are never rendered on screen.
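That coalescing can be sketched like so (hypothetical code, not taken from any real terminal): each tick drains every pending update into the state, then draws the final state once, so states overwritten within a single tick never hit the screen:

```python
import time

def render_frames(update_batches, draw, refresh_hz: int = 60) -> int:
    """Fixed-rate renderer sketch: each batch holds the updates that
    arrived during one frame interval. However many there are, the
    state is drawn exactly once per frame."""
    frame = 1.0 / refresh_hz
    state = {}
    frames_drawn = 0
    for batch in update_batches:
        for update in batch:       # may be hundreds of updates
            state.update(update)   # later updates overwrite earlier ones
        draw(state)                # one draw per frame
        frames_drawn += 1
        time.sleep(frame)          # crude stand-in for waiting on vsync
    return frames_drawn
```

Feed it two updates in one batch and only the second is ever drawn, which is why fast `cat` output can skip frames without the terminal falling behind.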

Have you tried kitty with more aggressive settings? It feels very responsive out of the box, but the defaults are balanced for sane energy use on portable machines.

  repaint_delay 5
  input_delay 1
  sync_to_monitor no

On my machine the difference is noticeable. I seriously tried it, but went back because I could still notice a small end-to-end latency between keypress and action. But I'm also a 240Hz user.

Where are you measuring the keypress from? The nerve signal to your finger muscles? Or the time the keycap hits bottom? What if the switch closes before the cap hits bottom: then we get a latency figure that looks better than it really is.

I've had a keyboard like that, and with it, xterm (and nothing else) felt like it was displaying the characters even slightly before I had pressed them. It was a weird sensation (but a good one).

Yes, I know this feeling, it's like typing on air. The Windows Terminal has this same feeling. 8 years ago I opened this issue https://github.com/microsoft/terminal/issues/327 and the creators of the tool explained how they do it.

xterm in X11 has this feeling; Ghostty does not. It's like being stuck in mud, and it's not just Ghostty: all the GPU-accelerated terminals I've tried on Linux have this muddy feel. It's interesting because moving windows around feels really smooth (much smoother than on X11).

I wish this topic were investigated in more depth, because inputting text is an important part of a terminal. If anyone on Wayland wants to experience this, try booting straight into a tty instead of your desktop environment and then type. xterm in X11 and the Windows Terminal feel like this.


Nerve signals, yes. I just try them side by side, usually running vim in both terminals, and judge how it feels. If you can feel a difference, the latency is bad.

Kimi K2 is notably direct and free of this nonsense.


Well, if it has a very MicroProse feel then it was by luck, because they picked it up late in development and it's a single Russian guy's brainchild. He also made Hammerfight, another excellent (if janky as hell) game.

Highfleet really is a great game though.


Nu-MicroProse has been seeking out games similar in spirit to those published by the old MicroProse, trying to become a continuation of what it used to be. It might have been luck that someone was developing that kind of game at the time, or it might be that there's a growing desire for that sort of game and MicroProse is coming along at the right time to pick it up.


Probably they're using 'Kagi Assistant', which is essentially Kagi acting as an intermediary to the major LLMs. You get a catalog of models and a monthly quota.

Pretty handy. You can also make your assistants use the same custom 'lenses' you do to constrain their searches.


Interesting. The lenses sound similar to “Spaces” on Perplexity, where you can scope searches to a specific prompt each time and upload files etc. for context. Safe to assume that's a pretty common feature now; maybe I should look at Kagi again. It's been a few years since I last peeked at it.


The lenses are more about what results will be returned by Kagi's search. They were originally (and still are) a search feature you can use in regular searches.

For example, one of their default lenses is "Academic". It searches research institutions/scientific journals/universities/etc. So as an example, if I search "ulcer risk of ibuprofen":

The "Academic" lens returns the NIH with "Research summaries -- Preventing peptic ulcers"[0] and ScienceDirect with a paper on "The gastrointestinal effects of nonselective NSAIDs and COX-2-selective inhibitors"[1].

Searching without the lens, I get Healthline's "Ibuprofen and Ulcers: Why They Happen and How To Avoid Them"[2] and Medical News Today's "Ibuprofen ulcers: Effects, symptoms, causes, and more"[3].

You can apply these lenses to searches the AI assistant performs separately from the prompts/context/etc. So, for instance, I can set up an "assistant" based on Claude called "Research" restricted to the "Academic" lens. When I ask that assistant questions and it performs a web search, only results from the academic lens (research institutes, universities, etc) will be returned to the AI.

You could do similar with, e.g., setting up an assistant for "Coding with Python" and creating a lens that's restricted to the Python documentation, one for "Local Knowledge" that's restricted to sites from your region, "Recent Developments" that only considers sites published in a certain timeframe, "eBooks" that only returns epub results, etc.

Your prompt/etc is configured separately as part of the "assistant" you're using. So you could have a research assistant with a prompt that asks it to approach the problem step-by-step and evaluate the veracity of the sources, a coding assistant whose prompt includes the language/framework, etc. But there's nothing stopping you hooking your "research assistant" up to your "Coding with Python" lens.

[0] https://www.ncbi.nlm.nih.gov/books/NBK310269/

[1] https://www.sciencedirect.com/science/article/abs/pii/S00490...

[2] https://www.healthline.com/health/ibuprofen-ulcer

[3] https://www.medicalnewstoday.com/articles/ibuprofen-ulcers

