I've been on something of a quest to find a really good chat interface for LLMs.
The most important feature for me is being able to chat with local models, remote models on my other machines, and cloud models (OpenAI API compatible). Anything that makes it easier to switch between models or query them simultaneously is important.
Here's what I've learned so far:
* Msty - my current favorite. Can do true simultaneous requests to multiple models. Nice aesthetic. Sadly not open source. Have had some freezing issues on Linux.
* Jan.ai - Can't make requests to multiple models simultaneously.
* LM Studio - Not open source. Doesn't support remote/cloud models (maybe there's a plugin?)
* GPT4All - Was getting weird JSON errors with openrouter models. Have to explicitly switch between models, even if you're trying to use them from different chats.
Still to try: Librechat, Open WebUI, AnythingLLM, koboldcpp.
Would love to hear any other suggestions.
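For concreteness, here's roughly what I mean by querying models simultaneously over OpenAI-compatible endpoints. Just a Python sketch; the base URLs, keys, and model names are placeholders for my setup, not recommendations:

```python
# Fan the same prompt out to several OpenAI-compatible endpoints at once.
# All endpoints, keys, and model names below are placeholders; anything that
# speaks the OpenAI chat-completions protocol (ollama, llama.cpp,
# OpenRouter, ...) can slot in.
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

BACKENDS = [
    # (label, base_url, api_key, model)
    ("local-ollama", "http://localhost:11434/v1", "unused", "llama3.1"),
    ("other-machine", "http://192.168.1.20:8080/v1", "unused", "qwen2.5"),
    ("openrouter", "https://openrouter.ai/api/v1", "sk-or-...", "openai/gpt-4o-mini"),
]

def ask(label, base_url, api_key, model, prompt):
    client = OpenAI(base_url=base_url, api_key=api_key)
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return label, resp.choices[0].message.content

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(ask, *b, "Why is the sky blue?") for b in BACKENDS]
    for f in futures:
        label, answer = f.result()
        print(f"--- {label} ---\n{answer}\n")
```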
I've been on the same quest for a while. Here's my list; it's not a recommendation or endorsement list, just alternative clients I've considered, tried, or am still evaluating:
- chatbox - https://github.com/chatboxai/chatbox - free and OSS, with a paid tier, supports MCP and local/remote, has a local KB, works well so far and looks promising.
- macai - https://github.com/Renset/macai - simple client for remote APIs; does not support image pasting, MCP, or much of anything really. Very limited, and it crashes.
- typingmind.com - web, with a downloadable (if paid) version. Not OSS, but one-time payment, indie dev. One of the first alt chat clients I ever tried; not using it anymore. Somewhat clunky GUI, but OK. Supports MCP, though I haven't tried that.
- Open WebUI - deployed for our team so that we could chat through many APIs. Works well for a multi-user web deployment, but image generation hasn't been working. I don't like it as a personal client though; it's occasionally buggy, but fortunately it gets frequent fixes.
- jan.ai - it comes with a pre-populated list of popular models, which makes it harder to plug into custom or local model servers. But it supports local model deployment within the app (like what ollama is announcing), which is good for people who don't want to deal with starting a server. I haven't played with it enough, but I personally prefer to deploy a local server (i.e. ollama, litellm, ...) and then just have the chat GUI app give me a flexible endpoint configuration for adding custom models to it (roughly the pattern sketched below).
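To illustrate that pattern: litellm in particular normalizes a lot of providers behind one OpenAI-style call, so the GUI (or any script) only ever has to know one endpoint shape. A rough sketch, with placeholder model names:

```python
# litellm routes by provider prefix and returns an OpenAI-style response,
# so local and cloud models become interchangeable. Model names are examples.
from litellm import completion

msgs = [{"role": "user", "content": "Hello"}]

# A local ollama server, addressed via provider prefix + api_base.
local = completion(
    model="ollama/llama3.1",
    messages=msgs,
    api_base="http://localhost:11434",
)

# A cloud model through the exact same call shape (key read from the env).
cloud = completion(model="gpt-4o-mini", messages=msgs)

print(local.choices[0].message.content)
print(cloud.choices[0].message.content)
```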
I'm also wary of evil actors deploying chat GUIs just to farm your API keys. You should be too. Use disposable API keys, watch usage, and rotate in fresh keys once in a while after trying clients.
Do you have any screenshots? The home page shows a picture of a Tamagotchi but none of the actual chat interface, which makes me wonder if I’m outside of the target audience.
Last I tried Open WebUI (a few months ago), it was pretty painful to connect non-OpenAI externally hosted models. There was a workaround that involved installing a third-party "function" (or was it a "pipeline"?), but it didn't feel smooth.
Is this easier now? Specifically, I would like to easily connect Anthropic models just by plugging in my API key.
No, still the same. On the other hand, it works perfectly fine for Claude, and that is the only one I use. I just wish they would finally add native support for this...
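For context on why a plain API key isn't enough there: Anthropic's native API doesn't speak the OpenAI chat-completions shape that direct connections expect, hence the function/pipeline shim. A minimal comparison using the two official Python SDKs (model names are just examples):

```python
# The same question through the two call shapes. An OpenAI-compatible client
# can't hit Anthropic's endpoint directly: different SDK, max_tokens is
# mandatory, and the response structure differs. Model names are examples.
from openai import OpenAI
import anthropic

prompt = [{"role": "user", "content": "Hello"}]

# OpenAI-style: one call shape, swappable base_url, key from OPENAI_API_KEY.
r1 = OpenAI().chat.completions.create(model="gpt-4o-mini", messages=prompt)
print(r1.choices[0].message.content)

# Anthropic-style: key from ANTHROPIC_API_KEY, different request/response.
r2 = anthropic.Anthropic().messages.create(
    model="claude-3-5-sonnet-latest", max_tokens=512, messages=prompt
)
print(r2.content[0].text)
```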
CherryStudio is a power tool for this use case: https://github.com/CherryHQ/cherry-studio - it has MCP, search, personas, and reasoning support too. I use it heavily with llama.cpp + llama-swap.
I've been using AnythingLLM for a couple months now and really like it. You can organize different "Workspaces" which are models + specific prompts and it supports Ollama along with the major LLM providers.
I have it running in a docker container on a raspberry pi and then I use Tailscale to make it accessible anywhere. It looks good on mobile too so it's pretty seamless.
I use that and Raycast's Claude extension for random questions, and that pretty much does everything I want.
I like webUI, but it's weird and complicated how you have to set up the different models (via text files in the browser; the instructions contain a lot of confusing terms). Librechat is nice, but I can't get it to stop logging me out every 5 minutes, which makes it unusable. I've been told it keeps you logged in when using https, but I use tailscale so that is difficult (when running multiple services on a single host).
Build your own! It's a great way to learn and keeps you interested in the latest developments. Plus you get to try out cool UX experiments and see what works. I built my own interface back in 2023 and have been slowly adding to it since; I added local models via MLX last month. I'm surprised more devs aren't rolling their own interface; they are easy to make and you learn a lot.
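If you want a feel for how little it takes to get started, here's a minimal sketch of a streaming chat loop against any OpenAI-compatible endpoint (the URL and model name are placeholders; in-memory history only, no persistence):

```python
# A bare-bones terminal chat: streaming replies, in-memory history.
# Point base_url at any OpenAI-compatible server; values are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
history = []

while True:
    user = input("> ")
    history.append({"role": "user", "content": user})
    stream = client.chat.completions.create(
        model="llama3.1", messages=history, stream=True
    )
    reply = ""
    for chunk in stream:
        delta = chunk.choices[0].delta.content or ""
        print(delta, end="", flush=True)
        reply += delta
    print()
    history.append({"role": "assistant", "content": reply})
```

From there it's easy to bolt on the fun parts: model switching, saved transcripts, side-by-side comparisons.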
Open WebUI is definitely what you want. It supports any OpenAI-compatible provider and lets you manually configure your model list and per-model settings in a very user-friendly way. Switching between models is instant, and it can send the same prompt to multiple models simultaneously in the same chat, displaying the responses side by side.
gptel in emacs does this. You can run the same prompt against different models in separate emacs windows (local or via API with keys) at the same time to compare outputs. I highly recommend it. https://github.com/karthink/gptel
Our team has been using openwebui as the interface for our stack of open source models we run internally at work and it’s been fantastic! It has a great feature set, good support for MCPs, and is easy to stand up and maintain.