I thought that project got turned into the animated film that was released not too long ago? I got the impression JMS was done with B5 after he went back to focusing on comics with his run on Captain America.
I've been SSHing into my dev server from my phone to run Claude Code while commuting (rough sketch of that setup below the questions), so this is a product I would love to switch to. I can't use the Claude iOS app due to the testing setup I have. That said, I do have a couple of questions:
- Is it possible to completely disable or not use the remote sandbox features? I would never use them and would prefer my code stays on my device.
- For those of us that are using subscriptions, does it show our remaining usage? I would hate to run out of tokens in the middle of a session.
- One feature of the CC TUI I sorely missed on mobile is the ability to look up and directly reference files via "@". Is any functionality like this planned?
- (This likely won't affect my decision to use the service, as I'll just put it on a company card.) $20 per month for a service that runs CC on a remote machine in a convenient manner is steep but doable. Asking that same amount for running code on my own server seems a bit unjustified, especially since it's pricier than a Claude Pro subscription. Are there any plans to offer a cheaper tier for those of us that just want to run this on our own machines?
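For context, the commuting setup mentioned above is nothing fancy: an SSH client on the phone attached to a persistent tmux session on the dev box, so the Claude Code TUI survives dropped connections. Roughly, with placeholder host and session names:

```
# Attach to (or create) a long-lived tmux session on the dev box, so
# Claude Code keeps running when the phone drops the SSH connection.
# "devbox" and "cc" are placeholder names.
ssh -t devbox 'tmux new-session -A -s cc'

# Inside that session, Claude Code runs as usual:
claude
```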
Is it possible to completely disable or not use the remote sandbox features? I would never use them and would prefer my code stays on my device.
Yes, the remote sandbox feature is disabled by default, and you have to manually enable it for the syncing to start.
For those of us that are using subscriptions, does it show our remaining usage? I would hate to run out of tokens in the middle of a session.
Currently Omnara doesn't show your usage limits; you'd have to check that at claude.ai. I'll look into adding that, though.
One feature of the CC TUI I sorely missed on mobile is the ability to look up and directly reference files via "@". Is any functionality like this planned?
Yes, this exists in Omnara already!
Are there any plans to offer a cheaper tier for those of us that just want to run this on our own machines?
That's a good idea. We'll think about offering a tier without the sandbox and voice features that just has the messaging service.
Just tried out Handy. It's a much better and more lightweight UI than the previous solutions I've tried! I know it wasn't your intention, but thank you for the recommendation!
That said, I now agree with your original statement and really want Voxtral support...
Handy is awesome, and easy to fork! I highly recommend building it from source and submitting PRs if there are any features you want. The author is highly responsive and open to vibe-coded PRs as long as you do a good job. (Obviously you should read the code and stand by it before you submit a PR; I just mean he doesn't flatly reject all AI code like some other projects do.) I recently submitted a PR to add an onboarding flow for Macs that just got merged, so now I'm hooked.
I've got an i5-7500T box running as a Proxmox Backup Server, and it idles at 6-7W. It runs near idle most of the time (it's just running PBS and a few network services), so I'm not expecting it to make much difference to my power bills. Even under full load, it only draws ~30W, so it's not _that_ much power.
I have an N95 mini PC (32GB DDR4, 250GB SSD, 1TB NVMe), a 4-disk USB enclosure, an access point, and a 16-port switch plugged into a UPS.
The UPS says 35W for all of it, but I’ve always been too lazy to unplug devices to see how it breaks down. I’m also not sure how accurate the measurements are, especially under a load that low.
I’d be willing to believe the mini PC draws less than the other components at this point.
I recently got a junk M2 MacBook Air (16GB/512GB) with a broken screen for $250.
It idles at just 0.2W (!) when accessed via SSH. While it offers zero expandability, lacks wired LAN, and runs on a non-free OS, it's an interesting candidate for an ultra-low-power inference server.
I’ve been pretty content with Postmark but I decided to compare pricing out of curiosity.
The free tier Remails offers is pretty generous in my opinion (3k a month), and I really like it when the people making a service write blog posts like this. However, having the next tier up (100k a month) start at €100 is a no-go for me. It's not that I think the price is bad; it's that for my service I expect to send only 30k per month by the end of the year. Having a smaller-capacity tier would be great for small-but-not-tiny projects.
An alternative is Lettermint[1], which has much more gradual (and slightly lower) pricing, though with a less generous free plan. It's also pretty new (I think it launched last year) but more fully featured. I haven't used it, but it seems good.
+1. I feel like Remails should offer more flexible pricing as well (I would love something like 1,000 mails per x euros, similar to Lettermint's pricing). It seems decent, and Remails has a more generous free tier than Lettermint, so I hope Remails can revamp its pricing to include middle points (similar to Lettermint here) while still being price competitive.
Most people working office jobs are scared of the terminal, though. I see this as being targeted not at the average HN user but at non-technical office workers. I'm not certain how successful this will be in that niche, but maybe releasing an app first will give them an edge over the name recognition of ChatGPT/Gemini.
I can second this. I'm an online coding instructor and within our company Replit was the website/environment we were told to use with our students. I really didn't like it due to all the AI features (I believe that when you're learning to code you shouldn't use LLMs) but the collaboration features were really good.
Unfortunately they added a limit to the number of collaborators per account and we had to stop using it.
Just yesterday I was trying to figure out a method to accurately estimate my remaining usage for the five-hour sessions in a shell script. It wasn't until I pointed Claude at your repo and had it build something based on that code that I got it to work well.
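For anyone else attempting this, the naive version (just sum tokens from Claude Code's local transcript files over the last five hours) looks roughly like the sketch below; the part the repo actually solves is that, as I understand it, the limit window starts at your first message rather than five hours back from now. The transcript path and field names here are assumptions from my install, so adjust as needed.

```
#!/usr/bin/env sh
# Naive estimate: sum Claude Code token usage over the last five hours
# from the local transcript JSONL files. The path (~/.claude/projects)
# and field names (.timestamp, .message.usage.*) are assumptions from
# my machine -- adjust if your install writes them differently.
SINCE=$(date -u -d '5 hours ago' +%Y-%m-%dT%H:%M:%S 2>/dev/null \
        || date -u -v-5H +%Y-%m-%dT%H:%M:%S)   # GNU date, then BSD fallback

cat ~/.claude/projects/*/*.jsonl 2>/dev/null \
  | jq -r --arg since "$SINCE" '
      select((.timestamp // "") >= $since)
      | .message.usage? // empty
      | (.input_tokens // 0) + (.output_tokens // 0)' \
  | awk '{s += $1} END {printf "tokens in the last 5h: %d\n", s}'
```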