rc1's comments | Hacker News

> have inconsistent style

You mean incongruent styles? As in, incongruent with the host OS.

There is no doubt Electron apps allow the style to be consistent across platforms.


No, they are also inconsistent: Slack, VSCode, Zed, Claude, ChatGPT, Figma, Notion, Zoom, Docker Desktop, to name some that I use daily. They all have different UI patterns and designs. The only thing they have in common is that they are slow, laggy, difficult to use, and don't respond quickly to the window manager.

Compare that to other software on the Mac such as Pages, Xcode, Tower, Transmission, Pixelmator, mp3tag, TablePlus, Postico, Paw, Handbrake, etc. (the others I use): those are a delight to work with and give me the computing experience I was looking for when buying a Mac.


"Xcode and Pages are a delight in comparison to VSCode and Notion" is certainly one of the takes of all time.

Xcode is usually the first example that comes to mind of a terrible native app in comparison to the much nicer VSCode.


Well put. What world are folks living in where it wouldn't be the obvious choice?

Code is not the cost. Engineers are. Bugs come from hindsight, not foresight. Let's divide resources between OSs. Let them all diverge.

> They are often laggy or unresponsive. They don’t integrate well with OS features.

> (These last two issues can be addressed by smart development and OS-specific code, but they rarely are. The benefits of Electron (one codebase, many platforms, it's just web!) don't incentivize optimizations outside of HTML/JS/CSS land.)

Give stats for "often" and "rarely". Which apps? I'd say rarely and often, respectively. People build bad native UIs too, or get constrained in features.

Claude offers a CLI tool. Like, what product manager would say no to Electron in that situation?

This article makes no sense in context. The author surely gets that.


Isn't it still? Anecdotally, I work with lots of creators who still prefer it because of its subjective qualities.

How long until this can be run on consumer-grade hardware or a domestic electricity supply, I wonder.

Anyone have a projection?


You can run it on consumer-grade hardware right now, but it will be rather slow. NVMe SSDs these days have a read speed of 7 GB/s (EDIT: or even faster than that! Thank you @hedgehog for the update), so it will give you roughly one token every three seconds while crunching through the 32 billion active parameters, which are natively quantized to 4 bits each. If you want to run it faster, you have to spend more money.
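Back-of-the-envelope, the bandwidth math works out like this (a sketch only; it assumes one full read of the active weights per token and ignores any caching of hot experts in RAM):

  // Rough token rate when streaming weights from an SSD.
  package main

  import "fmt"

  func main() {
    const activeParams = 32e9 // 32B active parameters (MoE)
    const bitsPerParam = 4.0  // native 4-bit quantization
    const ssdGBps = 7.0       // NVMe sequential read speed, GB/s

    bytesPerToken := activeParams * bitsPerParam / 8 // ~16 GB read per token
    secPerToken := bytesPerToken / (ssdGBps * 1e9)
    fmt.Printf("~%.1f s per token (~%.2f tok/s)\n", secPerToken, 1/secPerToken)
    // prints ~2.3 s per token; with overhead, "one token roughly every three seconds"
  }

Swap in ~30 GB/s (two PCIe Gen 5 drives, per the reply below) and the same formula gives roughly half a second per token.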

Some people in the localllama subreddit have built systems which run large models at more decent speeds: https://www.reddit.com/r/LocalLLaMA/


High-end consumer SSDs can do closer to 15 GB/s, though only with PCIe Gen 5. On a motherboard with two M.2 slots, that's potentially around 30 GB/s from disk. Edit: how fast everything is depends on how much data needs to get loaded from disk, which is not always everything with MoE models.


Would RAID 0 help here?


Yes, RAID 0 or 1 could both work in this case to combine the disks (for a read-only workload like this, even RAID 1 can balance reads across both mirrors). You would want to check the bus topology of the specific motherboard to make sure the slots aren't on the other side of a hub or something like that.


You need around 600 GB of VRAM + memory (+ disk) to fit the full model, or around 240 GB for the 1-bit quantized version. Of course this will be slow.

Through the Moonshot API it is pretty fast (much, much faster than Gemini 3 Pro and Claude Sonnet, probably faster than Gemini Flash), though. To get a similar experience they say you need at least 4x H200.

If you don't mind running it super slow, you still need around 600 GB of combined VRAM + fast RAM.

It's already possible to run 4x H200 in a domestic environment (it would be near-instantaneous for most tasks, unbelievable speed). It's just very, very expensive and probably challenging for most users, though manageable or even easy for the average Hacker News crowd.

Expensive AND high-end GPUs are hard to source. If you manage to source them at the old prices, it's around $200k to get maximum speed, I guess. You could probably run it decently on a bunch of high-end machines for, let's say, $40k (slow).


You can run it on a Mac Studio with 512 GB of RAM; that's the easiest way. I run it at home on a multi-GPU rig with partial offload to RAM.


I was wondering whether multiple GPUs make it go appreciably faster when limited by VRAM. Do you have some tokens/sec numbers for text generation?


The Oracle Org Chart by Manu Cornet springs to mind reading this: https://www.globalnerdy.com/2011/07/03/org-charts-of-the-big...


Cursor opened in config/ + the Home Assistant MCP is exceptionally good. I have blundered along with Home Assistant over the years, but it lit up for me with the above setup the other day.

For giggles, I had it set all the lights into a disco.

Next, we vibed a markdown file containing a to-do list of all my upstairs lights that are abstractly named by the different integrations. I put an x against a name and it turned the light off.

Once I identified a light, I wrote a better name next to it. It updated the system.
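The file was roughly shaped like this (hypothetical entity names, just to illustrate the mechanism):

  # Upstairs lights
  - [x] light.hue_ambiance_3421                  <- the x turned it off
  - [ ] light.tradfri_bulb_07  -> Landing lamp   <- renamed it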

We vibed dashboards and routines.

The problem with Home Assistant is that once it works, you don't touch it for a year and are back to square one with the layers of concepts. But I am left satisfied knowing I have backed up the conversation/context, so we can pick it up next year or whenever.

A memorable computer experience.


If Europe isn't a continent, what continent are the EU member states sitting on?


Eurasia is the widely accepted answer.


Nit: it's $800 billion in interest; your comment starts with $8 billion.


Right, my goof. That adds two more zeroes across all the math. More crazy, but I think in the realm of "maybe, if we squint hard." But my eyes are hurting from squinting that hard, so I agree that it's just crazy.


https://icons8.com/lunacy

Not open source, however.


Thank you both, I had no idea about the existence of Lunacy (the app).


In fairness to toon, the alternative JSON you're giving doesn't include hints on structure.

Not sure LLMs are more "tuned" to JSON.

That said, your general point holds that toon may be unnecessary, especially in the examples given. Perhaps plain text would suffice. Toon could be useful when automating inputs with many different shapes.


Yeah, exactly. LLMs are tuned to natural language. I don't think anything will beat good ol' templating (a.k.a. plain text). In Go I do something like this:

  // mytemplate.tmpl
  Description="The following data is for the users in our application."
  Format="id,name,role"
  length=2
  Data:
  {{range .}}
  {{.ID}}, {{.Name}}, {{.Role}}
  {{end}}
This way you're able to change the formatting to something the LLM understands for each struct. The LLM might understand some structs better as JSON, others as YAML, and others in an arbitrary format. Templating gives you the most flexibility to choose which one will work best.
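For completeness, a minimal driver for that template might look like this (the User struct and sample data are hypothetical, with fields matching the template above):

  package main

  import (
    "os"
    "text/template"
  )

  // User is a hypothetical struct whose fields match the template.
  type User struct {
    ID   int
    Name string
    Role string
  }

  func main() {
    // Parse mytemplate.tmpl and render the users straight into the prompt.
    tmpl := template.Must(template.ParseFiles("mytemplate.tmpl"))
    users := []User{
      {ID: 1, Name: "Alice", Role: "admin"},
      {ID: 2, Name: "Bob", Role: "viewer"},
    }
    if err := tmpl.Execute(os.Stdout, users); err != nil {
      panic(err)
    }
  }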

