
Movie theaters around me (even the high end ones) have 30 minutes of commercials and previews before the movie, so I typically arrive 25 minutes after showtime, no problem.

However, it's still frustrating that they expect you to tolerate 30 minutes of commercials after paying so much money.


To be fair, it's very rare for an accountant today to know a proper programming language. I wish that were part of their training, since from my perspective accountants could really benefit from git, SQL, and a little Python.

I agree, and would push it further: almost all professionals need a basic understanding of logic and programming. At least enough to write a sort routine (without copying one).

I believe schools should require a Logic (not just philosophy) class.


If the hardwired chips are orders of magnitude faster, couldn't they be manufactured on an older process and still be competitive?

Older processes would not be feasible due to a hard physics constraint: die size. The weights have to physically fit on the chip. At 6nm, an 8B-parameter model already takes up 815mm², which is roughly the maximum size for any process. At 28nm, that same model would require a chip roughly 20x larger in area, which is physically impossible on a single die. So older nodes work fine for very small edge models (think embedded AI, IoT, voice assistants), but anything resembling a capable LLM needs at least N6/N7-class density just to fit.
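A back-of-envelope check on that scaling claim (my own arithmetic, assuming memory density scales roughly with 1/(feature size)², which is only an approximation for real processes):

```python
# Rough sketch of the die-area scaling argument above.
# Assumption (mine): bit density scales ~ 1/(feature size)^2.

area_6nm_mm2 = 815              # quoted area for an 8B-param model at N6
scale = (28 / 6) ** 2           # density penalty going from 6nm to 28nm
area_28nm_mm2 = area_6nm_mm2 * scale

print(round(scale, 1))          # ~21.8, i.e. the "roughly 20x" in the comment
print(round(area_28nm_mm2))     # ~17,749 mm^2, far beyond any single die
```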

Taalas' best-case exit scenario is to get bought out by Intel, AMD, Qualcomm, or Nvidia, or even automotive chip makers like NXP (offline automotive/robotics use will likely be a major application area for this). If the Taalas HC1 Technology Demonstrator is actually working and producing the results they are publicly claiming, I'm assuming there is a steady stream of visitors from Silicon Valley and elsewhere at their Toronto offices.


You might be surprised: with NVMe swap, 8GB is quite capable. ~1.6GB/s read/write.

Apple has a great zram implementation as well.

Flash has finite write endurance, and NVMe swap can burn through it pretty quickly. That isn't so bad, because if it wears out you can replace the drive... unless it's soldered.

Mac SSDs are expected to last 8-10 years, even with heavy use. Though Apple doesn't publish these values specifically, you can start to extrapolate from the SMART data once it begins showing errors.

A good SSD ought to be able to cope with ~600TBW. My ~4.5-year-old MBP gives the following:

    smartctl --all /dev/disk0
    ...
    Data Units Read:                    1,134,526,088 [580.8 TB]
    Data Units Written:                 154,244,108 [78.7 TB]
    ...
    Media and Data Integrity Errors:    0
    Error Information Log Entries:      0
    ...
I'm sure an 8GB RAM machine would use more swap than my 16GB one, but probably not much more, given that mine has had heavy use for development and most people don't use their laptops for anything like that. Even so, that would still put it well within the expectation of 8-10 years, and that's for a $600 laptop.

> I'm sure an 8GB RAM machine would use more swap than my 16GB one, but probably not much more

It's non-linear. If you have a 17GB working set size, a 16GB machine is actively using 1GB of swap, but the 8GB machine is using 9GB. If you have a 14GB working set size, the 16GB machine doesn't need to thrash at all, but the 8GB machine is still doing 6GB.

Meanwhile "SSDs are fast" is the thing that screws you here. Once your actual working set (not just some data in memory the OS can swap out once and leave in swap) exceeds the size of physical memory, the machine has to swap it in and back out continuously. Which you might not notice when the SSD is fast and silent, but now the fact that the SSD will write at 2GB/sec means you can burn through that entire 600TBW in just over three days, and faster drives are even worse.

On top of that, the write endurance is proportional to the size of the drive. 600TBW is pretty typical for the better consumer 1TB drives, but a smaller drive gets proportionally less. And then the machines with less RAM are typically also paired with smaller drives.
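The non-linearity described here can be sketched with a toy model (working-set sizes taken from the comment, ignoring OS overhead and memory compression):

```python
def swap_needed(working_set_gb, ram_gb):
    """GB that must spill to swap once the working set exceeds RAM."""
    return max(0, working_set_gb - ram_gb)

assert swap_needed(17, 16) == 1   # 16GB machine: 1GB in swap
assert swap_needed(17, 8) == 9    # 8GB machine: 9GB in swap
assert swap_needed(14, 16) == 0   # 16GB machine: no thrashing at all
assert swap_needed(14, 8) == 6    # 8GB machine: still pushing 6GB
```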


Most people using these things aren't going to be using more than 8GB on an ongoing basis, and if they do, they'll not be swapping it like mad as you suggest, because it's only on application-switch that it will matter.

As for 600TB in just over 3 days, I want some of what you're smoking.


> Most people using these things aren't going to be using more than 8GB on an ongoing basis, and if they do, they'll not be swapping it like mad as you suggest, because it's only on application-switch that it will matter.

To begin with, a single application can pretty easily use more than 8GB by itself these days.

But suppose you are using multiple applications at once. If one of them actually has a large working set size -- rendering, AI, code compiling, etc. -- and then you run it in the background because it takes a long time (and especially takes a long time when you're swapping), its working set size is stuck in physical memory because it's actively using it even in the background and if it got swapped out it would just have to be swapped right back in again. If that takes 6GB, you now only have 2GB for your OS and whatever application you're running in the foreground. And if it takes 10GB then it doesn't matter if you're even running anything else.

Now, does that mean that everybody is doing this? Of course not. But if that is what you're doing, it's not great that you may not even notice that it's happening and then you end up with a worn out drive which is soldered on for no legitimate reason.

> As for 600TB in just over 3 days, I want some of what you're smoking.

2GB/s is 7,200GB/hour is 172.8TB/day. It's the worst-case scenario if you max out the drive.

In practice it might get hot and start thermally limiting before then, or be doing both reads and writes and then not be able to sustain that level of write performance, but "about a week" is hardly much better.
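Spelled out, the wear arithmetic (a worst case that assumes the drive sustains its full sequential write rate nonstop, which, as noted, real drives won't):

```python
write_rate_gb_s = 2.0        # sustained sequential write rate
tbw_rating = 600             # typical rated TBW for a good 1TB consumer drive

tb_per_day = write_rate_gb_s * 86_400 / 1000
days_to_rating = tbw_rating / tb_per_day

print(tb_per_day)                  # 172.8 TB written per day
print(round(days_to_rating, 1))    # 3.5 days to hit the rating
```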


Yeah dude, "Rendering, AI, code compiling,..." is not the target market for this device. It's just not.

> 2GB/s is 8200GB/hour is 172.8TB/day. It's the worst case scenario if you max out the drive.

Right, which is completely and utterly unrealistic. As I said, I want what you're smoking.

I have an 8GB M1 mini lying around somewhere (I just moved country) which was my kid's computer for several years before he got an MBP this Xmas. He had the sort of load that would be more typical: web browsing, playing games, writing the occasional thing in Pages, streaming video, etc. If I can find it (I was planning on making it the machine to manage my CNC), I'll look at the SMART output from that. I'm willing to bet it won't look much different from the above...


> Yeah dude, "Rendering, AI, code compiling,..." is not the target market for this device. It's just not.

None of the people who want to do those things but can't afford a more expensive machine will ever attempt to do them on the machine they can actually afford then, is that right?

> Right, which is completely and utterly unrealistic.

"Unrealistic" is something that doesn't happen. This is something that happens if you use that machine in a particular way, and there are many people who use machines in that way.

> He had the sort of load that would be more typical - web-browsing, playing games, writing the occasional thing in Pages, streaming video, etc. etc.

Then you would have a sample size of one determined by all kinds of arbitrary factors like whether any of the games had a large enough working set to make it swap, how many hours were spent playing that game instead of another one etc.

The problem is not that it always happens. The problem is that it can happen, and then they needlessly screw you by soldering the drive.


> The problem is not that it always happens. The problem is that it can happen

Ah. So, FUD, then. Gotcha.

“This ridiculously unlikely scenario is something I’m going to hype up and complain about because I don’t like some aspects of this company’s business model.”

600 TBW in 3 days. Pull the other one, it’s got bells on.


I’ve never had an SSD crap out from read/write-cycle exhaustion, and I’ve been using SSDs almost exclusively for over a dozen years. I’ve had plenty of spinning-rust drives croak, though. You don’t solder those in, so it’s not really a fair comparison.

I did have one of those dodgy Sandisks, but that was a manufacturing defect.


But how much RAM did you have?

If you have 24GB of RAM and a 12GB working set then it's fine. Likewise if you have 8GB of RAM and a 4GB working set. But 8GB of RAM and a 12GB working set, not the same thing.


Most flash memory will happily accept writes long after passing the TBW 'limit'. If write endurance were that much of a problem, I'd expect the second-hand market to be saturated with 8GB M1 MacBooks with dead SSDs by now. Since that's obviously not the case, I think it's not that bad.

> Most flash memory will happily accept writes long after passing the TBW 'limit'.

That's the problem, isn't it? It does the write, it will read back fine right now, but the flash is worn out and then when you try to read back the data in six months, it's corrupt.

> If write endurance would be that much of a problem I'd expect the second hand market to be saturated with 8Gb M1 MacBooks with dead SSDs by now.

That's assuming it's sufficiently obvious to the typical buyer. You buy the machine with a fresh OS install and only newly written data, and everything seems fine. Your 30-day warranty/return period expires, still fine. Then it starts acting weird.


> That's the problem, isn't it? It does the write, it will read back fine right now, but the flash is worn out and then when you try to read back the data in six months, it's corrupt.

SSD firmware does patrol reads and periodically rewrites data blocks. It also does error correction. Cold storage is a known issue with any SSD, but I don't have any insight in how bad this problem is in reality. Of course it will wear out eventually, but so will the rest of the system components. There's nothing to be gained by making SSDs that last 30 years when the other components fail in 15.

> Then it starts acting weird.

Is that speculation or do you have any facts to back that up?


The slowest DDR4 is capable of ~12.8GB/s per channel.

Nowhere near the same performance.


The ratio between RAM speed and SSD speed is unimportant. Useful swap just needs a fast drive.

Nice! Is there a similar option for Logitech Webcams?

Came here looking for this. The Logi Options+ app is, as others have noted, less than stellar. I just want to control the zoom, flip, and coloring on my MX Brio.

This really highlights the impracticality of local models:

My $3k MacBook can run `GPT-OSS 20B` at ~16 tok/s according to this guide.

Or I can run `GPT-OSS 120B` (a 6x larger model) at 360 tok/s (~22x faster) on Groq at $0.60 per million output tokens.

To generate $3k worth of output tokens on my local Mac at that pricing it would have to run 10 years continuously without stopping.

There's virtually no economic break-even to running local models, and no advantage in intelligence or speed. The only thing you really get is privacy and offline access.
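For what it's worth, the break-even arithmetic above, using the same figures (local 16 tok/s vs Groq's $0.60 per million output tokens on a $3k machine):

```python
machine_cost = 3_000.0
price_per_tok = 0.60 / 1e6        # $ per output token on Groq
local_tok_s = 16                  # local generation speed

breakeven_tokens = machine_cost / price_per_tok   # 5 billion tokens
seconds = breakeven_tokens / local_tok_s
years = seconds / (365 * 24 * 3600)

print(round(years, 1))            # ~9.9 years of nonstop local generation
```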


A million tokens is like 5 minutes of inference for heavy coding use.

At work I regularly hit the 7.5mil-tokens-per-hour limit one of our tools has and have to switch model or tool, and I'm not even really a remotely heavy user. I think people don't realise how many tokens get burned on CoT and tool calls these days.

At a 7.5mil-per-hour hard limit, 84 days to hit the grandparent's $3k.

That said, local models really are still slow, or fast enough but not that great.
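One way the 84-day figure pencils out (my guess at the unstated assumption: roughly 8 hours of use per day):

```python
# Hypothetical reconstruction of the 84-day figure; the 8h/day
# assumption is mine, not stated in the comment.

target_tokens = 3_000 / 0.60 * 1e6     # $3k at $0.60/Mtok -> 5 billion tokens
hours_at_limit = target_tokens / 7.5e6
days_at_8h = hours_at_limit / 8

print(round(hours_at_limit))   # ~667 hours at the 7.5Mtok/hour cap
print(round(days_at_8h))       # ~83 days, close to the 84 quoted
```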


They already stated they can only generate 57,600 tokens per hour locally (expressed as 16 tokens per second). So that's the limiting factor here.

You're saying that as if privacy were worthless? Also, not many people would count the full price of a MacBook strictly against running a local model.

Instead, if you wanted a MacBook anyway, you get to run local models for free on top. Very different story.


The privacy angle is not that interesting to me.

- You can find inference providers with whatever privacy terms you're looking for

- If you're using LLMs with real data (let's say handling GMail) then Google has your data anyway so might as well use Gemini API

- Even if you're a hardcore roll-your-own-mail-server type, you probably still use a hosted search engine and have gotten comfortable with their privacy terms

Also on cost the point is you can use an API that's many times smarter and faster for a rounding error in cost compared to your Mac. So why bother with local except for the cool factor?


90% of what you pay for in agentic coding is cached reads, which are free with local inference serving one user. This has been well known on r/LocalLLaMA for ages, and an article about it also hit the HN front page a few weeks ago.

This itself is against the rules:

> Please respond to the strongest plausible interpretation of what someone says

> Please don't post shallow dismissals

Personally I've posted comments with glaring typos that everyone thankfully ignores. I only notice much later when I re-read it.


Oh interesting. Good to know for the next time the they're/their/there police show up.

Definitely worth emailing the mods a link to the derail; one tool they might use is autocollapsing threads that are too far offtopic for the post.

Religion always seems like the default explanation for anything without an obvious use and it seems lazy. Maybe it was a game, a rite of passage, a boundary marker, or perhaps there was a Peruvian Mr. Beast running a competition. Anyone else remember the Cards Against Humanity "Holiday Hole"?

> Religion always seems like the default explanation for anything without an obvious use and it seems lazy.

This is one of the bits I remember from reading A Canticle for Leibowitz as a kid. It's about monks in a post-nuclear-armageddon world. At one point they find an ancient fallout shelter with a bathroom, and they interpret it as a spiritual space where a priest would sit on the "throne" and read "holy scrolls" held by the metal bar next to the throne...

I think we make that kind of mistake when doing armchair archeology or anthropology a lot.


The same joke is in David Macauley's Motel of the Mysteries (see drawing in https://www.byanyothernerd.com/2020/04/stranger-days-39-myst...).


We can only speculate from the evidence we have. The prehistoric chubby dolls (Venus figurines) from archaeological digs that many hypothesized to be fertility totems can just as well be hypothesized to be idealized symbols of the female form, since their shape changed with the average temperature: ice ages meant fatter dolls, temperate times meant thinner dolls. https://www.sciencealert.com/the-mystery-of-the-enigmatic-ve...

We always want to pretend that we're better and more evolved than those knuckle draggers of ages past -- simply because someone else made a computer for us to use.

Would you rather live then or now?

That does seem like an orthogonal question to me. That we are wealthier and better off now doesn't really say much about the raw capabilities of the people now vs then, when it's obvious that technology has a truly gigantic role in the wealth of modern times (and compounds onto itself: many technologies making developing new technology easier).

Chronological snobbery.

I've heard that same criticism from working archaeologists and anthropologists, especially when re-examining old finds, but it's still often applied to current unexplained ones. Stick with weird holes drilled in it? Religious scepter. Stone dildo? Religious fertility symbol. Weird hermit-hut foundation? Religious monk retreat.

But I think ancient peoples were far more practical and far less concerned with religion and gods than we like to pretend. Sure, they might have believed lightning was the gods being angry, or that meteors were the gods taking a dump above the earth, or that it was just the nature of existence; they had no real way to explain such things. But that doesn't mean everyone spent all their extra time worrying about them and furiously producing endless amounts of religious offerings and symbols.


Archeologists treat the ancients like a game of Dwarf Fortress: once food and security are provided for, you turn everything else into religious trade goods for the caravan.

I understand Amazon wants to protect their customer experience / advertising potential and avoid being commoditized, e.g. "hey Perplexity, order me the cheapest option between Amazon and 17 other online retailers."

However, I think they're fighting a losing battle here. The Atlas browser, for instance, can shop on Amazon just fine, and they'll have a hard time distinguishing between a human and an LLM without broadly getting restraining orders against every AI company.


Or how about the curb weight of the car? Higher mass means you're doing a lot more damage in an accident. People might think twice about buying an F250 for their grocery getter.

I'm fine with that as the tax basis, but for penalties this doesn't actually track with what's needed to produce a deterrent at every income level.
