ay's comments | Hacker News

Isn’t that what LoRA does?


LoRAs are better at steering models to produce correct answers from their training data than at imparting new knowledge.


https://arxiv.org/abs/2603.01097

>Overall, our findings position LoRA as the complementary axis of memory alongside RAG and ICL, offering distinct advantages.
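For readers unfamiliar with the mechanics being discussed: here is a minimal numpy sketch of LoRA's low-rank update. The sizes are purely illustrative; the point is that the frozen weight stays fixed and only a tiny rank-r perturbation is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 512, 8                      # hidden size and LoRA rank (illustrative)
W = rng.standard_normal((d, d))    # frozen pretrained weight

# LoRA trains two small matrices A (r x d) and B (d x r); the effective
# weight becomes W + B @ A, a rank-r perturbation of the frozen W.
A = rng.standard_normal((r, d)) * 0.01
B = np.zeros((d, r))               # B starts at zero, so training starts from W

W_eff = W + B @ A

# Trainable parameters: 2*d*r instead of d*d.
full, lora = d * d, 2 * d * r
print(f"full: {full}, lora: {lora} ({100 * lora / full:.2f}%)")
```

Since the adapter only nudges the frozen weights, it is easy to see why it steers existing knowledge more readily than it stores new facts.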


Hinduism is probably right. Every system of sufficient complexity is probably sentient, even if in ways we, at our level, cannot fathom.


I'm a (non-practicing) Dwaitin Hindu. AFAICT, no mainstream school of Hindu philosophy (there are three) espouses that view, though Advaitins come very close to it with their four mahavakyas.

IMO, Integrated Information Theory of consciousness (IIT) is exactly that: everything is conscious; the difference is only in the degree to which things are conscious.


Oh, thank you very much for enlightening me! All this time I misunderstood! I guess then IIT it is for me :-)


I tried qwen3.5:4b in ollama on my 4-year-old M1 Mac with my own coding harness, and it exhibited pretty decent tool calling, but it is a bit slow and seemed a little confused by the more complex tasks (also, I have it code Rust, which might add complexity). The task was “find the debug that does X and make it conditional based on whichever variable is controlled by the CLI ‘/debug foo’” - I didn’t do much with it after that.

It may be interesting to try a 6-bit quant of qwen3.5-35b-a3b - I had pretty good results running it on a single 4090 - for obvious reasons I didn’t try it on the old Mac.

I have been using an 8-bit quant of qwen3.5-27b as more or less the main engine for the past ~week and am quite happy with it - but that requires more memory/GPU power.

HTH.
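To make the memory requirement concrete: a hedged back-of-envelope estimate is weight bytes = params × bits/8, plus some overhead for KV cache and runtime buffers. The 20% overhead factor below is an assumption and varies a lot with context length and runtime.

```python
def quant_footprint_gib(params_b: float, bits: int, overhead: float = 1.2) -> float:
    """Very rough weight-memory estimate: params * bits/8 bytes, plus ~20%
    (an assumed, workload-dependent fudge) for KV cache and buffers."""
    return params_b * 1e9 * bits / 8 * overhead / 2**30

# Model sizes mentioned in this thread, at the quants mentioned:
for params, bits in [(4, 8), (27, 8), (27, 4), (35, 6)]:
    print(f"{params}B @ {bits}-bit ~ {quant_footprint_gib(params, bits):.1f} GiB")
```

This roughly matches the observation above: a 27B model at 8-bit needs on the order of 30 GiB, beyond a 24GB 4090 but comfortable on a larger unified-memory machine.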


What matters for Qwen models, and for most/all local MoE models (i.e., where performance is the limit), is memory bandwidth. This goes for small models too. Here are the top Apple chips by memory bandwidth (and, to steal from clickbait: Apple definitely does not want you to think too closely about this):

M3 Ultra — 819 GB/s

M2 Ultra — 800 GB/s

M1 Ultra — 800 GB/s

M5 Max (40-core GPU) — 610 GB/s

M4 Max (16-core CPU / 40-core GPU) — 546 GB/s

M4 Max (14-core CPU / 32-core GPU) — 410 GB/s

M2 Max — 400 GB/s

M3 Max (16-core CPU / 40-core GPU) — 400 GB/s

M1 Max — 400 GB/s

Or, counting only portable/MacBook chips: M5 Max (top model, 64/128G), M4 Max (top model, 64/128G), M1 Max (64G). Everything else is slower for local LLM inference.

TLDR: An M1 Max is faster than all M5 chips, with the sole exception of the 40-GPU-core M5 Max, the top model, available only in 64G and 128G versions. Any M5 Pro (or any M* Pro, or M3/M2 Max chip) will be slower than an M1 Max for LLM inference, and any Ultra chip, even the M1 Ultra, will be faster than any Max chip, including the M5 Max (though you may want the M2 Ultra for bfloat16 support; it doesn't matter much for quantized models).
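A back-of-envelope rule of thumb shows why bandwidth dominates token generation: each generated token has to read every active weight from memory once, so decode speed is roughly capped at bandwidth divided by the byte size of the active weights. The sketch below is a crude upper bound (it ignores compute, caches, and KV reads), and the dense-27B-at-8-bit example is just illustrative.

```python
def est_decode_tps(bandwidth_gbs: float, active_params_b: float, bits: int) -> float:
    """Decode-speed ceiling: generating one token streams every active weight
    once, so tokens/s <= bandwidth / bytes of active weights. A rough upper
    bound that ignores compute, caches, and KV-cache reads."""
    bytes_per_token = active_params_b * 1e9 * bits / 8
    return bandwidth_gbs * 1e9 / bytes_per_token

# Hypothetical dense 27B model at 8-bit on a few of the chips listed above:
for name, bw in [("M1 Max", 400), ("M4 Max 40-core", 546), ("M3 Ultra", 819)]:
    print(f"{name}: ~{est_decode_tps(bw, 27, 8):.0f} t/s")
```

The same formula also shows why MoE models feel fast locally: only the active parameters per token count in the denominator, not the full parameter count.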


For comparison, most recent (consumer) NVIDIA GPUs released:

- 5050 - MSRP: 249 USD - 320 GB/s

- 5060 - MSRP: 299 USD - 448 GB/s

- 5060 Ti - MSRP: 379 USD - 448 GB/s

- 5070 - MSRP: 549 USD - 672 GB/s

- 5070 Ti - MSRP: 749 USD - 896 GB/s

- 5080 - MSRP: 999 USD - 960 GB/s

- 5090 - MSRP: 1999 USD - 1792 GB/s

The M3 Ultra seems to come close to a 5070 Ti, more or less.


You should really list the memory alongside the graphics cards, and the list above should include (unified) memory and prices as well, at the particular price points.


I mean, what I (and maybe others) was curious about was comparing it to the parent's post, which is all about memory bandwidth - hence the comparison.


But it doesn't matter if you have 1000GB/s of memory bandwidth if you only have 32GB of VRAM. Well, maybe for some applications it works out (image generation?), but it's not seriously competing with an Ultra with 128GB of unified memory, or even a Max with 64GB of unified memory.


> but it's not seriously competing with an Ultra with 128GB of unified memory, or even a Max with 64GB of unified memory.

No one is arguing that either, this sub-thread is quite literally about the memory bandwidth. Of course there are more things to care about in real-life applications of all this stuff, again, no one is claiming otherwise. My reply was adding additional context to the "What matters [...] is memory bandwidth" parent comment, nothing more, hence the added context of what other consumer hardware does in memory bandwidth.


If we are talking about Apple silicon, where we can configure the memory separately from the bandwidth (and the memory costs the same for each processor), we can say something like "it's all about bandwidth". If we switch to GPUs, where that is no longer true - NVIDIA won't let you buy a 5090 with more than 32GB of VRAM - then we aren't comparing apples to apples anymore.


A 10GB 3080 still beats even an M2 Ultra with 192GB... memory bandwidth is not the only factor.

https://github.com/XiongjieDai/GPU-Benchmarks-on-LLM-Inferen...


If the model is small enough to fit into 10GB of VRAM, the GPU can win.

But the bigger models are more useful, so that’s what people fixate on.


There is also prompt processing, which is compute-bound, and for agentic workflows it can matter more than token generation (tg), especially if the model is not of the "thinking" type.
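To see why prompt processing can dominate an agentic turn, split the turn's wall time into the two phases. The numbers below are assumed for illustration (a long context, a short tool-call reply, 500 t/s prompt processing, 25 t/s generation), not measurements of any particular setup.

```python
def turn_time_s(prompt_tokens: int, gen_tokens: int,
                pp_tps: float, tg_tps: float) -> tuple[float, float]:
    """Split one agent turn into prompt-processing and token-generation time."""
    return prompt_tokens / pp_tps, gen_tokens / tg_tps

# Illustrative (assumed) numbers: long agentic context, short tool-call reply.
pp, tg = turn_time_s(prompt_tokens=20_000, gen_tokens=300, pp_tps=500, tg_tps=25)
print(f"prompt: {pp:.0f}s, generation: {tg:.0f}s")  # prompt: 40s, generation: 12s
```

With a 20k-token context and a 300-token reply, the compute-bound prompt phase takes over three times as long as generation, even at a much higher token rate.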


I am 50 and have been coding since ~12. I started with an Apple II; during university I wrote my own editor in assembly for the BK-0010 (a Soviet computer); then came 30 years in computer networking, with some high-performance dataplane work more recently.

These last years it somehow felt like there’s nothing new anymore - the same 10 ideas being regurgitated with slight modifications. I tinkered with AI for the past 2 years, but it was mostly a “tool for writing boilerplate”. I tried a few ideas for agents but didn’t see how it could work.

That changed with Opus 4.6 and the subsequent wave of local models - now I try 10 ideas a day and it’s like magic! And if something doesn’t work - jumping into the code and debugging it is huge fun!

Understanding that the era of the almost-free cloud tokens might come to an end, I run my own harness pointing to my own GPUs running Qwen3.5-27B, and the last few days it has been very busy! :)

My harness doesn’t “pressure cook” since it doesn’t make sense to do that with only one GPU (besides many other reasons), it runs everything in a linear fashion, including subagents, and logs everything - reading the logs as they go by is another cool thing - sometimes I pick up interesting things from it !

The distribution of people’s moods related to AI seems indeed bimodal. And I feel lucky somehow ending up in the “enthusiastic” rather than “depressed” part of it. To the folks in the other one: I am sorry. I don’t know why it is this way. If I knew I might have given unsolicited advice.


So you’ve tried at least a hundred ideas by now, care to share fifty of them? I’m very curious as to what they are. Opus is too slow to even complete one idea per day for me, and that’s fine, I don’t have hundreds of them :)


I don’t have big ideas. Some of the more interesting ones that I ended up using but can’t share: a streaming radio for my MP3 collection (runs behind the VPN); a lightweight, self-contained WebRTC conference server for talking with my family; a process-level virtualization based on KVM.

Of the ones I can share:

Browser-based network tester using WebRTC unreliable data: https://netpoke.com - use the magic code “DEMO” to see what it’s about - the source is at https://github.com/ayourtch/netpoke

A port of the SOTA speech generation model from Python to Rust:

https://github.com/ayourtch/fish-audio-experiment

A study on LLM prompting techniques:

https://github.com/ayourtch-llm/kindness

My own coding agent that I use with my locally hosted LLM for experiments:

https://github.com/ayourtch-llm/apchat

Also LLM helped with a lot of code for my packet mangling library: https://github.com/ayourtch/oside - which, among other things, includes a now battle tested SNMPv3 stack.

A true “stochastic parrot” using hash tables: https://github.com/ayourtch/hashmem

These are the ones I remember. Feel free to scout my GitHub for more. Edit: And of course it goes without saying that not all of the ideas I try make it to GitHub. Many end up thrown away.
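For the curious: the generic idea behind a hash-table "stochastic parrot" is a word-level n-gram table - map each n-gram to the words seen to follow it, then walk the table sampling successors. This is a minimal sketch of that general technique, not necessarily how the hashmem repo implements it.

```python
import random
from collections import defaultdict

def train(text: str, order: int = 2) -> dict:
    """Map each n-gram of words to the list of words observed to follow it."""
    words = text.split()
    table = defaultdict(list)
    for i in range(len(words) - order):
        table[tuple(words[i:i + order])].append(words[i + order])
    return table

def babble(table: dict, n: int = 10, seed: int = 0) -> str:
    """Walk the table from a random starting n-gram, sampling successors."""
    rng = random.Random(seed)
    state = rng.choice(list(table))
    out = list(state)
    order = len(state)
    for _ in range(n):
        successors = table.get(tuple(out[-order:]))
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

t = train("the cat sat on the mat and the cat ran off the mat")
print(babble(t))
```

Every emitted word pair occurs somewhere in the training text, which is what makes the "parrot" label so literal here.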


Just use iSH and use the local terminal on the iPhone from which you can connect to the Mac terminal. Works well over tailscale, too.


How do I know iSH app isn’t exfiltrating data?


You don’t know that your C compiler isn’t doing that either.


Fixing a non-trivial bug is a great way to learn - assuming they don’t give up.

By virtue of generating subtly broken stuff, LLMs are well positioned to create very nice learning material.

Same thing about growing the project - having to deal with something too big for AI is a very valuable experience.

And, in my experience, some of the purely human-made codebases are strictly worse than LLM-made ones :-)


Isn’t that how a lot of us learned — by typing in code from the back of a magazine? Then spending hours trying to debug a typo somewhere.

I didn’t realize how close LLMs are to the old magazines. Let it give you a seed, then use that springboard to learn everything else.


I have bought more than 600 books over a decade or so.

But after they decided the ebooks were actually just a license to read, I did exactly the same as you, and now, rather than happily buying from them, I actively discourage everyone in my social circle from using Kindle.

I am not going back, whoever they decide to blame.


> But after they decided the ebooks were actually just license to read

They decided that when they launched the Kindle. It's always been that way.


No, it hasn't. Until very recently, their website said "Buy now with 1-Click", minus the new "By placing an order, you're purchasing a content license & agreeing to Kindle's Store Terms of Use." wording underneath it. The process was identical to buying a physical book: you give them money, and you end up with your own physical or electronic copy of it.

Any interpretation of that transaction as anything but a purchase of a copy is delusional. I couldn't care less what their ToS said about it, any more than I'd care what a sign on the wall of a bookstore said.


> No, it hasn't.

Yes, it has. They made it clear right when they launched the store.

> I couldn't care less what their ToS said about it

You're welcome to not care about whatever you feel - your concerns and reality are orthogonal.

This became big news a long time ago:

https://www.theguardian.com/technology/2009/jul/17/amazon-ki...


The linked article is about Amazon's having realized they had no right to sell the books they thought they had sold and reversing the transaction, not revoking a license to something they thought they had licensed to you.

You seem to be missing the importance of that nuance.


Sigh.

OK:

https://goodereader.com/blog/kindle/amazon-changes-licensing...

"Amazon has revised the text when purchasing a Kindle e-book on its online store. You do not own the book you bought but are licensing it. It used to say “By clicking on above button, you agree to Amazon’s Kindle Store Terms of Use.”"

...

"This is not a policy shift from Amazon for the US; they are more upfront about it now. Amazon has always licensed the digital content to users, so anything purchased does not mean the user owns it, they just bought a license"

As the article points out, the change in verbiage was because of a new California requirement that this should be made explicit. It was always a license. They merely changed the verbiage on the button to conform to state rules.

Edit: I have to say, after a bunch of rather pointless arguments today and yesterday on HN, it disappoints me that the average commenter is quick to jump to unsubstantiated conclusions. Both times the facts were trivial to look up.

Not the HN of yore.


I mean, you're citing goodereader.com as though that's somehow an authoritative source and not just a blog by a guy who likes ereaders, but has no special legal knowledge.

Much more useful would have been if you had linked to an archive of the original Kindle Store Terms of Use, which state:

> Use of Digital Content. Upon your payment of the applicable fees set by Amazon, Amazon grants you the non-exclusive right to keep a permanent copy of the applicable Digital Content and to view, use, and display such Digital Content an unlimited number of times, solely on the Device or as authorized by Amazon as part of the Service and solely for your personal, non-commercial use. Digital Content will be deemed licensed to you by Amazon under this Agreement unless otherwise expressly provided by Amazon.[0] (emphasis mine)

Notice that "or as authorized by Amazon" is part of the clause with "solely on the device," not a separate clause that somehow might be interpreted to apply to the "right to keep a permanent copy" part.

Does it also say that it is considered licensed to you? Sure. But the "license" is the "right to keep a permanent copy."

It's one thing for Amazon to say, "Shit, we sold you a book we weren't authorized to sell. We have to undo the whole transaction." It's quite another to do what the GGGGGGGP comment (I didn't count the G's) is complaining about and delete your permanent copy of a book for which they did validly sell you a license to keep a permanent copy.

Amazon has meaningfully changed the license agreement now. In 2025, it says:

> Use of Kindle Content. Kindle Content is licensed, not sold, to you by the Content Provider. Upon your download or access of Kindle Content and payment of any applicable fees (including applicable taxes), the Content Provider grants you subject to the terms of this Agreement, including without limitation those in “Changes to Service; Amendments” below, a non-exclusive right to view, use, and display such Kindle Content (for Subscription Content, only as long as you remain an active member of the underlying membership or subscription program), solely through Kindle Software or as otherwise permitted as part of the Service, solely on the number of Supported Devices specified in the Kindle Store, and solely for your personal, non-commercial use. Content Provider may include additional terms for use within its Kindle Content. Those terms will also apply, but this Agreement will govern in the event of a conflict. Some Kindle Content, such as interactive or highly formatted content, may not be available to you on all Kindle Software.[1]

They've eliminated the right to keep a permanent copy that was originally part of the license sold. That change matters. Deleting content sold under that license is a violation of the terms of the agreement on their part.

[0] https://web.archive.org/web/20110109000847/http://www.amazon...

[1] https://www.amazon.com/gp/help/customer/display.html?nodeId=...


> Yes, it has. They made it clear right when they launched the store.

No one except those who explicitly went looking for this knew it. It wasn't made clear in any way.

> This became big news a long time ago:

Speaking of orthogonal. I remember this well. It was a case where Amazon stole back books people had purchased. The core concern at the time wasn't that Amazon had revoked a license to read a book, but that they had deleted purchased books from users' collections.

But at the end of the day, for many years Amazon had an action button saying "Buy now with 1-Click" with no legal fiction disclaimer. The button was identical to what you'd see when buying a bag of cat food, DVD, or anything else you'd flat-out purchase from them.


I'm neither disputing the verbiage on the button, nor the ignorance of users. None of those affects the fact that you did not own the ebook - it was licensed to you.

What is silly is actually knowing the whole 1984 episode, and still believing you owned the books.


> "These books were added to our catalog using our self-service platform by a third-party who did not have the rights to the books," spokesman Drew Herdener told the Guardian. "When we were notified of this by the rights holder, we removed the illegal copies from our systems and from customers' devices, and refunded customers."

> Amazon refunded the cost of the books, but told affected customers they could no longer read the books and that the titles were "no longer available for purchase".

This has nothing to do with people's having bought a license to the books. It's about Amazon's never having had authorization from the publisher to sell the books. There is no reference at all to people's having licensed the books from Amazon. Amazon referred to people as having bought the books.


What do you do now? I’ve been buying physical books off of Abe Books—not a bad thing at all—but I’d like to use my jailbroken kindle again because the form factor is so convenient.



Buy DRM free when you can. Not only is this convenient for you but will hopefully help nudge the market. When you can't, buy the book from one of the easily cracked sources (Kobo, Google, Adobe DRM).

Or you can save yourself the bother of removing DRM by buying the book from wherever and then downloading a copy from Anna's.


Not the guy, but you can just buy your ebooks somewhere else and use Calibre to convert/send them to your Kindle.

I'm kinda cheeky and use Amazon's Send-to-Kindle service to send ebooks in EPUB format to my Kindle via wifi.


I do this as well, and if the site the book was downloaded from was part of the filename originally, I leave it in.


I try to buy physical books, and make an effort to buy them elsewhere, with AMZN being the reluctant last resort if I truly can’t find something. I don’t have a specific go-to place anymore.

Also, I reduced the buying pace - owning physical books takes up space, so the bar for getting something into the library is now much higher than before.


If you already bought them, just download them off anna's archive.


Use your local library?

I’m amazed to see so many comments focused on everything but libraries.


It’s a shift but I agree. I think we’re used to having instant access to what we want. Waiting 3 weeks on Libby is a change. I do think it’s been healthy and gives me something to look forward to!


Libraries are not great for in-demand books, tech books, or erotica.


Reading this article is especially amusing since this bit just hit the news as well:

https://www.business-standard.com/amp/world-news/amazon-euro...


I think it’s just some person clauding around, nothing to do with Anthropic. They also opened a massive PR against VPP with a bunch of stuff.


Kimi is noticeably better at tool calling than gpt-oss-120b.

I made a fun toy agent where the two models shoulder-surf each other and swap turns - either voluntarily, during a summarization phase, or forcefully if a tool-calling mistake is made - and Kimi ends up running the show much, much more often than gpt-oss.

And yes - it is very much fun to build those!
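The turn-swapping loop described above can be sketched in a few lines. This is a hypothetical reconstruction, not the actual agent: `call_model` and `run_tool` are stand-ins for a real LLM client and tool runner.

```python
# Hypothetical sketch of the "shoulder surfing" duel: two models take turns
# driving; a failed tool call forces a handover, a "yield" action hands over
# voluntarily (e.g., at a summarization phase).

def run_duel(models, call_model, run_tool, max_steps=20):
    driver = 0      # index of the model currently running the show
    history = []
    for _ in range(max_steps):
        action = call_model(models[driver], history)
        if action["type"] == "yield":            # voluntary handover
            driver = 1 - driver
            history.append({"event": "handover", "to": models[driver]})
        elif action["type"] == "tool":
            ok, result = run_tool(action)
            history.append({"event": "tool", "ok": ok, "result": result})
            if not ok:                           # forced handover on a bad call
                driver = 1 - driver
        else:                                    # "done"
            break
    return history
```

Counting how often each index holds `driver` over a run is then a direct measure of which model "runs the show".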

