Hacker News

Just a reminder that you can run a fairly capable model like qwen2.5:72b locally on a 128 GB M4 Max MacBook Pro.
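Rough arithmetic (a sketch, not a benchmark; assumes 4-bit quantization, which is typical for local runs, and a ballpark ~20% overhead figure) shows why a 72B-parameter model fits comfortably in 128 GB of unified memory:

```python
# Back-of-envelope memory estimate for a 72B-parameter model.
# Assumptions: 4-bit quantized weights (0.5 bytes/param) and
# roughly 20% extra for KV cache and runtime buffers.
params = 72e9
bytes_per_param = 0.5                          # 4-bit quantization
weights_gb = params * bytes_per_param / 1e9    # 36 GB of weights
total_gb = weights_gb * 1.2                    # ~43 GB with overhead
print(f"weights ~{weights_gb:.0f} GB, total ~{total_gb:.0f} GB")
# Unquantized fp16 weights (2 bytes/param, ~144 GB) would not fit,
# which is why quantization matters for this class of machine.
```

Under those assumptions the whole model plus working memory lands well under 128 GB, with room left for the OS and other apps.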


Or you could buy a slightly less insanely expensive model, split the savings in two, and get many many years of credits on OpenAI.


The thread is about ads in language model results. Do you think OpenAI is going to resist putting ads into their results for the next few years? I hope they do, but I wouldn’t say it’s a given. You definitely won’t get (intentionally injected) ads on a model you run locally.


Yeah, I don't believe they will run ads on the API.


"X is expensive, but you can get a Chinese knockoff for cheaper" is true in so many domains.

As for quality, it's usually much worse.


Have you used one of the larger qwen2.5's? I've found the quality to be pretty good.

In this case, I wouldn't say it's cheaper; the machine I'm talking about clocks in around $5k. But it's not going to start inserting ads into your results.


Lol who has a $5k computer lying around like that?


Well, I work on this stuff, but I’m mostly sharing this to spread awareness that you don’t need a multimillion dollar rack of nvidia gpu machines to do inference with surprisingly powerful models these days. Not that long ago, you’d need a much more expensive multi-kilowatt workstation to run this sort of thing at a useful speed.


This is Hacker News. It's a coin toss for every person reading this that they have a $5K computer lying around.



