Hacker News | arisAlexis's comments

Even the fact that you mentioned NFTs as a comparison hurts my mind.

I mean, it's an apt comparison, given that the Venn diagram between the pro-NFT hucksters and the pro-AI crowd is a circle. When you listen to people who were so publicly and embarrassingly wrong about the future try to sell you on their next hustle, skepticism is the correct posture.

We can't agree on anything if you think AI is a hustle. You are in for a world of surprise.

Nothing has ever burst when production can't meet demand

It always bursts when demand stops. Current demand is artificially pumped up and financially unsustainable, a.k.a. a bubble.

How is it artificially pumped? Everyone is at capacity. I'm using 2 pro subs per day, all my friends are using this new tech, and so on. If you are still writing code, you are smoked.


The free demand is increasing. With price increases hitting the cloud, will there still be demand?


At this point all the programmers will stop using AI, hence the bubble bursting.

What is the point of paying $5000/month to keep a job which pays $10000/month?


I mean, it will be the employers of those programmers, and at $5000/dev/month I expect businesses will start demanding very tangible returns from that spend. And as much as I love the tools, I don't think they're generating that much direct business value. It's very obviously not turning $140k devs into $200k devs.

All exponential growth curves are actually S curves, before the inflection point.
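To make the claim concrete: a logistic (S) curve is numerically indistinguishable from an exponential well before its inflection point. A minimal sketch, where the capacity `K`, rate `r`, and inflection time `t0` are purely illustrative values, not anything measured in this thread:

```python
import math

def logistic(t, K=1e6, r=0.5, t0=30.0):
    # S-curve: carrying capacity K, growth rate r, inflection point at t0
    return K / (1 + math.exp(-r * (t - t0)))

# Well before the inflection point, the step-to-step growth factor is
# essentially the constant e^r -- it looks like pure exponential growth:
early = [logistic(t + 1) / logistic(t) for t in range(5)]

# Near and past the inflection point, the growth factor decays toward 1 (flat):
late = [logistic(t + 1) / logistic(t) for t in range(40, 45)]
```

Sampling only the `early` region gives no hint that the curve saturates, which is exactly the point: from inside the growth phase you cannot tell an exponential from an S curve.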

The demand being: "trust me bro we will pay you when we're profitable"

All it'll take is one company going bust, Oracle for example, for the whole thing to deflate.

Plus, you're factually wrong: it happened with fiber optics and railroads.


>All it'll take is one company going bust, Oracle for example, for the whole thing to deflate

Provided, of course, that the US administration is incorruptible enough not to bail out these tech companies with taxpayer money when they eventually fail.

But when you see the connection between Larry Ellison and Trump, you realize the whole "free market competition" thing is a scam for suckers. It always has been; it's just that now they don't even bother to hide it behind complex facades and shell games to garner a veneer of legitimacy. It's straight-up banana-republic-style corruption.


> Provided, of course, that the US administration is incorruptible enough not to bail out these tech companies with taxpayer money when they eventually fail.

I'd love them to try that, because virtually no one on any part of the political spectrum would get behind it besides the most corrupted and soulless ghouls masquerading as politicians.


I'd love them to try getting caught on audio asking a governor to find votes, and campaign on pardoning people convicted of treason because virtually no one on any part of the political spectrum would get behind that besides the most corrupted and soulless ghouls masquerading as politicians.

I could have substituted many other things up there. I was very naive when I thought getting caught on audio talking about grabbing women by the pussy and being able to do whatever you want to them because you’re a celebrity was one of those things too.


I dunno. Between following the party king and "we must bail them out to avoid total economic collapse" (real or imagined), I wouldn't be betting against bailouts.

> besides the most corrupted and soulless ghouls masquerading as politicians

Considering how many politicians in Congress are influenced by corporate lobbyists and AIPAC, I think the situation is much more bleak.


And run an outdated model for 3 years while progress is exponential? What is the point of that?

When output is good enough, other considerations become more important. Most people on this planet cannot afford even an AI subscription, and the cost of tokens is prohibitive for many low-margin businesses. Privacy and personalization matter too, and data sovereignty is a hot topic. Besides, we already see how focus has shifted to orchestration, which can be done on CPU and is cheap; software optimizations may compensate for hardware deficiencies, so it's not going to be frozen. I think the market for local inference hardware is bigger than the one for clouds, and it's going to repeat the Android vs iOS story.

This is the same justification that was used to ship the (now almost entirely defunct) NPUs on Apple and Android devices alike.

The A18 iPhone chip has 15B transistors for the GPU and CPU; the Taalas ASIC has 53B transistors dedicated to inference alone. If it's anything like NPUs, almost all vendors will bypass the baked-in silicon to use GPU acceleration past a certain point. It makes much more sense to ship a CUDA-style flexible GPGPU architecture.


Why are you thinking about phones specifically? Most heavy users are on laptops and workstations. On smartphones a few more innovations might be necessary (low-latency AI computing on the edge?).

Many laptops and workstations also fell for the NPU meme, which in retrospect was a mistake compared to reworking your GPU architecture. Those NPUs are all dark silicon now, just like these Taalas chips will be in 12-24 months.

Dedicated inference ASICs are a dead end. You can't reprogram them, you can't finetune them, and they won't keep any of their resale value. Outside cruise missiles it's hard to imagine where such a disposable technology would be desirable.


Most consumers do not care about reprogramming or fine-tuning and have no idea what an NPU is. For many (including, specifically, those who still mourn dead AI companions killed by the 4o switch), long-term stability is much more important than the benchmark performance of an evergreen frontier model. If Taalas can produce a good hardwired model at scale at a consumer-market price point, a lot of people will just drop their AI subscriptions.

> a lot of people will just drop their AI subscriptions.

For a 2.5 kW server? I don't see it happening; your money and electricity are better spent on CUDA compute.


>For a 2.5 kW server?

I don't see any reason why this should not drop to 100-300 W at peak, with maybe 100 Wh of daily usage on smartphones.


Taalas is more expensive than NPUs, not less. You have a GPU/NPU at home; just use it.

I feel weird defending Taalas here, but this argument is quite strange: of course it is more expensive now. That is irrelevant; all innovations are expensive at an early stage. The question is what this technology will cost tomorrow. Can it do for consumers what NPUs could not, offering good UX and quality of inference at a reasonable price?

It will always be more expensive.

More expensive than what? How much does equivalent low-latency inference cost today?

I think you completely miss the UX point here. In 1997 CRT screens were mainstream, LCDs were at an early stage, and phones had antennas. In 2007 the iPhone, with its LCD touch screen, changed the UX of computing forever. The tech we see today is a precursor of the technology that will dominate tomorrow. Today local inference is painful and expensive, and it consumes a lot of energy. NPUs/GPUs solve nothing here, and they will always be less efficient than hardwired models, by design. So the only question is when the consumer performance expectation for open-weight models will cross the price curve of specialized chips. It may happen earlier than for generic NPUs.


Is progress still exponential? It feels like it's flattening to me. It is hard to quantify, but if you could get Opus 4.2 to work at the speed of the Taalas demo and run locally, I feel like I'd get an awful lot done.

Bake in a Genius Bar employee, trained on your model's hardware, whose entire reason for existence is to fix your computer when it breaks. If it takes an extra 50 cents of die space but saves Apple a dollar of support costs over the lifetime of the device, it's worth it.

Yeah, the space moves so quickly that I would not want to couple the hardware with a model that might be outdated in a month. There are some interesting talking points, but a general-purpose programmable ASIC makes more sense to me.

It won’t stay exponential forever.

> what is the point of that

Planned obsolescence? /s

Jokes aside, they can make the "LLM chip" removable. I know almost nothing is replaceable in MacBooks, but this could be an exception.


Amazon went all in on robotics, they have their own AI training chips, they're building their own Starlink-style satellites, and they run one of the 3 clouds.

Like he understands tech

Only it's much more exponential

Totally wrong. He has been voicing this for ages and for specific reasons.

How do you know the price of a unit?

I remembered $1m from when I was in their booth at SC24, but when I just looked, I was wrong. It is worse...

https://www.datacenterdynamics.com/en/news/cerebras-unveils-...


You have no idea of the value this brings. ASML machines cost dozens of times more. So?

Ok, tell me the value.

They were afraid of the untested tech, but it looks like a leap in speed now.

This is nonsense; what do you mean? Mistral uses Cerebras for their LLMs as well. [0]

It's certainly not "untested".

[0] https://www.cerebras.ai/blog/mistral-le-chat


Tested at Mistral’s scale is a very different thing to tested at OpenAI’s scale.

The scale at which it has been "tested" clearly convinced Meta (beyond OpenAI's scale) [0], Hugging Face [1], Perplexity [2], and unsurprisingly many others in the AI industry [3] that require more compute than GPUs can deliver.

So labelling it "untested", even with Meta as a customer at a scale that exceeds OpenAI's, is quite nonsensical and frankly an uninformed take.

[0] https://www.cerebras.ai/customer-spotlights/meta

[1] https://www.cerebras.ai/news/hugging-face-partners-with-cere...

[2] https://www.cerebras.ai/press-release/cerebras-powers-perple...

[3] https://www.cerebras.ai/customer-spotlights


Meta didn't offer it. They offered the free Llama version on their cloud. Maybe now Zuck will be convinced to buy their chips, though.

Denial, everyone. Amazon will have the same profits running on AI and robots with minimal expenses. All the other companies will follow. Wake up to reality.

