Hacker News | alfalfasprout's comments

Cerebras is a totally different product though. They can (theoretically) run any frontier model provided it gets compiled a certain way. Like a wafer scale TPU.

This, by contrast, uses hardwired weights, with on-die SRAM holding the KV cache, for example. It's WAY more power efficient and faster; the tradeoff is that it's hardwired.
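To see why keeping the KV cache in on-die SRAM is such a constraint, a back-of-envelope sizing helps. This is an illustrative sketch only; the model shape below (32 layers, 8 KV heads, head dim 128) is an assumption, roughly Llama-3-8B-like, not a claim about any specific chip or model.

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_elem=2):
    # Total KV-cache footprint for one sequence:
    # 2x for the separate K and V tensors, per layer, at fp16 (2 bytes).
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical 8B-class model at an 8K context window:
size = kv_cache_bytes(layers=32, kv_heads=8, head_dim=128, seq_len=8192)
print(f"{size / 2**20:.0f} MiB per sequence")  # prints "1024 MiB per sequence"
```

Even with grouped-query attention, a single long-context sequence lands in the gigabyte range, which is why on-die SRAM (typically tens to hundreds of MB per die) forces hard choices about context length and batch size.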

Still, most frontier models are "good enough" that an obscenely fast version would be a major selling point.


I've been using this for a few months now and it's amazing. Crazy fast and the vast majority of the time I get the result I want in one shot.

They have prebuilt binaries now for fff

The thing is, the barrier isn't near zero. The time to reach an MVP has just decreased. But you still very much need expertise, strategy, etc. to deliver something worthwhile. The bar has just increased.

You'll see on HN itself how many people want to work on this surveillance. How many people want all white collar work eliminated by AI. How many people want a quick buck at anyone's expense, the morality be damned.

Money and only money talks nowadays. It's sad.


This is easily the most spot-on comment I've read on HN in a long time.

The humility of understanding what you don't know, and the limits that puts on you, is out the window for many people now. I see, time and time again, the idea that "expertise is dead." Yet it's crystal clear that it's not. But those people can't understand why.

It all boils down to a simple reality: you can't understand why something is fundamentally bad if you don't understand it at all.


> Similar to the media, I've picked up on vibes from academia that have a baseline AI negative tilt.

The media is extremely pro-AI (and a quick look at their ownership structure gives you a hint as to why). You seem to be projecting your own biases here, no?

And how would those LLMs learn? How would you learn to ask the right questions that further scientific research?


I'm writing a blog post on this very thing actually.

Outsourcing learning and thinking is a double-edged sword that comes back to bite you later. It's tempting: you might already know a codebase well and set agents loose on it, knowing enough to evaluate the output well. This is the experience that has impressed a few vocal OSS authors, like antirez for example.

Similarly, you see success stories with folks making something greenfield. Since you've delegated decision making to the LLM and gotten a decent looking result it seems like you never needed to know the details at all.

The trap is that your knowledge of why you've built what you've built the way it is atrophies very quickly. Then suddenly you become fully dependent on AI to make any further headway. And you're piling slop on top of slop.


> It is simply not cost effective any more to write code manually vs. proper use of agents, and developers who resist that will find it increasingly hard to stay employed.

In practice, this isn't bearing out at all, either among my own peers or among peers at other tech companies. A blanket statement like this adds nothing to the conversation.


Agreed, it's funny how people have taken unrestrained use of AI as an axiom at this point. There very much is still time to significantly control it + regulate it. Is there enough appetite by those in power (across the political spectrum)? Right now I don't think so.


>There very much is still time to significantly control it + regulate it.

There's also huge financial momentum shoving AI down the world's throat. Even if AI were proven a failure today, it would still be pushed for years on that momentum alone.

I just don't see how that can be reversed.

