
Do you need any more signs, or is it clear now?

For me, the Meta storm of billions in hiring was enough to start selling any tech-giant-related stock.

It is about to crash, harder than ever.



I thought that when Apple reached a market cap of 1 trillion, but here we are today.... I have since abandoned any such prediction, even if I share your feeling.


Hard numbers for market cap are a difficult measure - Apple's price-to-earnings ratio is currently 33, which is high but not excessively so. I.e., Apple has the revenue to back its market cap.

The issue with high salaries is that there is a latent assumption that these people provide multiples of that in additional value. That they are that much smarter than everyone else.

This is simply not true, and will lead to a competitive disadvantage.


Why aren’t they smarter than others?


Probabilistically improbable - just like the world's most important cryptographic schemes rely on the low probability of a hash collision.
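
(Not from the parent - just a sketch to put a number on the analogy. The "birthday bound" is the standard way to estimate how improbable a hash collision is; the function below is a hypothetical illustration assuming a 256-bit hash such as SHA-256.)

    import math

    def collision_probability(k: int, n_bits: int) -> float:
        """Approximate probability of at least one collision among k random
        hashes drawn from a space of 2**n_bits values (the 'birthday bound')."""
        return -math.expm1(-k * (k - 1) / (2 * 2 ** n_bits))

    # Even a trillion SHA-256 hashes leave a collision chance of roughly 4e-54.
    print(collision_probability(10 ** 12, 256))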

But feel free to prove me wrong - I am amenable.


I guess there's a lot of loaded language, so it's not really worth debating. But I would never claim they're ALL smarter than EVERYONE else.

But I would expect them to be smart and have relevant experience that everyone else doesn’t have, and I expect the companies offering these salaries aren’t doing it for fun but because they believe their IP or ability to generate IP is very hard to come by, and it’s better to monopolize that talent than let competitors do so. If they could hire 10 people equally as good for 1/10 the price then they would do so. But I’m sure there’s also a large dose of gambling too; even in sports highly anticipated freshman drafts can turn into duds.


> If they could hire 10 people equally as good for 1/10 the price then they would do so.

I think this is where the misunderstanding is. In this context it is not 10 times as much salary - and even at 10x it is already highly improbable that a single person provides as much value as 10 other highly motivated candidates.

You have to increase by orders of magnitude.
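
(A purely hypothetical back-of-the-envelope to make the "orders of magnitude" point concrete - the figures below are my own assumptions, not numbers from the article:)

    # Assumed figures, for illustration only (not from the article):
    typical_senior_comp = 500_000       # hypothetical "normal" senior total comp per year
    reported_ai_package = 100_000_000   # hypothetical order-of-magnitude AI package per year

    implied_multiple = reported_ai_package / typical_senior_comp
    print(implied_multiple)  # 200.0 -> the hire would need to out-produce ~200 strong engineers to break even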

Remember that these threads exist in the context of the posted article.


The fact is that the incumbents with all their money are in a good position to defend against anything and counterattack.

When OpenAI was making waves the first time and Google launched their neutered, incapable competitor, I thought it was "over" for Google - why would anyone use search anymore (apart from the 1% of use cases where it gives better results faster), and clearly they were incapable of building good new products anymore…

and now they are there with the best LLMs and they are at the top of the pack again.

Billions of dollars in the bank, great developers, good connections to politicians and institutions mean that you are hard to replace even if you fumble it a couple of times.


It's because they printed trillions of dollars, so there is a lot of money in the system. I mean debt.


> It is about to crash, harder than ever.

It is indeed; those people hired at those salaries are not going to "produce" more than the people hired at normal salaries.

Because what we have now is "good enough", so getting a 10x better LLM isn't going to produce a 10x increase in revenue (never mind profit).

The problem is not "We need a better LLM" or "We need cheaper/faster generation". It's "We don't know how to make money off of this".

That doesn't require engineers who can create the next-generation SOTA in AI; it requires business people who can spot solutions that simply need tokens.


That's similar to what people on HN said a decade or two ago when Google bought YouTube and Facebook bought Instagram and WhatsApp for billions.


It might be time to sell your USD too, while you're at it. Don't think it won't take it all with it.

EUR:USD has been rising for a reason.


Or it might go up.


Combined with the Trump economy, it's going to be interesting. I pulled out at the end of January when they were actually going forward with tariffs in the most stupid way possible.


Where would one put their money, though? It's such a weird economy, especially with the expected decrease in the younger population.


Before an imminent recession you might want to focus on funds that primarily cover sectors that enjoy steady demand even during crisis, like utilities, consumer staples, healthcare, maybe some hedge against inflation like precious metals. I would avoid tech and luxuries and would definitely avoid crypto also. There is no historical data to show how it would perform during a serious recession (Bitcoin was basically born during the last one) but I doubt it would be pretty.


European defense stocks seem like a pretty good bet right now.


Unfortunately "defense" almost always seems like a good bet


Stock buybacks and cryptocurrency are ways to circumvent the Fed's monopoly on currency. The US is not a single monetary economy anymore, it's several. None of the economic analysis the institutions know works anymore.

We're sailing uncharted waters, all bets are off.


> It is about to crash, harder than ever.

and then immediately bounce back to higher than it was before


What does Meta's hiring have to do with a crash? If anything, it shows an increase because of the amount of investment being put into it.


Funny, there are trillions of dollars in the span of two years literally pointing to the writing on the wall, and you're so arrogant and blinded by cope that you can't see it. You legacy engineers really are something else.


You have exactly the same level of conviction toward an unknowable outcome. I think both of you would be better served by finding the middle ground instead of subscribing to a false dichotomy of boom or bust.

I think the biggest source of confusion here is that there are really two games being played: the money game and the technology game. Investments in AI are going to be largely driven by speculation on their monetary outcome, not their technological outcome. Whether or not the technology survives the Venture Capital Gauntlet, the investment bubble could still pop, with only the businesses that have real business models surviving. Heaps of people lose their shirts to the tune of billions, yet we still have an AI-powered future of some kind.

All this to say, you can both be certain AI is a valuable technology and also believe the economics around it right now are not founded in a clear reality. These are all bets on a future none of us can be sure of.


You can absolutely be sure of market forces not destroying established behemoths. It simply doesn't happen frequently. Inertia is a real thing. Look at Uber, Tesla, etc. I'm not saying there won't be a bust for many fledgling AI companies, though - in fact I'm certain there will be.

But thinking Tech Giants are going to crash is woefully ignorant of how the market works and indicates a clear wearing of blinders. And it's a common take among coders who feel the noose tightening and who are the types of people led by their own fear. And I find that when you mix that with arrogance, these three traits often correlate with older generations of software engineers who are poor at adapting to the new technology. The ones who constantly harp on how AI is full of mistakes and disregard that humans are as well. The ones who insist on writing even more than 70% of their own code rather than learning to guide new tools granularly. It's a take that nobody should entertain or respect.

As for your point about a 'future none of us can be sure of', I'll push back on that: it is not clear how AGI or ASI will come about, i.e., what architecture will underpin it. However, it is absolutely clear that AI-powered coding will continue to improve, that algorithmic progress can and will be driven by AI coders, and that that will lead to ASI.

The only way to not believe that is to think there is a special sauce behind consciousness. And I tend to believe in scientific theory, not magic.

That is why there is so much VC. That is why tech giants are all racing. It isn't a bet. It is a race to a visible, clear goal of ASI that again, it takes blinders to not see.

So while AI is absolutely a bubble, this bubble will mark the transition to an entirely new economic system, society, world, etc. (and flip a coin on whether any of us survive it lol, but that's a whole separate conversation)


> However - it is absolutely clear that AI powered coding will continue to improve...

Based on what precedent?


The current trend of continual improvement in LLM coding ability - solving previously unseen problems, handling larger codebases, operating for longer periods of time - plus improved agentic scaffolding.

The reward-verifier compatibility of programming and RL.
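
(To make "reward-verifier compatibility" concrete - this is my own minimal sketch, not something from the comment: programming gives RL a cheap, automatic reward signal because generated code can simply be run against tests.)

    import os
    import subprocess
    import sys
    import tempfile

    def verifier_reward(candidate_code: str, test_code: str) -> float:
        """Return 1.0 if the generated code passes the tests, else 0.0.
        A binary, automatically checkable reward like this is what makes
        programming such a natural fit for reinforcement learning."""
        with tempfile.TemporaryDirectory() as tmp:
            path = os.path.join(tmp, "candidate.py")
            with open(path, "w") as f:
                f.write(candidate_code + "\n\n" + test_code + "\n")
            result = subprocess.run([sys.executable, path],
                                    capture_output=True, timeout=10)
        return 1.0 if result.returncode == 0 else 0.0

    # Toy usage: the model's output is rewarded only if the assertions hold.
    print(verifier_reward("def add(a, b):\n    return a + b",
                          "assert add(2, 3) == 5"))  # 1.0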

Do you have a stronger precedent for that not being the case?


Is it your view that the improvements have been accelerating or constant?

In my view, improvements have been becoming both less frequent and less impressive.


Accelerating. Below is a list of the SOTAs over time (with some slight wiggle room between similar-era models):

gpt4 | 3/2023

gpt4-turbo | 11/2023

opus3 | 3/2024

gpt4o | 5/2024

sonnet3.5 | 6/2024

o1-preview | 9/2024

o1 | 12/2024

o3-minihigh | 1/2025

gemini2pro | 2/2025

o3 | 4/2025

gemini2.5pro | 4/2025

opus4 | 5/2025

??? | 8/2025

This is not to mention the miniaturization and democratization of intelligence in the smaller models, which has also been impressive.

I'd say this shows that improvements are becoming more frequent.

---

Each wave of models was a significant step above what came previously. One needs only to step back a generation to be reminded of the intelligence differential.

Some notable differences have been o3-mini-high's and gemini2.5's ability to spit out 1-3k LOC (lines of code) with accurate alterations (most of the time). Though better prompting should generally be used to avoid doing this, the ability is impressive.

Context length with gemini 2.5 pro's intelligence is incredible. To load 20k+ LOC of a project and receive a targeted code change that implements a perfect update is ridiculous.

The amount of dropped imports and improper syntax has dramatically reduced.

I'd say this shows improvements are becoming more impressive.

---

Also note the timespan.

We are only 25 months into the explosion kicked off by GPT-4.

We are only 12 months into the reasoning paradigm.

We have barely scratched the surface of agentic tooling and scaffolding.

There are countless architectural improvements and alternatives in development and research.

Infrastructure buildouts and compute scaling are also chugging along, allowing faster training, faster inference, faster testing, etc. etc.

---

This all paints a picture of acceleration in both the pace and depth of capability.


You claim to believe in scientific theory and not magic, but you are asserting many things without evidence.


If you want to be more specific, I'd be happy to supply evidence for any assertions I made.


Maybe it's the legacy capitalists that are really something else?




