
Tell me one unique skill, from any of those individuals you quoted, that is worth even 0.01% of $5 billion, for a nonexistent company with a nonexistent product.


It's not a particular skill, it's the experience of having worked with certain people at well-funded and rapidly growing companies.


LOL. So now it's not even their own skill but the "experience of having worked with certain people at well-funded and rapidly growing companies" that would make a nothingburger worth $12 billion.

I am sure there must be a better argument than just that. Because I have "worked with certain people at well-funded and rapidly growing companies" a lot, and I'm worth nothing whatsoever.


I could be a famous actor: when the camera is on and you need to smile, you smile; when you need to be sad, you make a sad face. What's the big deal, right?


They’ve proven they can launch SOTA models. There are only a handful of people in the world who have that track record.

They know OpenAI’s complete history and roadmap.

They can get a call with any AI researcher on earth.


LLMs are a commodity now. It is all about capital. DeepSeek and Grok proved that.

It’s not Klingon cloaking tech.

With minor variations, it is all Transformers doing autoregressive next-token prediction on text: self-attention, residual connections, layer norm, positional encodings (RoPE/ALiBi), optimized with AdamW or Lion. Training scales with data, model size, and batch size, using LR schedules and distributed parallelism (FSDP, ZeRO, TP). Inference relies on KV caching and probabilistic sampling (temperature, top-k/top-p).

Most differences are in scale, data quality, and marginal architectural tweaks.
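The sampling step at the end of that pipeline really is that mundane; a minimal temperature + top-k sampler over raw logits can be sketched in a few lines (illustrative only; the function and parameter names here are my own, not from any particular framework):

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, top_k=50):
    """Pick a next-token id from raw logits using temperature + top-k."""
    logits = np.asarray(logits, dtype=np.float64) / temperature
    # Keep only the top_k highest-scoring candidate tokens.
    top_ids = np.argsort(logits)[-top_k:]
    top_logits = logits[top_ids]
    # Softmax over the surviving candidates (shifted for numerical stability).
    probs = np.exp(top_logits - top_logits.max())
    probs /= probs.sum()
    # Draw one token id according to the renormalized distribution.
    return int(np.random.choice(top_ids, p=probs))
```

Lower temperature sharpens the distribution (more deterministic output); top-k truncates the long tail of unlikely tokens before renormalizing.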


Meta had the scale and whiffed so badly the entire Llama team disbanded and Zuck has been recruiting new people with 9-figure offers. Capital is essential, but the people deciding when to launch the training run still matter.


Please investigate the team she has assembled and it will be clear $5B isn't outside the realms of reality.


Looking at the team, it's clear it's a bunch of brilliant but undisciplined cats. They will be difficult to herd, but for what the real plan is here, acquisition, that clearly does not matter.

They will build a couple of tools, then another panicked company like Apple, Facebook, or AWS, needing to justify the millions spent on AI with nothing to show for it, will acquire them for billions. When somebody points out they don't make money and have no product, somebody here will point to the "team", as you just did :-)

But since we are talking about $12 billion, let's play the VC and do the due diligence on the leader first?

Reporting from The Optimist by Keach Hagey and other open sources, plus the famous thread here when Sam Altman was fired from OpenAI, reveals that Mira Murati had a central role in Sam Altman's November 2023 firing.

For VCs, this is a masterclass in corporate survival and a massive red flag. Evidence shows Murati was the primary architect of the case against Altman:

- Collected "screenshots from Murati's Slack channel" documenting alleged toxic behavior

- Wrote private memos questioning Altman's leadership

- Initiated board contact through Ilya Sutskever

- Sent "hefty PDF files of evidence" to board members via self-destructing emails

Then, when employees revolted, fearing lost equity, she immediately switched sides, signed the letter demanding Altman's return, and later claimed she had "fought the actions aggressively."

She survived by reading the room perfectly:

- Became interim CEO during the crisis

- Positioned herself as essential to Altman's return

- Exited in September 2024 just before the for-profit restructuring

- Immediately raised $2B for her new startup, suggesting she had it planned all along.

Investment Implications:

This reveals someone who will systematically undermine leadership while maintaining plausible deniability, then switch sides when politically convenient. Remember Paul Graham's famous quote about Sam Altman? That he "could be parachuted into an island full of cannibals and come back in 5 years and be king"? Murati might be even more astute.

She went from Intern → Tesla PM → Leap Motion → OpenAI VP → CTO → interim CEO → $2B startup founder, all with zero AI research background.

Her OpenAI trajectory shows world-class political instincts: join during the post-Musk power vacuum (2018), build alliances with both sides, orchestrate a coup when convenient, flip sides when it fails, exit perfectly timed before the restructuring.

For VCs:

Whether you see this as exceptional political skill or dangerous executive behavior depends on your risk tolerance.

Let's just hope the $10 million the government of Albania, the poorest nation in Europe, invested in this has been secured with proper share rights. That could make for some awkward reactions back home.


I wasn't referring to Murati (although she seems capable).


Their head of AI alignment clearly has no idea how to approach alignment, as you can see here in this 30-minute ramble into nothing on the subject.

At correct time stamp: https://youtu.be/Wo95ob_s_NI?t=1040

This is in contrast to the published vision of Thinking Machines Lab.


John is not exactly used to interviews.

I was more concretely referring to the level of talent in the engineering team, for example Lilian Weng and Horace He.

Horace can probably produce $50M of revenue personally per year.


Going just from published research, Lilian Weng's focus seems to be LLM agents, safety, and alignment: how models are used, guided, and evaluated, not how they're built.

It seems Horace He focuses on deep learning systems and compiler optimization, improving the performance of frameworks like PyTorch.

While both are clearly highly capable, and perhaps capable of moving into other areas, neither, judging from published papers alone, seems to have published work on core LLM architectures or foundational model training that could scientifically advance the performance of current models.

Their contributions seem to be on enhancing usability and efficiency, not the underlying design or scaling of modern LLMs.

If that is the core team, I would worry whether they have the researchers capable of producing a breakthrough worthy of the billions committed. But maybe that is why they are still hiring?



