Tell me one unique skill, from any of those individuals you quoted, that is worth even 0.01% of $5 billion, for a non-existent company with a non-existent product.
LOL. So now it's not even their own skill but the "experience of having worked with certain people at well-funded and rapidly growing companies" that would value a nothingburger at $12 billion.
I am sure there must be a better argument than just that. Because I have "worked with certain people at well-funded and rapidly growing companies" a lot and I'm worth nothing whatsoever.
LLMs are a commodity now. It is all about capital. DeepSeek and Grok proved that.
It’s not Klingon cloaking tech.
With minor variations, it is Transformers doing autoregressive next-token prediction on text: self-attention, residuals, layer norm, positional encodings (RoPE/ALiBi), optimized with AdamW or Lion. Training scales with data, model size, and batch size, using LR schedules and distributed parallelism (FSDP, ZeRO, TP). Inference relies on KV caching and probabilistic sampling (temperature, top-k/top-p).
Most differences are in scale, data quality, and marginal architectural tweaks.
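To make the "probabilistic sampling" part concrete, here is a minimal sketch of temperature plus top-k sampling over raw logits, the decoding step every one of these models shares. Function and argument names are my own illustration, not any specific lab's API; NumPy stands in for a real inference stack.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=None, rng=None):
    """Pick a token id from raw logits using temperature and top-k filtering."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Temperature rescales the logits: <1 sharpens, >1 flattens the distribution.
    logits = np.asarray(logits, dtype=np.float64) / temperature
    if top_k is not None:
        # Keep only the k highest-scoring tokens; mask the rest to -inf.
        kth = np.sort(logits)[-top_k]
        logits = np.where(logits < kth, -np.inf, logits)
    # Numerically stable softmax over the surviving logits.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

With `top_k=1` this degenerates to greedy decoding (always the argmax token); top-p (nucleus) sampling works the same way but truncates by cumulative probability mass instead of a fixed k. The point stands: none of this is secret sauce.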
Meta had the scale and whiffed so badly that the entire Llama team disbanded and Zuck has been recruiting replacements with nine-figure offers. Capital is essential, but the people deciding when to launch the training run still matter.
Looking at the team, it's clear it's a bunch of brilliant but undisciplined cats. They will be difficult to herd, but for what the real plan here is (acquisition), that clearly does not matter.
They will ship a couple of tools, then another panicked company like Apple, Facebook, or AWS, needing to justify the millions spent on AI with nothing to show for it, will acquire them for billions. When somebody points out that they don't make money and have no product, somebody here will point to the "team", as you just did :-)
But since we are talking about $12 billion, let's play VC and do the due diligence on the leader first.
Reporting from The Optimist by Keach Hagey and other open sources, plus the famous thread here when Sam Altman got fired from OpenAI, reveals that Mira Murati had a central role in Altman's November 2023 firing.
For VCs, this is a masterclass in corporate survival and a massive red flag. Evidence shows Murati was the primary architect of the case against Altman:
- Wrote private memos questioning Altman's leadership
- Initiated board contact through Ilya Sutskever
- Sent "hefty PDF files of evidence" to board members via self-destructing emails
Then, when employees revolted, fearing lost equity, she immediately switched sides, signed the letter demanding Altman's return, and later claimed she "fought the actions aggressively."
She survived by reading the room perfectly:
- Became interim CEO during the crisis
- Positioned herself as essential to Altman's return
- Exited in September 2024 just before the for-profit restructuring
- Immediately raised $2B for her new startup, suggesting she had planned it all along
Investment Implications:
This reveals someone who will systematically undermine leadership while maintaining plausible deniability, then switch sides when politically convenient. Remember Paul Graham's famous quote about Sam Altman? That he "could be parachuted into an island full of cannibals and come back in 5 years and be king"? Murati might be even more astute.
She went from Intern → Tesla PM → Leap Motion → OpenAI VP → CTO → interim CEO → $2B startup founder, all with zero AI research background.
Her OpenAI trajectory shows world-class political instincts: join during the post-Musk power vacuum (2018), build alliances with both sides, orchestrate a coup when convenient, flip sides when it fails, exit perfectly timed before the restructuring.
For VCs:
Whether you see this as exceptional political skill or dangerous executive behavior depends on your risk tolerance.
Let's just hope the $10 million the government of Albania, the poorest nation in Europe, invested in this has been secured with proper share rights. That could make for some awkward reactions back home.
Their head of AI alignment clearly has no idea how to approach alignment, as you can see here in this 30-minute ramble into nothing on the subject.
Going just from published research, Lilian Weng's focus seems to be LLM agents, safety, and alignment: how models are used, guided, and evaluated, not how they're built.
It seems Horace He focuses on deep learning systems and compiler optimization, improving the performance of frameworks like PyTorch.
While both are clearly highly capable, and perhaps capable of moving into other areas, judging only from published papers, neither seems to have published work on core LLM architectures or foundational model training that could bring about a scientific advance in the performance of current models.
Their contributions seem to be on enhancing usability and efficiency, not the underlying design or scaling of modern LLMs.
If that is the core team, I would worry whether they have the researchers capable of producing a breakthrough worthy of the billions committed. But maybe that is why they are still hiring?