> Btw as an aside, we didn’t announce on Friday because we respected the IMO Board's original request that all AI labs share their results only after the official results had been verified by independent experts & the students had rightly received the acclamation they deserved
> We've now been given permission to share our results and are pleased to have been part of the inaugural cohort to have our model results officially graded and certified by IMO coordinators and experts, receiving the first official gold-level performance grading for an AI system!
I think this is them not being confident enough before the event: they didn't want to risk being shown up by a competitor's better result. By staying private, they could simply publish nothing if it didn't work out.
This reminds me of when OpenAI made a splash (ages ago now) by beating the world's best Dota 2 teams using a RL model.
...Except they had to substantially bend the rules of the game (limiting the hero pool, completely changing or omitting certain mechanics) to pull this off. So they ended up beating some human Dota pros at a pseudo-Dota custom game, which was still impressive, but a much watered-down result relative to the marketing hype.
It does seem like Money+Attention outweigh Science+Transparency at OpenAI, and this has always been the case.
Limiting the hero pool was fair, I'd say. If you can prove RL works on one hero, it's fairly certain it would work on other heroes. With all of them at once you might run into problems, and you'd need orders of magnitude more compute anyway, so I'd say that was fair game.
It's not even close to the same game as Dota. Limiting the hero (and item) pool so drastically locks off many strategies and counters. It's a bit hard to explain if you haven't played, but full Dota has many more tools and much more creativity than the reduced version on display. The behavior does not evidently "scale up", in the same way that the current SotA of AI art and writing won't evidently replace top-level humans.
I'd never say it's impossible, but the job wasn't finished yet.
That's akin to saying it's okay to remove Knights, or castling, or en passant from chess because they have a complicated movement mechanic that the AI can't handle as well.
Hero drafting and strategy is a major aspect of competitive Dota 2.
When your goal is to control as much of the world's money as possible, preferably all of it, then everyone is your enemy, including high school students.
I am still surprised many people trust him. The board's (justified) decision to fire him was so awfully executed that it led to him gaining even more slack.
Maybe not a popular sentiment here on HN, but I cancelled my Kagi subscription (9+ months) just recently. Increasingly, most of my queries have gone through LLMs, and Google search is just fine (and even better for restaurants, places, and the like). I don't think the improved search experience is worth the subscription anymore.
It’s not a euphemism - every outage, including the 99.9% that don’t end up on HN gets a postmortem document written about it, which is almost always a fascinating discussion of the technical, cultural and organisational situation that led to an unexpected bad thing happening.
Even a few years ago senior management knew to stay the fuck out except for asking for more info.
I think it's most illustrative to see the sample battles (H2H) that LMArena released [1]. The outputs of Meta's model are too verbose and too 'yappy', IMO. And looking at the verdicts, it's no wonder people are discounting LMArena rankings.
> This will mark the first experimental model with higher rate limits + billing. Excited for this to land and for folks to really put the model through the paces!
Traditionally at Google experimental models are 100% free to use on https://aistudio.google.com (this is also where you can see the pricing) with a quite generous rate limit.
This time, the Googler says: “good news! you will be charged for experimental models, though for now it’s still free”
Right but the tweet I was responding to says: "This will mark the first experimental model with higher rate limits + billing. Excited for this to land and for folks to really put the model through the paces!"
I assumed that meant there was a paid version with a higher rate limit coming out today
> The bottleneck then becomes how to self-host the finetuned model in a way that's cost-effective and scalable
It's not actually that expensive or hard. For narrow use cases, you can produce 4-bit quantized fine-tunes that perform as well as the full model, and hosting the 4-bit quantized version can be done at relatively low cost. You can use an A40 or RTX 3090 on Runpod for ~$300/month.
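As a rough sanity check (my own back-of-the-envelope numbers, not from the comment above), here's why a 24 GB card like the RTX 3090 comfortably fits a 4-bit quantized mid-size model; the 20% overhead factor is an assumption covering KV cache, activations, and CUDA context:

```python
def quantized_vram_gb(params_b: float, bits: int, overhead: float = 1.2) -> float:
    """Rough VRAM estimate: model weights stored at `bits` per parameter,
    inflated by an assumed ~20% overhead for KV cache, activations, and
    the CUDA context. Returns gigabytes."""
    weight_gb = params_b * 1e9 * bits / 8 / 1e9  # GB for the weights alone
    return weight_gb * overhead

# A hypothetical 13B-parameter model at 4 bits: 6.5 GB of weights,
# ~7.8 GB with overhead -- well within a 3090's 24 GB.
need = quantized_vram_gb(13, 4)
print(f"{need:.1f} GB")  # ~7.8 GB
print(need < 24)         # fits on a 24 GB RTX 3090
```

Even a 30B-class model at 4 bits (~18 GB with the same overhead assumption) squeezes onto a single 24 GB card, which is what makes the single-GPU Runpod setup viable.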
Normally, yes. But there are a couple of rendering modes with these frameworks. In this case, the rendering is most likely 'hybrid': some routes are statically pre-rendered, some are served via SSR. You'd need a JS server for the SSR, of course.
Other Sources:
[0]: https://aistudio.google.com/status
[1]: https://www.reddit.com/r/GeminiAI/comments/1mycmtk/google_cl...
[2]: https://www.reddit.com/r/GeminiAI/comments/1myg04q/gemini_25...