Hacker News — ekojs's comments

Seems pretty widespread. We got mistakenly charged ~$800 over the weekend.

Other Sources:

[0]: https://aistudio.google.com/status

[1]: https://www.reddit.com/r/GeminiAI/comments/1mycmtk/google_cl...

[2]: https://www.reddit.com/r/GeminiAI/comments/1myg04q/gemini_25...


> Btw as an aside, we didn’t announce on Friday because we respected the IMO Board's original request that all AI labs share their results only after the official results had been verified by independent experts & the students had rightly received the acclamation they deserved

> We've now been given permission to share our results and are pleased to have been part of the inaugural cohort to have our model results officially graded and certified by IMO coordinators and experts, receiving the first official gold-level performance grading for an AI system!

From https://x.com/demishassabis/status/1947337620226240803

Was OpenAI simply not coordinating with the IMO Board then?


Yes, there have been multiple (very big) hints dropped by various people that they had no official cooperation.


I think this is them not being confident enough before the event, so they didn't want to risk being shown a worse result than competitors. By staying private, they could simply not publish anything if it didn't work out.


They shot themselves in the foot by not showing the confidence that Google did.


As not-so-subtly hinted at by Terry Tao.

It's a great way to do PR, but it's a garbage way to do science.


True, but OpenAI definitely isn't trying to do public research on science; they are all about money now.


That's not a contentious statement. It's still a pathetic way to behave, at a kids' competition no less.


This reminds me of when OpenAI made a splash (ages ago now) by beating the world's best Dota 2 teams using a RL model.

...Except they had to substantially bend the rules of the game (limiting the hero pool, completely changing or omitting certain mechanics) to pull this off. So they ended up beating some human Dota pros at a pseudo-Dota custom game, which was still impressive, but a much watered-down result beneath the marketing hype.

It does seem like Money+Attention outweigh Science+Transparency at OpenAI, and this has always been the case.


Limiting the hero pool was fair, I'd say. If you can prove RL works on one hero, it's fairly certain it would work on other heroes. All of them at once? Maybe you'd run into problems, but you'd also need orders of magnitude more compute, so I'd say that was fair game.


It's not even close to the same game as Dota. Limiting the hero (and item) pool so drastically locks off many strategies and counters. It's a bit hard to explain if you haven't played, but full Dota has many more tools and much more creativity than the reduced version on display. The behavior does not evidently "scale up", in the same way that the current SotA of AI art and writing won't evidently replace top-level humans.

I'd never say it's impossible, but the job wasn't finished yet.


That's akin to saying it's okay to remove knights, castling, or en passant from chess because they have complicated movement mechanics that the AI can't handle as well.

Hero drafting and strategy is a major aspect of competitive Dota 2.


> Was OpenAI simply not coordinating with the IMO Board then?

You are still surprised by sama@'s asinineness? You must be new here.


When your goal is to control as much of the world's money as possible, preferably all of it, then everyone is your enemy, including high school students.


How dare those high school students use their brains to compete with ChatGPT and deny the shareholders their value?


I am still surprised many people trust him. The board's (justified) decision to fire him was so awfully executed that it led to him having even more slack.


Maybe not a popular sentiment here on HN but I cancelled my Kagi subscription (9+ months) just recently. Increasingly, most of my queries/search have been through LLMs and Google search is just fine (and even better for restaurants, places, and the like). I don't think the improved search experience is worth the subscription anymore.


In Kagi, you can just add a "?" to your query and get an instant answer, a la LLMs.


Or !ai to route it to kagi.com/assistant, where your default model/agent will respond using Kagi search results.


https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1S...

> Multiple GCP products are experiencing impact due to Identity and Access Management Service Issue

IAM issue huh. The post-mortem should be interesting at least.


Ha. With all this Soviet-style euphemism, I'd rather read The Onion instead.


It’s not a euphemism - every outage, including the 99.9% that don’t end up on HN gets a postmortem document written about it, which is almost always a fascinating discussion of the technical, cultural and organisational situation that led to an unexpected bad thing happening.

Even a few years ago senior management knew to stay the fuck out except for asking for more info.


Super duper frustrating having the status page stay green. Why can't Google do this properly?


Those responsible have been sacked.


Those responsible for sacking the people who have just been sacked, have been sacked.


I share the sentiment. I think we will only be using Next.js for static sites/prebuilt SPA in the future.


Actually, Next.js with the App Router (and with Pages being pushed out) is really bad for SPAs. See this thread: https://github.com/vercel/next.js/discussions/64660


> I think we will only be using Next.js for static sites/prebuilt SPA in the future.

With what's mentioned in the blog post, I would not use it even for static builds.


You probably have better alternatives for that: Astro, React Router 7, TanStack.


I think it's most illustrative to look at the sample battles (H2H) that LMArena released [1]. The outputs of Meta's model are too verbose and too 'yappy' IMO. And looking at the verdicts, it's no wonder people are discounting LMArena rankings.

[1]: https://huggingface.co/spaces/lmarena-ai/Llama-4-Maverick-03...


In fairness, 4o was like this until very recently. I suspect it comes from training on CoT data from larger models.


Yep, it's clear that many wins are due to Llama 4's lowered refusal rate, which is an effective form of Elo hacking.
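Arena-style leaderboards are fit from pairwise battle outcomes (LMArena has historically reported Elo-style ratings; their exact fitting method is their own). A minimal Elo update sketch shows the mechanism being pointed at here: if one model answers a prompt that its opponent refuses, that battle counts as a win, so a lower refusal rate inflates the rating regardless of answer quality. The function name below is mine, for illustration only.

```python
def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """One standard Elo update.

    score_a is 1.0 if A wins, 0.5 for a draw, 0.0 if A loses.
    Returns the new (rating_a, rating_b) pair.
    """
    # Expected score for A given the current rating gap.
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# Two equally rated models; A "wins" because B refused to answer.
a, b = elo_update(1500.0, 1500.0, score_a=1.0)
print(a, b)
```

Over many battles, those refusal-driven wins compound into a rating gap that says nothing about how good the winning answers actually were.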


> This will mark the first experimental model with higher rate limits + billing. Excited for this to land and for folks to really put the model through the paces!

From https://x.com/OfficialLoganK/status/1904583353954882046

The low rate-limit really hampered my usage of 2.0 Pro and the like. Interesting to see how this plays out.


Any word on what that pricing is? I can't seem to find it


Traditionally at Google experimental models are 100% free to use on https://aistudio.google.com (this is also where you can see the pricing) with a quite generous rate limit.

This time, the Googler says: “good news! you will be charged for experimental models, though for now it’s still free”


Right but the tweet I was responding to says: "This will mark the first experimental model with higher rate limits + billing. Excited for this to land and for folks to really put the model through the paces!"

I assumed that meant there was a paid version with a higher rate limit coming out today


The parent Twitter post mentions:

    Available as experimental and for free right now in Google AI Studio + API, with pricing coming very soon!
And the pricing page [1] still does not show 2.5 yet.

[1]: https://ai.google.dev/gemini-api/docs/pricing


I expect this might be pricier. Hoping it's not unusably expensive.


Currently free, but only 50 requests/day.


Any idea what the RPM is for this model?


https://aistudio.google.com/prompts/new_chat says 2 RPM for the free tier, but also lists 5, which might be the RPM once they start charging.


> The bottleneck then becomes how to self-host the finetuned model in a way that's cost-effective and scalable

It's not actually that expensive or hard. For narrow use cases, you can produce 4-bit quantized fine-tunes that perform as well as the full model. Hosting the 4-bit quantized version can be done at relatively low cost: an A40 or RTX 3090 on Runpod runs ~$300/month.
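A back-of-envelope sketch of that economics (the throughput and utilization numbers below are illustrative assumptions, not benchmarks — only the ~$300/month GPU rental figure comes from the comment above):

```python
# Rough cost per million generated tokens for self-hosting a
# 4-bit quantized model on a single rented GPU.
# MONTHLY_GPU_COST is from the comment; the other numbers are assumptions.

MONTHLY_GPU_COST = 300.0   # ~A40 / RTX 3090 on Runpod, USD/month
TOKENS_PER_SECOND = 40.0   # assumed throughput for a 4-bit ~7B-13B model
UTILIZATION = 0.25         # assumed fraction of the month spent serving

seconds_per_month = 30 * 24 * 3600
tokens_per_month = TOKENS_PER_SECOND * seconds_per_month * UTILIZATION
cost_per_million_tokens = MONTHLY_GPU_COST / (tokens_per_month / 1_000_000)

print(f"{tokens_per_month / 1e6:.0f}M tokens/month")
print(f"${cost_per_million_tokens:.2f} per 1M tokens")
```

Under these assumptions the box serves on the order of 26M tokens a month, so even at modest utilization the per-token cost is in the same ballpark as hosted APIs, with the upside that the fine-tune stays private.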


Normally, yes. But there are a couple of rendering modes with these frameworks. In this case, the rendering is most likely 'hybrid': some routes are statically pre-rendered, some are served via SSR. You'd need a JS server for the SSR, ofc.

