
I think people also expect models to be optimised over time. For example, the 5x drop in the cost of o3 was probably due to some optimisation on OpenAI's end (although I'm sure they had business reasons for dropping the price as well).

Small models have also been improving steadily in ability, so it is feasible that a task that needs Claude Opus today could be handled by Sonnet in a year's time. This trend of model "efficiency" will compound with compute getting cheaper.
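
As a rough back-of-the-envelope sketch (the numbers here are hypothetical, not taken from anywhere): if a smaller model can do the same task at half the price, and the underlying compute gets, say, 30% cheaper over the same period, the two factors multiply rather than add:

    # Hypothetical numbers, purely for illustration
    efficiency_factor = 0.5   # same task handled by a smaller/cheaper model
    compute_factor = 0.7      # assumed 30% drop in underlying compute price
    combined = efficiency_factor * compute_factor
    print(combined)           # 0.35 -> roughly a 65% overall cost reduction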

That said, those efficiency gains would probably be quickly eaten up by increased appetite for higher-performance, bigger models.
