
Isn't that what the mixture-of-experts trick that all the big players use is? A bunch of smaller, tightly focused models.


Not exactly. MoE uses a router to select a small subset of experts (feed-forward sub-networks) per token at each MoE layer, rather than separate standalone models. That makes inference faster because only a fraction of the weights are used per token, but all of the experts still have to be loaded, so it needs roughly the same amount of RAM.
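Roughly, per token the router scores all experts and only the top-k actually run. A minimal numpy sketch of one MoE layer (the sizes d_model, n_experts, top_k are made up for illustration, not any particular model's):

    import numpy as np

    # Hypothetical sizes, purely illustrative.
    d_model, n_experts, top_k = 64, 8, 2
    rng = np.random.default_rng(0)

    # All expert weights must be resident in memory even though each token
    # only activates top_k of them -- hence the RAM point above.
    experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
    router = rng.standard_normal((d_model, n_experts))

    def moe_layer(x):
        # x: (d_model,) activation for a single token
        logits = x @ router                      # router score for every expert
        top = np.argsort(logits)[-top_k:]        # indices of the top_k experts
        weights = np.exp(logits[top])
        weights /= weights.sum()                 # softmax over the chosen experts
        # Only top_k expert matmuls run per token (the speed win),
        # but every expert matrix above stays loaded.
        return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

    out = moe_layer(rng.standard_normal(d_model))
    print(out.shape)  # (64,)

So the compute per token scales with top_k, while memory scales with n_experts.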



