
LLM inference is fine on ROCm: llama.cpp and vLLM both have very good ROCm support.
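To give a sense of what that looks like in practice, here's a minimal sketch using the standard vLLM Python API, which runs unchanged on a ROCm build of vLLM (facebook/opt-125m is just the small example model from the vLLM docs):

    from vllm import LLM, SamplingParams

    # Same Python code on a ROCm build of vLLM as on a CUDA build;
    # the backend difference is handled at install/build time.
    llm = LLM(model="facebook/opt-125m")  # small example model
    params = SamplingParams(temperature=0.8, max_tokens=64)

    for out in llm.generate(["The CUDA moat is"], params):
        print(out.outputs[0].text)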

LLM training is also mostly fine; I haven't run into any issues yet.
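For illustration: ROCm builds of PyTorch expose the torch.cuda namespace (backed by HIP), so an ordinary training loop like the sketch below runs unchanged on an AMD GPU. Dummy model and data, just to show that nothing ROCm-specific appears in the code:

    import torch
    import torch.nn as nn

    # On ROCm builds of PyTorch, torch.cuda is backed by HIP,
    # so the usual "cuda" device string selects the AMD GPU.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    model = nn.Linear(128, 10).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(32, 128, device=device)         # dummy batch
    y = torch.randint(0, 10, (32,), device=device)  # dummy labels

    for step in range(10):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()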

Most of the CUDA moat comes from people repeating what they heard 5-10 years ago.


