
I saw there was already an answer in your issue, but since you plan on doing a lot of inferencing on your GPU, I'd highly recommend you consider dual-booting into Linux. It turns out exllama merged ROCm support last week, and it's more than 2X faster than the CLBlast code. A 13b GPTQ model at full context clocks in at 15 t/s on my old Radeon VII. (Rumor has it that ROCm 5.6 may add Windows support, although it remains to be seen what exactly that entails.)
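For anyone curious what that looks like in practice, here's a minimal sketch of loading a 4-bit GPTQ model with exllama's Python API as it stood around then (ExLlamaConfig/ExLlama/ExLlamaCache/ExLlamaGenerator, run from a checkout of the repo). The model path is hypothetical; adjust to wherever your quantized weights live:

    import os, glob
    # Run from a checkout of https://github.com/turboderp/exllama;
    # these modules live at the repo root.
    from model import ExLlama, ExLlamaCache, ExLlamaConfig
    from tokenizer import ExLlamaTokenizer
    from generator import ExLlamaGenerator

    # Hypothetical path to a 4-bit GPTQ LLaMA-13b checkpoint
    model_dir = "/models/llama-13b-4bit-128g"

    config = ExLlamaConfig(os.path.join(model_dir, "config.json"))
    config.model_path = glob.glob(os.path.join(model_dir, "*.safetensors"))[0]

    model = ExLlama(config)                     # load quantized weights onto the GPU
    tokenizer = ExLlamaTokenizer(os.path.join(model_dir, "tokenizer.model"))
    cache = ExLlamaCache(model)                 # KV cache sized to the model's context
    generator = ExLlamaGenerator(model, tokenizer, cache)

    generator.settings.temperature = 0.7
    generator.settings.top_p = 0.9

    print(generator.generate_simple("The Radeon VII was", max_new_tokens=128))

Note that nothing here is AMD-specific: the ROCm build of PyTorch exposes the same torch.cuda API, so the same script runs unchanged on NVIDIA and AMD cards once you're on Linux.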

