ryao | 80 days ago | on: Making AMD GPUs competitive for LLM inference (202...
That is a GH200, and it is likely due to an amd64 dependency in vLLM.