H8crilA | 12 hours ago | on: DeepSeekMath-V2: Towards Self-Verifiable Mathemati...
How do you run this kind of model at home? On a CPU, on a machine with about 1TB of RAM?
pixelpoet | 12 hours ago
Wow, it's 690GB of downloaded data, so yeah, 1TB sounds about right. Not even my two Strix Halo machines paired can do this, damn.
Gracana | 12 hours ago
You can do it slowly with ik_llama.cpp, lots of RAM, and one good GPU. Also regular llama.cpp, but the ik fork has some enhancements that make this sort of thing more tolerable.
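For illustration, a RAM-plus-one-GPU run with ik_llama.cpp uses the same CLI as upstream llama.cpp: quantize the model to GGUF, keep most layers in system RAM, and offload what fits to the GPU. The model filename and all values below are placeholders, not tested settings for this model.

```shell
# Sketch only: hypothetical GGUF path; tune layer/thread counts to your hardware.
./llama-cli \
  -m ./deepseekmath-v2-q4_k_m.gguf \
  --n-gpu-layers 8 \
  --threads 32 \
  --ctx-size 4096 \
  -p "Prove that sqrt(2) is irrational."
```

Even a 4-bit quant of a ~690GB model still needs several hundred GB of memory, so expect low tokens-per-second throughput on CPU-bound layers.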
bertili | 11 hours ago
Two 512GB Mac Studios connected with Thunderbolt 5.