Hacker News

CuDNN is only for Nvidia GPUs, and those machines generally have decent-sized disks and decent network connections, so nobody cares about a few GBs of libraries. There are alternatives to CuDNN with much smaller binaries. Maybe they can match or beat it, maybe not, depending on your model and hardware. But you'll have to do your own work to switch, since most people are happy enough with CuDNN for now.

The real problem with deep learning on Nvidia is the Linux driver situation. Ugh. Hopefully one day they will come to their senses.



It's not just disk size. It's also memory usage and loading speed.

Yes, I agree about the driver situation.


The disk size of a shared library is not indicative of RAM usage or loading speed. Shared libraries are memory-mapped with demand paging: only the parts of the library that are actually touched get loaded into RAM, one page at a time.
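A minimal sketch of the demand-paging point, using Python's mmap module. The 100 MB sparse file here is a stand-in for a large shared library (the size and file are made up for illustration): mapping it is essentially free, and touching one byte faults in a single page, not the whole file.

```python
import mmap
import os
import tempfile

# Create a 100 MB sparse file to stand in for a large shared library.
# (Hypothetical stand-in; a real .so would be mapped by the dynamic loader.)
size = 100 * 1024 * 1024
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(size)  # reserves the length without writing data
    path = f.name

with open(path, "rb") as f:
    # Mapping establishes address space only; no pages are resident yet.
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # Reading one byte faults in exactly one page (typically 4 KiB),
    # not the full 100 MB.
    first_byte = mm[0]
    mm.close()

os.remove(path)
print(first_byte)  # 0: sparse-file pages read back as zeros
```

The same mechanism applies when the dynamic loader maps libcudnn: the multi-GB file contributes to virtual address space, but resident memory grows only with the kernels actually executed.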



