
The problem is that TensorFlow is an umbrella name for a bunch of related technologies: a matrix-computation engine, a graph-definition language, a distributed graph-execution engine, ML algorithm libraries, and ML training libraries. On top of that, it's extremely poorly documented. At the end of the day, anything beyond the most trivial use turns out to be incompatible with something else (this operation isn't implemented for TPUs or GPUs, this API doesn't work with that API), and most development is cut-and-paste trial and error. Then you go to read the source, but creative Python renaming and importing leads you on a multi-hour wild goose chase.

If you switch to PyTorch, what are you going to use for prod deployment? Is there any way to use TPUs?




> If you switch to PyTorch, what are you going to use for prod deployment? Is there any way to use TPUs?

PyTorch has an optional XLA device that lets you use TPUs: https://github.com/pytorch/xla
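For the prod-deployment half of the question, one common route is TorchScript: script or trace the model, save it as a self-contained artifact, and load it from a serving process (including from C++ via libtorch, with no Python dependency). A minimal sketch; the model, layer sizes, and filename here are made up for illustration:

```python
import torch

class Net(torch.nn.Module):
    """Toy model standing in for whatever you trained."""
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = Net().eval()

# Compile to TorchScript (torch.jit.trace is the alternative for
# models whose control flow scripting can't handle, and vice versa).
scripted = torch.jit.script(model)

# Serialized artifact; loadable from Python or from C++ via libtorch.
scripted.save("net.pt")

loaded = torch.jit.load("net.pt")
with torch.no_grad():
    out = loaded(torch.randn(1, 4))
```

The TPU story (torch_xla above) is separate from this: XLA targets training/inference on TPU hosts, while TorchScript targets packaging a trained model for serving.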



