Indeed, this seems closely related, thanks for the pointer!

Unfortunately, I'm not deep enough into the topic to understand what their contribution to the theory side is (they provide some Supplementary Material in [INN Supp]). In the discussion section of the Integral Neural Networks (INN) paper, there's this paragraph about an operator-learning publication:

"In [24] the authors proposed deep neural networks with layers defined as functional operators. Such networks are designed for learning PDE solution operators, and its layers are continuously parameterized by MLPs only along the kernel dimensions. A re-discretization was investigated in terms of training on smaller data resolution and testing on higher input resolution. However, the proposed framework in [24] does not include continuous connections between filters and channels dimensions."

Also, the weight permutation used to perform resampling on pretrained networks in INNs seems to be novel? And I guess it doesn't hurt that they're bringing new eyeballs to the topic by providing examples with common networks and a PyTorch implementation.
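For anyone curious what that re-sampling might look like mechanically: if you treat a pretrained weight tensor as samples of a smooth function along the channel axes, you can interpolate it to a new width. Below is a rough PyTorch sketch; the function name and the plain linear interpolation are mine, and it deliberately skips the channel permutation step, which (as far as I can tell) is the actual novel part, since interpolation only makes sense once neighboring channels vary smoothly:

    import torch
    import torch.nn.functional as F

    def resample_channels(weight, new_in, new_out):
        """Rough sketch: treat a pretrained conv weight (out, in, k) as
        samples of a continuous function along the channel axes and
        re-sample it to a new width. The INN paper's procedure first
        permutes channels into a smooth order; this sketch omits that."""
        out_ch, in_ch, k = weight.shape
        # interpolate along the "in" axis: (out, k, in) -> (out, k, new_in)
        w = F.interpolate(weight.permute(0, 2, 1), size=new_in,
                          mode="linear", align_corners=True)
        # interpolate along the "out" axis: (new_in, k, out) -> (new_in, k, new_out)
        w = F.interpolate(w.permute(2, 1, 0), size=new_out,
                          mode="linear", align_corners=True)
        # back to (new_out, new_in, k); rescale so the layer's output
        # magnitude stays roughly constant when in-channels change.
        return w.permute(2, 0, 1) * (in_ch / new_in)

So something like resample_channels(conv.weight.data, 48, 96) would widen a layer from its pretrained shape, with no guarantee this matches what the INN code actually does.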

[INN Supp]: https://openaccess.thecvf.com/content/CVPR2023/supplemental/...

[24]: Zongyi Li, Nikola Kovachki, et al. Neural Operator: Graph Kernel Network for Partial Differential Equations. arXiv preprint arXiv:2003.03485, 2020. https://arxiv.org/abs/2003.03485
