We are pleased to announce that our paper, “Fast training of accurate physics-informed neural networks without gradient descent” by Chinmay Datar, Taniya Kapoor, Abhishek Chandra, Qing Sun, Erik Lien Bolager, Iryna Burak, Anna Veselovska, Massimo Fornasier, and Felix Dietrich, has been accepted to the International Conference on Learning Representations (ICLR) 2026. The paper presents “Frozen-PINN”, a novel Physics-Informed Neural Network (PINN) based on the principle of space-time separation. It leverages random features instead of training with gradient descent and incorporates causality by construction. Our key contributions are:
- Frozen-PINNs break the training and accuracy bottlenecks of PINNs, making them rapidly trainable and highly accurate.
- We benchmark Frozen-PINNs against state-of-the-art PINNs on eight challenging PDE problems.
- We use solution data from previous time-steps to compute efficient random features.
- We experiment with an SVD layer as the network's last hidden layer, which speeds up training by a factor of up to 75.
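To give a flavor of the core idea, here is a minimal, hypothetical sketch of random-feature "training": hidden-layer weights are drawn at random and frozen, and only the outer linear layer is determined, in closed form, by a single least-squares solve rather than gradient descent. This toy example fits a 1D function; it is an illustration of the general principle, not the Frozen-PINN method itself (all names, scales, and the target function are our own choices).

```python
import numpy as np

# Toy random-features regression: fit u(x) = sin(pi * x) on [-1, 1]
# with frozen random hidden weights and one linear solve.
rng = np.random.default_rng(0)
n_features = 200

# Hidden layer: weights and biases are sampled once and never updated.
W = rng.normal(0.0, 4.0, size=(1, n_features))
b = rng.uniform(-np.pi, np.pi, size=n_features)

def features(x):
    # tanh random-feature map; x has shape (n_points,)
    return np.tanh(x[:, None] @ W + b)

x = np.linspace(-1.0, 1.0, 400)
y = np.sin(np.pi * x)

# "Training" is a single least-squares solve for the outer layer.
Phi = features(x)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

err = np.max(np.abs(Phi @ coef - y))
print(f"max abs error: {err:.2e}")
```

In a PINN setting the least-squares system would instead be built from PDE residuals and initial/boundary conditions, but the appeal is the same: no iterative optimization over the hidden weights.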
arXiv: arxiv.org/abs/2405.20836