
Solving PDEs using neural networks?

Hello everyone!

I am trying to play with solving PDEs using neural networks. As I understand it, this is an example of self-supervised learning: the model (the neural network itself) computes its own derivatives with respect to the input variables, substitutes them into the given PDE, and then tries to satisfy the equation by adjusting its weights.
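To make that concrete, here is a minimal sketch of the idea in TensorFlow (my own toy illustration, not code from any particular solver): a small network is trained to satisfy u'(x) = u(x) with u(0) = 1, whose exact solution is e^x.

```python
import tensorflow as tf

# Small fully connected net approximating u(x).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(1),
])
optimizer = tf.keras.optimizers.Adam(1e-3)

@tf.function
def train_step(x):
    with tf.GradientTape() as weight_tape:
        with tf.GradientTape() as x_tape:
            x_tape.watch(x)
            u = model(x)
        du_dx = x_tape.gradient(u, x)        # derivative of the net w.r.t. its input
        residual = du_dx - u                 # residual of u'(x) = u(x)
        u0 = model(tf.zeros((1, 1)))         # boundary condition u(0) = 1
        loss = tf.reduce_mean(residual**2) + tf.reduce_mean((u0 - 1.0)**2)
    grads = weight_tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for step in range(2000):
    x = tf.random.uniform((128, 1))          # random collocation points in [0, 1]
    loss = train_step(x)
```

It is exactly this "differentiate the network with respect to its input inside the training loss" step that I cannot reproduce in WL.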

But I have run into a problem: I can't find a way to "tell" the network how to compute those derivatives. I found the NetPortGradient symbol, but there is no example showing whether it can be accessed from within the network (for example, via a NetGraph object).

What do you think, is it possible to construct a neural-network-based PDE solver using built-in functionality of the Wolfram Language?

POSTED BY: Nikolay Shilov
5 Replies

Thank you all for your replies!

Yes, my question arose exactly from Python: a colleague of mine is working on an ANN-based eikonal equation solver implemented in TensorFlow, and I wanted to play with this problem using WL.

Particular thanks for Sam's reply: an interesting approach to the problem!

POSTED BY: Nikolay Shilov

Not sure if this is useful to you guys: Teaching neural networks to solve partial differential equations https://community.wolfram.com/groups/-/m/t/1379466

POSTED BY: Sam Carrettie
Posted 3 years ago

That post turns the task into a supervised learning problem: it builds a dataset of PDE solutions generated by NDSolve and then trains a network on that dataset.

I am not sure how that method compares to the techniques described here (paper1) or here (paper2), but it could be an acceptable approach. The author of that post reports an 8% error, though that figure may simply reflect the model's hyperparameter choices.

Is NDSolve fast enough at solving high-dimensional differential equations to build a training dataset of thousands of solutions? If NDSolve took long enough, it would probably be faster to directly optimize a black-box PINN that trains on the PDE residual through the network's own gradients (as described in the papers linked above), rather than on a distance to the 'correct' NDSolve solution.
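To make the tradeoff explicit, here is a rough sketch of the two loss formulations side by side (TensorFlow; `model` is any network mapping x to u(x), `u_ndsolve` stands for precomputed NDSolve outputs, and the residual is written for a generic first-order equation u'(x) = f(x, u) — all the names here are my own illustration):

```python
import tensorflow as tf

def supervised_loss(model, x, u_ndsolve):
    # Distance to the 'correct' solutions precomputed with NDSolve;
    # requires generating (and paying for) a large NDSolve dataset up front.
    return tf.reduce_mean((model(x) - u_ndsolve) ** 2)

def residual_loss(model, x, f):
    # PINN-style loss: differentiate the network w.r.t. its input and
    # penalize the PDE residual u'(x) - f(x, u) directly; no NDSolve data
    # needed, only cheap random collocation points x.
    with tf.GradientTape() as tape:
        tape.watch(x)
        u = model(x)
    du_dx = tape.gradient(u, x)
    return tf.reduce_mean((du_dx - f(x, u)) ** 2)
```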

I am very new to this area, so any insight regarding the tradeoffs would be greatly appreciated.

POSTED BY: Alec Graves
Posted 3 years ago

I tried for a couple of hours, but I have not found a solution.

You could export the model and use the MXNet Python API to do the training. Perhaps you could still use the Python integration (ExternalEvaluate) to send the equations from the Wolfram Language to Python.
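For example (a hedged sketch; I have not verified the exact export file names or the input port name WL uses), loading a WL-exported net into MXNet's Gluon API might look like this:

```python
import mxnet as mx

# Assumption: the net was exported from WL with something like
#   Export["net.json", net, "MXNet"]
# which writes a symbol file ("net.json") plus a parameter file
# ("net.params"); those file names and the input name "Input" are guesses.
net = mx.gluon.nn.SymbolBlock.imports("net.json", ["Input"], "net.params")

# Forward pass on dummy data; training would then go through mx.autograd
# and a gluon.Trainer as with any other Gluon block.
x = mx.nd.random.uniform(shape=(1, 1))
print(net(x))
```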

POSTED BY: Alec Graves
Posted 3 years ago

Also, it seems MXNet has had a high-priority ticket for supporting second-derivative computation open for several years now, but no progress appears to have been made. Perhaps this is not even possible with the MXNet backend Wolfram is using.

In theory, the reply to this other issue on the MXNet GitHub shows how such a computation would be done in MXNet, provided all the layers in your network have higher-order derivatives implemented.

TensorFlow supports it, and I believe PyTorch has a similar mechanism for computing higher-order derivatives as well. This page shows the TF method (nested tf.GradientTape).
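For reference, a minimal second-derivative example with nested gradient tapes (my own illustration, not taken from that page):

```python
import tensorflow as tf

x = tf.Variable(2.0)

# Nested tapes: the inner tape computes dy/dx, and the outer tape then
# differentiates that result again, giving the second derivative.
with tf.GradientTape() as outer:
    with tf.GradientTape() as inner:
        y = x ** 3                      # y = x^3
    dy_dx = inner.gradient(y, x)        # 3x^2 -> 12.0 at x = 2
d2y_dx2 = outer.gradient(dy_dx, x)      # 6x   -> 12.0 at x = 2

print(dy_dx.numpy(), d2y_dx2.numpy())
```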

Perhaps an ONNX conversion could be used to bring the model into another library for training and then back to Wolfram...

POSTED BY: Alec Graves
