Message Boards

PINN: Physics Informed Neural Networks for Laplace PDE on L-shaped domain

Posted 1 year ago
6 Replies
Posted 4 months ago

Hi Gianluca and thanks for sharing this! One question: why do you use a shared weight NetArray in the LinearLayer?

POSTED BY: Zhang Haoyu

Dear Leo, many thanks for your comment. You are right: my NN solution does not approximate the boundary condition u = 0 well, in particular near the point {0, 0}. The classical FEM solution, such as the one computed with NDSolve, is much more accurate.

I suspect this is due to Mathematica currently lacking a true backpropagation mode of differentiation for a neural net, as is available in software like TensorFlow. For this reason I introduced the dedicated procedures for computing udxx and udyy.

On the other hand, we should consider that the aim of a neural net is to give a fast "approximation" for estimating possible values in real physical problems, such as boundary-value PDEs. The Laplacian problem that you and I considered is very simple, and numerical resolution via FEM is both more accurate and faster than resolution via a NN; but there are problems, such as those I will have to consider in the context of flame simulation in a combustion assembly, where the FEM method fails, and an approximate solution via NN, even if not very accurate, is very welcome!

Many thanks. Gianluca
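For readers who want to see what such hand-rolled derivative procedures might look like, here is a minimal sketch (the names `net`, `udxx`, and `udyy` are illustrative placeholders, not Gianluca's actual code) that approximates the second derivatives of a trained net by central finite differences:

```mathematica
(* Illustrative sketch: second derivatives of a trained net approximated
   by central finite differences; `net` maps {x, y} to u(x, y). *)
udxx[net_, {x_, y_}, h_ : 10.^-3] :=
  (net[{x + h, y}] - 2 net[{x, y}] + net[{x - h, y}])/h^2

udyy[net_, {x_, y_}, h_ : 10.^-3] :=
  (net[{x, y + h}] - 2 net[{x, y}] + net[{x, y - h}])/h^2
```

The step h trades truncation error against floating-point cancellation: with single-precision net outputs, a relatively coarse h (around 10^-2 to 10^-3) is usually safer than a very small one.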

Hi Gianluca,

The solution to the Poisson equation ∇²u = -1 you show does not seem to obey the boundary condition u = 0.

Compare to:

\[CapitalOmega] = Region[RegionDifference[Rectangle[{-1, -1}, {1, 1}], Rectangle[{0, 0}, {1, 1}]]];
Subscript[\[CapitalGamma], D] = DirichletCondition[u[x, y] == 0, True];
uval = NDSolveValue[{D[u[x, y], x, x] + D[u[x, y], y, y] == -1, Subscript[\[CapitalGamma], D]},
  u, {x, y} \[Element] \[CapitalOmega]];
ContourPlot[uval[x, y], {x, y} \[Element] \[CapitalOmega]]
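As a quick sanity check (sample points chosen here for illustration), the FEM solution can be evaluated on the boundary of the L-shaped region, where it should be numerically close to zero:

```mathematica
(* Points on the Dirichlet boundary of the L-shaped domain:
   outer edge x == -1, inner re-entrant edge y == 0 (0 <= x <= 1),
   and outer edge x == 1 (-1 <= y <= 0). *)
{uval[-1, 0.5], uval[0.5, 0], uval[1, -0.5]}
```

Each value should be close to 0, unlike a NN solution that only penalizes the boundary condition softly.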
POSTED BY: Leo Kärkkäinen

Hi Giulio, thanks for your comment.

Yes, I observed that cycling NetTrain can help stabilize the results, because the set of parameters learned in one cycle does not seem to be destroyed by the next cycle. Experimenting with MaxTrainingRounds could be interesting; I have not used that feature so far.
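To illustrate the two approaches being discussed (`net` and `data` are placeholders, not the actual notebook code): NetTrain accepts an already-trained net and continues from its learned parameters, so cycling it is straightforward, while MaxTrainingRounds simply controls the length of a single run:

```mathematica
(* Option 1: several short cycles; each call resumes from the
   parameters learned in the previous cycle. *)
trained = net;
Do[trained = NetTrain[trained, data, MaxTrainingRounds -> 200], {5}]

(* Option 2: one long run with the same total number of rounds. *)
trained2 = NetTrain[net, data, MaxTrainingRounds -> 1000]
```

One practical difference, touched on in Giulio's question below: restarting NetTrain also restarts the optimizer state (e.g. the ADAM learning-rate schedule), which may itself contribute to the stabilizing effect.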

Thanks. Gianluca

Hi Gianluca and thanks for sharing this!

One question: why do you map NetTrain instead of using MaxTrainingRounds? Do you want to reset the learning rate?

You have earned a Featured Contributor Badge. Your exceptional post has been selected for our editorial column Staff Picks http://wolfr.am/StaffPicks, and your profile is now distinguished by a Featured Contributor Badge and is displayed on the Featured Contributor Board. Thank you!

POSTED BY: Moderation Team
