How to apply physics-informed neural networks using a MATLAB toolbox?

313 views (in the last 30 days)
Ahmad on 3 Feb 2023
Commented: Bobby LUKA on 2 Jan 2024
Dear MATLAB Community Members,
I would like to apply my physics-informed neural network in MATLAB using its apps, but I don't really know how, or which app to use.
There are many apps in MATLAB, like nnstart, Deep Network Designer, etc.
I don't want to code my neural network from scratch; I would like to use a MATLAB toolbox and make my modifications to it.
Please assist me with this, and I hope there is a video, course, or tutorial I can follow.
Thank you.
  1 comment
KSSV on 3 Feb 2023
nnstart is part of the Neural Network Toolbox.
Deep Network Designer is part of the Deep Learning Toolbox. To start with, go ahead with nnstart.
Suggestions depend on what exactly your problem is.


Answers (2)

Ben on 3 Feb 2023
I'm not sure if this answers your question, but you can take the network from this example, defined in the "Define Deep Learning Model" section, and implement this network in Deep Network Designer. It's a multi-layer perceptron network with tanh activations and 9 fullyConnectedLayer-s with hidden size 20.
Once you implement that in Deep Network Designer you can export the layer array to your MATLAB workspace and create a dlnetwork that can be used in the example - simply replace the model function with forward(net,X,T), where net is the dlnetwork.
You'll still need to use the custom training loop code in the example to train this as a PINN.
Hope that helps.
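For reference, a minimal sketch of that network as a layer array (my assumptions: a 2-element input such as (x,t), a scalar output, and reading the 9 fullyConnectedLayer-s as 8 hidden layers of size 20 plus a final output layer; adjust the input size to your own problem):
layers = featureInputLayer(2);
for k = 1:8
layers = [layers; fullyConnectedLayer(20); tanhLayer]; %#ok<AGROW>
end
layers = [layers; fullyConnectedLayer(1)];
net = dlnetwork(layers);
% The same layer array can be built interactively in Deep Network Designer,
% exported to the workspace, and passed to dlnetwork. In the custom training
% loop, evaluate it with forward(net,...) on the concatenated input instead of
% calling the example's model function.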
  4 comments
Ben on 3 Jul 2023
@Sankarganesh P - those sound like the correct changes to make. Here are a few notes:
  1. If your PDE is then you need in the model loss function. As written, you have the wrong signs for the u and terms.
  2. The condition and only makes sense if for some integer k, since you need that .
  3. The term might potentially get very large during training, which could cause issues. If you see this, you should try ways to control it, e.g. via the learning rate, weight regularization, etc.
  4. You may need to modify how many sample points you use to get a good fit, since this depends on the individual PDE.
Hope that helps,
Ben
Sankarganesh P on 3 Jul 2023
Thank you very much.



Bobby LUKA on 30 Dec 2023
@Ben please can you help me with the loss function for a heat transfer equation? The inputs are XY coordinates extracted from PDE solutions, and the neural network is to learn the solution and create a different mapped geometry. I'm having issues with including both the time-dependent and the space-dependent parts in a single loss function. The geometry is circular. Thank you.
  2 comments
Ben on 2 Jan 2024
@Bobby LUKA could you clarify what you mean by "create a different mapped geometry"?
Here's a script showing a heat equation style problem on a 2D disk. The hyperparameters here aren't fine-tuned in any way; for example, you may want to sample different numbers of points on the interior, boundary, and initial condition, and you may want to modify the network architecture or training loop. The visualization with contourf was the first thing I could make work. If you have PDE Toolbox then pdeplot may be better for the visualization.
In this example the time and space variables are bundled together so that the network has one vector input (t, x, y).
% Solve heat equation on disk with constant initial temperature and constant heat
% flux at the boundary.
%
% The disk has some initial heat that is lost through the boundary at a
% constant rate.
%
% Model this for 5s on a disk of radius 1.
initialTemperature = 10;
boundaryHeatFlux = 0.5;
radius = 1;
maxTime = 5;
% Define a function to uniformly sample the domains.
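% Note: depending on your MATLAB release, local functions such as samplePoints
% below (and the loss functions at the end of the script) may need to be moved
% to the end of the script file, or saved as separate function files, rather
% than defined mid-script.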
function [t,x,y] = samplePoints(numPoints,radius,maxTime,sampleBoundary)
arguments
numPoints
radius
maxTime
sampleBoundary=false
end
% Sample points (t,x,y) with t uniformly sampled in (0,maxTime), and (x,y)
% sampled uniformly in the disk with radius given by the radius variable.
% This method follows https://mathworld.wolfram.com/DiskPointPicking.html
% for sampling a 2D disk uniformly.
%
% This function can also be used to sample the initial condition by setting
% maxTime = 0 or ignoring the t output.
%
% This function can also be used to sample the boundary (x^2 + y^2) =
% radius^2 by setting sampleBoundary = true
t = dlarray(rand(numPoints,1)*maxTime,"BC");
if sampleBoundary
r = radius*ones(numPoints,1);
else
r = sqrt(rand(numPoints,1))*radius;
end
theta = rand(numPoints,1)*2*pi;
x = dlarray(r.*cos(theta),"BC");
y = dlarray(r.*sin(theta),"BC");
end
% Specify a network to approximate the solution u(t,x,y).
numPoints = 100;
hiddenSize = 50;
net = [
featureInputLayer(3)
fullyConnectedLayer(hiddenSize)
tanhLayer
fullyConnectedLayer(hiddenSize)
tanhLayer
fullyConnectedLayer(hiddenSize)
tanhLayer
fullyConnectedLayer(1)];
net = dlnetwork(net);
% Train the PINN.
avgG = [];
avgSqG = [];
maxIter = 1000;
lossFcn = dlaccelerate(@modelLoss);
for iter = 1:maxIter
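% Resample the training points every iteration: interior points for the PDE
% residual, t = 0 points for the initial condition, and points on the circle
% for the boundary heat flux condition.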
[t,x,y] = samplePoints(numPoints,radius,maxTime);
[~,xinitial,yinitial] = samplePoints(numPoints,radius,0);
[tboundary,xboundary,yboundary] = samplePoints(numPoints,radius,maxTime,true);
[loss,gradient] = dlfeval(lossFcn,t,x,y,xinitial,yinitial,tboundary,xboundary,yboundary,net,initialTemperature,radius,boundaryHeatFlux);
[net,avgG,avgSqG] = adamupdate(net,gradient,avgG,avgSqG,iter);
fprintf("Iter: %d, Loss: %.4f\n",iter,extractdata(loss));
end
% Plot results
% Take uniformly sampled points on the disk and at uniform time steps.
% Create ndgrid-s, predict with the model and plot contours.
res = 20;
tres = 10;
r = sqrt(linspace(0,1,res))*radius;
theta = linspace(0,2*pi,res);
t = linspace(0,maxTime,tres);
[tm,rm,thetam] = ndgrid(t,r,theta);
t = reshape(tm,[],1);
r = reshape(rm,[],1);
theta = reshape(thetam,[],1);
x = r.*cos(theta);
y = r.*sin(theta);
X = dlarray(cat(2,t,x,y),"BC");
u = predict(net,X);
u = extractdata(u);
u = reshape(u,tres,res,res);
x = reshape(x,tres,res,res);
y = reshape(y,tres,res,res);
for i = 1:tres
subplot(1,tres,i);
contourf(squeeze(x(i,:,:)),squeeze(y(i,:,:)),squeeze(u(i,:,:)),50,LineColor="none")
title("t="+t(i,1,1))
colormap("parula")
clim([min(u,[],"all"),max(u,[],"all")])
end
colorbar
% Loss functions.
function loss = pinnLoss(t,x,y,net)
% Compute the heat equation loss = ||du/dt - (d2u/dx2 + d2u/dy2)||
% Where u(t,x,y) is defined by the network.
X = cat(1,t,x,y);
u = predict(net,X);
du = dlgradient(sum(u,2),X,EnableHigherDerivatives=true);
dudt = du(1,:);
dudx = du(2,:);
dudy = du(3,:);
d2udx = dlgradient(sum(dudx,2),X,RetainData=true);
d2udx2 = d2udx(2,:);
d2udy = dlgradient(sum(dudy,2),X,RetainData=true);
d2udy2 = d2udy(3,:);
laplacian = d2udx2 + d2udy2;
loss = sum((dudt-laplacian).^2)/size(dudt,2);
end
function loss = initialConditionLoss(x,y,net,initialCondition)
% Compute the initial condition loss ||u(t=0,x,y) - initialCondition||
% where u(t,x,y) is defined by the network.
t = zeros(size(x),like=x);
X = cat(1,t,x,y);
u = predict(net,X);
loss = sum((u-initialCondition).^2)/size(u,2);
end
function loss = boundaryConditionLoss(t,x,y,net,radius,boundaryHeatFlux)
% Compute the boundary condition loss ||Du(t,x,y).n(x,y)-boundaryHeatFlux|| where u is defined by
% the network, (x,y) are on the boundary of the disk, n(x,y) is the normal
% at point (x,y).
normal = -cat(1,x,y)/radius;
X = cat(1,t,x,y);
u = predict(net,X);
du = dlgradient(sum(u,2),X,RetainData=true);
dudN = dot(du([2,3],:),normal);
loss = sum((dudN-boundaryHeatFlux).^2)/size(u,2);
end
function [loss,gradient] = modelLoss(t,x,y,xinitial,yinitial,tboundary,xboundary,yboundary,net,initialCondition,radius,boundaryHeatFlux)
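% Combine the PDE residual, initial condition and boundary condition losses,
% and compute the gradient with respect to the network learnables for adamupdate.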
loss1 = pinnLoss(t,x,y,net);
loss2 = initialConditionLoss(xinitial,yinitial,net,initialCondition);
loss3 = boundaryConditionLoss(tboundary,xboundary,yboundary,net,radius,boundaryHeatFlux);
loss = loss1 + loss2 + loss3;
gradient = dlgradient(loss,net.Learnables);
end
Bobby LUKA on 2 Jan 2024
@Ben Thank you so much for your kind and helpful reply. I tried replicating this https://www.mathworks.com/help/pde/ug/solve-poisson-equation-on-unit-disk-using-pinn.html with this equation pdeeq = (rho*Cp*tz*diff(T,t) - k*tz*laplacian(T,[x,y]) + Qc), where Qc = hc*(T - Ta), but unfortunately the loss is always very large and the mapped solution is not consistent with the PDE solution, despite playing around with the activation function, learning rate, number of epochs, etc. How can I impose this PDE in the loss function, since it is dissimilar to the Poisson equation implemented in that example?
I tried running the code you sent, but MATLAB flagged some errors in the functions and gave this error message: unrecognized function or variable 'numPoints'. I'm using MATLAB R2023a; could it be that the version of MATLAB I'm using is not compatible?
Thank you once more in anticipation of your help.
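For reference, a minimal sketch of how the pinnLoss function above might be adapted to the residual rho*Cp*tz*dT/dt - k*tz*(d2T/dx2 + d2T/dy2) + hc*(T - Ta) = 0; the function name heatLoss and the coefficient arguments are illustrative placeholders and untested against this particular problem:
function loss = heatLoss(t,x,y,net,rho,Cp,tz,k,hc,Ta)
% Mean squared residual of rho*Cp*tz*dT/dt - k*tz*laplacian(T) + hc*(T - Ta),
% where T(t,x,y) is defined by the network, using the same automatic
% differentiation pattern as pinnLoss above.
X = cat(1,t,x,y);
T = predict(net,X);
dT = dlgradient(sum(T,2),X,EnableHigherDerivatives=true);
dTdt = dT(1,:);
dTdx = dT(2,:);
dTdy = dT(3,:);
d2Tdx = dlgradient(sum(dTdx,2),X,RetainData=true);
d2Tdy = dlgradient(sum(dTdy,2),X,RetainData=true);
laplacianT = d2Tdx(2,:) + d2Tdy(3,:);
residual = rho*Cp*tz*dTdt - k*tz*laplacianT + hc*(T - Ta);
loss = sum(residual.^2)/size(residual,2);
end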

