From the series: Deep Learning Webinars 2020

*Abhijit Bhattacharjee, MathWorks*

Take a deeper dive into designing, training, and tuning deep learning models. Learn how MATLAB^{®} deep learning apps can help you edit neural networks and devise and run experiments. See a brief teaser on how to customize deep learning training to handle more advanced types of neural networks.

You will also learn how to:

- Use the Deep Network Designer app to graphically create, edit, and train models

- Track and manage training runs with the Experiment Manager app for rapid, automated iteration

- Use the extended deep learning framework to customize and train advanced neural networks

So, a quick note on terminology. In the context of today's talk, I'm going to use the terms neural network and model interchangeably. I know that model can be a very overloaded term, so for the purposes of today, whenever I say model, I'm referring to a deep neural network. And if I use the verb form, modeling, what I mean in this case is training a neural network, or designing a neural network. So that's the terminology we're going to use today.

So, to be clear, we're going to talk about how you design the neural network topology. A neural network, as you see in the diagram on the right, is typically composed of an input layer, an output layer, and then multiple hidden layers. The more hidden layers you have, the deeper the model, and that's why it's called deep learning.

So we're going to ask ourselves, how do we design this? And what are some steps to make this easier? How do we train this model, and how do we validate it? And then how do we experiment with the model and tune different parameters? And in particular, I have a few questions that I'd like to cover today that I think are important to anybody who has any background in doing machine learning or deep learning.

So some questions that come up are, for example: how do we sweep through a range of hyperparameter values? By hyperparameters, I'm referring to the parameters that control how a model trains. There are a lot of knobs and dials, you could say a lot of different options, that affect how a neural network trains. A lot of these are numerical, and you may need to find the right value. So how do we do that?

How do we compare the results of using different data sets? So suppose you want to train a classifier that's supposed to recognize different objects. Perhaps you would use different data sets to train that model. How could you compare the results? Or, even more fundamental, how do you test different neural network architectures while using the same data? One model has 10 layers, another model has 20 layers; how do you compare and test them? That's what we're going to cover today.

And the way we're going to do that is with these new apps that are included in Deep Learning Toolbox. The first app, on the left, is called the Deep Network Designer. It allows you to build, visualize, and edit deep learning models. And on the right is another app we're going to spend a lot of time in today, the Experiment Manager, which allows you to manage multiple deep learning experiments, then analyze and compare their results. These are two tools where we have heard feedback from customers that they need help with this part of the workflow, and that's what we're going to focus on today.

And for the purposes of today's talk, I'm going to give a simple example. Today, we're going to solve a very simple problem. You might have seen this if you've done any tutorials or any basic work in machine learning or deep learning. It's quite a common problem called MNIST. MNIST is a very simple data set of handwritten digits: 60,000 training images of the numbers 0 through 9, and 10,000 test images. There are two reasons I'm using it today. One is that it's a simple problem everybody can understand, and it's representative of more complex problems. The other reason is that it's a very small data set, so I can actually train the models very quickly in front of you, rather than spending the minutes or hours deep learning models normally take to train. I'm going to be able to train this in seconds, so that we can get a better understanding, in real time, of how we use these tools.

So I'm going to jump into MATLAB now, and as I do so, I'm going to take a moment to look at some questions that came in. The first question is, do you support federated learning? The answer is yes, although it's not out of the box; there's a little bit of extra work required. So I'm going to say, contact us and we can let you know more.

Is it possible to train neural networks through parallel computing in a laptop environment? The answer is yes, we can train in parallel on CPUs if you don't have a GPU.

And are we going to touch on transfer learning today? No, the answer is no. We're not going to do transfer learning today.

So with that, we'll continue with the rest of this example. The first thing I'm going to say is we're going to load in a data set. Now, I've already loaded it in, and I'm not going to share the code right now on how you load a data set, as today the focus is on modeling. But we have here-- let me zoom in for you-- two data sets, stored as image datastores. In MATLAB we have many different types of datastores, and the image datastore is one of them. These datastores allow you to encapsulate a large amount of data. In this case, I have 60,000 samples of training data and 10,000 samples of test data. And it could be millions, it could even be billions; it doesn't matter how much is there. It's sort of an index or container for the data.
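For reference, creating image datastores like these might look something like the following sketch. The folder paths are hypothetical, since the actual loading code isn't shown in the session; the key idea is that the datastore indexes the files by folder name rather than loading them all into memory.

```matlab
% Hypothetical folder layout: one subfolder per digit class (0-9)
imdsTrain = imageDatastore("mnist/train", ...
    "IncludeSubfolders", true, "LabelSource", "foldernames");
imdsTest = imageDatastore("mnist/test", ...
    "IncludeSubfolders", true, "LabelSource", "foldernames");
```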

Now, what I'm going to do with this data is use it as our training data for the rest of the session. I've also loaded in a neural network, which I will get to. The first thing is, I'm going to show the Apps menu. If you look at the Apps tab in MATLAB and drop down the menu, I have all the toolboxes. Obviously, there are a lot of apps here, but in your case, if you have Deep Learning Toolbox installed, you're going to find a section called Machine Learning and Deep Learning. And the two apps we're going to focus on today to help with experiment design are Deep Network Designer and Experiment Manager.

So let me first start with talking about the Deep Network Designer, which is a really useful tool that was initially introduced, I think, in 18b. So about four releases ago. So I will zoom in and get closer for those of you who are having trouble seeing. I know that my screen resolution is very high. It may be hard to see some things, but not to worry. I'll focus in on the important aspects.

So what we have here is the MATLAB Deep Network Designer. Now if you don't have the latest release, it won't look like this. This is the latest Deep Network Designer. And so what happens is, as soon as you open it, you get the choice of opening up either your own new neural network from scratch, or a library of pretrained models. So this library is provided as a starting point. Many of these pretrained models, you probably have heard some of these names. These come from AI experts around the world and they are usually published in well-known research papers.

These are some pretrained networks you can use for many different types of tasks. Most of these are initially trained for image classification, but they can be retrained, or edited, or modified to do all sorts of things like speech recognition, radar detection, you name it. There's a lot of things that these models can do. So, generally, it's a good idea to start with one of these as a starting point.

Now, today I am focusing on such a simple task, the classification of digits, that I won't be using one of these pretrained networks. They can sometimes be quite large and complex to use, so for now, I'm going to use a simpler network that I will create myself. So let's go ahead and create a blank neural network.

And what we now see here is a blank canvas. You can actually go in and create a neural network by dragging and dropping. So suppose that I want to create an image classification neural network. I can drag in-- here, let me zoom in so it's easier to see-- an image input layer; bring that block in. Maybe a convolutional layer, because this is a convolutional neural network. Maybe a few other layers, such as this activation function, and maybe this pooling layer. And to connect them all, you can simply connect them as you see here: drag the arrows through. And if you want to make it neater, you can hit Auto Arrange and then zoom in. So here we have quickly built up a neural network.

Now, of course, this one is not very good; I would have to take a little more time to make it proper, but you see how easy it is. And each of these layers has editable properties, which you can change over on the right. For example, how you initialize the weights is something you can edit. But I'm not going to use this neural network. Rather, I'm going to use the neural network I loaded in earlier. So I can go ahead and hit New and go to Workspace. I have this neural network in my workspace, and I'm going to go ahead and bring it in. The reason I have this neural network is that I actually looked up an example of how to solve this problem, and there was an architecture suggested for it. I think many of you, especially if you're new to deep learning, will probably find yourselves in a similar situation, looking for the closest example to your problem and trying to implement that.

So this is what we found. It's a convolutional neural network because it has this layer here, the convolutional layer. That's particularly important for problems like this, because convolutional neural networks, or CNNs, are suitable for image processing: they preserve local spatial information. So because I have a convolutional layer here, this is a convolutional neural network, and it has seven different layers in it. Now, how exactly this architecture was chosen is beyond the scope of today's example, but suffice it to say that most of us probably look at similar examples and start from there. But I will get into some of the details of why we design it this way.
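As a rough sketch of what a seven-layer digit-classification CNN like this might look like in code (the filter sizes and counts here are illustrative, not necessarily the ones used in the session):

```matlab
% Illustrative seven-layer CNN for 28x28 grayscale digit images
layers = [
    imageInputLayer([28 28 1])
    convolution2dLayer(3, 16, "Padding", "same")  % convolutional layer
    reluLayer                                     % activation function
    maxPooling2dLayer(2, "Stride", 2)             % pooling layer
    fullyConnectedLayer(10)                       % one output per digit class
    softmaxLayer
    classificationLayer];
```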

So here we have our neural network. Now, how do we use it? What's new in the 20a release is that we have a number of additional tabs here. So you can design the neural network, then go to the Data tab. On the Data tab, there's a button that lets you import data, so I'm going to go ahead and import data. We could import the data from a folder that contains all of it, but I'm not going to do that in this case, because I've already loaded the data into MATLAB using an image datastore.

So I'm going to go ahead and click on the image datastore option and select the training data set. Now, on the right side, you have the option to choose what kind of validation you want to do. I'm not actually going to use validation for today's session, because it would cause additional processing that I don't want to bore you with. But the reason you would use validation is to make sure that your neural network model is generalizing well. If you turn on validation, it sets aside some of the data to check that your model is training well. For now, though, I'm going to say no validation, and then we're going to go ahead and import that into the workspace.
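If you did want a validation set in code rather than in the app, one common approach is to hold out a fraction of each class; a minimal sketch, assuming a datastore named imdsTrain:

```matlab
% Hold out 10% of the images per class for validation
[imdsTrain, imdsVal] = splitEachLabel(imdsTrain, 0.9, "randomized");
```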

So what do we see here? This is a histogram that shows us the frequencies of the different classes. You see there are 10 different classes, the digits 0 through 9, with around 6,000 examples per class. So this is just helping me understand what my data set looks like. It looks fairly balanced, and that's the key here. Sometimes, if you're doing a classification problem, you're going to find that there's an imbalance, and for that you might want to go through and figure something else out. But for now we have a balanced data set, so I'm going to go ahead.
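The same class-balance check can be done at the command line; a one-liner, assuming a datastore named imdsTrain:

```matlab
% Returns a table with one row per class: Label and Count columns
tbl = countEachLabel(imdsTrain);
```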

The next stage is that I could actually train the model. Now, for training, there are a few options here. First, you can click on Training Options and adjust how the model will train, or you could just click the Run button, which I'm going to do now. In MATLAB, we try to pick good default values for most problems. But obviously, in certain cases, even the default values won't work well, and in fact, we're going to see that in this case.

So while I was answering questions, if you were watching the screen, I actually started the training for this model. And we're seeing something quite peculiar. I'm going to zoom in to help highlight what that is. What we're seeing is the training plot. On the top axis, this blue curve represents the accuracy. And on the bottom axis is this orange curve, which barely had any values and then kind of stopped; there's no other data after that. This is called the loss curve, and the loss is one of the biggest figures of merit in machine learning. The loss is the optimization objective: basically, it's a quantity that we want to minimize. The loss usually computes the difference between the real answer and your prediction, and we want that difference to be as small as possible.

So we want to drive the loss to zero, and we want the accuracy to go up to 100%. The problem is-- and I'm going to come back to the live plot to show you-- this is not learning. The accuracy is at 10%. Now, if you have 10 classes and accuracy is 10%, that's no better than random guessing. And it doesn't seem like this model is improving in any way. In fact, the loss plot dropping off to nothing indicates a big problem. So what could that mean? What is the problem here?

This model is not learning anything, and at first glance, it might be hard to understand why. I mean, it's such a simple model. I found an example of a neural network that works for this problem. Why shouldn't it work? This is a place where I'd like to point out the documentation and show you an area where you can get help.

So let me go to MATLAB's general help. In the MATLAB documentation, each toolbox has its own page. We're going to go to Deep Learning Toolbox, here. There are many sections that have to do with different tasks in deep learning, but the one I'm going to focus on is Tuning and Visualization, because that's the part that's going to help us understand what's going on.

So if I go to Tuning and Visualization, there's a series of different articles, functions, and apps that can help you. But for right now, I'm going to click on this great article that we have in the documentation, called Deep Learning Tips and Tricks. If you go to Deep Learning Tips and Tricks, we have some guidance here. Deep learning is a field that usually requires a lot of art and a lot of rule-of-thumb knowledge. It's not always scientific, and it's not always a very straightforward kind of thing. Sometimes you have to learn best practices from others who have experience. So we've tried to compile our own experiences and provide you with some guidance as to what to do.

For example, if you're stuck on which network architecture you should choose, we have a table here that tries to help you figure out what type of architecture to choose. If you're working with images, these are some of the recommendations; likewise for signals, or text, or audio, and which kinds of neural networks suit them. How do you choose training options? There's some information about that, too. And, more importantly for our purposes today, we have this section on improving training accuracy.

Now, this is actually going to illustrate the problem for us. We have the problem that's listed here: NaNs (not-a-numbers) or large spikes in the loss. I'll further justify that in a minute, but the reason I know that is because the loss curve is showing no values. And because we have this problem, the suggested solution-- I'm going to zoom in to the suggested solution-- is to decrease the initial learning rate. So it's telling us that there's this parameter called the learning rate, and we should probably try to decrease that value and see if that works. To illustrate what I mean by NaN, I'm going to give a quick 30-second lesson on NaNs for those of you who have not used them in MATLAB.

So suppose I have this vector, 1 through 10. I can plot it, and you can see it's just a straight line. Now suppose that I go and change one of the values to NaN. Let's say the sixth value is going to be NaN, so I've replaced that value. What happens when I plot this vector now? What do you think might happen? I'm going to go ahead and do that.

As you see here, the plot contains a gap where the sixth value is. MATLAB uses the quantity NaN, not a number, as a flag value. It's not necessarily a good thing or a bad thing; it just indicates that something has been flagged. In this case, MATLAB doesn't know how to plot not a number, so it plots nothing. So this is an indication that you might want to look at what's happening here.
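The little NaN demonstration just described can be reproduced in a few lines:

```matlab
v = 1:10;
plot(v)       % a straight line
v(6) = NaN;   % replace the sixth value with the NaN flag value
plot(v)       % the line now has a gap at index 6
```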

This is the same thing that's happening in our plot here: we have loss quantities that do not appear. So what can we do? We're going to go ahead and select the Training Options. Once again, I'm going to zoom in. We were told that the initial learning rate should be decreased, and this is the default value. Let's make it a lot smaller: I'm going to change it from 0.01 to 0.001. I'm going to close that, and then let's go ahead and retrain.
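In code, that change corresponds to lowering InitialLearnRate in the training options; a sketch (the sgdm solver and the other options here are assumptions, not necessarily what the app used):

```matlab
% Decrease the initial learning rate from the 0.01 default to 0.001
opts = trainingOptions("sgdm", ...
    "InitialLearnRate", 0.001, ...
    "Plots", "training-progress");
```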

Now notice, as soon as I try to train again, it says that you must discard the last training results. So this is going to be an issue for us. When you have to train your model, chances are you have to do a lot of trial and error. And if you're doing trial and error, what does that mean? You're going to have to keep track of what's going on. If you're only doing a little bit of trial and error, it's probably not such a big deal; you can just train the model over a few times. But later, it's going to be more of an issue. So let's go ahead and start training the model again, and while it initializes, I'm going to take a moment to answer some more questions.

So one question is, where did you do your research to determine the initial model used? I actually found an example in our documentation that has to do with classifying digits, so I simply used the architecture from there.

Can we work with unlabeled data? The answer is yes. We're focusing on supervised learning problems today, but that would be an example of unsupervised learning. And we're going to cover an example of that in part two later this week.

There's another question: does the augmentation option change all the training data, or does it add to the original training data? The answer is that it changes the training data; it doesn't add to it. There is a possibility of adding training data, but that's different.

So with that, I'm going to come back to the presentation here. If you're watching the screen, you should now see that our model is training. Not only is it training, it's training like gangbusters. It's doing really well. As you see, the blue accuracy plot has already gotten well above 90% accuracy, and on the orange loss plot, we're seeing it diminish down to 0, which is exactly what we want. So if we're happy with this, I can actually stop the training now; I don't need to let it finish. And then if I want to use this model, I can click on the Export button, and there are a few options.

Either I can export the model, or I could generate code to recreate the training. So that's one of the key factors of our apps. Apps in MATLAB are always interactive. They always help you do something using graphical tools. But then, of course, these are engineering problems, so in the end we're not going to be using graphical tools to solve these problems. We're going to implement things using code.

So we can recreate what you do using MATLAB code. Here, let me go ahead and generate the code for training, and we'll say Yes to this. What happens now is that MATLAB generates a script: an actual script that you can run, which recreates the training we just did in the app. As you can see, there's code that shows how to import the data, and there's code for setting your training options. In fact, it captured the fact that I changed the initial learning rate to this value. The design of the neural network is here, and the final command that actually trains the network is here.
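A generated training script typically boils down to something like the following condensed sketch, assuming the datastore and layer-array variables from earlier in the session:

```matlab
% Condensed version of what a generated training script contains:
% training options (with the edited learning rate) plus the train call
opts = trainingOptions("sgdm", "InitialLearnRate", 0.001);
net = trainNetwork(imdsTrain, layers, opts);
```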

So one way you can experiment is to change the values in Deep Network Designer and generate one of these scripts, then change the values again and generate another script. That's probably sufficient if you're doing a small experiment where you're only playing around with the values a few times; you could generate a few scripts. But if you're going to iterate more than that, it really becomes cumbersome to generate a whole script just to do one experiment. Which is why we're going to introduce the next aspect of this: something customers have been asking for for a while, an actual experiment management tool.

So let me bring up the next app we're going to work with today. Once again, open your Apps menu in MATLAB, with Deep Learning Toolbox installed. As long as you're using the latest version of MATLAB, which is 20a, you're going to find this app called the Experiment Manager. This is a new app in the latest release of MATLAB; if you have an older version, you won't see it. This is really awesome. I really love this, and I'm going to show you why.

So go ahead and open the Experiment Manager. Now, it doesn't look impressive when you first start it; it's a blank canvas. But really, it's up to you to make it what you need. So I'm going to go ahead and create a new project.

So I'm going to go and create a new project, and let's just call it whatever. Now it's asking me what kind of experiment I want to run. This is an interface that allows you to play around with different hyperparameters and set them up the way you want. For example, if I have a neural network and I want to find the best learning rate, I could use that parameter here and say I want to try different values: this learning rate, 0.01, then another one, 0.001, and then another, 0.0001. I go ahead and choose those values. If I want to play with some other parameter, I can give it another name, like, say, mySolver, and put in values for it. And then, after you've chosen which parameters you're trying to sweep over, you can set up your experiment. So let's go ahead and edit that setup.

By default, MATLAB provides you a template. This template has an area that allows you to load in your data. It says image data, but that's just an example; you don't have to use it. It allows you to define your neural network architecture and then specify training options. Here, you'll notice that the training option itself is parameterized. So what will happen is, when we train this model, MATLAB will see that you've parameterized this hyperparameter, and it will try the different values that you've put in.

So what does that look like? If I want to solve the problem we're working on today, I can simply go into my generated code here; I have this experiment setup function. Instead of using the loading code that the template provides, I can put in my own loading code: import the data, copy and paste, bam. If I want to set the training options, I've already done that here. And let's say I want to use this neural network; I'll copy it right in here. So you can create your own setup for the experiment, and it can be highly customized. I'm not going to use the one I just created here; this is just an example to show you how you create one. Let me go back and open up an experiment that I've already set up.
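The overall shape of an Experiment Manager setup function looks roughly like this. The function name, folder path, and hyperparameter field name are hypothetical; the key idea is that params carries the hyperparameter values for the current trial:

```matlab
function [imds, layers, options] = Experiment1_setup(params)
% Setup function: called once per trial with one hyperparameter combination
imds = imageDatastore("mnist/train", ...
    "IncludeSubfolders", true, "LabelSource", "foldernames");
layers = [
    imageInputLayer([28 28 1])
    convolution2dLayer(3, 16, "Padding", "same")
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];
% The parameterized hyperparameter is read from the params struct
options = trainingOptions("sgdm", ...
    "InitialLearnRate", params.myInitialLearnRate);
end
```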

So let's go ahead and open-- actually, let me make sure I've got the right path-- go back and open this project. Now I'm loading up an experiment that I've already created. Here is one of the panels. In this experiment, I'm going to be trying different learning rates. As you can see, I've created a parameter and chosen different values, because I don't know which value is best; I'm going to try different experiments. I have a setup function that I've already customized for this problem. And one additional thing I haven't shown you before is that I've set up a custom metric: a function that uses the test data set, which is a separate set of data, to evaluate the result. So I'm going to go ahead and run this experiment.

And you'll see that as we do so, MATLAB creates an interface where you can run each experiment one by one. Currently there are three different values, so there are three different experiments, and I can enable the training plot, which you can see here. As before, with the initial default value of 0.01, this model is not training well at all. The experiment will be over in a few seconds, because I've asked it to complete in a very short time, and we'll also evaluate the accuracy at the end.

So in this case, the accuracy is absolutely zero, so that's a terrible value for the learning rate. And now you can see the next experiment running, where the value of this parameter has been changed, and it will continue on as follows. You'll see that when you run these experiments, we actually keep track of errors and problems.

Let's say one of these runs ran out of memory, or something else happened. It will keep track, continue gracefully, and run the rest of the experiments. Finally, after the experiments have run, we have this report showing the results. All of these experiments have been done: we've trained the neural network three different times with three different learning rate parameters, and we've also captured the accuracy. Here, with this parameter value, we have an accuracy of less than 10%; that's not great either. And then finally, when we change it to this value, we get 89%. So now we're making some progress. This is a really good way to understand and keep track of how we're improving a model. You'll get guidance from different sources on what to do, but then you can actually keep track of it this way.

Let me show you another experiment here; it doesn't have to be just numerical. Actually, let me show this one. In this experiment, I'm trying out different optimizers. Notice that my parameter values are not numbers; instead, I'm sweeping over different text parameters. And let me go ahead and show you the results-- I'm not going to actually run it. If you look at the results here, the only thing that's changing between these three different neural networks is what kind of solver I'm using, and it shows you the accuracy values for these different solvers, or optimizers. So whatever you can think of to experiment with, you can probably do it here in the Experiment Manager.

I'd like to show one other example that's a little more complex. So let me show you this other experiment that I've done. This experiment contains two different parameters that I'm trying to optimize over: I want to find the best learning rate, but at the same time, I have two different neural network architectures, and I want to try those out. So what does this look like? Let me actually show you what I'm doing in my experiment code.

So this is already modified for this problem. As you can see, I've set up a parameter called myNetworkChoice; I just chose that name. If I choose Network A, I'm going to use this neural network here with seven layers. If I choose Network B, I found a different, deeper neural network with around 20 layers in it. So here's the second neural network that I would go through if I chose option B. This experiment will automatically go through and try both neural networks, and it will try them with all the different learning rates-- ignore this placeholder code here; the point is that the initial learning rate is parameterized as well. If I look at the result of running that experiment, which again I'm not going to do here in the interest of time, I'm just going to show you the completed results. You'll see that six different experiments ran, because I had two different networks and three different values of the learning rate. So let me make this a little easier to interpret.
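Switching architectures based on a hyperparameter can be sketched like this (the names are hypothetical; the two layer variables stand in for the two architectures defined in the setup function):

```matlab
% Pick an architecture from the myNetworkChoice hyperparameter
switch params.myNetworkChoice
    case "NetworkA"
        layers = sevenLayerNet;    % the smaller, 7-layer CNN
    case "NetworkB"
        layers = twentyLayerNet;   % the deeper, ~20-layer CNN
end
```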

I'm going to sort by network choice. With Network A, the first model I chose, which is small, I swept through all the different learning rate values, and you can see that as I decreased the learning rate, the accuracy got better and better. Curiously, with Network B, which is a different architecture, as the learning rate went lower and lower, the accuracy also went lower. Interesting results here. So you can see how experimenting with multiple parameters and finding the best values could be a way to really find the best option for you.

So here, out of these six experiments, I could see that experiment number two here, which is now out of order, but this experiment has the best accuracy. If you look at the training plot, it's clear as well. It's the best accuracy. So now if I want to use this model for my ultimate system design, I can go ahead and Export the model. And so this Experiment Manager allows you to really choose the best model for your problem.

You can also Export training information so that you can recreate these results. So I hope you can see with what I've shown you here how powerful this can be. But that's not all. There's more that we can do. I actually have a really cool extension of this that I want to show you now that makes things even easier.

So for now, let me go on to the cool extension of this that I was talking about. It's clear that Experiment Manager is quite powerful: if you try different parameters and set up your experiment well, you can simply let the neural networks train, walk away, come back in a day, and have a list of tabulated results that help you understand how the neural network trains and which parameters are best. I think that's one of the key aspects of this: you can set up your experiment, walk away, and come back to a good result.

But I think there's even more that can be done. One thing is that you have to set up the experiment yourself. That is, if it's a value like the learning rate, I have to put in all the different values that I want to try. What if there were a way to automate that? What if there were a way to have MATLAB try to find the best value on its own?

To answer that, I have another aspect that I want to illustrate for you. It's called AutoML, and I'm sure some of you have heard of this. AutoML is, generally speaking, a field that has to do with the automation of machine learning. The idea is to use machine learning models to train machine learning models: use a machine learning tool to find the best values to train this machine learning model.

And there are many different techniques out there in the field. One of those is called Bayesian parameter optimization, which is something MATLAB has. So I want to show you how that works. Currently, it's not a feature that is built into the Experiment Manager tool, so you have to write your own code for it. But we're going to have this available in the tool at a future date. For now, let me show you how it works.

So I actually wrote some code, and I tried to make it as simple as possible so it's easy to grasp. So here's my script. This code should look familiar if you've been following along with some of the generated code we've been looking at. In almost all the Experiment Manager code and Deep Network Designer code, there's a section where you load the data; in this case, it's image data. So I load the data here. There's going to be other code as well, but this is the new key code.

There's a way to set up a parameter for automatic tuning. The way you do that is to create what's called an optimizable variable. The optimizable variable I've chosen here is the learning rate; I'm going to try to find the best learning rate. Now I have to provide some constraints. So I have here 10 to the minus 4, all the way up to 1. Note that this is not a vector; it's a range. So what I'm saying to MATLAB is: I want to find the best initial learning rate within the range specified here, can you do that for me? And it doesn't have to be just one optimizable variable; you can actually have many. But for the sake of simplicity, today I'm using just one.
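For reference, here's a minimal sketch of what defining such a range can look like in code. The variable name `InitialLearnRate` is illustrative; the actual script may differ:

```matlab
% Define a tunable hyperparameter as a RANGE, not a vector of values.
% A log transform is common for learning rates so the search covers
% orders of magnitude evenly between 1e-4 and 1.
optVar = optimizableVariable('InitialLearnRate', [1e-4 1], ...
    'Transform', 'log');
```

You could add more `optimizableVariable` entries to the same array to tune several hyperparameters at once.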

And the way you actually optimize for the best learning rate is to set up an objective function and then use Bayesian optimization to run your experiment. Now if I want, I can use a parallel option here and parallelize it, but I didn't in this case. So let me go into detail about what this objective function is, because that's really the key part: how do you set up the objective function? It's no different than setting up an experiment.

The objective function is way down at the bottom, so let me just go there. It's a function that takes in your data and then tells MATLAB how to train the model. As you see here, these are the seven layers of my neural network, and here are my training options. Notice that, once again, the learning rate is parameterized; it's parameterized as this optimizable variable. And then you actually have the call to train the network, and so on.
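As a rough sketch of this pattern (the layer sizes, data variables, and function names here are placeholders, not the exact code from the webinar), an objective function that reports validation error with the learning rate left as a tunable parameter might look like:

```matlab
% Returns a function handle suitable for bayesopt. The handle closes
% over the training and validation data.
function objFcn = makeObjFcn(XTrain, YTrain, XVal, YVal)
    objFcn = @valErrorFun;
    function valError = valErrorFun(optVars)
        % A small example network (placeholder architecture).
        layers = [ ...
            imageInputLayer([28 28 1])
            convolution2dLayer(3, 8, 'Padding', 'same')
            batchNormalizationLayer
            reluLayer
            fullyConnectedLayer(10)
            softmaxLayer
            classificationLayer];
        % The learning rate comes from the optimizable variable.
        opts = trainingOptions('sgdm', ...
            'InitialLearnRate', optVars.InitialLearnRate, ...
            'MaxEpochs', 5, 'Verbose', false);
        net = trainNetwork(XTrain, YTrain, layers, opts);
        YPred = classify(net, XVal);
        valError = 1 - mean(YPred == YVal);  % loss to minimize
    end
end
```

The key point is that `bayesopt` passes each candidate hyperparameter set in as a table (`optVars`) and minimizes whatever scalar the function returns.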

So this function is at the bottom because it's being executed as a callback. When you create this function, you feed it into the Bayesian optimization. And I've told the optimizer to spend no more than five minutes, that is 5 times 60 seconds, trying to optimize this. Obviously, for a real problem you would let it go for hours, or even days, but for now it's only five minutes.
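In code, the five-minute budget is just an option on the `bayesopt` call, roughly like this (assuming `objFcn` and `optVar` are the objective function and optimizable variable described here):

```matlab
% Run Bayesian optimization with a wall-clock budget of 5 minutes.
results = bayesopt(objFcn, optVar, ...
    'MaxTime', 5*60, ...        % stop starting new evaluations after 300 s
    'UseParallel', false);      % set true to run trials in parallel
```

Setting `'UseParallel'` to `true` is the parallel option mentioned a moment ago; it requires Parallel Computing Toolbox.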

So within five minutes, MATLAB trained this model as many times as possible. As you can see, here's one iteration with a learning rate of 0.04, and it got a result. Here's another iteration with a different value and a different result. Here's another iteration, and another, and so on. You can see that MATLAB is automatically iterating over this parameter range and computing the result at the end, which is the accuracy of the model.

And at the end of this we obtain-- I'm going to zoom out a little bit-- a plot. This is what helps us understand what's going on with the tuning. As you see here, there are a few blue dots. These are the iterations that have been tried by the Bayesian optimization model. And what we want is the minimum value, because the value we're optimizing here is a loss function. So, again, we want the minimum. As you can see, there is a value here that lies between some of the experiments that were run. So basically, through interpolation, the Bayesian routine is saying that this starred point is the minimum. And if I want to extract that numerically, it's saying that the best learning rate it found was this particular value, around 0.0003.
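Extracting that best value numerically is a short query on the results object; a sketch, assuming `results` is the object returned by `bayesopt` and `InitialLearnRate` is the variable name used when defining the optimizable variable:

```matlab
% bestPoint returns the recommended point as a table of variable values;
% XAtMinEstimatedObjective is the point where the surrogate model
% estimates the minimum of the loss.
best = bestPoint(results);
disp(results.XAtMinEstimatedObjective.InitialLearnRate)
```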

So clearly, it's found a value that we wouldn't have tried ourselves if we had just specified a grid of values. It found a value that's in between. So I can actually validate this. I'm going to open up the Experiment Manager again and go to my parameter table. I'm going to change the parameter table to try the value that was just picked. So this was 0.00032, let's say. Oops, it went away; let me try that one more time. So 0.00032, enter. Now I'll run this experiment again, and the reason I'm doing this is just to validate what happened. So I'm running only two experiments: the first is my manually chosen best value, and the second is the value that was automatically determined by the Bayesian routine as the best. And I'll see if that actually made a difference, whether the automatic tuning was better than my manual tuning. So in a minute here we'll have our results. Let's see what happens.

All right, look at that. With the automatically found value, the accuracy is actually a few percentage points higher. So that helps me out. Now, of course, Bayesian optimization is just one way to do tuning; there are other methods as well. And you can come up with as many different experiments as you want to try. All of that is programmatically available. What we're trying to do next in the evolution of MATLAB is make this type of functionality available right here, interactively, as part of the Experiment Manager.

So you can do other things. You can do Monte Carlo simulations. You could implement some optimizer from a paper that doesn't exist in MATLAB yet. There are all sorts of things you can do. But the point is, we're trying to make it even easier by incorporating it into the Experiment Manager, hopefully very soon. Even though it's not there yet, I hope you can see that the tooling still has the right bones to help you out. So that's something I would suggest: try these things out, and hopefully very soon I'll be able to give you even easier-to-use functionality for experimenting with this.

So if you have R2020a, you can do this today. I will provide you the code, and the slides as well. If you are a current subscriber of MATLAB, you have access to the latest MATLAB release, and you can run all of this yourself.

So with that, we have a few extra minutes. I'm going to wrap up this presentation with a few last thoughts. And I do apologize that we did not get to answer a lot of your questions. It's just such a large group. I really wish we could. But we're going to continue anyway.

Please feel free to contact us and ask your questions. They will probably get routed to me, or my colleagues on the Deep Learning Team. So I really appreciate you guys asking questions afterwards.

All right, so we covered a lot today. We covered the Deep Network Designer, which allows you to choose from a library of pretrained models, graphically train and monitor the training, and even generate MATLAB code to recreate your model design. We also talked about the Experiment Manager, which can help you with the trial and error of model selection: sweep over parameters, monitor, and even replicate research and track the results. We're very proud of these two apps, and we think there's a lot of usefulness in them for you, but we're also looking to improve them as much as possible.

And with that, I'm going to give a quick teaser for the next session, which is later this week. I do hope you can join us for the next session, which is part two. To set that up: we have been in the deep learning space for some time now, over five years, and in that time you can see the evolution of how much we've added. We've added more and more features and come out with more and more solutions for deep learning. And the one thing I want to highlight is that, as of 2020, which is now, we have more than 200 examples.

So these are not toy problems. We understand you are engineers and scientists, and we actually have examples that pertain to real engineering problems. Let's say you want to use deep learning to solve a radar problem; we have that. If you want to use it for medical images or medical research, or for speech recognition, chances are there are examples that pertain to your domain. That's one key aspect here. Another aspect is that we've been recognized by independent third parties like Gartner as a leader in the area of data science. You can see MATLAB is here, rated even higher than Microsoft and Google when it comes to leadership in data science.

But really, what I want to focus on for the last few minutes is what we're going to see in the next part. I'll cover this in more detail later, so I'm not going to go through it very deeply now. In R2019a and prior, we could cover different types of neural networks, like CNNs, LSTMs, C-LSTMs, and MLPs. I'm not going to read out the acronyms; you can read them here. But there are many more types of neural networks that can be addressed, because in R2019a we had a simpler framework. We extended that framework in R2019b and R2020a, and this is what we're going to get into the details of. With this extended framework, you can do things like GANs, variational autoencoders, Siamese networks, attention networks, and much more. I'm not going to go into details here; I just wanted to give you a quick teaser, because there are lots of examples that we're going to go through in more detail.

So with that, in my last minute here, I'm going to illustrate what else we can do. Today you're attending a webinar; that's one of the ways we support you. But we think of ourselves as your deep learning partner. We have a lot of staff and a lot of engineers who can help you. We think of ourselves as platform experts: we're the experts in MATLAB, but you are the expert in your domain, and we can get together and help you find a solution. Some of the ways we help you are through training, but we can also do guided evaluations. If you or your team are working on a project and struggling to figure out how to apply deep learning to your problem, or how best to use MATLAB, we can actually work with you on a one-on-one or team-to-team basis. So it's not simply giving you software and training; we can work with you hand-in-hand in a deeper way. I'd like to encourage you to reach out to us if you think you have a problem that may warrant additional engineering support. We're happy to do that.
