
This example shows how to classify radar returns using feature extraction followed by a support vector machine (SVM) classifier.

This example requires:

- Wavelet Toolbox
- Statistics and Machine Learning Toolbox

Target classification is an important function in modern radar systems. Because of the recent success of using machine learning techniques for classification, there is a lot of interest in applying similar techniques to classify radar returns. This example describes a workflow where SVM techniques are used to classify radar echoes from a cylinder and a cone. Although this example uses synthesized I/Q samples, the workflow is applicable to real radar returns.

The next section shows how to create synthesized data to train the SVM classifier.

The following code simulates the RCS pattern of a cylinder with a radius of 1 meter and a height of 10 meters. The operating frequency of the radar is 850 MHz.

```
c = 3e8;
fc = 850e6;
[cylrcs,az,el] = helperCylinderRCSPattern(c,fc);
helperTargetRCSPatternPlot(az,el,cylrcs);
```

The pattern can then be applied to a backscatter radar target to simulate returns from different aspect angles.

```
cyltgt = phased.BackscatterRadarTarget('PropagationSpeed',c,...
    'OperatingFrequency',fc,'AzimuthAngles',az,...
    'ElevationAngles',el,'RCSPattern',cylrcs);
```

The following code simulates 100 returns of the cylinder over time. It is assumed that the cylinder undergoes a motion that causes small vibrations around boresight; as a result, the aspect angle changes from one sample to the next.

```
rng(2017);
N = 100;
az = 2*randn(1,N);                  % model vibration with +/- 2 degrees around boresight
el = 2*randn(1,N);
cylrtn = cyltgt(ones(1,N),[az;el]); % generate target echo

clf
plot(mag2db(abs(cylrtn)));
xlabel('Time Index')
ylabel('Target Return (dB)');
title('Target Return for Cylinder');
```
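The vibration model itself is easy to reproduce outside MATLAB. The following Python sketch (NumPy only) draws Gaussian aspect angles around boresight and uses a nearest-neighbor lookup into an RCS table; the table values here are placeholders, not the cylinder pattern computed above.

```python
import numpy as np

rng = np.random.default_rng(2017)
N = 100

# Vibration model: aspect angles jitter around boresight with ~2 degree std
az = 2.0 * rng.standard_normal(N)   # azimuth, degrees
el = 2.0 * rng.standard_normal(N)   # elevation, degrees

# Toy RCS lookup table over a coarse az/el grid (placeholder values,
# standing in for the helperCylinderRCSPattern output)
az_grid = np.arange(-180.0, 181.0, 1.0)
el_grid = np.arange(-90.0, 91.0, 1.0)
rcs_pattern = np.ones((el_grid.size, az_grid.size))

# Nearest-neighbor lookup of the RCS for each sampled aspect angle
ai = np.abs(az_grid[None, :] - az[:, None]).argmin(axis=1)
ei = np.abs(el_grid[None, :] - el[:, None]).argmin(axis=1)
returns = rcs_pattern[ei, ai]

print(returns.shape)  # one return per pulse
```

Because each pulse sees a slightly different aspect angle, the looked-up RCS (and hence the echo amplitude) fluctuates from sample to sample, which is exactly the behavior plotted above.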

The return of the cone can be generated similarly. To create the training set for the SVM classifier, the above process is repeated for 5 arbitrarily selected cylinder radii. In addition, for each radius, 10 motion profiles are simulated by varying the incident angle following 10 randomly generated sinusoid curves around boresight. There are 701 samples in each motion profile, so there are 701-by-50 samples for each shape in total. Because of the long computation time, the training data is precomputed and loaded below.
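The resulting dataset dimensions can be sketched directly. The NumPy snippet below assembles a placeholder array with the same layout as the precomputed training table: 5 radii times 10 motion profiles gives 50 columns per shape, each with 701 samples.

```python
import numpy as np

n_radii, n_profiles, n_samples = 5, 10, 701

# Placeholder returns; in the example these come from the backscatter
# target simulation, one 701-sample motion profile per (radius, profile)
cyl = np.zeros((n_radii * n_profiles, n_samples))
cone = np.zeros((n_radii * n_profiles, n_samples))

# Stack into one training table: 50 cylinder columns then 50 cone columns
training = np.vstack([cyl, cone]).T
print(training.shape)  # (701, 100) -> 701 samples x 100 motion profiles
```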

```
load('RCSClassificationReturnsTraining');
```

As an example, the next plot shows the return for one of the motion profiles from each shape. The plots show how the values change over time for both the incident azimuth angles and the target returns.

```
clf;
subplot(2,2,1);
plot(cylinderAspectAngle(1,:));
ylim([-90 90]);
title('Cylinder Aspect Angle vs. Time');
xlabel('Time Index');
ylabel('Aspect Angle (degrees)');
subplot(2,2,3);
plot(RCSReturns.Cylinder_1);
ylim([-50 50]);
title('Cylinder Return');
xlabel('Time Index');
ylabel('Target Return (dB)');
subplot(2,2,2);
plot(coneAspectAngle(1,:));
ylim([-90 90]);
title('Cone Aspect Angle vs. Time');
xlabel('Time Index');
ylabel('Aspect Angle (degrees)');
subplot(2,2,4);
plot(RCSReturns.Cone_1);
ylim([-50 50]);
title('Cone Return');
xlabel('Time Index');
ylabel('Target Return (dB)');
```

To improve performance, learning algorithms often operate on extracted features rather than on the original signal. The features make it easier for the classification algorithm to discriminate between returns from different targets. In addition, the features are often much smaller than the original signal, so learning requires fewer computational resources.

There are a variety of ways to extract features for this type of data set. To obtain the right features, it is often useful to look at a time-frequency view of the data, where the frequency due to motion varies across radar pulses. The time-frequency signature of the signal can be derived by either the Fourier transform (the spectrogram) or wavelet transforms. Specifically, this example uses a wavelet packet representation of the signal. The following plots show the wavelet packet signatures for both the cone and the cylinder. These signatures provide some insight that the learning algorithms will be able to distinguish between the two. Specifically, there is separation between the frequency content over time between the two signatures.
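The spectrogram alternative mentioned above amounts to a short-time Fourier transform of the slow-time return. A minimal NumPy-only sketch, using a toy signal whose instantaneous frequency drifts over the 701 pulses to stand in for the micro-motion-induced modulation:

```python
import numpy as np

def stft_mag(x, win_len=64, hop=16):
    """Magnitude spectrogram via a sliding Hann window (NumPy only)."""
    win = np.hanning(win_len)
    starts = range(0, len(x) - win_len + 1, hop)
    frames = np.stack([x[s:s + win_len] * win for s in starts])
    return np.abs(np.fft.rfft(frames, axis=1)).T  # (freq bins, time frames)

# Toy return with slowly drifting frequency content over 701 pulses
n = 701
t = np.arange(n)
x = np.cos(2 * np.pi * (0.05 + 0.1 * t / n) * t)

S = stft_mag(x)
print(S.shape)  # (win_len//2 + 1, number of frames)
```

Plotting `S` as an image would show the frequency ridge moving over time, the same qualitative picture the wavelet packet contours below convey.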

```
levels = 3;
[wpt,~,F] = modwpt(RCSReturns{:,1},'fk6',levels,'TimeAlign',true);
clf;
contour(1:size(wpt,2),F,abs(wpt).^2);
grid on;
xlabel('Time Index');
ylabel('Cycles per sample');
title('Wavelet Packet for Cylinder Return');
```

```
[wpt,~,F,E,RE] = modwpt(RCSReturns{:,51},'fk6',levels,'TimeAlign',true);
clf;
contour(1:size(wpt,2),F,abs(wpt).^2);
grid on;
xlabel('Time Index');
ylabel('Cycles per sample');
title('Wavelet Packet for Cone Return');
```

The apparent frequency separation between the cylinder and cone returns suggests using a frequency-domain measure to classify the signals. This example uses the maximal overlap discrete wavelet packet transform (MODWPT) to compare relative subband energies. The MODWPT at level 3 partitions the signal energy into 2^3 = 8 equal-width subbands and does so in a way that preserves the total signal energy. To see this, you can do the following.

```
T = array2table([F E RE*100],'VariableNames',{'CenterFrequency','Energy','PercentEnergy'});
disp(T)
```

```
    CenterFrequency      Energy      PercentEnergy
    _______________    __________    _____________

        0.03125        1.9626e+05        42.77
        0.09375             82923       18.071
        0.15625             65162         14.2
        0.21875             46401       10.112
        0.28125             37044       8.0728
        0.34375             20725       4.5166
        0.40625              8952       1.9509
        0.46875            1405.4      0.30626
```

The table shows the subband center frequencies in cycles/sample, the energy in those subbands, and the percentage of the total energy in each subband. Note that MODWPT preserves the signal energy, which is an important property that is very difficult to achieve with conventional bandpass filtering. Specifically, you have a signal decomposition into subbands, which mimics an orthogonal transform. In signal classification problems where there is frequency separation between classes, this property significantly improves the ability of a classifier to accurately distinguish the classes.
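The energy-preservation property discussed above is what any orthonormal decomposition guarantees (Parseval's relation). The MODWPT itself is not available in NumPy, but the idea can be illustrated with a unitary FFT: partitioning the transform coefficients into eight equal-width bands redistributes, but never loses, signal energy.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(704)            # length divisible by 8 for clean bands

# Unitary FFT, so Parseval's relation holds exactly: sum|X|^2 == sum x^2
X = np.fft.fft(x, norm="ortho")

# Partition the coefficients into 8 contiguous equal-width bands
band_energy = (np.abs(X).reshape(8, -1) ** 2).sum(axis=1)

total = (x ** 2).sum()
percent = 100 * band_energy / total
print(percent.sum())  # the 8 subband energies account for all signal energy
```

Note this is a conceptual stand-in: the FFT bands here are simple contiguous bin groups, whereas the MODWPT produces time-aligned wavelet packet subbands; the conservation property is the common point.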

Using the wavelet transform, the extracted features consist of 8 predictors per target return. Compared to the original time-domain signal of 701 points, this is a significant reduction in the data. The number of levels for the wavelet transform can be tuned to improve performance of the classification algorithm.

```
trainingData = varfun(@(x)helperWPTFeatureExtraction(x,'fk6',levels),RCSReturns);
trainingData = array2table(table2array(trainingData)');
% 50 cylinders followed by 50 cones
shapeTypes = categorical({'Cylinder';'Cones'});
trainingData.Type = shapeTypes([zeros(50,1); ones(50,1)]+1);
```

The Classification Learner app can be used to train the classifier. Once the training data is loaded, it can help apply different learning algorithms against the data set and report back the classification accuracy. The following picture is a snapshot of the app.
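The same train-and-compare step can also be scripted outside the app. As a rough scikit-learn sketch, with made-up, well-separated 8-dimensional features standing in for the wavelet percent-energy features computed above (the class means and spreads here are illustrative assumptions, not values from this example):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic 8-dimensional features for 50 "cylinders" and 50 "cones";
# the separation between class means is chosen for illustration only
cyl_feats = rng.normal(loc=0.0, scale=0.5, size=(50, 8))
cone_feats = rng.normal(loc=3.0, scale=0.5, size=(50, 8))
X = np.vstack([cyl_feats, cone_feats])
y = np.array(["Cylinder"] * 50 + ["Cone"] * 50)

# An RBF-kernel SVM, comparable to one of the app's SVM presets
clf = SVC(kernel="rbf")
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())  # cross-validated accuracy
```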

Based on the result, this example uses the SVM technique and then generates the corresponding training algorithm, `helperTrainClassifier`, from the app. The output of the training algorithm is a configured classifier, `trainedClassifier`, ready to perform the classification.

```
[trainedClassifier, validationAccuracy] = helperTrainClassifier(trainingData);
```

Once the model is ready, the classifier can process the received target return and perform classification using the `predictFcn` method. The next section of the example creates a test data set using an approach similar to that used for the training data. This data is passed through the derived classifier to see if it can correctly classify the two shapes. The test data contains 25 cylinder returns and 25 cone returns. These cylinders and cones consist of 5 sizes for each shape and 5 motion profiles for each size. The generation process is the same as for the training data, but the specific values are slightly different because of the randomness of the size values and incident angle values. The total number of samples for each shape is 701-by-25.

```
load('RCSClassificationReturnsTest');
testData = varfun(@(x)helperWPTFeatureExtraction(x,'fk6',levels),RCSReturnsTest);
testData = array2table(table2array(testData)');
% 25 cylinders followed by 25 cones
testResponses = shapeTypes([zeros(25,1); ones(25,1)]+1);
testPredictions = trainedClassifier.predictFcn(testData);
cmat = confusionmat(testResponses, testPredictions)
```

```
cmat =

    16     9
     0    25
```

From the confusion matrix, it can be computed that the overall accuracy is about 82%.

```
classacc = sum(diag(cmat))/sum(cmat(:))
```

```
classacc =

    0.8200
```
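The accuracy computation carries over directly to other environments. As a quick cross-check in Python (NumPy only), using the confusion matrix reported above:

```python
import numpy as np

# Confusion matrix from the test set: rows = true class, columns = predicted
cmat = np.array([[16,  9],
                 [ 0, 25]])

# Overall accuracy: correctly classified samples over all samples
accuracy = np.trace(cmat) / cmat.sum()
print(accuracy)  # 0.82
```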

It is possible to improve the performance of the classifier by increasing the quality and quantity of the training data. In addition, the feature extraction process can be improved to further distinguish characteristics of each target within the classification algorithm. Note that different features may have different optimal classification algorithms.

This example presents a workflow for performing radar target classification using machine learning techniques. Although this example used synthesized data to do training and testing, it can be easily extended to accommodate real radar returns.