*MUltiple SIgnal Classification* (MUSIC)
is a high-resolution direction-finding algorithm based on the eigenvalue
decomposition of the sensor covariance matrix observed at an array.
MUSIC belongs to the family of subspace-based direction-finding algorithms.

The signal model relates the received sensor data to the signals
emitted by the sources. Assume that there are *D* uncorrelated
or partially correlated signal sources, *s _{d}(t)*.
The sensor data is then modeled as

$$\begin{array}{l}x(t)=As(t)+n(t)\\ s(t)={[{s}_{1}(t),{s}_{2}(t),\dots ,{s}_{D}(t)]}^{\prime}\\ A=[a({\theta}_{1})|a({\theta}_{2})|\dots |a({\theta}_{D})]\end{array}$$

- *x(t)* is an *M*-by-1 vector of received sensor-data snapshots, consisting of the source signals and additive noise.
- *A* is an *M*-by-*D* matrix containing the arrival vectors. An arrival vector consists of the relative phase shifts at the array elements of the plane wave from one source. Each column of *A* is the arrival vector *a(θ _{d})* from one of the sources and depends on that source's direction of arrival, *θ _{d}*.
- *θ _{d}* is the direction-of-arrival angle for the *d*th source. It can represent either the broadside angle for linear arrays or the azimuth and elevation angles for planar or 3D arrays.
- *s(t)* is a *D*-by-1 vector of source signal values from the *D* sources.
- *n(t)* is an *M*-by-1 vector of sensor noise values.
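As a numerical sketch of this model (the array size, source angles, and noise power below are illustrative assumptions, not values from the text), the snapshots *x(t) = As(t) + n(t)* for a half-wavelength-spaced ULA can be simulated with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

M, D, T = 8, 2, 200               # sensors, sources, snapshots (assumed values)
d = 0.5                           # element spacing in wavelengths
k = 2 * np.pi                     # wavenumber for unit wavelength, so k*d = pi
phi = np.deg2rad([60.0, 100.0])   # assumed broadside angles of the two sources

# Arrival matrix A: column d holds the relative phase shifts exp(i*k*d*m*cos(phi_d))
m = np.arange(M)[:, None]
A = np.exp(1j * k * d * m * np.cos(phi)[None, :])    # M-by-D

# Uncorrelated unit-power complex source signals and additive sensor noise
s = (rng.standard_normal((D, T)) + 1j * rng.standard_normal((D, T))) / np.sqrt(2)
n = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))) / np.sqrt(2)

x = A @ s + n                     # M-by-T matrix whose columns are the snapshots x(t)
```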

An important quantity in any subspace method is the *sensor
covariance matrix*,*R _{x}*,
derived from the received signal data. When the signals are uncorrelated
with the noise, the sensor covariance matrix has two components, the
signal covariance matrix and the noise covariance matrix:

$${R}_{x}=E\left\{x{x}^{H}\right\}=A{R}_{s}{A}^{H}+{\sigma}_{0}^{2}I$$

where *R _{s}* is
the source covariance matrix,

$${R}_{s}=E\left\{s{s}^{H}\right\}$$

For uncorrelated sources
or even partially correlated sources, *R _{s}* is
a positive-definite Hermitian matrix and has full rank, *D*, equal to the number of sources.

The signal covariance matrix, *AR _{s}A^{H}*,
is an *M*-by-*M* matrix with rank *D*.

An assumption of the MUSIC algorithm is that the noise powers
are equal at all sensors and uncorrelated between sensors. With this
assumption, the noise covariance matrix becomes an *M*-by-*M* diagonal
matrix with equal values along the diagonal.

Because the true sensor covariance matrix is not known, MUSIC
estimates the sensor covariance matrix, *R _{x}*,
from the received data snapshots by averaging over time:

$${R}_{x}=\frac{1}{T}{\displaystyle \sum _{k=1}^{T}x({t}_{k})}x{({t}_{k})}^{H},$$

where *T* is
the number of snapshots.
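A minimal sketch of this snapshot average, assuming the data are stored as an *M*-by-*T* matrix whose columns are the snapshots (the white-noise check at the end is only an illustration):

```python
import numpy as np

def sample_covariance(x):
    """Estimate R_x as (1/T) * sum_k x(t_k) x(t_k)^H from an M-by-T snapshot matrix."""
    T = x.shape[1]
    return (x @ x.conj().T) / T

# Sanity check on pure white noise: the estimate approaches sigma^2 * I
rng = np.random.default_rng(1)
M, T, sigma = 4, 100_000, 2.0
noise = sigma * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))) / np.sqrt(2)
R = sample_covariance(noise)
```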

Because *AR _{s}A^{H}* has
rank *D*, it has *M – D* eigenvectors corresponding to zero eigenvalues. These eigenvectors span the *null subspace*. Let *u _{i}* be any one of them. Then

$$A{R}_{s}{A}^{H}{u}_{i}=0\Rightarrow {u}_{i}^{H}A{R}_{s}{A}^{H}{u}_{i}=0\Rightarrow {\left({A}^{H}{u}_{i}\right)}^{H}{R}_{s}\left({A}^{H}{u}_{i}\right)=0\Rightarrow {A}^{H}{u}_{i}=0$$

Therefore the arrival vectors are orthogonal to the null subspace.

When noise is added, the eigenvectors of the sensor covariance
matrix are the same as those of the noise-free sensor covariance
matrix, but each eigenvalue increases by the noise power. Let *v _{i}* be
one of the original noise-free signal space eigenvectors. Then

$${R}_{x}{v}_{i}=A{R}_{s}{A}^{H}{v}_{i}+{\sigma}_{0}^{2}I{v}_{i}=\left({\lambda}_{i}+{\sigma}_{0}^{2}\right){v}_{i}$$

shows that the signal
space eigenvalues increase by *σ _{0}^{2}*.

The null subspace eigenvectors are also eigenvectors of *R _{x}*.
Let *u _{i}* be one of the null subspace eigenvectors. Then

$${R}_{x}{u}_{i}=A{R}_{s}{A}^{H}{u}_{i}+{\sigma}_{0}^{2}I{u}_{i}={\sigma}_{0}^{2}{u}_{i}$$

with eigenvalues of *σ _{0}^{2}* instead
of zero. The null subspace becomes the *noise subspace*.

MUSIC works by searching for all arrival vectors that are orthogonal to the noise subspace. To do the search, MUSIC constructs an arrival-angle-dependent power expression, called the MUSIC pseudospectrum, from the matrix *U _{n}* whose columns are the noise subspace eigenvectors:

$${P}_{MUSIC}(\overrightarrow{\varphi})=\frac{1}{{a}^{H}(\overrightarrow{\varphi}){U}_{n}{U}_{n}^{H}a(\overrightarrow{\varphi})}$$

When an arrival vector
is orthogonal to the noise subspace, the peaks of the pseudospectrum
are infinite. In practice, because there is noise, and because the
true covariance matrix is estimated by the sampled covariance matrix,
the arrival vectors are never exactly orthogonal to the noise subspace.
Then, the angles at which *P _{MUSIC}* has
finite peaks are the desired directions of arrival. Because the pseudospectrum
can have more peaks than there are sources, the algorithm requires
that you specify the number of sources, *D*, as an input.
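The pseudospectrum scan and peak picking can be sketched as follows. The two source angles, powers, and grid spacing are invented test values; with an exact covariance matrix the peaks land on the true directions:

```python
import numpy as np

def music_pseudospectrum(R, D, kd, phi_grid):
    """Evaluate P_MUSIC(phi) = 1 / ||Un^H a(phi)||^2 on a grid of broadside angles."""
    M = R.shape[0]
    _, V = np.linalg.eigh(R)
    Un = V[:, : M - D]                                       # noise subspace
    m = np.arange(M)[:, None]
    steer = np.exp(1j * kd * m * np.cos(phi_grid)[None, :])  # a(phi) on the grid
    return 1.0 / np.sum(np.abs(Un.conj().T @ steer) ** 2, axis=0)

# Exact covariance for two unit-power uncorrelated sources at 60 and 100 degrees
M, D, kd = 8, 2, np.pi
m = np.arange(M)[:, None]
A = np.exp(1j * kd * m * np.cos(np.deg2rad([60.0, 100.0]))[None, :])
R = A @ A.conj().T + 0.01 * np.eye(M)

phi_grid = np.deg2rad(np.linspace(0.0, 180.0, 1801))         # 0.1-degree grid
P = music_pseudospectrum(R, D, kd, phi_grid)

# Keep the D largest local maxima as the direction-of-arrival estimates
peaks = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
best = peaks[np.argsort(P[peaks])[-D:]]
doa = np.sort(np.rad2deg(phi_grid[best]))
```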

For a ULA, the denominator in the pseudospectrum is a polynomial
in $${e}^{ikd\mathrm{cos}\phi}$$,
and can therefore be treated as a polynomial over the complex plane. In this
case, you can use root-finding methods to solve for its roots, *z _{i}*.
These roots do not necessarily lie on the unit circle. However, Root-MUSIC
assumes that the roots closest to the unit circle locate the directions of arrival of the sources.
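A Root-MUSIC sketch for a ULA, again with invented test angles and a sampled covariance matrix. Here the polynomial coefficients are taken as the diagonal sums of *U _{n}U _{n}^{H}*, which is one common way to set up the rooting, not necessarily how any particular implementation does it:

```python
import numpy as np

rng = np.random.default_rng(2)

def root_music(R, D, kd):
    """Root-MUSIC sketch for a ULA, with z = exp(i*kd*cos(phi)) per element step."""
    M = R.shape[0]
    _, V = np.linalg.eigh(R)
    C = V[:, : M - D] @ V[:, : M - D].conj().T    # noise-subspace projector Un Un^H
    # a^H(phi) C a(phi) = sum_l trace(C, offset=l) z^l; after scaling by z^(M-1)
    # this is a degree-2(M-1) polynomial, listed here in descending powers of z.
    coeffs = np.array([np.trace(C, offset=l) for l in range(M - 1, -M, -1)])
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1.0]                  # one of each reciprocal pair
    roots = roots[np.argsort(1.0 - np.abs(roots))][:D]  # D roots closest to the circle
    return np.arccos(np.angle(roots) / kd)              # broadside angles in radians

# Sampled covariance for two sources at 60 and 100 degrees broadside, kd = pi
M, D, T, kd = 8, 2, 5000, np.pi
m = np.arange(M)[:, None]
A = np.exp(1j * kd * m * np.cos(np.deg2rad([60.0, 100.0]))[None, :])
s = (rng.standard_normal((D, T)) + 1j * rng.standard_normal((D, T))) / np.sqrt(2)
n = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))) / np.sqrt(2)
x = A @ s + n
R = (x @ x.conj().T) / T

doa = np.sort(np.rad2deg(root_music(R, D, kd)))
```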

When some of the *D* source signals are correlated, *R _{s}* is
rank deficient, meaning that it has fewer than *D* nonzero eigenvalues. The signal subspace then spans fewer than *D* dimensions, and MUSIC can fail to identify all the sources.

*Spatial smoothing* takes advantage of
the translation properties of a uniform array. Consider two correlated
signals arriving at an *L*-element ULA. The source
covariance matrix, *R _{s}*, is
a singular 2-by-2 matrix. The arrival vector matrix is an *L*-by-2 matrix

$$\begin{array}{l}{A}_{1}=\left[\begin{array}{c}1\\ {e}^{ikd\mathrm{cos}{\phi}_{1}}\\ \vdots \\ {e}^{i(L-1)kd\mathrm{cos}{\phi}_{1}}\end{array}\begin{array}{c}1\\ {e}^{ikd\mathrm{cos}{\phi}_{2}}\\ \vdots \\ {e}^{i(L-1)kd\mathrm{cos}{\phi}_{2}}\end{array}\right]=\left[a\left({\phi}_{1}\right)|a\left({\phi}_{2}\right)\right]\\ \end{array}$$

for signals arriving
from the broadside angles *φ _{1}* and *φ _{2}*.

You can create a second array by translating the first array
along its axis by one element distance, *d*. The
arrival matrix for the second array is

$${A}_{2}=\left[\begin{array}{c}{e}^{ikd\mathrm{cos}{\phi}_{1}}\\ {e}^{i2kd\mathrm{cos}{\phi}_{1}}\\ \vdots \\ {e}^{iLkd\mathrm{cos}{\phi}_{1}}\end{array}\begin{array}{c}{e}^{ikd\mathrm{cos}{\phi}_{2}}\\ {e}^{i2kd\mathrm{cos}{\phi}_{2}}\\ \vdots \\ {e}^{iLkd\mathrm{cos}{\phi}_{2}}\end{array}\right]=\left[{e}^{ikd\mathrm{cos}{\phi}_{1}}a\left({\phi}_{1}\right)|{e}^{ikd\mathrm{cos}{\phi}_{2}}a\left({\phi}_{2}\right)\right]$$

where the arrival vectors
are equal to the original arrival vectors but multiplied by a direction-dependent
phase shift. When you translate the original array *J – 1* more
times, you get *J* copies of the array. If you form
a single array from all these copies, then the length of the single
array is *M = L + (J – 1)*.

In practice, you start with an *M*-element
array and form *J* overlapping subarrays. The number
of elements in each subarray is *L = M – J + 1*.
The following figure shows the relationship between the overall length
of the array, *M*, the number of subarrays, *J*,
and the length of each subarray, *L*.
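A forward spatial smoothing sketch, with assumed subarray length, angles, and powers. Two coherent sources make the unsmoothed source covariance rank 1; averaging the subarray covariance matrices restores a second dominant eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(3)

def spatially_smoothed_covariance(x, L):
    """Average the L-by-L sample covariance matrices of the J = M - L + 1
    overlapping length-L subarrays of an M-element ULA."""
    M, T = x.shape
    J = M - L + 1
    R = np.zeros((L, L), dtype=complex)
    for p in range(J):
        xp = x[p : p + L, :]              # snapshots seen by the p-th subarray
        R += (xp @ xp.conj().T) / T
    return R / J

# Two coherent sources: s2(t) = s1(t), so the source covariance matrix is rank 1
M, T, kd = 10, 20000, np.pi
m = np.arange(M)[:, None]
A = np.exp(1j * kd * m * np.cos(np.deg2rad([60.0, 100.0]))[None, :])
s1 = (rng.standard_normal(T) + 1j * rng.standard_normal(T)) / np.sqrt(2)
s = np.vstack([s1, s1])
x = A @ s + 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))) / np.sqrt(2)

R_smooth = spatially_smoothed_covariance(x, L=7)   # J = 4 subarrays
# After smoothing, R_smooth has two dominant eigenvalues (one per source) even
# though the sources are fully correlated.
```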

For the *p*th subarray, the source signal arrival
matrix is

$$\begin{array}{c}{A}_{p}=\left[{e}^{ik\left(p-1\right)d\mathrm{cos}{\phi}_{1}}a\left({\phi}_{1}\right)|{e}^{ik\left(p-1\right)d\mathrm{cos}{\phi}_{2}}a\left({\phi}_{2}\right)\right]\\ =\left[a\left({\phi}_{1}\right)|a\left({\phi}_{2}\right)\right]\left[\begin{array}{cc}{e}^{ik\left(p-1\right)d\mathrm{cos}{\phi}_{1}}& 0\\ 0& {e}^{ik\left(p-1\right)d\mathrm{cos}{\phi}_{2}}\end{array}\right]={A}_{1}{P}^{p-1}\\ P=\left[\begin{array}{cc}{e}^{ikd\mathrm{cos}{\phi}_{1}}& 0\\ 0& {e}^{ikd\mathrm{cos}{\phi}_{2}}\end{array}\right].\end{array}$$

The original arrival vector matrix is postmultiplied by a diagonal phase matrix.

The last step is averaging the signal covariance matrices over
all *J* subarrays to form the averaged signal covariance
matrix, *R ^{avg}_{s}*.
The averaged signal covariance matrix depends on the smoothed source
covariance matrix, *R^{smooth}*:

$$\begin{array}{l}{R}_{s}^{avg}={A}_{1}\left(\frac{1}{J}{\displaystyle \sum _{p=1}^{J}{P}^{p-1}{R}_{s}{\left({P}^{p-1}\right)}^{H}}\right){A}_{1}^{H}={A}_{1}{R}^{smooth}{A}_{1}^{H}\\ {R}^{smooth}=\frac{1}{J}{\displaystyle \sum _{p=1}^{J}{P}^{p-1}{R}_{s}{\left({P}^{p-1}\right)}^{H}}.\end{array}$$

You can show that the diagonal elements of the smoothed source covariance matrix are the same as the diagonal elements of the original source covariance matrix.

$${R}_{ii}^{smooth}=\frac{1}{J}{\displaystyle \sum _{p=1}^{J}{\left({P}^{p-1}\right)}_{ii}{\left({R}_{s}\right)}_{ii}{\left({P}^{p-1}\right)}_{ii}^{\ast}}=\frac{1}{J}{\displaystyle \sum _{p=1}^{J}{\left({R}_{s}\right)}_{ii}}={\left({R}_{s}\right)}_{ii}$$

However, the off-diagonal elements are reduced. The magnitude of the reduction
factor is the beam pattern of a *J*-element array.

$${R}_{ij}^{smooth}=\frac{1}{J}{\displaystyle \sum _{p=1}^{J}{e}^{ikd(p-1)\left(\mathrm{cos}{\phi}_{1}-\mathrm{cos}{\phi}_{2}\right)}}{\left({R}_{s}\right)}_{ij}=\frac{1}{J}{e}^{ikd\frac{J-1}{2}\left(\mathrm{cos}{\phi}_{1}-\mathrm{cos}{\phi}_{2}\right)}\frac{\mathrm{sin}\left(\frac{kdJ}{2}\left(\mathrm{cos}{\phi}_{1}-\mathrm{cos}{\phi}_{2}\right)\right)}{\mathrm{sin}\left(\frac{kd}{2}\left(\mathrm{cos}{\phi}_{1}-\mathrm{cos}{\phi}_{2}\right)\right)}{\left({R}_{s}\right)}_{ij}$$

In summary, you can reduce the degrading effect of source correlation by forming subarrays and using the smoothed covariance matrix as input to the MUSIC algorithm. Because of the beam pattern, larger angular separation of sources leads to reduced correlation.
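The correlation-reduction factor can be checked numerically. The closed form below is the standard geometric-series (Dirichlet kernel) identity for the average of the subarray phase factors, evaluated at assumed angles and half-wavelength spacing:

```python
import numpy as np

kd, J = np.pi, 4                                   # half-wavelength spacing, 4 subarrays
phi1, phi2 = np.deg2rad(60.0), np.deg2rad(100.0)   # assumed source angles
alpha = kd * (np.cos(phi1) - np.cos(phi2))

# Direct average of the subarray phase factors, (1/J) * sum_p exp(i*(p-1)*alpha)
direct = np.mean(np.exp(1j * alpha * np.arange(J)))

# Geometric-series closed form: its magnitude is the J-element array beam pattern
closed = np.exp(1j * alpha * (J - 1) / 2) * np.sin(J * alpha / 2) / (J * np.sin(alpha / 2))
```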

Spatial smoothing for linear arrays is easily extended to 2D and 3D uniform arrays.