Get Started with Hyperspectral and Multispectral Image Processing
Spectral imaging is a powerful technique used to capture and analyze the spectral information of objects across various wavelengths, playing crucial roles in fields such as remote sensing, medical imaging, and industrial applications.
Multispectral imaging gained prominence in the early 20th century as a method to enhance aerial surveys by capturing images in multiple spectral bands. NASA significantly advanced this approach with the Landsat program in the 1970s, which revolutionized Earth observation. However, the need for more detailed spectral information led to the development of hyperspectral imaging in the 1980s and 1990s. Hyperspectral imaging allows for data capture across hundreds of narrow, contiguous bands, providing precise material identification essential for applications like mineral exploration and environmental monitoring.
Hyperspectral imaging captures images across hundreds of narrow bands (400–2500 nm) with high spectral resolution (5–10 nm), making it ideal for applications that require precise material identification, such as mineralogy, environmental monitoring, non-destructive testing, and medical imaging. In contrast, multispectral imaging captures fewer, broader bands (typically less than 10) with bandwidths of 70–400 nm, efficiently supporting applications like land cover classification and agricultural monitoring. Both techniques enable enhanced analysis and decision-making based on spectral data, each suited to specific applications.
Spectral image processing involves representing, analyzing, and interpreting the information contained in spectral images.
Representation
You can read hyperspectral and multispectral images from files into the workspace using functionality from the Hyperspectral Imaging Library for Image Processing Toolbox™. The toolbox represents hyperspectral images as data cubes, and multispectral images as series of band images.
Representing Hyperspectral Data
Hyperspectral imaging sensors store the values they measure to data files. For each data file, an associated header file contains ancillary information (metadata) like the sensor parameters, acquisition settings, spatial dimensions, spectral wavelengths, and encoding formats that are required for proper representation of the values in the data file. Alternatively, the ancillary information can also be directly added to the data file as in multipage TIFF and NITF file formats.
Hyperspectral image processing arranges the values read from the data file into a three-dimensional (3-D) array of the form M-by-N-by-C, where M and N are the spatial dimensions of the acquired data, and C is the spectral dimension specifying the number of spectral wavelengths (bands) used during acquisition. Thus, you can think of the 3-D array as a set of two-dimensional (2-D) monochromatic images captured at various wavelengths. This set is known as the data cube.
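The M-by-N-by-C arrangement can be sketched in a few lines. This is a language-agnostic NumPy illustration with synthetic data, not the toolbox representation itself; the dimensions and band index are arbitrary choices for the example.

```python
import numpy as np

# Hypothetical 3-D data cube: 100-by-120 pixels, 50 spectral bands (M-by-N-by-C).
M, N, C = 100, 120, 50
cube = np.random.rand(M, N, C)

# A single band is a 2-D monochromatic image captured at one wavelength.
band_image = cube[:, :, 10]        # shape (M, N)

# A single pixel is a C-element vector of intensities: its pixel spectrum.
pixel_spectrum = cube[40, 60, :]   # shape (C,)

print(band_image.shape, pixel_spectrum.shape)
```

Slicing along the third dimension yields band images; slicing along the first two yields pixel spectra, which later sections use for unmixing and matching.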
The Hyperspectral Imaging Library for Image Processing
Toolbox enables you to represent hyperspectral data and its metadata as a
hypercube object, which you can create by using the imhypercube and geohypercube functions. While the
imhypercube function does not store any geospatial
information in the hypercube object, the
geohypercube function stores geospatial information in
the Metadata property of the hypercube
object.
You can use the hypercube object to analyze the associated
hyperspectral data using other functions in the Hyperspectral Imaging Library for Image Processing
Toolbox. For more information on how to analyze hyperspectral images, see
Analyze Hyperspectral and Multispectral Images.
You can use the Hyperspectral Viewer app to interactively visualize and process
hyperspectral images directly from files or from hypercube
objects. For an example of how to visualize hyperspectral images in the
Hyperspectral Viewer app, see Explore Hyperspectral and Multispectral Data in the Hyperspectral Viewer.
Representing Multispectral Data
Multispectral imaging sensors store the values they measure to data files. For each data file, an associated header file contains ancillary information (metadata) like the sensor parameters, acquisition settings, spatial dimensions, spectral wavelengths, and spatial resolution of each band. Alternatively, the ancillary information can also be directly added to the data file as in multipage TIFF, HDF, and SAFE file formats.
Multispectral image processing arranges the values read from the data file into a series of C two-dimensional (2-D) matrices, where C is the spectral dimension that specifies the number of spectral wavelengths (bands) used during acquisition. Each 2-D band image represents the image acquired at the corresponding wavelength. The band images can have different spatial resolutions, and can represent different sizes of geographic area. If all the band images of the multispectral image represent the same geographic area, you can process the multispectral data as a data cube of the form M-by-N-by-C, where M and N are the spatial dimensions of the acquired data, by resampling the band images to a uniform resolution.
The Hyperspectral Imaging Library for Image Processing
Toolbox enables you to represent multispectral data and its metadata as a
multicube object, which you can create by using the immulticube and geomulticube functions. While the
immulticube function does not store any geospatial
information in the multicube object, the
geomulticube function stores geospatial information in
the Metadata property of the multicube
object.
You can use the multicube object to analyze the associated
multispectral data using other functions in the Hyperspectral Imaging Library for Image Processing
Toolbox. For more information on how to analyze multispectral images, see
Analyze Hyperspectral and Multispectral Images.
You can use the Hyperspectral Viewer app to interactively visualize and process
multispectral images directly from files or from multicube
objects. For an example of how to visualize multispectral images in the
Hyperspectral Viewer app, see Explore Hyperspectral and Multispectral Data in the Hyperspectral Viewer.
Large Spectral Images
Processing hyperspectral and multispectral images with large spatial dimensions requires a large amount of system memory, and might cause your system
to run out of memory. Because creating a hypercube or
multicube object loads only the associated metadata into
the workspace, you can crop the spectral image to a small region of interest
before performing any operation that reads the spectral data into the workspace.
For examples of how to process small regions of large spectral images, see Process Large Hyperspectral and Multispectral Images.
Color Representation
You can apply a color scheme to represent a data cube as a 2-D image, which
enables you to visually inspect the object or region you are imaging. Use the
colorize function to compute color representations of a data
cube.
The RGB color scheme uses the red, green, and blue spectral band responses of a hyperspectral or multispectral image to generate a 2-D image of the data. The RGB color scheme has a natural appearance, but does not convey subtle spectral information.
As hyperspectral images can have a large number of bands across the spectrum, you can represent hyperspectral images using additional color schemes.
The false-color scheme uses a combination of any number of bands other than the visible red, green, and blue spectral bands. Use false-color representations to visualize the spectral responses of bands outside the visible spectrum. The false-color scheme efficiently captures distinct information across all spectral bands of your hyperspectral data.
The color-infrared (CIR) color scheme uses spectral bands in the near-infrared (NIR) range. The CIR representation of a hyperspectral data cube is particularly useful for displaying and analyzing vegetation areas of the data cube.
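The RGB and CIR schemes described above amount to selecting three bands and stretching them into display channels. The following NumPy sketch shows the idea with a synthetic cube; the band indices are made-up placeholders for whatever bands correspond to the chosen wavelengths in real data, and this is not the toolbox colorize function.

```python
import numpy as np

def composite(cube, band_indices, low=2, high=98):
    """Build a 2-D color composite from three bands of a data cube.

    Each band is contrast-stretched between its low/high percentiles
    so subtle variation remains visible after scaling to [0, 1].
    """
    channels = []
    for b in band_indices:
        band = cube[:, :, b].astype(float)
        lo, hi = np.percentile(band, [low, high])
        channels.append(np.clip((band - lo) / (hi - lo + 1e-12), 0, 1))
    return np.dstack(channels)

cube = np.random.rand(64, 64, 100)      # synthetic 100-band cube
rgb = composite(cube, (29, 19, 9))      # illustrative red, green, blue band indices
cir = composite(cube, (60, 29, 19))     # NIR, red, green -> color-infrared view
print(rgb.shape, cir.shape)
```

Swapping the index triple is all that distinguishes an RGB composite from a CIR or false-color one.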
Preprocessing
Hyperspectral imaging sensors typically have high spectral resolution and low spatial resolution, whereas multispectral imaging sensors typically have low spectral resolution and high spatial resolution. Spectral images can be distorted by factors such as sensor noise, low resolution, atmospheric effects, and sensor-induced spectral distortions.
You can use the denoiseNGMeet function to remove noise from hyperspectral data using
the non-local meets global approach.
To enhance the spatial resolution of hyperspectral data, you can use image fusion
methods. The fusion approach combines information from low-resolution hyperspectral
data with a high-resolution multispectral data set or a panchromatic image of the
same scene. This approach is also known as sharpening or
pansharpening in hyperspectral image analysis.
Pansharpening specifically refers to fusion between hyperspectral and panchromatic
data. You can use the sharpencnmf function to sharpen hyperspectral data using the coupled
nonnegative matrix factorization method.
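As a much simpler illustration of the fusion idea than coupled nonnegative matrix factorization, the classical Brovey transform rescales each spectral band by the ratio of the panchromatic image to the mean band intensity. This NumPy sketch assumes the low-resolution bands have already been resampled to the panchromatic grid; it is not the sharpencnmf method.

```python
import numpy as np

def brovey_sharpen(bands, pan):
    """Brovey-style pansharpening: inject the spatial detail of the
    panchromatic image into each band while approximately preserving
    the spectral shape at every pixel."""
    intensity = bands.mean(axis=2, keepdims=True)
    return bands * (pan[:, :, None] / (intensity + 1e-12))

# Synthetic bands, already resampled to the panchromatic grid.
bands = np.random.rand(32, 32, 8) + 0.5
pan = bands.mean(axis=2) * 1.2          # stand-in high-resolution pan image
sharp = brovey_sharpen(bands, pan)
print(sharp.shape)
```

Because the pan image here is a scaled band mean, each output band is simply the input scaled by 1.2, which makes the spectral-shape-preserving behavior easy to verify.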
To compensate for the atmospheric effects in a hyperspectral or multispectral image, you must first calibrate the pixel values, which are digital numbers (DNs). You must preprocess the data by calibrating DNs using radiometric and atmospheric correction methods. This process improves interpretation of the pixel spectra and provides better results when you analyze multiple data sets. In addition, spectral distortions that occur due to hyperspectral sensor characteristics during acquisition can lead to inaccuracies in the spectral signatures. To enhance the reliability of spectral data for further analysis, you must apply preprocessing techniques that significantly reduce spectral distortions in hyperspectral images. For information about hyperspectral and multispectral data correction methods, see Hyperspectral and Multispectral Data Correction.
Because the spectral bands of a multispectral image can have different spatial
resolutions, you might need to preprocess them by performing band
resampling, which consists of resampling the bands to a uniform
spatial resolution for further processing. Use the resampleBands function to resample the spectral bands of a
multispectral image.
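Band resampling can be sketched with plain nearest-neighbor index mapping, shown below in NumPy on synthetic bands. Production code would use a proper interpolation method tied to the map geometry; this is not the resampleBands function.

```python
import numpy as np

def resample_nearest(band, out_shape):
    """Nearest-neighbor resampling of a single band image to out_shape."""
    rows = np.arange(out_shape[0]) * band.shape[0] // out_shape[0]
    cols = np.arange(out_shape[1]) * band.shape[1] // out_shape[1]
    return band[np.ix_(rows, cols)]

# Two bands of the same scene at different spatial resolutions.
band_10m = np.random.rand(120, 120)
band_20m = np.random.rand(60, 60)

# Resample the coarser band so both align on a 120-by-120 grid, then stack.
cube = np.dstack([band_10m, resample_nearest(band_20m, (120, 120))])
print(cube.shape)
```

Once every band shares one grid, the series of 2-D band images can be stacked into an M-by-N-by-C data cube as described earlier.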
In hyperspectral data, the large number of bands increases the computational complexity of processing the data cube. Because the contiguous nature of the band images and high correlation between neighboring bands results in spectral redundancy, you can preprocess hyperspectral data to reduce its complexity by performing dimensionality reduction, which consists of removing the redundant bands by decorrelating the band images. Popular approaches for reducing the spectral dimensionality of a data cube include band selection and orthogonal transforms.
The band selection approach uses orthogonal space projections to find the spectrally distinct and most informative bands in the data cube. Use the
Use the selectBands and removeBands functions to find the most informative bands and to remove one or more bands, respectively.
Orthogonal transforms, such as principal component analysis (PCA) and maximum noise fraction (MNF), decorrelate the band information and find the principal component bands.
PCA transforms the data to a lower dimensional space and finds principal component vectors with their directions along the maximum variances of the input bands. The principal components are in descending order of the amount of total variance explained.
MNF computes the principal components that maximize the signal-to-noise ratio, rather than the variance. The MNF transform is particularly efficient at deriving principal components from noisy band images. The principal component bands are spectrally distinct bands with low interband correlation.
The hyperpca and hypermnf functions reduce the spectral dimensionality of the data cube by using the PCA and MNF transforms, respectively. You can use the pixel spectra derived from the reduced data cube for hyperspectral data analysis.
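The PCA step can be sketched directly in NumPy: reshape the cube to a pixels-by-bands matrix, center it, and project onto the top singular vectors. This is a generic illustration with synthetic data, not the hyperpca function.

```python
import numpy as np

def pca_reduce(cube, k):
    """Reduce an M-by-N-by-C data cube to its first k principal
    component bands."""
    M, N, C = cube.shape
    X = cube.reshape(-1, C).astype(float)
    X -= X.mean(axis=0)                      # center each band
    # Right singular vectors of the centered data are the principal axes.
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:k].T                    # project onto top-k axes
    explained = (s**2)[:k].sum() / (s**2).sum()
    return scores.reshape(M, N, k), explained

cube = np.random.rand(40, 40, 60)
reduced, frac = pca_reduce(cube, 10)
print(reduced.shape, round(frac, 3))
```

The singular values come out in descending order, so the component bands are ordered by explained variance, matching the description above.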
Spectral Unmixing
Each pixel in a hyperspectral image is a vector of values that specify the intensities at a spatial location (x, y) in C different bands. The vector is known as the pixel spectrum, and it defines the spectral signature of the pixel located at (x, y). Because hyperspectral images have a large number of spectral bands, the pixel spectra are important features in hyperspectral data analysis.
In a hyperspectral image, the intensity values recorded at each pixel specify the spectral characteristics of the region that the pixel belongs to. The region can be a homogeneous surface or heterogeneous surface. The pixels that belong to a homogeneous surface are known as pure pixels. These pure pixels constitute the endmembers of the hyperspectral data.
Heterogeneous surfaces are a combination of two or more distinct homogeneous surfaces. The pixels that belong to heterogeneous surfaces are known as mixed pixels. The spectral signature of a mixed pixel is a combination of two or more endmember signatures. This spatial heterogeneity is mainly due to the low spatial resolution of the hyperspectral sensor.
Spectral unmixing is the process of decomposing the spectral signatures of mixed pixels into their constituent endmembers. The spectral unmixing process involves these steps:
Endmember extraction — The spectra of the endmembers are prominent features in the hyperspectral data, and can be used for efficient spectral unmixing of hyperspectral images. Efficient approaches for endmember extraction include convex geometry based approaches, such as the pixel purity index (PPI), fast iterative pixel purity index (FIPPI), and N-finder (N-FINDR) methods.
Use the ppi function to estimate the endmembers by using the PPI approach. The PPI approach projects the pixel spectra to an orthogonal space and identifies the extrema pixels in the projected space as endmembers. This is a non-iterative approach, and the results depend on the random unit vectors generated for orthogonal projection. To improve results, you must increase the number of random unit vectors used for projection, which can be computationally expensive.
Use the fippi function to estimate the endmembers by using the FIPPI approach. The FIPPI approach is an iterative approach, which uses an automatic target generation process to estimate the initial set of unit vectors for orthogonal projection. The algorithm converges faster than the PPI approach and identifies endmembers that are distinct from one another.
Use the nfindr function to estimate the endmembers by using the N-FINDR method. N-FINDR is an iterative approach that constructs a simplex by using the pixel spectra. The approach assumes that the volume of a simplex formed by the endmembers is larger than the volume defined by any other combination of pixels. The set of pixel signatures for which the volume of the simplex is highest constitutes the endmembers.
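The PPI projection-and-count idea described above can be sketched in NumPy on synthetic data. This is a bare-bones illustration of the technique, not the toolbox ppi function; the skewer count and seed are arbitrary.

```python
import numpy as np

def ppi_counts(cube, n_skewers=500, seed=0):
    """Pixel purity index sketch: project every pixel spectrum onto random
    unit vectors ("skewers") and count how often each pixel is an extremum.
    Pixels with high counts are candidate endmembers."""
    M, N, C = cube.shape
    X = cube.reshape(-1, C)
    rng = np.random.default_rng(seed)
    skewers = rng.normal(size=(n_skewers, C))
    skewers /= np.linalg.norm(skewers, axis=1, keepdims=True)
    proj = X @ skewers.T                     # (pixels, n_skewers)
    counts = np.zeros(X.shape[0], dtype=int)
    np.add.at(counts, proj.argmax(axis=0), 1)  # one max hit per skewer
    np.add.at(counts, proj.argmin(axis=0), 1)  # one min hit per skewer
    return counts.reshape(M, N)

cube = np.random.rand(30, 30, 20)
purity = ppi_counts(cube)
print(purity.sum())   # each skewer contributes exactly one max and one min
```

Thresholding the count image keeps only the pixels selected as extrema many times, which is the candidate endmember set.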
Abundance map estimation — Given the endmember signatures, you can estimate the fractional amount of each endmember present in each pixel. You can generate an abundance map for each endmember, which represents the distribution of that endmember in the image. You can label a pixel as belonging to an endmember by comparing all of the abundance map values obtained for that pixel.
Use the estimateAbundanceLS function to estimate the abundance map for each endmember spectrum.
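Least-squares abundance estimation can be sketched in NumPy as follows. This is a simplified unconstrained solve with clipping and renormalization, not the fully constrained estimateAbundanceLS function; the two endmember signatures are synthetic.

```python
import numpy as np

def estimate_abundances(cube, endmembers):
    """Unmix each pixel spectrum as a nonnegative, sum-to-one combination
    of the columns of endmembers (a C-by-P matrix of P signatures)."""
    M, N, C = cube.shape
    X = cube.reshape(-1, C).T                           # C-by-pixels
    A, *_ = np.linalg.lstsq(endmembers, X, rcond=None)  # P-by-pixels
    A = np.clip(A, 0, None)                             # nonnegativity
    A /= A.sum(axis=0, keepdims=True) + 1e-12           # sum-to-one
    return A.T.reshape(M, N, -1)                        # one map per endmember

# Synthetic scene mixed from two known signatures (C = 4 bands, P = 2).
em = np.array([[1.0, 0.2], [0.8, 0.4], [0.1, 0.9], [0.0, 1.0]])
weights = np.random.dirichlet([1, 1], size=(16, 16))    # true fractions
cube = weights @ em.T                                   # mixed pixels
maps = estimate_abundances(cube, em)
print(np.allclose(maps, weights, atol=1e-6))
```

Because the synthetic pixels are exact linear mixtures, the recovered abundance maps match the true fractions, illustrating the decomposition of mixed pixels into endmember contributions.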
Spectral Matching and Target Detection
Spectral matching identifies the class of an endmember
material by comparing its spectra with one or more reference spectra. You can also
use spectral matching for material identification or target
detection in a hyperspectral image when the spectral signature of
the target is distinct from other regions in the hyperspectral image. Interpret the
pixel spectra of a hyperspectral image by performing spectral matching using the
spectralMatch function. To identify the class of an endmember
material in a hyperspectral image using spectral matching, use reference data that
consists of pure spectral signatures of materials, available as spectral libraries.
Use the readEcostressSig function to read the reference spectra files from
the ECOSTRESS spectral library. Then, you can compute the similarity between the
spectra in the ECOSTRESS library files and the spectra of an endmember material by
using the spectralMatch function.
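One widely used matching score is the spectral angle, sketched below in NumPy. The spectra here are made-up four-band vectors, not ECOSTRESS library entries, and this is an illustration of the spectral angle mapper idea rather than the spectralMatch function.

```python
import numpy as np

def spectral_angle(s, r):
    """Spectral angle (radians) between a pixel spectrum s and a reference
    spectrum r; smaller angles indicate better matches."""
    cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

reference = np.array([0.1, 0.3, 0.6, 0.8])
scaled = 2.5 * reference                 # same material, brighter illumination
other = np.array([0.8, 0.6, 0.3, 0.1])   # a spectrally different material

print(spectral_angle(scaled, reference))  # near 0: insensitive to scaling
print(spectral_angle(other, reference) > spectral_angle(scaled, reference))
```

The angle depends only on spectral shape, not overall brightness, which is why angle-based scores are robust to illumination differences between the scene and the library spectra.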
However, when the spectral contrast between the target and other regions is low,
or when the image contains a smaller number of broader bands, as in multispectral
images, spectral matching becomes more challenging. In such cases, you must use more
sophisticated target detection algorithms that consider the entire hyperspectral or
multispectral data cube and use statistical or machine learning methods. To perform
target detection in hyperspectral and multispectral images using such statistical
methods, use the detectTarget function.
For more information on spectral matching and target detection techniques, see Spectral Matching and Target Detection Techniques.
Spectral Indices, Segmentation, and Labeling
A spectral index is a function, such as a ratio or
difference, of two or more spectral bands. Spectral indices delineate and identify
different regions in an image based on their spectral properties. By calculating a
spectral index for each pixel, you can transform the hyperspectral or multispectral
data into a single-band image where the index values indicate the presence and
concentration of a feature of interest. You can also use the spectral indices for
change detection and threshold-based segmentation of hyperspectral images. Use the
spectralIndices and customSpectralIndex functions to identify different regions in the
spectral image. Preprocess the spectral image to set any negative pixel values,
which can occur because of sensor issues, to zero. For more information on spectral
indices, see Spectral Indices.
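A ratio-type index such as the normalized difference vegetation index (NDVI) makes the band-combination idea concrete. This NumPy sketch uses tiny synthetic NIR and red bands and applies the negative-value preprocessing mentioned above; it is not the spectralIndices function.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index, (NIR - Red) / (NIR + Red).
    Negative sensor values are set to zero before the computation."""
    nir = np.clip(nir.astype(float), 0, None)
    red = np.clip(red.astype(float), 0, None)
    return (nir - red) / (nir + red + 1e-12)

nir = np.array([[0.6, 0.5], [0.1, -0.2]])   # -0.2 simulates a sensor artifact
red = np.array([[0.1, 0.1], [0.4, 0.3]])
index = ndvi(nir, red)
print(index.round(2))   # high values where NIR reflectance dominates
```

The result is a single-band image whose values indicate the presence of the feature of interest, which you can threshold for segmentation or compare across dates for change detection.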
To segment regions that you cannot distinguish using spectral indices, you can
use approaches such as simple linear iterative clustering (SLIC) by using the
hyperslic function for hyperspectral images, ISODATA clustering by
using the imsegisodata function for hyperspectral and multispectral images,
and anchor graphs by using the hyperseganchor function for hyperspectral and multispectral
images.
You can also use deep learning models to segment hyperspectral and multispectral images for applications such as land use land cover (LULC) classification. To train deep learning models and other models for supervised classification, you can generate ground truth using the Spectral Image Labeler app. You can manually label regions in a spectral image in the app or use techniques such as thresholded spectral indices, SLIC, ISODATA clustering, and anchor graphs for automated and semi-automated labeling. You can also define your own automation algorithm in the app. For more information on the Spectral Image Labeler app, see Get Started with Spectral Image Labeler.
Applications
Applications of hyperspectral and multispectral image processing include land cover classification, material analysis, target detection, change detection, visual inspection, and medical image analysis.
Classify land cover by classifying each pixel in a hyperspectral image. For examples of classification, see these examples.
Segment a hyperspectral image using the Segment Anything Model. For an example, see Interactively Segment Hyperspectral Image Using Segment Anything Model.
Identify materials in a hyperspectral image using a spectral library. For an example, see Endmember Material Identification Using Spectral Library.
Perform target detection by matching the known spectral signature of a target material to the pixel spectra in hyperspectral data. For examples of target detection, see Target Detection Using Spectral Signature Matching and Ship Detection from Sentinel-1 C Band SAR Data Using YOLOX Object Detection.
Detect changes in hyperspectral images over time. For examples of change detection, see Change Detection in Hyperspectral Images and Map Flood Areas Using Sentinel-1 SAR Imagery.
Perform visual inspection and nondestructive testing operations, such as monitoring the maturity of fruit. The comprehensive spectral data available in hyperspectral images enables precise and nondestructive analysis. For an example, see Predict Sugar Content in Grape Berries Using PLS Regression on Hyperspectral Data.
Analyze hyperspectral medical images. For an example, see Segment Spleen in Hyperspectral Image of Porcine Tissue.
Create a digital twin of a scene using hyperspectral and lidar data. For an example, see Generate RoadRunner Scene Using Aerial Hyperspectral and Lidar Data (Automated Driving Toolbox).
See Also
hypercube | multicube | Hyperspectral Viewer | Spectral Image Labeler