detect

Detect people using pretrained deep learning-based people detector

Since R2024b

    Description

    bboxes = detect(detector,I) detects people within a single image or a batch of images, I, using a pretrained deep learning-based people detector, detector. The detect function returns the locations of detected people in the input image as a set of bounding boxes.

    Note

    This functionality requires Deep Learning Toolbox™ and the Computer Vision Toolbox™ Model for RTMDet Object Detection. You can install the Computer Vision Toolbox Model for RTMDet Object Detection from Add-On Explorer. For more information about installing add-ons, see Get and Manage Add-Ons.

    [bboxes,scores] = detect(detector,I) returns the confidence scores for the bounding boxes along with their locations.

    [bboxes,scores,labels] = detect(detector,I) also returns a categorical array of person labels assigned to the bounding boxes.

    detectionResults = detect(detector,ds) returns a table of predicted people for all the images in the input datastore ds.

    [___] = detect(___,roi) detects people within the rectangular region of interest roi, in addition to any combination of arguments from previous syntaxes.

    [___] = detect(___,Name=Value) specifies options using one or more name-value arguments. For example, Threshold=0.75 specifies a detection threshold of 0.75.

    Examples

    Detect People Within Region of Interest

    Load the default pretrained people detector model.

    detector = peopleDetector;

    Read a test image into the workspace.

    I = imread("boats.png");

    Specify a rectangular region of interest (ROI) that encloses the first boat within the test image.

    roiBox = [65 50 295 590];

    Detect people on the first boat in the image by using a Threshold value of 0.45.

    [bboxes,scores] = detect(detector,I,roiBox,Threshold=0.45);

    Annotate the detected people with bounding boxes and their detection scores.

    img = insertObjectAnnotation(I,"rectangle",roiBox,"ROI");
    detectedImg = insertObjectAnnotation(img,"rectangle",bboxes,scores);
    figure
    imshow(detectedImg)

    Input Arguments

    detector — People detector

    People detector, specified as a peopleDetector object.

    I — Test images

    Test images, specified as one of these values:

    • An H-by-W numeric matrix for a grayscale image.

    • An H-by-W-by-3 numeric array for an RGB image.

    • An H-by-W-by-C-by-T numeric array for a batch of test images.

    H and W are the height and width of the images, respectively. C is the number of color channels: 1 for grayscale images and 3 for RGB images. T is the number of images in the batch.

    When the test image size does not match the network input size, the detector resizes the input image to the value of the InputSize property of detector, unless you specify AutoResize as false.

    The detector is sensitive to the range of the test image. Therefore, ensure that the test image range is similar to the range of the images used to train the detector. For example, if the detector was trained on uint8 images, rescale the test image to the range [0, 255] by using the im2uint8 or rescale function.
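    For example, this sketch converts a test image to the uint8 range before detection. The file name testImage.png is a hypothetical placeholder for a 16-bit test image.

    I = imread("testImage.png");   % hypothetical 16-bit test image
    I = im2uint8(I);               % rescale intensities to the range [0, 255]
    bboxes = detect(detector,I);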

    Data Types: uint8 | uint16 | int16 | double | single

    ds — Datastore of test images

    Datastore of test images, specified as an ImageDatastore object, CombinedDatastore object, or TransformedDatastore object containing the full filenames of the test images. The images in the datastore must be grayscale or RGB images.
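    For example, a minimal sketch that runs the detector over a datastore. The folder name testImages is a hypothetical placeholder.

    ds = imageDatastore("testImages");        % hypothetical folder of test images
    detectionResults = detect(detector,ds);   % table with bboxes, scores, and labels columns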

    roi — Region of interest to search

    Region of interest (ROI) to search, specified as a vector of the form [x y width height]. The vector specifies the upper-left corner and size of a region, in pixels. If the input data is a datastore, the detect function applies the same ROI to every image in the datastore.

    Note

    To specify the ROI to search, the AutoResize value must be true, enabling the function to automatically resize the input test images to the network input size.

    Name-Value Arguments

    Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

    Example: detect(detector,I,Threshold=0.5) specifies a detection threshold of 0.5.

    Threshold — Detection threshold

    Detection threshold, specified as a scalar in the range [0, 1]. The function removes detections that have scores less than this threshold value. To reduce false positives, at the possible expense of missing some detections, increase this value.

    SelectStrongest — Strongest bounding box selection

    Strongest bounding box selection, specified as a numeric or logical 1 (true) or 0 (false).

    • true — Returns only the strongest bounding box for each detected person. The detect function calls the selectStrongestBboxMulticlass function, which uses nonmaximal suppression to eliminate overlapping bounding boxes based on their confidence scores.

      By default, the detect function uses this call to the selectStrongestBboxMulticlass function.

       selectStrongestBboxMulticlass(bboxes,scores, ...
                                     RatioType="Union", ...
                                     OverlapThreshold=0.45);

    • false — Returns all detected bounding boxes. You can write a custom function to eliminate overlapping bounding boxes, as in the sketch after this list.
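    For example, this sketch retrieves all raw detections and then applies a custom overlap criterion using the selectStrongestBbox function. The RatioType and OverlapThreshold values are illustrative, not recommendations.

    [bboxes,scores] = detect(detector,I,SelectStrongest=false);
    [bboxes,scores] = selectStrongestBbox(bboxes,scores, ...
                                          RatioType="Min", ...
                                          OverlapThreshold=0.3);   % illustrative values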

    MinSize — Minimum region size

    Minimum region size containing a person, specified as a vector of the form [height width]. Units are in pixels. The minimum region size defines the size of the smallest person that the trained network can detect. When you know the minimum size, you can reduce computation time by setting MinSize to that value.

    MaxSize — Maximum region size

    Maximum region size, specified as a vector of the form [height width]. Units are in pixels. The maximum region size defines the size of the largest person that the trained network can detect.

    By default, MaxSize is set to the height and width of the input image I. To reduce computation time, set this value to the known maximum region size in which to detect people in the input test image.
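    For example, this sketch restricts detections to regions between 50-by-20 and 300-by-120 pixels. The sizes are illustrative values.

    [bboxes,scores] = detect(detector,I, ...
                             MinSize=[50 20],MaxSize=[300 120]);   % [height width], in pixels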

    MiniBatchSize — Minibatch size

    Minibatch size, specified as a positive integer. Adjust the MiniBatchSize value to help process a large collection of images. The detect function groups images into minibatches of the specified size and processes them as a batch, which can improve computational efficiency at the cost of increased memory demand. Increase the minibatch size to decrease processing time. Decrease the minibatch size to use less memory.
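    For example, a sketch that processes a datastore of test images, ds, in minibatches of 8 images. The minibatch size is an illustrative value.

    detectionResults = detect(detector,ds,MiniBatchSize=8);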

    AutoResize — Automatic resizing of input images

    Automatic resizing of input images to preserve the aspect ratio, specified as a numeric or logical 1 (true) or 0 (false). When AutoResize is true, the detect function resizes images to the nearest InputSize dimension while preserving the aspect ratio. Set AutoResize to false when performing image tiling-based inference, or inference at the full test image size.
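    For example, this sketch runs inference at the full test image size, with no automatic resizing.

    [bboxes,scores] = detect(detector,I,AutoResize=false);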

    ExecutionEnvironment — Hardware resource

    Hardware resource on which to run the detector, specified as one of these values:

    • "auto" — Use a GPU if Parallel Computing Toolbox™ is installed and a supported GPU device is available. Otherwise, use the CPU.

    • "gpu" — Use the GPU. To use a GPU, you must have Parallel Computing Toolbox and a CUDA® enabled NVIDIA® GPU. If a suitable GPU is not available, the function returns an error. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox).

    • "cpu" — Use the CPU.

    Acceleration — Performance optimization

    Performance optimization, specified as one of these options:

    • "auto" — Automatically apply a number of compatible optimizations suitable for the input network and hardware resource.

    • "mex" — Compile and execute a MEX function. This option is available only when using a GPU. Using a GPU requires Parallel Computing Toolbox and a CUDA-enabled NVIDIA GPU. If Parallel Computing Toolbox or a suitable GPU is not available, then the detect function returns an error. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox).

    • "none" — Do not use acceleration.

    Using the Acceleration options "auto" and "mex" can offer performance benefits on subsequent calls with compatible parameters, at the expense of an increased initial run time. Use performance optimization when you plan to call the function multiple times using new input data.

    The "mex" option generates and executes a MEX function based on the network and parameters used in the function call. You can have several MEX functions associated with a single network at one time. Clearing the network variable also clears any MEX functions associated with that network.

    The "mex" option is available only for input data specified as a numeric array, cell array of numeric arrays, table, or image datastore. No other types of datastore support the "mex" option.

    The "mex" option is available only when you are using a GPU. You must also have a C/C++ compiler installed. For setup instructions, see MEX Setup (GPU Coder).

    "mex" acceleration does not support all layers. For a list of supported layers, see Supported Layers (GPU Coder).

    Output Arguments

    bboxes — Locations of detected people

    Locations of people detected within the input image or images, returned as one of these options:

    • M-by-4 matrix — Returned when the input is a single test image. M is the number of bounding boxes detected in an image. Each row of the matrix is of the form [x y width height]. The x and y values specify the upper-left corner coordinates, and width and height specify the size, of the corresponding bounding box, in pixels.

    • B-by-1 cell array — Returned when the input is a batch of images, where B is the number of test images in the batch. Each cell in the array contains an M-by-4 matrix specifying the bounding boxes detected within the corresponding image.

    scores — Detection confidence scores

    Detection confidence scores for each bounding box in the range [0, 1], returned as one of these options:

    • M-by-1 numeric vector — Returned when the input is a single test image. M is the number of bounding boxes detected in the image.

    • B-by-1 cell array — Returned when the input is a batch of test images, where B is the number of test images in the batch. Each cell in the array contains an M-element row vector, where each element indicates the detection score for a bounding box in the corresponding image.

    labels — Labels for bounding boxes

    Labels for bounding boxes, returned as one of these options:

    • M-by-1 categorical vector — Returned when the input is a single test image. M is the number of bounding boxes detected in the image.

    • B-by-1 cell array — Returned when the input is a batch of test images. B is the number of test images in the batch. Each cell in the array contains an M-by-1 categorical vector of class names.
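    For example, a sketch that unpacks the cell array outputs returned for a batch of images. The variable batchImages, an H-by-W-by-3-by-T numeric array, is a hypothetical placeholder.

    [bboxes,scores,labels] = detect(detector,batchImages);
    for t = 1:numel(bboxes)
        fprintf("Image %d: %d detections\n",t,size(bboxes{t},1));
    end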

    detectionResults — Detection results

    Detection results when the input is a datastore of test images, ds, returned as a table with the columns bboxes, scores, and labels:

    • bboxes — Predicted bounding boxes, defined in spatial coordinates as an M-by-4 numeric matrix with rows of the form [x y w h], where:

      • M is the number of axis-aligned rectangles.

      • x and y specify the upper-left corner coordinates of the rectangle, in pixels.

      • w specifies the width of the rectangle, which is its length along the x-axis, in pixels.

      • h specifies the height of the rectangle, which is its length along the y-axis, in pixels.

    • scores — Confidence scores of the person class for each bounding box, returned as an M-by-1 numeric vector with values in the range [0, 1].

    • labels — Predicted person class labels assigned to the bounding boxes, returned as an M-by-1 categorical vector.
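    For example, a sketch that computes evaluation metrics from the results table using the evaluateObjectDetection function. The variable gtruthDs, a datastore of ground truth boxes and labels, is a hypothetical placeholder.

    detectionResults = detect(detector,ds);
    metrics = evaluateObjectDetection(detectionResults,gtruthDs);   % average precision and related metrics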

    Version History

    Introduced in R2024b