Computer Vision Toolbox Model for OpenAI CLIP Network

The Contrastive Language-Image Pre-Training (CLIP) network is a vision-language model that can be used for joint image-text classification.


The CLIP network uses contrastive learning to encode image and text data into a shared feature space for joint classification. An image and a text description with high similarity lie close together in this feature space and receive a high CLIP score. This also enables image search from input text, and text search from an input image.
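The scoring idea above can be sketched numerically: both encoders map their input to unit-length feature vectors, and the CLIP score is the cosine similarity between them, so the caption with the highest score is the predicted class. This is a conceptual illustration only, not the toolbox API; the embedding values and captions below are made up for the example.

```python
import numpy as np

def normalize(v):
    """Scale a feature vector to unit length."""
    return v / np.linalg.norm(v)

# Made-up feature vectors standing in for CLIP encoder outputs.
image_feat = normalize(np.array([0.9, 0.1, 0.3]))
text_feats = {
    "a photo of a cat": normalize(np.array([0.8, 0.2, 0.4])),
    "a photo of a car": normalize(np.array([-0.5, 0.9, 0.1])),
}

# CLIP score: cosine similarity (dot product of unit vectors)
# between the image embedding and each text embedding.
scores = {caption: float(image_feat @ feat) for caption, feat in text_feats.items()}

# Joint classification: pick the caption closest to the image.
best = max(scores, key=scores.get)
```

Running text search from an input image would reuse the same scores, ranking all candidate captions instead of keeping only the best one.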


MATLAB Release Compatibility

  • Compatible with R2026a

Platform Compatibility

  • Windows
  • macOS (Apple Silicon)
  • macOS (Intel)
  • Linux