maxi-w/CLIP-SAM


CLIP-SAM

Small experiment on combining CLIP with SAM to do open-vocabulary image segmentation.

The approach is to first segment all the parts of an image using SAM, and then use CLIP to find the masks that best match a given text description.
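The two-stage idea above can be sketched in a few lines. This is my own illustration, not the repo's notebook code: the SAM checkpoint filename (`sam_vit_h_4b8939.pth`) and the CLIP model (`ViT-B/32`) are assumptions, and `main.ipynb` may differ in details such as how crops are padded or thresholded.

```python
import numpy as np


def rank_masks(scores, top_k=1):
    """Indices of the top_k highest-scoring masks, best first."""
    order = np.argsort(np.asarray(scores))[::-1]
    return order[:top_k].tolist()


def segment_by_prompt(image_path, prompt, checkpoint="sam_vit_h_4b8939.pth"):
    # Heavy dependencies are imported lazily so rank_masks stays importable.
    import cv2
    import torch
    import clip
    from PIL import Image
    from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Stage 1: SAM proposes a mask for every part of the image.
    # NOTE: checkpoint filename is an assumption (standard ViT-H weights).
    sam = sam_model_registry["vit_h"](checkpoint=checkpoint).to(device)
    image = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    masks = SamAutomaticMaskGenerator(sam).generate(image)

    # Stage 2: CLIP scores each masked crop against the text prompt.
    model, preprocess = clip.load("ViT-B/32", device=device)
    scores = []
    with torch.no_grad():
        text = model.encode_text(clip.tokenize([prompt]).to(device))
        text = text / text.norm(dim=-1, keepdim=True)
        for m in masks:
            x, y, w, h = m["bbox"]  # SAM bboxes are in XYWH format
            crop = Image.fromarray(image[y:y + h, x:x + w])
            feat = model.encode_image(preprocess(crop).unsqueeze(0).to(device))
            feat = feat / feat.norm(dim=-1, keepdim=True)
            scores.append((feat @ text.T).item())  # cosine similarity

    best = rank_masks(scores, top_k=1)[0]
    return masks[best]["segmentation"]  # boolean mask of the best match
```

Scoring whole-bbox crops is the simplest option; blacking out pixels outside the mask before encoding is a common refinement.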

Usage

  1. Download the SAM model weights and place them in this repo's root.

  2. Install dependencies:

    pip install torch opencv-python Pillow
    pip install git+https://github.com/openai/CLIP.git
    pip install git+https://github.com/facebookresearch/segment-anything.git

  3. Run the notebook main.ipynb.

Example

Example output for the prompt "kiwi":

Image with segmentation

Example Image Source
