Test-Time Training for Semantic Segmentation with Output Contrastive Loss

This is the PyTorch implementation of our paper "Test-Time Training for Semantic Segmentation with Output Contrastive Loss". The code builds on the MaxSquare codebase and uses its pretrained checkpoints.

Requirements

Dataset

  • Download Cityscapes, which contains 5,000 annotated images at 2048 × 1024 resolution taken from real urban street scenes. We use its validation set of 500 images.

Checkpoints

  • Download the checkpoint pretrained on the GTA5 -> Cityscapes task and place it in the checkpoints folder.
  • Download the checkpoint pretrained on the SYNTHIA -> Cityscapes task and place it in the checkpoints folder.

Usage

Baseline

python evaluate.py --pretrained_ckpt_file ./checkpoints/GTA5_source.pth --gpu 1 --method baseline --prior 0.0 --flip
python evaluate.py --pretrained_ckpt_file ./checkpoints/synthia_source.pth --gpu 1 --method baseline --prior 0.0 --flip
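
The --flip flag most likely corresponds to flip test-time augmentation. As a hedged illustration only (not code taken from this repository), the sketch below averages the logits of an image and its horizontally flipped copy before taking the per-pixel argmax; model is assumed to be any segmentation network that returns [B, C, H, W] logits.

import torch

@torch.no_grad()
def predict_with_flip(model, image):
    # Forward the image and its horizontally flipped copy, un-flip the second
    # prediction, and average the logits before the per-pixel argmax.
    model.eval()
    logits = model(image)                                   # [B, C, H, W]
    flipped_logits = model(torch.flip(image, dims=[3]))     # flip along width
    logits = (logits + torch.flip(flipped_logits, dims=[3])) / 2
    return logits.argmax(dim=1)                             # [B, H, W] label map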

Tri-TTT

python evaluate.py --pretrained_ckpt_file ./checkpoints/GTA5_source.pth --gpu 1 --method TTT --prior 0.85 --learning-rate 2e-5 --pos-coeff 3.0
python evaluate.py --pretrained_ckpt_file ./checkpoints/synthia_source.pth --gpu 1 --method TTT --prior 0.85 --learning-rate 1e-5 --pos-coeff 3.0
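
For intuition, the sketch below shows what one test-time adaptation step with a contrastive loss computed in the output (softmax) space could look like. It is an assumption-laden illustration, not the exact OCL objective from the paper: the prior, learning-rate, and pos-coeff flags above are mapped onto a confidence threshold, the optimizer step size, and a weight on the positive term of an InfoNCE-style loss, and the names ttt_step, model, and optimizer are hypothetical.

import torch
import torch.nn.functional as F

def ttt_step(model, optimizer, image, prior=0.85, pos_coeff=3.0,
             temperature=0.1, num_samples=1024):
    # One illustrative adaptation step on a single unlabeled test image.
    model.train()
    optimizer.zero_grad()

    logits = model(image)                                  # [B, C, H, W]
    probs = F.softmax(logits, dim=1)                       # outputs used as embeddings
    flat = probs.permute(0, 2, 3, 1).reshape(-1, probs.size(1))

    # Keep only pixels whose confidence exceeds the prior; use their
    # argmax as pseudo-labels.
    conf, pseudo = flat.max(dim=1)
    keep = conf > prior
    if keep.sum() < 2:
        return None                                        # nothing confident enough to adapt on
    flat, pseudo = flat[keep], pseudo[keep]

    # Subsample pixels so the pairwise similarity matrix stays small.
    idx = torch.randperm(flat.size(0), device=flat.device)[:num_samples]
    z = F.normalize(flat[idx], dim=1)
    y = pseudo[idx]

    # InfoNCE-style loss over output vectors: same pseudo-class pairs are
    # positives (weighted by pos_coeff), all other pairs act as negatives.
    sim = z @ z.t() / temperature
    eye = torch.eye(sim.size(0), device=sim.device, dtype=torch.bool)
    pos_mask = (y.unsqueeze(0) == y.unsqueeze(1)) & ~eye
    log_prob = sim - torch.logsumexp(sim.masked_fill(eye, float('-inf')), dim=1, keepdim=True)
    denom = pos_mask.sum(dim=1).clamp(min=1)
    loss = -pos_coeff * ((pos_mask * log_prob).sum(dim=1) / denom).mean()

    loss.backward()
    optimizer.step()
    return loss.item()

In this reading, the commands above would call such a step on each test batch with the stated learning rate (e.g. 2e-5 for the GTA5 model) before producing the final prediction.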

Results

We report several of the transfer results from our paper.

GTA5 -> Cityscapes

Method    Source only    OCL
mIoU      37.5           45.0

SYNTHIA -> Cityscapes

Method    Source only    OCL
mIoU      31.5           36.9
