Sketch3T

Test-Time Training for Zero-Shot SBIR

Aneeshan Sain
Ayan Kumar Bhunia
Vaishnav Potlapalli
Pinaki Nath Chowdhury
Tao Xiang
Yi-Zhe Song

SketchX, Centre for Vision Speech and Signal Processing,
University of Surrey, United Kingdom

Published at CVPR 2022

[Paper]
[Talk]
[GitHub]

Framework



Zero-shot sketch-based image retrieval (ZS-SBIR) typically asks a trained model to be applied as-is to unseen categories. In this paper, we argue that this setup is by definition incompatible with the inherently abstract and subjective nature of sketches: the model might transfer well to new categories, but it will not understand sketches drawn from a different test-time distribution. We thus extend ZS-SBIR, asking models to transfer to both new categories and new sketch distributions. Our key contribution is a test-time training paradigm that can adapt using just one sketch. Since there is no paired photo at test time, we use a sketch raster-to-vector reconstruction module as a self-supervised auxiliary task. To preserve the fidelity of the trained cross-modal joint embedding during test-time updates, we design a novel meta-learning based training paradigm that separates model updates incurred by this auxiliary task from those of the primary discriminative objective. Extensive experiments show our model to outperform the state of the art, thanks to the proposed test-time adaptation that not only transfers to new categories but also accommodates new sketching styles.


Qualitative Results





Quantitative Results





Ablative Studies





Short Presentation



Paper and Bibtex

[Paper]

Citation
 
Sketch3T: Test-Time Training for Zero-Shot SBIR, In CVPR 2022.

[Bibtex]
@inproceedings{2022sainsketch3t,
author = {Aneeshan Sain and Ayan Kumar Bhunia and Vaishnav Potlapalli and Pinaki Nath Chowdhury and Tao Xiang and Yi-Zhe Song},
title = {Sketch3T: Test-Time Training for Zero-Shot SBIR},
booktitle = {CVPR},
year = {2022}
}
        

Acknowledgements


Website template from here and here.