Lecture 16: CLIP and its applications
Notes
Recording
Readings
- Learning Transferable Visual Models From Natural Language Supervision (a.k.a. CLIP), Radford et al., ICML 2021
- StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery, Patashnik et al., ICCV 2021
- StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators, Gal et al., SIGGRAPH 2022
- CLIP2StyleGAN: Unsupervised Extraction of StyleGAN Edit Directions, Abdal et al., SIGGRAPH 2022
- CLIPDraw: Exploring Text-to-Drawing Synthesis through Language-Image Encoders, Frans et al., NeurIPS 2022
- Blended Diffusion for Text-driven Editing of Natural Images, Avrahami et al., CVPR 2022
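The CLIP paper above trains an image encoder and a text encoder with a contrastive objective so that matching image-text pairs land close together in a shared embedding space. The sketch below is a minimal illustration of that idea, not course-provided code: it uses the Hugging Face transformers CLIP implementation with the publicly released "openai/clip-vit-base-patch32" checkpoint, and "cat.jpg" is a placeholder image path.

```python
# Minimal zero-shot scoring sketch with CLIP (assumes `transformers` and `Pillow` are installed).
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("cat.jpg")  # placeholder path, not from the course materials
prompts = ["a photo of a cat", "a photo of a dog"]

# Encode the image and the candidate captions, then compare them in the shared space.
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds the scaled image-text similarity scores; softmax turns
# them into a distribution over the candidate prompts.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(prompts, probs[0].tolist())))
```

The same image-text similarity score is what the editing papers listed above (StyleCLIP, StyleGAN-NADA, CLIPDraw, Blended Diffusion) use as a guidance or loss signal for generation and editing.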
Useful Links