
  • Date: Thursday, 13 May 2021

    Vision and Robotics Seminar

    Time
    10:15 - 11:30
    Title
    Deep Internal Learning
    Lecturer
    Assaf Shocher
    Weizmann Institute of Science
    Organizer
    Faculty of Mathematics and Computer Science
    Abstract
    Deep Learning has always been divided into two phases: training and inference. The common practice is to train big networks on huge datasets. While very successful, such networks are only applicable to the type of data they were trained on and require huge amounts of annotated data, which in many cases are not available. In my thesis (guided by Prof. Irani), I invented "Deep Internal Learning". Instead of learning to solve a task in general, for all inputs, we perform "ad hoc" learning for a specific input: we train an image-specific network at test time, on the test input only, in an unsupervised manner (no labels or ground truth). In this regime, training is actually part of the inference; no additional data or prior training is involved. I will demonstrate how we applied this framework to various challenges: super-resolution, segmentation, dehazing, transparency separation, and watermark removal. I will also show how this approach can be incorporated into Generative Adversarial Networks by training a GAN on a single image. If time permits, I will also cover some partially related work.
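
    As an illustration of the idea, here is a minimal, ZSSR-style sketch in the spirit of the first paper linked below: a small CNN is trained at test time, on the test image alone, using training pairs generated from the image's own internal statistics. PyTorch, the network size, the bicubic degradation, and the training schedule are illustrative assumptions here, not the exact configuration used in the papers.

```python
# Minimal sketch of "Deep Internal Learning" for zero-shot super-resolution:
# train an image-specific network at test time, on the test input only,
# with no external data or labels. All hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySRNet(nn.Module):
    """A few conv layers that learn a residual correction on top of bicubic."""
    def __init__(self, channels=3, width=64, depth=6):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.body(x)  # residual learning around the bicubic input

def zero_shot_super_resolve(img, scale=2, steps=500, lr=1e-3):
    """img: (1, C, H, W) tensor in [0, 1]. Trains on img itself, then upscales it."""
    net = TinySRNet(channels=img.shape[1])
    opt = torch.optim.Adam(net.parameters(), lr=lr)

    for _ in range(steps):
        # Build a training pair from the test image itself:
        # target = the image, input = its downscaled-then-reupscaled version.
        lr_img = F.interpolate(img, scale_factor=1.0 / scale, mode='bicubic',
                               align_corners=False)
        inp = F.interpolate(lr_img, size=img.shape[-2:], mode='bicubic',
                            align_corners=False)
        loss = F.l1_loss(net(inp), img)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Inference: apply the image-specific network to the bicubically upscaled input.
    with torch.no_grad():
        up = F.interpolate(img, scale_factor=scale, mode='bicubic',
                           align_corners=False)
        return net(up).clamp(0, 1)
```

    Here training literally is part of inference: the only "dataset" is the single test image, so the network captures that image's recurring internal patterns rather than a generic prior learned from an external collection.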

    Links to papers:
    http://www.wisdom.weizmann.ac.il/~vision/zssr
    http://www.wisdom.weizmann.ac.il/~vision/DoubleDIP
    http://www.wisdom.weizmann.ac.il/~vision/ingan
    http://www.wisdom.weizmann.ac.il/~vision/kernelgan
    https://semantic-pyramid.github.io/
    https://arxiv.org/abs/2006.11120
    https://arxiv.org/abs/2103.15545

    Lecture