Time, Place

Tuesdays, 14:00-16:00, in room MI 02.13.010

Begin

18 October 2022

Prerequisites

Introduction to Deep Learning

Master-Seminar – Deep Learning in Computer Graphics (IN2107, IN0014)

Liwei Chen, Benjamin Holzschuh and Nils Thuerey

Content

In this course, students independently investigate recent research on machine learning techniques in computer graphics. Independent further reading, critical analysis, and evaluation of the topic are required.

Requirements

Participants are expected to first read their assigned paper and begin writing the report; this will help you prepare your presentation.

Attendance
  • You may miss at most two talks. If you have to miss one, please let us know in advance and write a one-page summary of the presented paper in your own words. Missing a third talk means failing the seminar.
Report
  • A short report (max. 4 pages excluding references, in the ACM SIGGRAPH TOG format (acmtog); you can download the precompiled LaTeX template) must be prepared and sent within two weeks after the talk, i.e., by 23:59 on the Tuesday two weeks later. A minimal LaTeX skeleton is sketched after this list.
  • Guideline: You can begin with a summary of the work you present as a starting point, but the report should focus on your own analysis rather than end with that summary. Merely restating the original work is of limited interest; the report becomes meaningful when you add your own reasoning, such as strengths and weaknesses, limitations, possible future work, and your own ideas for addressing open issues.
  • For questions regarding your paper or feedback on a semi-final version of your report, you can contact your advisor.
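
A minimal sketch of such a report skeleton, assuming the acmart LaTeX class with the acmtog option (the title, author, affiliation, and bibliography file below are placeholders; the precompiled template provided for the seminar takes precedence):

    \documentclass[acmtog]{acmart}

    % Placeholder metadata; replace with your paper title and your name.
    \title{Report: Title of the Presented Paper}
    \author{Your Name}
    \affiliation{\institution{Technical University of Munich}\city{Munich}\country{Germany}}

    \begin{document}
    \maketitle

    \section{Summary}
    % Brief summary of the presented paper.

    \section{Discussion}
    % Your own analysis: strengths and weaknesses, limitations, possible future work.

    % Hypothetical bibliography file references.bib containing the works you cite.
    \bibliographystyle{ACM-Reference-Format}
    \bibliography{references}

    \end{document}
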
Presentation (slides)
  • You will present your topic in English; the talk should last 30 minutes, followed by a discussion of about 10 minutes.
  • The slides should be structured according to your presentation. You can use any layout or template you like, but make sure to choose suitable colors and font sizes for readability.
  • Plagiarism should be avoided; please do not simply copy the original authors' slides. You can certainly refer to them.
  • The semi-final slides (PDF) should be sent one week before the talk; otherwise, the talk will be canceled.
  • We strongly encourage you to make the semi-final version as complete as possible. We will review it and give feedback, and you can revise your slides until the presentation.
  • The final slides should be sent after the talk.

Schedule

29-Aug-22 Deregistration deadline
12-Sep-22 Deadline for sending an e-mail with 4 preferred topics
  Notification of assigned papers
18-Oct-22 Introduction lecture
8-Nov-22 First talk

Presentation Schedule

Date | Paper | Student Name | Advisor
8 Nov. | 2019, Chu et al., Learning Temporal Coherence via Self-Supervision for GAN-based Video Generation, arXiv.org | Cenikj, Nikola | Holzschuh
8 Nov. | 2019, Hermosilla et al., Deep-learning the Latent Space of Light Transport, arXiv.org | Chen, Wen-Ju | Chen
15 Nov. | 2019, Thies et al., Deferred Neural Rendering: Image Synthesis using Neural Textures, arXiv.org | Marzouki, Mohamed Aziz Slim | Holzschuh
15 Nov. | 2020, Dupont et al., Equivariant Neural Rendering, ICML | Melnychuk, Artem | Chen
22 Nov. | 2019, Choi & Kweon, Deep Iterative Frame Interpolation for Full-frame Video Stabilization, arXiv.org | Ghazinour, Mahyar | Holzschuh
22 Nov. | 2020, Xiao et al., Neural Supersampling for Real-Time Rendering, ACM Trans. Graph. | Kuzovoi, Nikita | Chen
29 Nov. | 2021, Yin et al., Learning to Recover 3D Scene Shape from a Single Image, CVPR | Zhuge, Wenzhe | Holzschuh
29 Nov. | to be confirmed | Chaskopoulos, Dimitrios | Holzschuh
6 Dec. | 2022, Chu et al., Physics Informed Neural Fields for Smoke Reconstruction with Sparse Data | Christian, Maxime Louis Gerd | Holzschuh
6 Dec. | 2022, Mueller et al., Instant Neural Graphics Primitives with a Multiresolution Hash Encoding | Simeng, Li | Chen
13 Dec. | 2022, Saharia et al., Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding | Günther, Jakob | Holzschuh
13 Dec. | 2022, Peng et al., Shape As Points: A Differentiable Poisson Solver | Lima Carneiro, Thiago | Chen
10 Jan. 2023 | 2022, Franz et al., Global Transport for Fluid Reconstruction with Learned Self-Supervision | Cancelled |
10 Jan. 2023 | 2022, Lin et al., 3D GAN Inversion for Controllable Portrait Image Animation | Yu Heng Luo | Holzschuh
17 Jan. 2023 | 2021, Wu et al., Coarse-to-Fine: Facial Structure Editing of Portrait Images via Latent Space Classifications, ACM Trans. Graph | Paloma Escribano Lopez | Chen
17 Jan. 2023 | 2022, Vicini et al., Differentiable Signed Distance Function Rendering | Cancelled |

Topics

Paper Number | Paper | Student Name | Advisor
1 | 2019, Chu et al., Learning Temporal Coherence via Self-Supervision for GAN-based Video Generation, arXiv.org | Cenikj, Nikola |
2 | 2019, Meka et al., Deep Reflectance Fields - High-Quality Facial Reflectance Field Inference From Color Gradient Illumination, ACM Trans. Graph | |
3 | 2019, Hermosilla et al., Deep-learning the Latent Space of Light Transport, arXiv.org | Chen, Wen-Ju |
4 | 2019, Werhahn et al., A Multi-Pass GAN for Fluid Flow Super-Resolution, ACM Comput. Graph. Interact. Tech. | |
5 | 2019, Thies et al., Deferred Neural Rendering: Image Synthesis using Neural Textures, arXiv.org | Marzouki, Mohamed Aziz Slim |
6 | 2020, Dupont et al., Equivariant Neural Rendering, ICML | Melnychuk, Artem |
7 | 2020, Luo et al., Consistent Video Depth Estimation, ACM Trans. Graph | |
8 | 2019, Choi & Kweon, Deep Iterative Frame Interpolation for Full-frame Video Stabilization, arXiv.org | Ghazinour, Mahyar |
9 | 2019, Frühstück et al., TileGAN: Synthesis of Large-Scale Non-Homogeneous Textures, arXiv.org | |
10 | 2021, Wang et al., Rethinking and Improving the Robustness of Image Style Transfer, CVPR | |
11 | 2021, Wu et al., Coarse-to-Fine: Facial Structure Editing of Portrait Images via Latent Space Classifications, ACM Trans. Graph | Paloma Escribano Lopez |
12 | 2020, Wang et al., Attribute2Font: Creating Fonts You Want From Attributes, ACM Trans. Graph | |
13 | 2021, Chu et al., Learning Meaningful Controls for Fluids, ACM Trans. Graph | |
14 | 2020, Xiao et al., Neural Supersampling for Real-Time Rendering, ACM Trans. Graph. | Kuzovoi, Nikita |
15 | 2021, Yin et al., Learning to Recover 3D Scene Shape from a Single Image, CVPR | Zhuge, Wenzhe |
16 | 2020, Kopf et al., One Shot 3D Photography, ACM Trans. Graph | |
17 | 2022, Xie et al., TemporalUV: Capturing Loose Clothing with Temporally Coherent UV Coordinates | |
18 | 2022, Chu et al., Physics Informed Neural Fields for Smoke Reconstruction with Sparse Data | Christian, Maxime Louis Gerd |
19 | 2022, Lin et al., 3D GAN Inversion for Controllable Portrait Image Animation | Yu Heng Luo |
20 | 2022, Vicini et al., Differentiable Signed Distance Function Rendering | Jongsul Han |
21 | 2022, Mueller et al., Instant Neural Graphics Primitives with a Multiresolution Hash Encoding | Simeng, Li |
22 | 2022, Saharia et al., Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding | Günther, Jakob |
23 | 2022, Peng et al., Shape As Points: A Differentiable Poisson Solver | Lima Carneiro, Thiago |
24 | 2022, Franz et al., Global Transport for Fluid Reconstruction with Learned Self-Supervision | Ronchetti, Lucas Sergio |


You can access the papers through TUM library's eAccess.
