(Main) Seminar: Deep Learning: From Mathematical Foundations to Image Compression

Dr. Pascal Peter, Prof. Joachim Weickert

Winter Term 2019/20

(Main) Seminar (2 h)

Notice for bachelor's and master's students of mathematics: This is a »Hauptseminar« (advanced seminar) in the sense of these study programs.

Image: from O. Rippel and L. Bourdev, Real-Time Adaptive Image Compression.

NEWS:

20/01/2020: An additional session has been added in the updated schedule.

07/11/2019: The updated schedule is online.

19/07/2019: The list of assigned topics is online.

19/07/2019: The slides of the introductory meeting are online.

17/07/2019: Registration for the seminar is closed. There are no more places left.

15/07/2019: The list of topics is now online.

15/07/2019: The opponent requirement has been removed.

11/07/2019: Registration for the seminar is possible from Friday, July 12, 2019, 18:00.






Introductory meeting (mandatory):
The introductory meeting will take place in building E1.7, room 4.10, on Friday, July 19, 2019, at 4:15 p.m. In this meeting, we will assign the topics to the participants. Attendance is mandatory for all participants.

The registration period is over.

Regular meetings during the winter term 2019/20:
Building E1.7, room 4.10, on Wednesdays, 4:15 p.m.


Contents: During the last few years, deep learning has rapidly risen to the forefront of research in computer science. In particular, it has significantly changed the areas of image processing and computer vision. While it is already a well-established concept for tasks such as segmentation, object recognition, and similar semantically challenging problems, it is now also becoming increasingly relevant for image compression. Simultaneously, the focus is shifting from purely using deep neural networks as a tool to understanding the underlying mathematical concepts. In this seminar, we discuss a series of recent publications that range from the mathematical foundations of deep learning to state-of-the-art compression with neural networks.

Prerequisites: The seminar is intended for advanced bachelor's or master's students in Visual Computing, Mathematics, or Computer Science. Basic knowledge of linear algebra, probability theory, and numerics is required (e.g. MfI I-III). Elementary knowledge of machine learning and image compression is strongly recommended.

Language: The publications discussed in this seminar are written in English, and English is the language of presentation.


The registration for this course was open

from Friday, July 12, 2019, 6 p.m.
until Wednesday, July 17, 2019, 6 p.m.

Since the number of talks is limited, we ask for your understanding that participants will be considered strictly in the order of registration – no exceptions.


Regular attendance: You must attend all seminar meetings, except for documented important reasons (e.g. a medical certificate).

Talk: The talk duration is 30 minutes, plus 15 minutes for discussion. Please do not deviate from this time schedule.
You may give your presentation using a data projector, an overhead projector, or the blackboard, or mix these media appropriately. Your presentation must be delivered in English, and your slides and your write-up have to be in English as well.

Write-up: The write-up has to be handed in three weeks after the lecture period ends; the deadline is Friday, February 28, 2020, 23:59. The write-up should summarise your talk and has to consist of 5 pages per speaker. Electronic submission is preferred. The file format for electronic submissions is PDF – text processor files (such as .doc) are not acceptable. Do not forget to hand in your write-up: participants who do not submit a write-up cannot obtain the certificate for the seminar. The write-up must be e-mailed to your advisor.

Plagiarism: Adhere to the standards of scientific referencing and avoid plagiarism: Quotations and copied material (such as images) must be clearly marked as such, and a bibliography is required. Otherwise the seminar counts as failed.

Mandatory consultation: Your talk preparation has to be presented to your seminar advisor no later than one week before the talk is given. It is your responsibility to approach your advisor in a timely manner and make your appointment.

No-shows: No-shows are unfair to your fellow students: some talks build on previous talks, and your seminar place might have prevented another student from participating. Thus, if you do not show up for your scheduled talk (except for reasons beyond your control), we reserve the right to exclude you from future seminars of our group.

Participation in discussions: The discussions after the presentations are a vital part of this seminar. The audience (i.e. all participants) is expected to pose questions and to identify positive and negative aspects of the presented idea. This participation is part of your final grade.

Being on time: To avoid disturbing or interrupting the speaker, all participants have to be in the seminar room on time. Participants who are regularly late must expect a negative effect on their grade.


Here are the slides from the introductory meeting. They contain important information for preparing a good talk.


The document linked below provides guidelines for the creation of your write-up. Make sure to follow them when preparing your final report.


We will discuss the following research papers.


No.  Date   Speaker [Materials]   Advisor   Topic

1    06/11  Moritz Kunz [Slides]   Peter
M. A. Ponti, L. S. F. Ribeiro, T. S. Nazare, T. Bui, J. Collomosse:
Everything you Wanted to Know about Deep Learning for Computer Vision but were Afraid to Ask. Sections I-III.
SIBGRAPI Conference on Graphics, Patterns and Images Tutorials, October 2017.

2    06/11  Rabia Ilyas [Slides]   Peter
M. A. Ponti, L. S. F. Ribeiro, T. S. Nazare, T. Bui, J. Collomosse:
Everything you Wanted to Know about Deep Learning for Computer Vision but were Afraid to Ask. Sections IV-VIII, without VI.
SIBGRAPI Conference on Graphics, Patterns and Images Tutorials, October 2017.

3    13/11  Danish Shahzad [Slides]   Weickert
B. Chang, L. Meng, E. Haber, L. Ruthotto, D. Begert, E. Holtham:
Reversible Architectures for Arbitrarily Deep Residual Neural Networks.
Thirty-Second AAAI Conference on Artificial Intelligence, February 2018.

4    13/11  Lars Schieffer [Slides]   Weickert
E. Haber, L. Ruthotto, E. Holtham, S.-H. Jun:
Learning Across Scales - Multiscale Methods for Convolution Neural Networks.
Thirty-Second AAAI Conference on Artificial Intelligence, February 2018.

5    27/11  Matthias Hock [Notes]   Weickert
Z. Li, Z. Shi:
Deep Residual Learning and PDEs on Manifold.
ArXiv Preprint 1708.05115, January 2018.

6    04/12  Antonia Hain [Slides]   Weickert
E. Kobler, T. Klatzer, K. Hammernik, T. Pock:
Variational Networks: Connecting Variational Methods and Deep Learning.
German Conference on Pattern Recognition (GCPR), September 2017.

7    04/12  Amr Amer [Slides]   Weickert
D. Rolnick, M. Tegmark:
The Power of Deeper Networks for Expressing Natural Functions.
International Conference on Learning Representations (ICLR), April 2018.

8    11/12  Anurag Das [Slides]   Peter
K. Gregor, F. Besse, D. Jimenez Rezende, I. Danihelka, D. Wierstra:
Towards Conceptual Compression.
Conference on Neural Information Processing Systems (NIPS), December 2016.

9    11/12  Anindita Ghosh [Slides]   Peter
G. Toderici, D. Vincent, N. Johnston, S. J. Hwang, D. Minnen, J. Shor, M. Covell:
Full Resolution Image Compression with Recurrent Neural Networks.
Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.

10   18/12  Nan Wu [Slides]   Peter
L. Theis, W. Shi, A. Cunningham, F. Huszár:
Lossy Image Compression with Compressive Autoencoders.
International Conference on Learning Representations (ICLR), April 2017.

11   18/12  Yaroslav Mykoliv [Slides]   Peter
J. Ballé, V. Laparra, E. P. Simoncelli:
End-to-end Optimized Image Compression.
International Conference on Learning Representations (ICLR), April 2017.

12   08/01  Xuwen Yao [Slides]   Peter
E. Agustsson, F. Mentzer, M. Tschannen, L. Cavigelli, R. Timofte, L. Benini, L. Van Gool:
Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations.
Conference on Neural Information Processing Systems (NIPS), December 2017.

13   08/01  Nathalie Zeller [Slides]   Peter
O. Rippel, L. Bourdev:
Real-Time Adaptive Image Compression.
International Conference on Machine Learning (ICML), August 2017.

14   15/01  Marcel Ulrich [Slides]   Peter
J. Ballé, D. Minnen, S. Singh, S. J. Hwang, N. Johnston:
Variational Image Compression with a Scale Hyperprior.
International Conference on Learning Representations (ICLR), April 2018.

15   15/01  Daniel Heller [Slides]   Peter
J. Ballé:
Efficient Nonlinear Transforms for Lossy Image Compression.
Picture Coding Symposium (PCS), June 2018.

16   22/01  Niklas Kämper [Slides]   Peter
D. Minnen, J. Ballé, G. Toderici:
Joint Autoregressive and Hierarchical Priors for Learned Image Compression.
Conference on Neural Information Processing Systems (NIPS), December 2018.

17   22/01  Christian Schulz [Slides]   Peter
C.-Y. Wu, N. Singhal, P. Krähenbühl:
Video Compression through Image Interpolation.
European Conference on Computer Vision (ECCV), September 2018.

18   29/01  Karl Schrader [Slides, Supplementary Materials]   Weickert
L. Ruthotto, E. Haber:
Deep Neural Networks Motivated by Partial Differential Equations.
ArXiv Preprint 1804.04272, December 2018.
