[G-Stats Seminar] Erik Bekkers

Erik Bekkers

University of Amsterdam

28/04/2022 – 2 p.m.

Bio: Erik Bekkers is an assistant professor in Geometric Deep Learning in the Machine Learning Lab of the University of Amsterdam (AMLab, UvA). Before this he was a post-doc in applied differential geometry at the Department of Applied Mathematics of Eindhoven University of Technology (TU/e). In his PhD thesis (cum laude, Biomedical Engineering, TU/e), he developed medical image analysis algorithms based on sub-Riemannian geometry in the Lie group SE(2), using the same mathematical principles that underlie models of human visual perception. Such mathematics finds application in machine learning, where symmetries and geometric structure yield robust and efficient representation learning methods. His current work is on generalizations of group convolutional NNs and on improving computational and representation efficiency through sparse (graph-based) and adaptive learning mechanisms.

Title: Group equivariant deep learning and non-linear convolutions

Abstract: In this talk I will cover the essential theory behind group equivariant deep learning and show applications in medical imaging, computational chemistry, and physics. Group equivariant deep learning methods provide guarantees of equivariance: if the input transforms, the output transforms in a predictable way. The equivariance property in turn enables weight sharing over symmetries, enables hierarchical pattern recognition, respects symmetry constraints imposed by the problem at hand, and allows for working with geometric quantities such as (force) vectors in a principled way. I will start with an introduction to group convolutional neural networks (G-CNNs), with applications to medical images, and show that group convolutions are the most general class of equivariant linear layers. Then I will show an application of group equivariant deep learning to point clouds (molecular property prediction, n-body simulations) using steerable group convolutions. Finally, I will introduce two important directions that we are currently pursuing: (i) powerful deep learning architectures via so-called non-linear convolutions, and (ii) a flexible framework for building equivariant DL architectures by parametrizing continuous convolution kernels with band-limited neural networks.
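The equivariance property described in the abstract — transform the input and the output transforms predictably — can be illustrated with the simplest group, cyclic translations: a circular convolution commutes with shifts. The sketch below is purely illustrative and not from the talk; the helper names (`shift`, `circ_conv`) are hypothetical.

```python
import numpy as np

def shift(x, t):
    """Cyclic translation of a 1-D signal by t steps (the group action)."""
    return np.roll(x, t)

def circ_conv(x, k):
    """Circular cross-correlation of signal x with kernel k."""
    n = len(x)
    return np.array([np.dot(k, np.roll(x, -i)[: len(k)]) for i in range(n)])

rng = np.random.default_rng(0)
x = rng.standard_normal(8)   # toy signal
k = rng.standard_normal(3)   # toy kernel

lhs = circ_conv(shift(x, 2), k)   # transform the input, then convolve
rhs = shift(circ_conv(x, k), 2)   # convolve, then transform the output
assert np.allclose(lhs, rhs)      # equivariance: both orders agree
```

Group convolutions generalize this identity from translations to other symmetry groups (rotations, roto-translations, E(3)), which is what makes them equivariant linear layers.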

Reference: Brandstetter, J., Hesselink, R., van der Pol, E., Bekkers, E., & Welling, M. (2022). Geometric and Physical Quantities Improve E(3) Equivariant Message Passing. ICLR 2022.
