Machine Learning at the Flatiron Institute Seminar: Siamak Ravanbakhsh
Title: Symmetry, Beyond Invariant Networks
Abstract: Invariant and equivariant networks have been the primary mechanism for imbuing machine learning with symmetry awareness. However, constraining architectures with symmetry may be infeasible or even undesirable. Infeasibility may stem from limitations of network design or a lack of information about the transformations, while undesirability may stem from the approximate nature of symmetries or suboptimal use of compute. In this talk, I'll briefly review several works from our group that use symmetries beyond equivariant networks. These examples explore symmetry in different learning paradigms, ranging from recent and ongoing work on generative modelling to prior work on self-supervised learning, physics-informed learning, and reinforcement learning.
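
For readers unfamiliar with the terminology, the following is a minimal illustrative sketch (not drawn from the talk) of what "invariant" and "equivariant" mean for a network layer, using permutation symmetry of a set of feature vectors as the example group and a Deep Sets-style linear layer; all names and dimensions below are arbitrary choices for illustration.

```python
import numpy as np

# Hypothetical example: a permutation-equivariant linear layer of the form
#   f(X) = X @ W1 + mean(X) @ W2 + b,
# followed by numerical checks that permuting the input rows permutes the
# output rows (equivariance) and that pooling yields a permutation-invariant map.

rng = np.random.default_rng(0)
n, d_in, d_out = 5, 3, 4                       # set size and feature dims (arbitrary)
W1 = rng.standard_normal((d_in, d_out))
W2 = rng.standard_normal((d_in, d_out))
b = rng.standard_normal(d_out)

def equivariant_layer(X):
    """Permutation-equivariant map on a set of n feature vectors (rows of X)."""
    pooled = X.mean(axis=0, keepdims=True)     # symmetric pooling over the set
    return X @ W1 + pooled @ W2 + b            # per-element term + pooled term

X = rng.standard_normal((n, d_in))
P = np.eye(n)[rng.permutation(n)]              # a random permutation matrix g

# Equivariance: f(gX) == g f(X)
assert np.allclose(equivariant_layer(P @ X), P @ equivariant_layer(X))

# Invariance via a final symmetric pooling: h(gX) == h(X)
invariant = lambda X: equivariant_layer(X).sum(axis=0)
assert np.allclose(invariant(P @ X), invariant(X))
print("equivariance and invariance checks passed")
```

Constraining every layer to satisfy such a relation is what an equivariant network does by construction; the talk surveys ways of using symmetry when imposing this kind of architectural constraint is infeasible or undesirable.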