Detailed Galaxy Maps Offer Clues to Cosmic Mysteries
Boris Leistedt is a cartographer of the cosmos. Using images of galaxies near and far, he and his colleagues build maps of matter in the universe and use them to test fundamental ideas about how the universe began, how it is evolving and what will happen to it in the future. A cosmologist at New York University and a Simons Society Junior Fellow, Leistedt is particularly interested in developing new statistical techniques to better understand the universe’s vast structure and what it can reveal about dark matter, dark energy and gravity. I spoke to Leistedt about his work in his NYU office. An edited version of the interview follows.
Why isn’t all of space filled with galaxies? What keeps galaxies confined within clusters across the cosmos?
The two most fundamental assumptions of modern cosmology are that the universe is homogeneous and that it is isotropic. That means it looks the same everywhere, for all observers, whoever and wherever they are. We know this is remarkably accurate on very large scales. But when we look carefully, the universe is not perfectly isotropic and homogeneous. Galaxies form, collapse and merge, and there are filaments and voids, creating a weblike structure, which we call the cosmic web. These phenomena are the result of gravity. And it’s thanks to this structure that we can actually test our understanding of gravity at various scales, from individual galaxies to clusters of galaxies to the full distribution of galaxies across the cosmos. One priority of modern cosmology is to exploit these observations of the cosmic web to test our ideas about the nature of dark matter and dark energy.
What are dark matter and dark energy, and how does the clustering of galaxies help you study them?
Dark matter is matter that we don’t see, but that we know is there, mostly because it bends the light emitted by galaxies and also affects the way galaxies form. Dark matter can be mapped and studied by looking at the detailed distribution of galaxies and their shapes. Dark energy refers to the mysterious force driving the accelerated expansion of the universe. Unlike dark matter, it cannot be mapped because it is a type of pressure present everywhere, pushing space-time outward. Dark energy can only be detected by looking at the dynamics or evolution of the galaxy distribution in the universe.
One way we probe these mysterious components is by looking at the clustering of galaxies, the abundance of galaxy pairs separated by a given distance. First, we take very deep images of a large portion of the sky with a telescope and find as many galaxies as possible to make a 3-D map of where the galaxies are in that volume of space. Then we can measure the average number of galaxy pairs separated by a given distance and compare it with theoretical predictions from cosmological models with various descriptions of gravity, dark matter and dark energy.
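As a rough illustration of the kind of measurement described above, the Python sketch below estimates the two-point clustering of a toy galaxy catalog with the standard Landy-Szalay pair-count estimator. The brute-force distance calculation, the unit-box geometry and the catalog sizes are simplifications for illustration only; a real survey analysis handles millions of galaxies with far more careful treatment of geometry and selection effects.

```python
import numpy as np

def pair_counts(pos_a, pos_b, bins):
    """Histogram of pairwise separations between two sets of 3-D positions."""
    # Brute-force distance matrix; fine for a toy catalog, far too slow for a survey.
    d = np.linalg.norm(pos_a[:, None, :] - pos_b[None, :, :], axis=-1)
    return np.histogram(d.ravel(), bins=bins)[0].astype(float)

def correlation_function(galaxies, randoms, bins):
    """Landy-Szalay estimate of the two-point correlation function xi(r)."""
    n_d, n_r = len(galaxies), len(randoms)
    dd = pair_counts(galaxies, galaxies, bins) / (n_d * n_d)
    rr = pair_counts(randoms, randoms, bins) / (n_r * n_r)
    dr = pair_counts(galaxies, randoms, bins) / (n_d * n_r)
    return (dd - 2.0 * dr + rr) / rr

# Toy catalog: 500 "galaxies" and 1,500 random comparison points in a unit box.
rng = np.random.default_rng(0)
galaxies = rng.random((500, 3))
randoms = rng.random((1500, 3))
bins = np.linspace(0.02, 0.5, 20)
xi = correlation_function(galaxies, randoms, bins)
print(xi)
```

For points placed at random, as here, the estimate hovers around zero; real galaxies show an excess of close pairs, and it is that excess, measured scale by scale, that gets compared with cosmological predictions.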
The hottest questions of cosmology involve performing this test on different scales, say with small groups and large clusters of galaxies, or with the positions and shapes of galaxies. We do this to probe different types of physics. We’re trying to find hints of unknown physics. These hints could inform us about the true nature of dark matter or dark energy and could also help us design future experiments to improve our understanding of fundamental physics.
Creating a cosmic map requires examining images of galaxies that are quite distant and faint. How do you handle such messy data?
Mapping the distribution of galaxies in the largest volumes possible involves dealing with very noisy images. Even though we have very good ways to find faint galaxies deep in the noise of a single image, that alone does not let us exploit the full potential of modern datasets. Imagine having a million images. More information can be extracted from each image by assuming that all the images are observing a similar scene, for example a set of stars and galaxies with similar properties and the same level of noise. One of my specialties is to come up with probabilistic and statistical ways to exploit all of these images simultaneously to make larger, more accurate galaxy maps and measure properties of dark matter, dark energy and gravity.
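As a toy illustration of why analyzing many images jointly pays off, here is a short Python sketch, with made-up numbers, of a source far too faint to detect in any single noisy image but measured precisely once all the images are treated as views of the same scene. Real analyses use full hierarchical probabilistic models rather than a simple average, but the statistical gain has the same origin.

```python
import numpy as np

# Hypothetical numbers: a point source of true flux 1.0 observed in many
# independent images, each with Gaussian noise of standard deviation 5.0,
# far too noisy for the source to be detected in any one image.
rng = np.random.default_rng(1)
true_flux, noise_sigma, n_images = 1.0, 5.0, 10_000

observed = true_flux + noise_sigma * rng.standard_normal(n_images)

# Single-image uncertainty is just the noise level itself.
single_image_error = noise_sigma

# Modeling all images as views of the same source, the maximum-likelihood
# flux is the mean, and its uncertainty shrinks as 1 / sqrt(N).
joint_estimate = observed.mean()
joint_error = noise_sigma / np.sqrt(n_images)

print(f"single-image error: {single_image_error:.2f}")
print(f"joint estimate: {joint_estimate:.3f} +/- {joint_error:.3f}")
```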
How did you become interested in using statistical methods to study the cosmos?
I liked astronomy from an early age but studied signal processing and engineering initially. At university, I was fascinated by the use of numerical techniques, like wavelets, to analyze large datasets, and I discovered their use in astronomy and cosmology during a summer internship. I then realized that I wanted to do a Ph.D. specifically in observational cosmology in order to fulfill my two passions, data analysis and astrophysics. I went to University College London for my Ph.D., where I worked with astrophysicists and statisticians. One of my closest collaborators, Jason McEwen, pioneered the application of wavelets to astronomical datasets, and we worked together on extending these methods and applying them to modern datasets.
What do you mean by wavelets in numerical analysis, and how can they help you analyze astronomical data?
Wavelet is a technical term for a mathematical tool that gives an extremely efficient multiscale representation of images. One example is the JPEG 2000 image format, which is based on wavelets and boils down to decomposing an image as a function of scale and keeping only the most prominent features. Say you take a picture with your camera. Wavelets enable you to decompose the image into big blobs, smaller blobs and even smaller and smaller ones. It’s like smoothing your image at different scales. You can drop some scales that aren’t relevant. Or you can assume some areas don’t have certain scales, and then put the image together without them. That’s the idea behind JPEG 2000 compression.
Similarly, I try to see whether I can describe astronomical images using a wavelet basis, where if you only have a few scales, you focus on those and don’t worry about the rest. It’s also a way to design new algorithms, so you could do your entire data analysis on this scale basis. Wavelets are a tool to help simplify data analysis so you can look at more data.
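The sketch below gives a rough sense of this multiscale idea using the PyWavelets library: decompose an image by scale, discard all but the most prominent detail coefficients, and reconstruct. The choice of wavelet, number of levels and threshold here are arbitrary choices for illustration, not the settings of any particular analysis pipeline.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
image = rng.random((256, 256))  # stand-in for an astronomical image

# Multiscale decomposition: a coarse approximation plus detail bands per scale.
coeffs = pywt.wavedec2(image, wavelet="db2", level=3)

# Keep only the largest 5% of detail coefficients across all scales.
flat = np.concatenate([np.abs(band).ravel()
                       for level in coeffs[1:] for band in level])
cutoff = np.percentile(flat, 95)
thresholded = [coeffs[0]] + [
    tuple(pywt.threshold(band, cutoff, mode="hard") for band in level)
    for level in coeffs[1:]
]

# Reconstruct the image from the sparse set of retained coefficients.
compressed = pywt.waverec2(thresholded, wavelet="db2")
```

Working directly with the retained coefficients, rather than with every pixel, is what makes this kind of representation attractive once the data volume grows.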
How much data do you deal with?
One of the specific problems I have been working on is how we will use data coming from next-generation telescopes, such as the Large Synoptic Survey Telescope (LSST) and the Square Kilometre Array. Each telescope will generate an amount of data every day comparable to a full day of traffic across the entire Internet. We have to find ways to reduce the data and do science with it. That’s a fact. The money is there. These telescopes are going to be built. We just don’t know how to process that much data right now. A lot of the data delivered by LSST, for example, will unavoidably be compressed or even thrown away. But astronomers are now working with data scientists to design algorithms that can decide what data to keep to make sure we don’t miss any discoveries.
With the data you have now and more to come, what questions about the cosmos do you hope are answered in your lifetime?
Well, there are questions I want to answer, and then there are questions I want to know the answer to. The nature of dark matter and dark energy are urgent questions of modern physics, but they cannot be answered by one person. My personal goal is to make sure that we are actually able to make the most of the data provided by ongoing and upcoming astronomical experiments. This is challenging but also exciting. It is truly at the interface between cosmology, astronomy and statistics.
The Simons Society of Fellows is a community of scholars that encourages intellectual interactions across disciplines and across research centers in the New York City area.