Neural networks to revolutionize the analysis of gravitational lens images

Images of galaxies taken using gravitational lenses (Image Yashar Hezaveh/Laurence Perreault Levasseur/Phil Marshall/Stanford/SLAC National Accelerator Laboratory; NASA/ESA)

An article published in the journal “Nature” describes the application of neural networks to gravitational lensing. A team of researchers reduced the time needed to analyze the complex spatial distortions in images captured through gravitational lenses from a few weeks to a few seconds. This could greatly speed up this type of task, with great benefits for astronomical research.

Gravitational lenses are a phenomenon predicted by Albert Einstein’s theory of general relativity and have been identified many times in astronomical observations. They bend the light of objects behind them, magnifying them with great advantages for their study. Galaxies billions of light years away, for example, would barely be visible even to the best space telescopes, yet thanks to gravitational lenses we can see many of their details.

The problem is that the images obtained through gravitational lenses require complex processing, because the magnification also distorts, and sometimes multiplies, the original image. So far the analysis has been performed by comparing the observed image with computer models, a process that can easily take weeks.
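To make that model-comparison approach concrete, here is a minimal sketch of fitting a toy lens: a simulated image is compared with forward models over a grid of parameter values. The singular isothermal sphere lens, the Gaussian source and the single fitted parameter are illustrative assumptions, far simpler than the models used in real analyses, which explore many more parameters and therefore take much longer.

```python
# Toy version of the traditional approach: forward-simulate a lensed image
# for candidate parameters and pick the one that best matches the data.
import numpy as np

def lensed_image(theta_e, src_sigma, n=64, fov=4.0):
    """Ray-shoot a Gaussian source through a singular isothermal sphere lens."""
    x = np.linspace(-fov / 2, fov / 2, n)
    tx, ty = np.meshgrid(x, x)                    # image-plane coordinates (arcsec)
    r = np.hypot(tx, ty) + 1e-12
    # SIS deflection has constant magnitude theta_e, pointing radially;
    # lens equation: beta = theta - alpha(theta)
    bx, by = tx - theta_e * tx / r, ty - theta_e * ty / r
    return np.exp(-(bx**2 + by**2) / (2 * src_sigma**2))  # source brightness at beta

# "Observed" data: a simulated lens with noise added, standing in for real data
rng = np.random.default_rng(0)
truth = 1.2
data = lensed_image(truth, 0.2) + rng.normal(0, 0.05, (64, 64))

# Brute-force chi-square fit over candidate Einstein radii
grid = np.linspace(0.5, 2.0, 151)
chi2 = [np.sum((data - lensed_image(te, 0.2)) ** 2) for te in grid]
print(f"best-fit Einstein radius: {grid[int(np.argmin(chi2))]:.2f} (true {truth})")
```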

To try to solve this problem, a team at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC), a joint institute of SLAC National Accelerator Laboratory and Stanford University, tried to use neural networks for this type of analysis. Yashar Hezaveh, the article’s lead author, explained that the test was conducted using three publicly available neural networks and one developed by the researchers themselves.

Neural networks are modeled on the structure of the brain and have the ability to learn. For this reason, they need to be trained for the task they’re meant to perform: in this case, the researchers showed the neural networks about half a million simulated images of gravitational lenses over about a day. At the end of this training, the networks were able to analyze new lenses almost instantaneously, with a precision comparable to that of traditional methods.
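As an illustration of that training step, here is a minimal sketch assuming PyTorch: a small convolutional network is trained on batches of simulated lensed images to regress a few lens parameters. The architecture, the three parameters and the simulate_batch() placeholder are hypothetical stand-ins; the researchers’ networks and their roughly 500,000 simulations were far more elaborate.

```python
# Sketch of training a small CNN to map lensed images to lens parameters.
import torch
import torch.nn as nn

class LensNet(nn.Module):
    def __init__(self, n_params=3):              # e.g. Einstein radius, ellipticity, angle
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, n_params))

    def forward(self, x):
        return self.head(self.features(x))

def simulate_batch(batch_size=64, size=64):
    """Placeholder for the lens simulator: random images and parameters."""
    params = torch.rand(batch_size, 3)
    images = torch.randn(batch_size, 1, size, size)  # a real simulator would render lenses
    return images, params

model = LensNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(1000):                          # the study trained on ~500k images over ~a day
    images, params = simulate_batch()
    loss = loss_fn(model(images), params)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Once trained, analyzing a new lens is a single forward pass, i.e. near-instant.
with torch.no_grad():
    estimate = model(torch.randn(1, 1, 64, 64))
```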

In another article, submitted for publication to “The Astrophysical Journal Letters”, the researchers describe how neural networks can also determine the uncertainties of their analysis. Until now, the applications of neural networks in astrophysics had not gone beyond recognizing gravitational lenses in an image.
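One common way to obtain such uncertainties is Monte Carlo dropout: dropout is kept active at prediction time and the spread of repeated forward passes is read off as an error estimate. The sketch below illustrates the idea on a hypothetical network like the one in the previous example; it is offered as an illustration of the concept, not as the exact method used in the paper.

```python
# Estimate parameter uncertainties by sampling a network with dropout enabled.
import torch
import torch.nn as nn

class LensNetDropout(nn.Module):
    def __init__(self, n_params=3, p=0.1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Dropout(p), nn.Linear(32 * 16 * 16, n_params)
        )

    def forward(self, x):
        return self.head(self.features(x))

model = LensNetDropout()
model.train()                                    # keep dropout active during prediction
image = torch.randn(1, 1, 64, 64)                # stand-in for a real lensed image

with torch.no_grad():
    samples = torch.stack([model(image) for _ in range(100)])

mean, std = samples.mean(dim=0), samples.std(dim=0)  # estimates and their uncertainties
print(mean, std)
```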

This new application allows neural networks to determine the properties of a lens, including how its mass is distributed and how much it magnifies the image of the object in the background. Another advantage is that all this can be done without supercomputers: ordinary computers, or even a smartphone, are enough.

With next-generation telescopes under construction, the number of images captured through gravitational lenses will increase even further. Laurence Perreault Levasseur, one of the article’s authors, pointed out that with traditional methods of analysis there wouldn’t be enough people to do the job in a timely manner. It’s one of the cases where developments in the field of neural networks can bring huge benefits to another field of research.
