Neural network surrogate models of gravitational waveforms

Project Information

AI/ML, astrophysics, deep-learning, parameter-sweeps
Project Status: Reviewing Applicants
Project Region: CAREERS
Submitted By: Gaurav Khanna
Project Email: sfield17@uri.edu
Project Institution: University of Rhode Island
Anchor Institution: CR-University of Rhode Island
Project Address: Rhode Island

Students: Ashwin Girish

Project Description

The gravitational wave (GW) signal received by an interferometric detector is an oscillatory "chirp" whose amplitude and frequency peak at merger. Such waveforms are challenging to model directly because they contain rich structure, with additional modulations arising from precession effects and higher-order modes. Over the past decade, data-driven surrogate models have become prominent in GW data analysis: by numerically decomposing, compressing, and fitting the original data, they provide efficient and accurate approximations to a given family of GW signals.
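The "decomposing and compressing" step can be illustrated with a minimal sketch: a truncated SVD extracts a small linear basis from a family of toy chirp-like signals, and the projection coefficients onto that basis are the compressed representation a surrogate then fits. The signal form and all names here are invented for illustration; actual surrogate pipelines use more sophisticated bases and real waveform data.

```python
import numpy as np

# Toy training set: chirp-like signals whose frequency sweep depends on a
# single parameter q (a stand-in for, e.g., a mass ratio).
t = np.linspace(0.0, 1.0, 500)
params = np.linspace(1.0, 2.0, 20)
waveforms = np.array([np.sin(2 * np.pi * (10 + 5 * q) * t**2) for q in params])

# SVD yields an orthonormal basis ordered by captured signal power.
U, s, Vt = np.linalg.svd(waveforms, full_matrices=False)

# Truncating the basis compresses the data: the projection coefficients
# are the low-dimensional "data pieces" a regression model would fit.
errors = []
for n_basis in (2, 5, 10, 20):
    basis = Vt[:n_basis]                     # (n_basis, len(t))
    coeffs = waveforms @ basis.T             # (n_params, n_basis)
    reconstructed = coeffs @ basis
    rel_err = (np.linalg.norm(waveforms - reconstructed)
               / np.linalg.norm(waveforms))
    errors.append(rel_err)

print(errors)  # reconstruction error shrinks as the basis grows
```

Because the truncated SVD is the optimal linear compression in the least-squares sense, the reconstruction error is guaranteed to be non-increasing as basis vectors are added.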

We propose to extend existing surrogate methods, with the overarching goal of accelerating evaluation times while retaining accuracy, by replacing the current regression techniques -- such as greedy polynomial fits -- with artificial neural networks (ANNs), which are known to be universal function approximators. Given sufficient depth, ANNs can efficiently approximate arbitrary functions. For regression, one typically minimizes the mean squared error between the network output and the target function. We propose to model the surrogate's data pieces for GW signals with ANNs, using hyper-parameter optimization to find the best number of layers and neurons as well as the choice of activation function. The student will work with an existing Python codebase that relies on the PyTorch and TensorFlow deep-learning packages. The student will become familiar with working on UNITY, writing and submitting bash submission scripts, using GPU acceleration, training ANNs, inspecting loss curves, and contributing to an existing Python codebase under version control with git.
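The regression step described above can be sketched from scratch: a small fully-connected network trained by minimizing mean squared error on a smooth one-dimensional target, standing in for one surrogate data piece as a function of a source parameter. This NumPy sketch is only illustrative; the project's codebase uses PyTorch/TensorFlow, and the layer width, activation, and learning rate here are exactly the kind of hyperparameters one would optimize.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target: a smooth 1-D function of a source parameter x.
x = np.linspace(-1.0, 1.0, 64)[:, None]
y = np.sin(3.0 * x)

# One hidden layer of 16 tanh units (a hypothetical architecture choice).
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.1
losses = []
for step in range(3000):
    h = np.tanh(x @ W1 + b1)               # forward pass
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err**2)))  # MSE loss to minimize

    # Backpropagation by hand (a framework would automate this).
    g_pred = 2 * err / len(x)
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = (g_pred @ W2.T) * (1 - h**2)     # tanh'(z) = 1 - tanh(z)^2
    gW1 = x.T @ g_h; gb1 = g_h.sum(0)

    # Plain gradient-descent update.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(losses[0], losses[-1])  # loss curve should decrease substantially
```

Inspecting the recorded `losses` list is a miniature version of the loss-curve inspection mentioned above: a flat or rising curve signals a poor learning rate or architecture, which is what the hyper-parameter search would diagnose and correct.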
