Workshop Overview:
The space-based gravitational-wave observatory LISA will offer unparalleled science returns, including a view of massive black-hole mergers to high redshifts, precision tests of general relativity and black-hole structure, a census of thousands of compact binaries in the Galaxy, and the possibility of detecting stochastic signals from the early Universe.
While the Mock LISA Data Challenges (2006–2011) gave us confidence that LISA will be able to fulfill its scientific potential, we still have a rather incomplete idea of what the end-to-end LISA science analysis should look like. The task at hand is substantial. Our algorithms need to resolve thousands of individual sources of different types and strengths, all of them superimposed in the same multi-year dataset, and simultaneously characterize the underlying noise-like stochastic background. Our catalogs need to represent the complex and high-dimensional joint distributions of estimated source parameters for all sources. Our waveform models need to reach part-in-10⁵ accuracy (to achieve full testing-GR performance), with sufficient computational efficiency to sample parameter space broadly. Our data reduction needs to ensure the phase coherence of GW measurements across data gaps and instrument glitches over multiple years. It is tempting to assume that current algorithms and prototype codes will scale up to this challenge, thanks to the greatly increased computational power that will become available by LISA’s launch in the early 2030s. In reality, harnessing that power will require very different methods, adapted to future high-performance computational architectures that we can only glimpse now. Thus, we need to begin our exploration at this time, seeking inspiration from other disciplines (e.g., big-data processing, computational biology, the most advanced applications in astroinformatics), and learning to pose the same physical questions in different, future-proof ways—or even daring to imagine questions that will be tractable only with future machines.
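To make the structure of the problem concrete, the following toy sketch (not the actual LISA pipeline; all names, the sinusoidal "waveform", and the parameter values are illustrative assumptions) shows the essential shape of the joint fit: the data are modeled as a superposition of many source waveforms plus noise whose level must itself be estimated alongside the sources.

    # Toy illustration (assumed names and models) of the joint source-plus-noise fit.
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, dt = 4096, 1.0
    t = np.arange(n_samples) * dt

    def toy_waveform(params, t):
        """Stand-in for a real waveform model: amplitude, frequency, phase."""
        amp, freq, phase = params
        return amp * np.sin(2 * np.pi * freq * t + phase)

    def log_likelihood(source_params, log_noise_var, data, t):
        """Gaussian log-likelihood with the noise level as a free parameter,
        so the superposed sources and the noise are characterized simultaneously."""
        model = sum(toy_waveform(p, t) for p in source_params)
        resid = data - model
        var = np.exp(log_noise_var)
        return -0.5 * np.sum(resid**2 / var + np.log(2 * np.pi * var))

    # Simulated dataset: two overlapping sources plus noise.
    true_params = [(1.0, 0.01, 0.3), (0.7, 0.013, 1.1)]
    data = sum(toy_waveform(p, t) for p in true_params) + rng.normal(0, 2.0, n_samples)
    print(log_likelihood(true_params, np.log(4.0), data, t))

In the real analysis the "sources" number in the thousands, the waveform models are far more expensive, and the noise model is a full spectral (and possibly non-stationary) description, which is what makes the scaling question pressing.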
The broad objective of this study program is to imagine how evolved or rethought data-analysis algorithms and source-modeling codes will solve the LISA science analysis on the computers of the future. For instance, can we run numerical-relativity simulations on massively parallel, loosely connected processors, in a fault-tolerant way? Can we break away from the serial nature of stochastic parameter estimation to (again) exploit parallelism? Can we apply “divide and conquer” principles to the extremely interconnected LISA “global fit”? What representation can we give to the entries (which range from very fuzzy to very well defined) in evolving source catalogs, so that we can support the production of partially cleaned datasets and allow the interaction of multiple analysts? The answers will help guide LISA science and data analysis R&D for the next decade.
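As one minimal sketch of the parallelism question (not the LISA sampler, and with a toy Gaussian target standing in for a real posterior), a stochastic sampler can trade long serial chains for many chains advanced together, with each step's likelihood evaluations vectorized or farmed out across processors:

    # Toy illustration (assumed names and target) of parallelizing Metropolis sampling
    # across an ensemble of walkers rather than one long serial chain.
    import numpy as np

    rng = np.random.default_rng(1)
    n_walkers, n_dim, n_steps = 256, 4, 2000

    def log_target(x):
        """Toy posterior: independent unit Gaussians; takes an (n_walkers, n_dim)
        array and returns one log-density per walker."""
        return -0.5 * np.sum(x**2, axis=-1)

    x = rng.normal(size=(n_walkers, n_dim))   # current state of every chain
    logp = log_target(x)
    step = 0.5

    for _ in range(n_steps):
        proposal = x + step * rng.normal(size=x.shape)  # all walkers propose at once
        logp_new = log_target(proposal)                 # one batched evaluation
        accept = np.log(rng.uniform(size=n_walkers)) < (logp_new - logp)
        x[accept] = proposal[accept]
        logp[accept] = logp_new[accept]

    print("mean over walkers:", x.mean(axis=0))

Schemes of this kind (ensemble samplers, parallel tempering, blocked updates over subsets of sources) are one possible direction; whether they, or entirely different methods, are the right match for future architectures is exactly the sort of question the study program is meant to explore.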