© 2014 by the Seismological Society of America
Editor’s note: The following is the text of the SSA Presidential Address presented at the Annual Luncheon of the Seismological Society of America (SSA) Annual Meeting on 30 April 2014.
The Seismological Society of America (SSA) has always been dedicated to understanding and reducing the earthquake threat. The Society was founded in 1906 “for the acquisition and diffusion of knowledge concerning earthquakes and allied phenomena.” According to our new strategic plan, approved by the Board in 2012, the core purpose of SSA is to “advance seismology and the understanding of earthquakes for the benefit of society.” This plan lays out the vision for SSA to be “the primary forum for the assembly, exchange, and dissemination of scientific knowledge essential for an earthquake‐aware and safer world.”
In the past twenty years or so, the study of earthquakes has become a true system science, offering new pathways for the advancement of seismology. Today I would like to explore what the rise of earthquake system science might imply for the future of our field and for SSA’s mission in earthquake research.
System science seeks to explain phenomena that emerge from nature at the system scale, such as global climate change or earthquake activity in California or Alaska. The “system” is not a physical reality, but a hypothetical representation of nature, typically a numerical model that replicates an emergent behavior and predicts its future course.
The choice of target behavior determines the system model, as can be illustrated by two representations of earthquake activity in California. One is UCERF3, the latest uniform California earthquake rupture forecast of the Working Group on California Earthquake Probabilities, which represents future earthquake activity in terms of time‐dependent fault‐rupture probabilities. Another is the Southern California Earthquake Center (SCEC) CyberShake ground‐motion model, which uses simulations to represent the probability of future earthquake shaking at geographic sites, conditional on the fault rupture. These two system‐level models can be combined to generate site‐specific hazard curves, the main forecasting tool of probabilistic seismic‐hazard analysis (PSHA), as sketched below.
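To make that combination concrete, here is a minimal sketch of how a rupture forecast and conditional ground‐motion probabilities fold together into a site‐specific hazard curve. It is not the UCERF3 or CyberShake implementation; the rupture list, probabilities, and ground‐motion levels are invented purely for illustration.

```python
# Minimal illustration of combining a rupture forecast with conditional
# ground-motion exceedance probabilities into a site-specific hazard curve.
# All numbers are invented for illustration; this is not UCERF3 or CyberShake.
import numpy as np

# Hypothetical rupture forecast: probability of each rupture occurring
# in the forecast window (the "UCERF-like" ingredient).
rupture_prob = np.array([0.02, 0.005, 0.001])            # P(rupture k)

# Hypothetical conditional exceedance probabilities (the "CyberShake-like"
# ingredient): P(shaking at the site exceeds level x | rupture k), for a
# grid of ground-motion levels x.
gm_levels = np.array([0.1, 0.2, 0.4, 0.8])               # e.g., spectral accel. (g)
p_exceed_given_rupture = np.array([
    [0.90, 0.60, 0.20, 0.020],   # rupture 1
    [0.70, 0.30, 0.05, 0.005],   # rupture 2
    [0.99, 0.90, 0.50, 0.100],   # rupture 3 (nearby, large)
])

# Assuming independent ruptures, the hazard curve is the probability that
# at least one rupture produces shaking above each level x in the window.
p_no_exceed = np.prod(1.0 - rupture_prob[:, None] * p_exceed_given_rupture, axis=0)
hazard_curve = 1.0 - p_no_exceed

for x, p in zip(gm_levels, hazard_curve):
    print(f"P(SA > {x:.1f} g in forecast window) = {p:.4f}")
```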
The first point to emphasize is that earthquake system science is all about forecasting and prediction. For many years now, earthquake prediction has remained an awkward topic in polite seismological company, primarily because it has been defined in the public mind by something we cannot do, which is to predict with high probability the regional occurrence of large earthquakes over the short term. Yet the “P‐word” is too central to our science to be banned from our working vocabulary. From a practical perspective, we must be able to predict earthquake hazards in order to lower seismic risk. From the basic‐research perspective of system science, testing a model’s predictions against new data is the principal means by which we can gain confidence in the hypotheses and theories on which the model is built.
System science offers a brick‐by‐brick approach to building up our understanding of earthquake predictability.
For example, many interesting problems of contingent predictability can be posed as physics questions in a system‐specific context. What will be the shaking intensity in the Los Angeles basin from a magnitude 7.8 earthquake on the southern San Andreas fault? By how much will the strong shaking be amplified by the coupling of source directivity to basin effects? Will deep injection of waste fluids cause felt earthquakes near a newly drilled well in Oklahoma? How intense will the shaking be during the next minute of an ongoing earthquake in Seattle? SSA should stake its claim as the central forum for the physics‐based study of earthquake predictability, and its publications should be the place where progress in understanding predictability is most rigorously documented.
My second point is that forecasting and prediction are all about probabilities. The deep uncertainties intrinsic to earthquake forecasting are most coherently expressed in terms of two distinct types of probability: the aleatory variability that describes the randomness of the system, and the epistemic uncertainty that characterizes our lack of knowledge about the system. In UCERF3, the former is cast as the time‐dependent probabilities of fault ruptures, of which there are over 250,000, whereas the latter is expressed as a logic tree with 5760 alternative branches. Similarly, CyberShake represents the aleatory variability in wave excitation through conditional hypocenter distributions and conditional slip distributions, and it characterizes the epistemic uncertainty in the wavefield calculations in terms of alternative 3D seismic‐velocity models.
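A small, hedged sketch may help fix the distinction between the two kinds of probability. Inside any single model branch, the hazard curve itself expresses the aleatory variability; the epistemic uncertainty is carried by a set of weighted alternative branches. The branch curves and weights below are invented for illustration and are not UCERF3 values.

```python
# Hedged sketch of how the two kinds of probability enter a hazard model.
# Aleatory variability lives inside each branch (the exceedance curve itself);
# epistemic uncertainty is carried by weighted logic-tree branches.
# Branch curves and weights are invented for illustration.
import numpy as np

gm_levels = np.array([0.1, 0.2, 0.4, 0.8])       # ground-motion levels (g)

# Three hypothetical logic-tree branches (e.g., alternative fault or
# deformation models), each giving its own exceedance curve at one site.
branch_curves = np.array([
    [0.30, 0.10, 0.020, 0.002],
    [0.25, 0.08, 0.012, 0.001],
    [0.40, 0.15, 0.040, 0.006],
])
branch_weights = np.array([0.5, 0.3, 0.2])       # must sum to 1

mean_hazard = branch_weights @ branch_curves     # weighted-mean hazard curve
low = branch_curves.min(axis=0)                  # crude epistemic envelope
high = branch_curves.max(axis=0)

for x, m, lo, hi in zip(gm_levels, mean_hazard, low, high):
    print(f"SA > {x:.1f} g: mean P = {m:.3f}, branch range [{lo:.3f}, {hi:.3f}]")
```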
The full‐3D treatment of seismic‐wave propagation has the potential to improve our PSHA models considerably. A variance‐decomposition analysis of the recent CyberShake results indicates that more accurate earthquake simulations could reduce the aleatory variance of the strong‐motion predictions by at least a factor of 2 relative to the empirical ground‐motion prediction equations in current use; other factors being equal, this would lower the exceedance probabilities at high‐hazard levels by an order of magnitude. The practical ramifications of this probability gain for the formulation of risk‐reduction strategies could be substantial.
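The tail behavior behind that claim can be checked with back‐of‐the‐envelope arithmetic. Assuming a lognormal aleatory distribution and an illustrative sigma (the values below are placeholders, not CyberShake results), halving the variance shrinks the exceedance probability at high ground‐motion thresholds by roughly an order of magnitude or more:

```python
# Back-of-the-envelope check: with lognormal aleatory variability, reducing
# the variance by a factor of 2 (sigma -> sigma/sqrt(2)) strongly reduces
# exceedance probabilities in the upper tail. Numbers are illustrative only.
from math import sqrt
from scipy.stats import norm

sigma_gmpe = 0.6                    # assumed aleatory sigma (ln units) of an empirical GMPE
sigma_sim = sigma_gmpe / sqrt(2)    # variance reduced by a factor of 2

median_ln = 0.0                     # work in units of the median (ln SA = 0)
for n_sigma in (2.0, 3.0):          # thresholds 2 and 3 GMPE-sigmas above the median
    threshold = median_ln + n_sigma * sigma_gmpe
    p_gmpe = norm.sf(threshold, loc=median_ln, scale=sigma_gmpe)
    p_sim = norm.sf(threshold, loc=median_ln, scale=sigma_sim)
    print(f"threshold at {n_sigma:.0f} GMPE-sigma: "
          f"P_exceed = {p_gmpe:.2e} (GMPE) vs {p_sim:.2e} (reduced sigma), "
          f"ratio ~ {p_gmpe / p_sim:.0f}x")
```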
The coherent representation of aleatory variability and epistemic uncertainty in physics‐based hazard models involves massive forward and inverse calculations, typically requiring very large ensembles of deterministic simulations. For example, a CyberShake hazard model for the Los Angeles region involves the computation of about 240 million synthetic seismograms. These calculations have been made feasible by the development of clever algorithms based on seismic reciprocity and highly optimized anelastic wave propagation codes, but they still strain the capabilities of the world’s fastest supercomputers, which are currently operating at petascale (∼10^15 floating‐point operations per second).
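The leverage provided by seismic reciprocity can be seen with a rough cost count. In a brute‐force approach, every rupture variation would require its own 3D wave‐propagation run; with reciprocity, a few runs per site yield stored Green‐function fields from which seismograms for all ruptures can be synthesized cheaply. The counts below are placeholders for illustration, not the actual CyberShake configuration.

```python
# Rough, illustrative cost comparison: one full 3D simulation per rupture
# variation versus a reciprocity-based approach (a few wavefield simulations
# per site, with seismograms for all ruptures synthesized in post-processing).
# All counts are placeholders, not the actual CyberShake configuration.
n_sites = 300                     # assumed number of hazard sites in the region
n_rupture_variations = 400_000    # assumed rupture variations to be simulated

forward_runs = n_rupture_variations       # one 3D source simulation per rupture variation
reciprocity_runs = 2 * n_sites            # e.g., one run per horizontal component per site

print(f"forward simulations needed:     {forward_runs:,}")
print(f"reciprocity simulations needed: {reciprocity_runs:,}")
print(f"reduction factor:               ~{forward_runs // reciprocity_runs:,}x")
```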
It is important to realize that our community’s needs for computation are growing more rapidly than our nation’s supercomputer resources. In this year alone, for example, SCEC simulations will consume almost 200 million core‐hours on National Science Foundation (NSF) supercomputers such as Blue Waters and Department of Energy (DOE) supercomputers such as Titan. As we move towards exascale computing, the machine architectures will become more heterogeneous and difficult to code, and the workflows will increase in complexity. To an ever‐increasing degree, progress in earthquake system science will depend on deep, sustained collaborations among the seismologists and computational scientists focused on extreme‐scale computing. SSA should think carefully about how to incorporate such interdisciplinary collaborations into its structure, and it will need to work with NSF, DOE, and other government agencies to make sure our computational capabilities are sufficient for the demands of physics‐based PSHA.
PSHA occupies a central position in the universe of seismic‐risk reduction. However, recent earthquake disasters have reinvigorated a long‐standing debate about PSHA methodology. Many practical deficiencies have been noted, not the least of which is the paucity of data for retrospective calibration and prospective testing of long‐term PSHA models. But some critics have raised the more fundamental question of whether PSHA is misguided because it cannot capture the aleatory variability of large‐magnitude earthquakes produced by complex fault systems. Moreover, the pervasive role of subjective probabilities and expert opinion in specifying the epistemic uncertainties in PSHA has made this methodology a target for scientists who adhere to a strictly frequentist view of probabilities. According to some of these critics, PSHA should be replaced by “neodeterministic” hazard estimates based on a maximum credible earthquake.
As Warner Marzocchi pointed out in an Eos article last July, neodeterministic SHA is not an adequate replacement for probabilistic SHA. The choice of a maximum credible earthquake requires uncertain assumptions, such as choosing a return period, which essentially fix the level of acceptable risk. This black‐and‐white approach is fundamentally flawed because it conflates the role of scientific advisor with that of a decision maker, mixing scientific judgments with political and economic choices that lie outside the domain of science. Fully probabilistic descriptions, such as those given by PSHA, are needed for two reasons: first, to avoid unintended and often uninformed decision making in the tendering of scientific forecasts, and second, to provide decision makers, including the public, with a complete rendering of the scientific information they need to balance the costs and benefits of risk‐mitigation actions.
We may never be able to predict the impending occurrence of extreme earthquakes with any certainty, but we do know that earthquakes cluster in space and time, and that earthquake probabilities can locally increase by a thousand‐fold during episodes of seismicity. The lessons of L’Aquila and Christchurch make clear that this information must be delivered to the public quickly, transparently, authoritatively, and on a continuing basis. Systems for this type of operational earthquake forecasting (OEF) are being developed in several countries, including Italy, New Zealand, and the United States, and they raise many questions about how to inform decision making in situations where the probability of a significant earthquake may rise sharply in relative terms but still remain very low (<1% per day) in absolute terms.
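The arithmetic of “high gain, low absolute probability” is simple but worth stating explicitly. With an assumed long‐term baseline rate (the numbers below are placeholders, not an operational forecast), a thousand‐fold gain still leaves the daily probability near one percent:

```python
# Illustrative arithmetic behind "high gain, low absolute probability" in OEF.
# Baseline rate and gain are assumed placeholders, not an operational forecast.
baseline_daily_prob = 1e-5   # assumed long-term daily probability of a large local event
gain = 1000                  # assumed probability gain during an intense sequence

elevated = baseline_daily_prob * gain
print(f"elevated daily probability:        {elevated:.1%}")   # large gain, still ~1%
print(f"probability of no such event today: {1 - elevated:.1%}")
```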
As we usher in new technologies that will spew out predictive information in near real time—and here we should include earthquake early warning (EEW) systems as well as OEF—the need to engage the public has never been more critical. We must continually bring the public into the conversation about what can, and cannot, be foretold about earthquake activity.
Toward this end, SSA should increase its role in communicating the science that underlies OEF and EEW. In particular, it should provide a roundtable for seismologists to interact with social scientists and risk‐communication experts in helping the responsible government agencies translate uncertain probabilistic forecasts into effective risk‐mitigation actions.
This brings me to my final point, which concerns the importance of rigorous forecast validation. Validation involves testing whether a forecasting model replicates the earthquake‐generating process well enough to be sufficiently reliable for some useful purpose, such as OEF or EEW. Since 2006, a new international organization, the Collaboratory for the Study of Earthquake Predictability (CSEP), has been developing the cyberinfrastructure needed for the prospective testing of short‐term earthquake forecasts. CSEP testing centers have been set up in Los Angeles, Wellington, Zürich, Tokyo, and Beijing, and more than 380 short‐term forecasting models are being prospectively evaluated against authoritative seismicity catalogs in natural laboratories around the world. CSEP experiments have validated the probability gains of short‐term forecasting models that are being used, or will be used, in OEF. Moreover, the Collaboratory is capable of supporting OEF and EEW by providing an environment for the continual testing of operational models against alternatives. However, U.S. participation in CSEP has thus far been primarily funded by a private organization, the W. M. Keck Foundation, and stable support for its long‐term mission is not guaranteed.
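To give a flavor of what prospective evaluation involves, here is a minimal sketch of one of its simplest ingredients: a number test comparing the observed earthquake count in a testing region to a forecast that specifies an expected Poisson rate. The rate and count below are invented; real CSEP experiments use authoritative catalogs and a richer battery of likelihood‐based tests.

```python
# Minimal sketch of one ingredient of prospective forecast testing: a number
# test (N-test) comparing the observed earthquake count in a testing region
# to a forecast expressed as an expected Poisson rate. Rate and count are
# invented for illustration; CSEP experiments use authoritative catalogs.
from scipy.stats import poisson

forecast_rate = 12.5     # expected number of target earthquakes in the test window
observed_count = 18      # number actually recorded in the catalog

# Two-sided quantile scores for the observed count under the forecast rate.
delta1 = poisson.sf(observed_count - 1, forecast_rate)   # P(N >= observed)
delta2 = poisson.cdf(observed_count, forecast_rate)      # P(N <= observed)

print(f"P(N >= {observed_count}) = {delta1:.3f}, P(N <= {observed_count}) = {delta2:.3f}")
if min(delta1, delta2) < 0.025:
    print("forecast rate is inconsistent with the observed count at ~95% confidence")
else:
    print("forecast rate is consistent with the observed count")
```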
Of course, extreme earthquakes are very rare, so it will be a while before enough instrumental data have accumulated to properly test our long‐term forecasts. However, as Dave Jackson argued in a paper presented at this meeting, the earthquake hiatus in California suggests the current UCERF model inadequately represents the large‐scale interactions that are modulating the earthquake activity of the San Andreas fault system. Use of paleoseismology to extend the earthquake record back into geologic time is a clear priority. The SSA should be the home for this type of historical geophysics.
It is also urgent that we increase the spatial scope of our research to compensate for our lack of time. One goal of SSA should be to join forces with CSEP and other international efforts, such as the Global Earthquake Model (GEM) project, in fostering comparative studies of fault systems around the world. The issue is not whether to focus on the prediction problems of earthquake system science, but how to accomplish this research in a socially responsible way according to the most rigorous scientific standards.
I call upon a new generation of seismologists—the students and early‐career scientists in this room—to take on the challenges of earthquake system science. You are fortunate to be in a field where the basic prediction problems remain mostly unsolved and major discoveries are still possible. You are also fortunate to have access to vast new datasets and tremendous computational capabilities for attacking these problems. System‐level models, such as those I have described here, will no doubt become powerful devices in your scientific arsenal.
However, these models can be big and unwieldy, requiring a scale of expertise and financial resources that is rarely available to one scientist or a small research group. This raises a number of issues about how to organize the interdisciplinary, multi‐institutional efforts needed to develop these models. In particular, all of us at SSA need to make sure that any research structure dominated by earthquake system science allows you, as the rising leaders in this field, to develop new ideas about how earthquake systems actually work.