- © 2014 by the Seismological Society of America
We cannot yet predict large earthquakes in the short term with much reliability and skill, but the strong clustering exhibited in seismic sequences tells us that earthquake probabilities are not constant in time; they generally rise and fall over periods of days to years in correlation with nearby seismic activity. Operational earthquake forecasting (OEF) is the dissemination of authoritative information about these time‐dependent probabilities to help communities prepare for potentially destructive earthquakes. The goal of OEF is to inform the decisions that people and organizations must continually make to mitigate seismic risk on time scales from days to decades. To fulfill this role, OEF must provide a complete description of the seismic hazard—ground‐motion exceedance probabilities as well as short‐term rupture probabilities—in concert with the long‐term forecasts of probabilistic seismic‐hazard analysis (PSHA).
The status of OEF and guidelines for its deployment have been described in a comprehensive report by the International Commission on Earthquake Forecasting for Civil Protection (ICEF), which was convened by the Italian government following the L’Aquila earthquake disaster (Jordan et al., 2011). The ICEF conceived of OEF not as a stand‐alone activity, but as one component of a larger system for guiding mitigation actions based on scientific information about the earthquake threat (Fig. 1). The proper role of OEF is to inform, but not prescribe, the response by end users to changing seismic hazards. The front end of an OEF system should be an interface for continually providing forecasting information that is timely, authoritative, and properly conditioned to serve multiple end users, including the general public.
The ICEF recommendations were first released only five years ago (in October 2009), so the commentaries on their efficacy are just being written. At the same time, events on (and under) the ground in the three seismically active countries where we authors reside—New Zealand, Italy, and the United States—are contributing to the seismological experience that will shape new OEF systems. With this experience in mind, we address the issues raised in the OEF critiques recently published by Peresan et al. (2012; hereafter PKP12) and Wang and Rogers (2014; hereafter WR14).
PKP12 view the problem as practitioners of deterministic earthquake prediction; they argue that alarm‐based techniques are preferable for decision making, and they raise questions about the conceptual consistency and statistical testability of probabilistic OEF. WR14 question whether, from a practical point of view, short‐term earthquake forecasting should be made operational and conclude it should not. Their critique, offered from the vantage of earthquake engineering, is based on three primary points: OEF is ineffective, distracting, and dangerous.
IS OEF INEFFECTIVE?
According to WR14 (p. 569), “the most objective measure of the usefulness of a short‐term forecast is whether it can guide preseismic evacuation of unsafe buildings,” and they assert that “a change in earthquake probability from practically impossible (0.001%) to very unlikely (5%) is not useful for saving lives.” They also discount the effectiveness of mitigation actions less drastic than full evacuations, calling low‐impact, short‐term measures impractical.
The focus by WR14 on building safety is understandable, because failures of the built environment pose the largest risk for life safety in most urban settings. However, we strongly disagree that OEF utility should be judged primarily on how it guides building evacuations to protect against the loss of life. Society can respond to increased earthquake probabilities by selecting from a very broad spectrum of mitigation and readiness options (van Stiphout et al., 2010; Peresan et al., 2012; Woo and Marzocchi, 2013). By tying OEF to only one type of high‐cost mitigation measure, building evacuation, WR14 severely underestimate the utility of OEF information in reducing risk and improving resilience.
During seismic crises, there are numerous low‐cost mitigation options that can be exercised. Oft‐cited examples include reiterating recommended preparedness measures, rehearsing disaster response in scenario‐based drills, increasing the readiness of emergency equipment and personnel, mitigating nonstructural risks, and relayering reinsurance.
Even the actions related to building occupancy have low‐cost options. In April–May 2000, an intense seismic sequence hit Faenza, a small municipality in northern Italy, causing concern among the population. The local officials made tents available to people who did not feel safe in their houses, and they refreshed the emergency plan to prepare for the worst. If a large earthquake had occurred, lives surely would have been saved. One did not, but as far as we know, no citizen complained about the local government’s preparations for the possibility. Italy is developing an OEF system that would help local officials make decisions about when sensible mitigation options of this sort should be offered (Marzocchi et al., 2014).
Some of the most advanced applications of OEF have been made in New Zealand to deal with the recent Canterbury and the Cook Strait sequences (Gerstenberger et al., 2014). The public was informed about the short‐term hazard increases during these sequences, and some took personal actions, including self‐evacuations. By all accounts, OEF has been effective in helping New Zealanders manage their seismic risks.
The ICEF argued that OEF will benefit the public by filling information vacuums that can lead to informal predictions and misinformation. WR14 (p. 570) have a much more pessimistic view: “It seems unrealistic to expect the public to ignore sensational forecasts on the basis of official forecasts, which are authoritatively promised to be very uncertain.”
As far as we are aware, only anecdotal data are available on the effectiveness of OEF in quelling earthquake rumors and amateur predictions (Jordan and Jones, 2010), but the ICEF position is strongly supported by sociological research. When faced with confusing and possibly hazardous situations, people respond positively to consistent, authoritative statements from multiple sources about the actions they should take, even when the future is highly uncertain (e.g., Mileti and DeRouen, 1995). Exactly what actions to initiate when the forecast probabilities exceed some threshold is a decision problem in the realm of risk analysis and mitigation, but we trust Wang and Rogers would agree that the best available scientific information should be used in making these decisions.
The ICEF also noted that, by conditioning the public to changes in the threat level of earthquakes, OEF information can have psychological value above and beyond its role in driving risk‐mitigation actions. Simply knowing that the aftershocks were behaving in an expected manner was highly valued by people of Christchurch (Wein and Becker, 2013).
TWO BASIC OEF PRINCIPLES
Before proceeding further, we should recall why probabilities are the coin of the OEF realm. Owing to the deep uncertainties in our understanding of earthquake systems, OEF must render prospective information in the form of probabilities to properly express the aleatory variability of nature and our epistemic uncertainty in quantifying this randomness. The ICEF went further, however, recommending that OEF information should also be communicated to decision makers in terms of probabilities. Underlying their recommendations are two basic principles:
The Transparency Principle: authoritative scientific information about future earthquake activity should not be withheld from the public. Transparency is essential for building public trust in OEF and for educating the public about the meaning of OEF information (Jordan and Jones, 2010). According to the transparency principle, validated OEF information should be made available to all potential users in a timely manner. Probabilistic forecasting provides a more complete description of prospective earthquake information than the alternatives, such as deterministic prediction.
The Hazard‐Risk Separation Principle: authoritative scientific information about future earthquake activity should be developed independently of its applications to risk assessment and mitigation. The OEF realm of hazard assessment should be kept distinct from the risk analysis and mitigation realm of OEF users, as illustrated in Figure 1. This separation is important because decisions about mitigation and preparedness actions are contingent on a host of economic, political, and psychological considerations that lie beyond the science of hazard analysis. In particular, transmitting OEF information as probabilities appropriately separates the hazard estimation role of earthquake scientists from the public protection role of civil authorities.
Several key issues raised by PKP12 and WR14 can be resolved by following these principles.
Why not restrict OEF to aftershock sequences? WR14 and nearly all other participants in the OEF debate agree that the public should be informed about the increased hazard of damaging earthquakes during aftershock sequences. It has become standard practice for certain government agencies to release aftershock advisories following larger earthquakes. The U.S. Geological Survey (USGS) has issued ad hoc advisories after a variety of earthquakes in the United States and some other countries since the mid‐1980s, but it only releases advisories on a consistent basis in California. In New Zealand, GNS Science continues to provide forecast probabilities for three ongoing sequences, including Canterbury. However, when advisories are not released on regular schedules, the forecasting information for some times and locations is essentially hidden from the public. Moreover, the issuance of advisories can be a slow process, impeding user access to information at critical times.
The operational definitions of mainshocks and aftershocks are problematic. Models of earthquake triggering and clustering based on Omori–Utsu and Gutenberg–Richter statistics, such as the epidemic‐type aftershock sequence (ETAS) model (Ogata, 1998) and the short‐term earthquake probability (STEP) model (Gerstenberger et al., 2005), provide robust information about how hazard probabilities change with time during any particular seismic sequence, but they cannot (and do not need to) distinguish on the fly whether an event is a foreshock, mainshock, or aftershock.
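To make the shared mechanics of these models concrete, the clustering statistics can be sketched in a few lines: a modified Omori–Utsu decay gives the expected number of aftershocks in a time window, and Gutenberg–Richter scaling converts that count into the probability of at least one larger event. The parameter values below (K, c, p, b) are generic illustrations chosen for the sketch, not the calibrated ETAS or STEP implementations.

```python
import math

def omori_rate(t, K=100.0, c=0.05, p=1.1):
    """Modified Omori-Utsu aftershock rate (events/day) at t days after a mainshock."""
    return K / (t + c) ** p

def expected_count(t1, t2, K=100.0, c=0.05, p=1.1, steps=10000):
    """Expected number of aftershocks between t1 and t2 days (midpoint integration)."""
    dt = (t2 - t1) / steps
    return sum(omori_rate(t1 + (i + 0.5) * dt, K, c, p) * dt for i in range(steps))

def prob_large_event(n_small, m_min, m_target, b=1.0):
    """Poisson probability of at least one event of magnitude >= m_target, given an
    expected count n_small of events >= m_min, using Gutenberg-Richter scaling."""
    n_large = n_small * 10 ** (-b * (m_target - m_min))
    return 1.0 - math.exp(-n_large)

# Illustrative use: expected small-event count on day two of a sequence,
# converted to the probability of a much larger event in that window.
n_day2 = expected_count(1.0, 2.0)
p_m6 = prob_large_event(n_day2, m_min=3.0, m_target=6.0)
```

With these illustrative parameters the probability falls rapidly as the target magnitude rises, which is one way to see why short‐term forecasts of damaging events live in a low‐probability environment even when small‐event rates are high.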
According to the separation principle, thresholds stipulating what earthquakes are significant should be cast as user‐specific decisions in the risk analysis and mitigation realm, not as general criteria for releasing information from the OEF realm. According to the transparency principle, OEF should update forecasts whenever new authoritative information is available, including the information from small earthquakes that changes the OEF probabilities. Releasing OEF information only during aftershock sequences thus violates both principles.
THE THRESHOLD PROBLEM
The threshold problem was articulated in ICEF Recommendation G2 (p. 362): “Quantitative and transparent protocols should be established for decision making that include mitigation actions with different impacts that would be implemented if certain thresholds in earthquake probability are exceeded.” PKP12 claim the ICEF distinction between probabilistic forecasting and deterministic prediction is inconsistent with this recommendation. They state that “according to the [ICEF] definition and conclusions, forecasting may become useful when and only when formulated as operational prediction.”
Their confusion stems from conflating the OEF realm, in which probabilistic forecasts are constructed, with the risk analysis and mitigation realm, where decisions are made. Thresholds are one way to condition probabilistic forecasts for specific uses, but those thresholds need to be negotiated among the stakeholders in a specific user group, not imposed by OEF across user groups. The deterministic prediction schemes advocated by PKP12 are no substitute for probabilistic forecasting, because deterministic prediction in the OEF realm is inconsistent with both the transparency and separation principles.
It is not the role of OEF to prescribe the response to such earthquake forecasts any more than it is the role of operational weather forecasting (Inness and Dorling, 2013) to stipulate the response to hurricane and tornado probabilities. When the consequences of an extreme event are high enough, the risks may exceed the action thresholds of certain OEF users (e.g., utility operators, hospital managers, emergency responders) even when the hazard probabilities remain very small (Woo and Marzocchi, 2013). At‐risk parties and their protectors should be free to decide for themselves at which thresholds to act on new hazard information, in accordance with their own aversion to particular risks.
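The threshold logic just described reduces to the classic cost–loss rule of decision analysis: act when the event probability exceeds the ratio of mitigation cost to loss averted. The sketch below, with hypothetical numbers, shows why a user facing very large consequences can rationally act at probabilities that remain very small.

```python
def should_act(p_event, mitigation_cost, loss_averted):
    """Cost-loss rule: mitigation is rational when the expected loss averted
    (p_event * loss_averted) exceeds the cost of acting, i.e. when
    p_event > mitigation_cost / loss_averted."""
    return p_event * loss_averted > mitigation_cost

# Hypothetical high-consequence user: verifying a hospital's backup power
# costs little relative to the loss a failure would cause, so even a
# 0.1% short-term probability clears the action threshold.
low_cost_action  = should_act(p_event=0.001, mitigation_cost=5_000,   loss_averted=10_000_000)
high_cost_action = should_act(p_event=0.001, mitigation_cost=500_000, loss_averted=10_000_000)
```

Because the cost-to-loss ratio differs across users, the same OEF probability can rationally trigger action for one user and none for another, which is exactly why thresholds belong in the risk realm rather than the hazard realm.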
IS OEF DISTRACTING?
WR14 fear that OEF will weaken long‐term efforts to develop a built environment safe from earthquakes. However, risk mitigation is hardly a zero‐sum game. In fact, a strong case can be made that OEF information will enhance societal actions to build a safer earthquake environment, in particular by increasing public knowledge and reducing apathy. Seismically active periods are teachable moments, and alerts of increased probabilities during such periods can serve as reminders to all residents of earthquake country that long‐term mitigation measures must be enacted to ensure their safety. For example, if people are advised to minimize their time in vulnerable structures during an active seismic sequence, they will become more educated about which buildings are vulnerable, and they may be more motivated to demand long‐term remedial actions.
WR14 have an abiding faith that our built environment can be made earthquake proof, obviating the need for any short‐term information to improve earthquake preparations: “as seismic‐hazard assessment, building codes, and hence construction quality improve with scientific research, the perceived need for short‐term forecasting will continue to diminish.” (p. 570) These utopian remarks do not describe the world we live in or will ever construct. For many reasons, our cities will never be earthquake proof:
Even if building codes are rigorously applied (which is often not the case), they do not guarantee that a building will not be severely damaged by an earthquake, because the life‐safety code provisions are based on nonzero exceedance probabilities or, alternately, on maximum credible earthquakes that are not maximum possible earthquakes.
The PSHA, or deterministic seismic‐hazard analysis, on which the codes are based can be wrong; in fact, the epistemic uncertainties in long‐term forecasts are usually much larger than in short‐term forecasts.
Retrofitting expenses may be unaffordable, which is the case for many historical buildings in Italy and other regions with antique cultures.
Areas judged to have low or moderate seismic hazard will usually have lower seismic design standards, but, like Christchurch, they may experience seismic surprises with ground motions above the design values.
The Christchurch experience is particularly instructive, because it illustrates the operational utility of time‐dependent forecasts that extend well beyond the intervals of days to weeks usually associated with aftershock hazards. A time‐dependent seismic‐hazard model has been developed by GNS Science that spans forecasting horizons from the short term (daily, weekly) to the long term (50 years). This hybrid model now guides recovery and rebuilding decisions for Christchurch, including revisions to the regional building design standards (Gerstenberger et al., 2014). Retrospective testing against the well‐recorded Canterbury seismic catalog has shown the hybrid model performs better than any of the individual models in the ensemble on which it was based, including a gridded implementation of the national seismic‐hazard model (Rhoades et al., 2013).
Though we heartily agree with WR14 (and everyone else) that the enforcement of sound building codes remains our best defense against earthquakes, we consider a mitigation strategy based on long‐term PSHA to be, by itself, insufficient. We know that the probability of large earthquakes increases significantly with respect to the long‐term background during seismic sequences and that this hazard increase can persist for years (Michael, 2012; Gerstenberger et al., 2014). People deserve all the information OEF can provide to help them make short‐ and medium‐term decisions about working and living in a vulnerable environment, as well as long‐term decisions about how to reduce that vulnerability.
IS OEF DANGEROUS?
WR14 (p. 570) also fear the unreliability of OEF information could cause people harm: “For matters of life and death, partial measures would trigger runaway reactions among the public, especially in the age of social media.” If by “runaway reactions” they mean panic, then they are plain wrong. The substantial literature on the panic myth has been summarized by Clarke (2002, p. 26): “Before, during, and after disasters, the general public warrants trust and respect. Panic is often used as a justification by high‐level decision makers to deny knowledge and access to the public, on the presumption that people cannot handle bad news. Research on how people respond to life‐threatening disasters and the stories from the World Trade Center show that people handle even the most terrifying news civilly and cooperatively.”
WR14 also warn about crying wolf, and they insist “society has very limited capacity in dealing with uncertain short‐term earthquake forecasting because of severe consequences of any wrong decisions of mitigation action.” (p. 570) But the crying‐wolf effect arises only when costly mitigation measures (e.g., evacuations) are imposed on society with a high rate of false alarms. It will rarely be a problem as long as the various end users are correctly informed and left free to adjust their false‐alarm rates to be consistent with their own levels of risk aversion.
A conceit among some earthquake experts is that people cannot understand, and will widely misinterpret, probabilistic forecasts. We admit there is widespread illiteracy about uncertainty and probability (even among scientists), but we believe the utility of OEF probabilities will substantially increase as end users become educated about the meaning of these probabilities through their repeated usage. We envisage threshold‐setting as an iterative process that tunes the response of end users to the available OEF information.
IS OEF TESTABLE?
PKP12 criticize some published short‐term forecasting models, but their main critique, which applies to PSHA as well as OEF, concerns the fundamental nature of probabilistic forecasting: “There are natural unavoidable problems in assigning, for responsible practical use, any specific value of probability to earthquake occurrence that is mathematically acceptable … In point of fact, making quantitative probabilistic claims, within the frameworks of the most popular objectivistic viewpoint on probability theory, requires a long series of recurrences, which cannot be obtained at local scale from the existing catalogs of earthquakes.” PKP12 imply that the inclusion of subjectivity into forecasts through expert elicitation leads to probability estimates that are untestable and, therefore, cannot be validated, recapitulating a long‐standing critique of PSHA (e.g., Castaños and Lomnitz, 2002).
The testability of probabilistic forecasting models that include subjective assessments of epistemic uncertainties is a subtle theoretical problem, the solution of which requires a layering of frequentist and Bayesian concepts of probability (Marzocchi and Jordan, 2014). Testability requires an experimental concept that defines collections of data, observed and not yet observed, that are judged to be “exchangeable”; that is, to have a joint probability distribution that is invariant with respect to the data ordering. Forecasting models are now being prospectively tested around the world under the auspices of the Collaboratory for the Study of Earthquake Predictability (CSEP). In these evaluations, the data from different cells in a testing region are judged to be exchangeable when conditioned on the explanatory variables of a particular experimental concept, thus allowing the aggregation of testing statistics across the entire region and mitigating the sampling difficulties raised by PKP12.
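A minimal sketch of the likelihood comparison underlying CSEP‐style evaluations, using a hypothetical five‐cell region with made‐up rates: once the cell counts are judged exchangeable conditional on the forecast rates, the joint Poisson log‐likelihood can be summed across the region and used to rank competing models.

```python
import math

def poisson_loglike(rates, counts):
    """Joint log-likelihood of observed counts under a gridded forecast of
    independent Poisson rates; summing over cells is what the judgment of
    conditional exchangeability licenses."""
    return sum(n * math.log(lam) - lam - math.lgamma(n + 1)
               for lam, n in zip(rates, counts))

# Hypothetical five-cell region: a clustering model concentrates its rate
# where activity has occurred; a time-independent model spreads the same
# total rate (5 expected events) evenly across the cells.
observed   = [4, 1, 0, 0, 0]
clustering = [3.0, 1.0, 0.4, 0.3, 0.3]
uniform    = [1.0, 1.0, 1.0, 1.0, 1.0]
```

On this toy catalog the clustering forecast attains the higher log‐likelihood, illustrating in miniature how aggregation across cells mitigates the small‐sample objection raised by PKP12.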
Given our present state of knowledge, the probabilities of large earthquakes may increase 1000‐fold during periods of elevated seismic activity, but they rarely exceed about 1% per day (a 1000‐fold gain on a background probability of order 0.001% per day). Consequently, short‐term forecasting using our current seismicity‐based models is largely confined within what the ICEF has called a “low‐probability environment.” We can be hopeful that incorporating more physics (e.g., rate‐state‐dependent nucleation and Coulomb stress changes) or different types of data (e.g., geodetic and electromagnetic transients) into the forecasting models will elevate the time‐dependent probabilities attainable for potentially damaging earthquakes. In this article, however, our arguments are based strictly on the current capabilities of statistical seismology; they do not hinge on an optimistic view of future information gains.
CSEP experiments show that earthquake clustering models provide robust information and can consistently outperform time‐independent Poisson models. There are large epistemic uncertainties in the time‐dependent forecasting models, but the existence of uncertainties does not justify postponing the deployment of OEF systems—nor does the observation that each seismic sequence has peculiarities not described by the generic statistical models. Models that are uncertain and cannot explain everything can still be very useful.
OEF can provide information about other aspects of seismic time dependence. In May 2014, the USGS and the Oklahoma Geological Survey issued a joint statement concerning earthquake hazards in Oklahoma, which have increased since 2009 and may continue to increase in the future (http://earthquake.usgs.gov/regional/ceus/products/newsrelease_05022014.php; last accessed July 2014). Such changes are difficult to incorporate into the U.S. seismic hazard maps, which consider a 50‐year time span and are updated only every several years. An OEF system would be able to track the changes in seismicity and thereby help Oklahoma residents adapt their mitigation and preparedness actions to increasing hazard levels.
Though communicating OEF and its uncertainties is a difficult issue, not communicating is hardly an option. Only by going forward with OEF, from developing forecast models and validating these models against the available data to deploying them in operational systems and helping users formulate mitigation options that facilitate wise decisions, can we as seismologists fulfill our responsibilities to society.
We thank Michael Blanpied and William Ellsworth for detailed reviews and helpful suggestions.