The 21st century has already seen its share of devastating earthquakes, some of which have been labeled “unexpected,” at least in the eyes of some seismologists and more than a few journalists. A list of seismological surprises could include the 2004 Sumatra-Andaman Islands; 2008 Wenchuan, China; 2009 Haiti; 2011 Christchurch, New Zealand; and 2011 Tohoku, Japan, earthquakes. Depending on where you live, you might also include smaller events such as the M∼5 earthquakes that occurred last year in Colorado, Virginia, and Oklahoma. In all of these cases, the possibility of such earthquakes was neither widely acknowledged beforehand in public discourse on earthquake hazards nor considered very likely in probabilistic seismic hazard models.
As rare an occurrence as each might have been, does that mean that our models are wrong and should be abandoned? Do our models require a fundamental overhaul? Readers of this column have been treated to a spirited debate on these questions in recent months (Stein et al. 2011; Aster 2012; Stirling 2012).
Rather than join the fray (I endorse testing of models), I’d rather consider why such events stand out against the background of business-as-usual earthquakes. The big quakes mentioned above might be considered “black swans” (Taleb 2010), for they are rare, had an extreme impact on society, and became the object of attempts to predict their occurrence or fix perceived defects in our probabilistic models after the fact. But being rare is not the same as being an outlier that invalidates a particular probability model or overturns a methodology. Consequently, it is not obvious that our models and methods are in need of repair.
Some of these events, such as the Christchurch earthquakes, occurred on faults that were unknown beforehand. As Mark Stirling (2012) pointed out in this column in the March-April issue, the New Zealand hazard model accounted for such “background” earthquakes. Were these earthquakes unlikely? Yes. Were they unexpected? No. The official Japanese earthquake rate model clearly missed the mark with the Tohoku earthquake: an M 9 event was considered impossible. Yet the maximum size of earthquakes in the subduction zone off northern Honshu was far from settled and was being openly debated before the earthquake occurred. It’s also difficult to call the 2004 Sumatra-Andaman Islands earthquake a black swan, as it was included in the hazard map published beforehand by Petersen et al. (2004).
In contrast to these events, another recent great earthquake, the 2010 Maule, Chile, earthquake, was no surprise when it ruptured a high-probability segment of the Andean subduction zone. So, it would appear that the doubts and concerns lie with low-probability events with high losses.
While we may debate the form of our probabilistic seismic hazard models, few would disagree that the Gutenberg-Richter frequency-magnitude distribution describes the size distribution of earthquakes well, given a large enough region and enough time. It’s also the distribution from which many long-term probabilistic seismic hazard models are derived. When we write the G-R distribution as log(N) = a − bM, it is easy to think of it as an exponential probability distribution. However, when we express the G-R distribution using a physical measure such as seismic moment, the probability distribution is revealed to be a power law or Pareto distribution. This has important consequences for the nature of earthquake hazard and risk.
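The change of variable from magnitude to moment makes this point concrete. The following Python sketch is illustrative only: it assumes b = 1 and the standard moment-magnitude relation M0 = 10^(1.5M + 9.1) N·m, draws magnitudes from the exponential G-R law, converts them to seismic moments, and recovers the Pareto tail index β = b/1.5 ≈ 2/3 with a Hill estimator:

```python
import math
import random

random.seed(0)

# Gutenberg-Richter: magnitudes above m_min follow an exponential law
# with rate b * ln(10); b = 1 is an illustrative assumption.
b = 1.0
m_min = 5.0
mags = [m_min + random.expovariate(b * math.log(10)) for _ in range(100_000)]

# Convert magnitude to seismic moment (N*m): M0 = 10**(1.5*M + 9.1)
moments = [10 ** (1.5 * m + 9.1) for m in mags]

# In moment, the same distribution is Pareto with tail index beta = b / 1.5.
# The Hill (maximum-likelihood) estimate should land near 2/3.
m0_min = 10 ** (1.5 * m_min + 9.1)
hill = len(moments) / sum(math.log(m0 / m0_min) for m0 in moments)
print(f"Hill estimate of tail index: {hill:.3f}  (theory: {b / 1.5:.3f})")
```

A tail index below 1 is what makes the moment distribution so heavy: a pure Pareto law with β < 1 does not even have a finite mean, which is one reason the tail must be truncated or tapered in practice.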
All Pareto distributions are heavy-tailed, meaning that there is a higher probability of extremely large events (in the tail) than would be found in any exponential or normal distribution. According to a recent analysis by Ibragimov et al. (2009), economic losses in earthquakes also follow a Pareto law. Heavy-tailed loss distributions will produce extremely negative outcomes with greater frequency than intuition trained on Gaussian or lognormal statistics would suggest. Simply put, we shouldn’t be surprised by their occurrence.
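To see how differently the two families behave in the tail, compare the exceedance probability P(X > x) of a Pareto law with that of an exponential law matched to the same mean; the parameter values here are purely illustrative:

```python
import math

# Pareto with tail index alpha and scale x_min; alpha > 1 so the mean exists.
alpha, x_min = 1.5, 1.0
mean = alpha * x_min / (alpha - 1)   # Pareto mean = 3.0
lam = 1.0 / mean                     # exponential with the same mean

# Exceedance probabilities P(X > x) for both distributions.
for x in (10, 100, 1000):
    p_pareto = (x_min / x) ** alpha
    p_exp = math.exp(-lam * x)
    print(f"x = {x:5d}: Pareto {p_pareto:.2e}   exponential {p_exp:.2e}")
```

At x = 100 the Pareto exceedance probability is about 10^-3, while the matched exponential is smaller by roughly twelve orders of magnitude; intuition calibrated on the thin-tailed family badly underestimates the extremes.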
When either the hazard (the likelihood of faulting, shaking, or tsunami) or the risk (the consequences in terms of deaths, damage, or disruption) is heavy-tailed, it’s as important to define the shape of the tail as it is to measure the rate parameter of the overall model. For example, the tail of the G-R distribution doesn’t extend to infinity, and consequently discussions about the behavior of the earthquake frequency-magnitude distribution at large magnitudes often revolve around the functional form and parameters that define the tail. Defining the probability model in the tail requires substantially more effort and observational data than determining the median behavior, simply because the events in the tail are rare.
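One common way to give the G-R tail a finite extent is a tapered Pareto law, in which an exponential roll-off at a “corner” moment damps the pure power law. The sketch below uses one standard parameterization; the corner magnitudes are illustrative assumptions, not estimates for any real subduction zone. It shows how strongly the choice of corner magnitude controls the probability of an M 9 event:

```python
import math

def moment(mag):
    """Seismic moment in N*m from moment magnitude."""
    return 10 ** (1.5 * mag + 9.1)

def tapered_survival(m0, m0_min, beta, m0_corner):
    """Tapered Pareto survival function: a Pareto body with an
    exponential roll-off at a corner moment."""
    return (m0_min / m0) ** beta * math.exp((m0_min - m0) / m0_corner)

beta, m_min = 2 / 3, 5.0
for m_corner in (8.0, 9.0):   # corner magnitude sets the effective upper size
    p9 = tapered_survival(moment(9.0), moment(m_min), beta, moment(m_corner))
    print(f"corner M{m_corner}: P(M >= 9 | M >= {m_min}) = {p9:.2e}")
```

Moving the corner from M 8 to M 9 changes the conditional probability of an M ≥ 9 event by many orders of magnitude, which is why the functional form and parameters of the tail matter so much.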
Post-Tohoku, seismologists, geologists, and geodesists have been re-examining their assumptions about the largest earthquakes hosted by the global system of subduction zones. New techniques, such as seafloor GPS, and the patience to gather the data, will likely be required to measure the coupling and thereby the potential for great earthquakes in areas where they are unobserved in the short historic record. Similar questions are also being asked by the Working Group on California Earthquake Probabilities, which is considering the possibility of faults linking together to produce longer ruptures and hence larger quakes than are known in either the historic or paleoseismic record. Even though such events are unlikely and may not greatly influence the mean hazard, we don’t want to be surprised by them. Neither do the press, government officials, or the public. In particular, no one wants to be surprised by heavy losses. It’s precisely for this reason that our building codes design for ground motions with long mean return times rather than those that are expected.
There is an additional wrinkle to consider: not all earthquakes are purely a consequence of tectonic processes. Earthquakes induced by a wide variety of industrial activities are well known, but the hazards they pose are difficult to characterize beforehand. In fact, the scientific case that specific events were induced can be difficult to make. We typically lack information about local seismicity, the state of stress and fluid pressure, the geologic setting and locations of faults, and details about the industrial influences. These are precisely the things we need to know to test our current understanding of earthquake mechanics. Even when the case for a non-tectonic trigger is solid, we still have much to learn about the magnitude distribution of the earthquakes, particularly those that populate the heavy tail. And in the end, that is where society’s concern lies and where our research must go.
Comments and suggestions from John Filson, Art Frankel, and Ross Stein are gratefully acknowledged.