“It seems probable that a very long period will elapse before another important earthquake occurs along that part of the San Andreas rift which broke in 1906; for we have seen the strains causing the slip were probably accumulating for 100 years.” Professor Reid, in his 1910 contribution to the Lawson Commission report, thus anticipated why the centenary of the San Francisco earthquake would be so significant to earthquake scientists: the northern San Andreas, now a century along, is entering a mature stage of the Reid cycle.
The south-central portion of the San Andreas reached its earthquake centenary when I was a kid (in 1957), while the southernmost stretch probably passed this milestone before Thomas Jefferson was elected president (circa 1800). Tectonic forces are inexorably tightening the springs of the San Andreas fault system. The probability that at least one of these three segments will rupture in the next 30 years is thought to lie somewhere between 35% and 70%, depending on how you interpret the paleoseismic data and other constraints on the regularity of the Reid cycle.
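To make the arithmetic behind such an aggregate figure concrete, here is a minimal sketch that combines per-segment 30-year rupture probabilities into the chance that at least one segment ruptures, under the simplifying assumption that the segments fail independently; the segment values in the code are illustrative placeholders, not published estimates.

```python
# Illustrative sketch: combine hypothetical 30-year rupture probabilities for
# three San Andreas segments into P(at least one rupture), assuming the
# segments fail independently. The numbers are placeholders, not published
# hazard estimates.

def prob_at_least_one(segment_probs):
    """Return P(at least one rupture) for independent per-segment probabilities."""
    p_none = 1.0
    for p in segment_probs:
        p_none *= (1.0 - p)   # probability that this particular segment stays quiet
    return 1.0 - p_none       # complement: at least one segment ruptures

if __name__ == "__main__":
    # Hypothetical 30-year conditional probabilities for the northern,
    # south-central, and southernmost segments.
    segments = {"northern": 0.20, "south-central": 0.25, "southern": 0.30}
    combined = prob_at_least_one(segments.values())
    print(f"P(at least one rupture in 30 yr) = {combined:.2f}")  # 0.58 with these inputs
```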
The sedimentary basins of coastal California have become highly urbanized since the last major San Andreas earthquake. These basins are strung out along the San Andreas fault system in a natural but unfortunate geometry that funnels energy from large earthquakes into very intense, long-duration basin waves. New physics-based simulations of San Andreas earthquakes indicate that the low-frequency shaking in the urban basins could be substantially larger than previously predicted. Moreover, the strong shaking from large ruptures on subsidiary faults, such as the Puente Hills blind thrust directly beneath Los Angeles, could be even worse—in the words of one structural engineer, “the earthquake from hell.” A recent risk study estimated that a Puente Hills earthquake could cause 3,000 to 18,000 deaths and direct economic losses ranging from $80 billion to $250 billion. California approaches its Armageddon on a geologic fast-track, and we seismologists are getting nervous.
Not everyone appears to be worried, however. In a front-page story on November 4, 2005, the Los Angeles Times reported that “efforts to bolster earthquake safety in California have hit roadblocks at the state and local levels as memories of major temblors fade and lawmakers and business owners balk at the cost of retrofitting structures.” In September, the governor vetoed funding for the California Seismic Safety Commission.
It is this context—the heightening risk to a sometimes indifferent society—that compels my own thinking on the troublesome topic of earthquake prediction. Despite more than a century of research, no methodology can reliably predict the locations, times, and magnitudes of potentially destructive fault ruptures on time scales of a decade or less. Many scientists question whether such predictions will ever contribute to risk reduction, even with substantial improvements in the ability to detect precursory signals, simply because the chaotic nature of brittle deformation may preclude useful short-term predictions. Most observations in well-instrumented continental regions are consistent with this view. The pessimism has been deepened by repeated cycles in which public promises that reliable predictions are just around the corner have been followed by equally public failures of specific prediction methodologies.
Owing to these failures, the subject has become increasingly contentious, with some pessimists openly advocating that earthquake prediction research should be abandoned. According to this view, earthquake prediction is a wild goose chase that distracts researchers, as well as the public at large, from the real task at hand: promoting long-term seismic safety. The official enthusiasm for prediction research has dropped to low levels, both here and abroad, as has the funding. For example, the National Earthquake Prediction Evaluation Council, the federal advisory body for assessing earthquake predictions, lapsed into dormancy ten years ago, and its state counterpart, the California Earthquake Prediction Evaluation Council, though still active, receives little state support, has no effective infrastructure for assessing predictions, and largely depends on volunteer efforts by its committee members.
Despite the notable lack of past success, there is clearly a resurgence of research on earthquake prediction at the grass-roots level. The optimists, often young and unjaded, are motivated by better data from seismology, geodesy, and geology; new knowledge of the physics of earthquake ruptures; and a more comprehensive understanding of how active fault systems actually work. Promising developments are emerging on several fronts.
The optimists point to observational and theoretical evidence that large-scale failures within certain fault systems may be predictable on intermediate time scales ranging from decades to years, provided that adequate knowledge about the history and present state of the system can be obtained. They note that in some tectonic environments, such as mid-ocean ridge transform faults, large earthquakes may be predictable on time scales as short as one hour in spatial windows as narrow as 30 km.
What then should be the agenda for earthquake prediction research in this centenary year of 2006? How should our research goals balance the optimistic idealism of what might be learned against the pessimistic reality that earthquakes are liable to strike our cities with no useful warning?
To address these questions, we first need to establish a precise vocabulary for our own discourse as well as our communications with the public. We should distinguish intrinsic predictability (the degree to which the future occurrence of earthquakes is encoded in the precursory behavior of an active fault system) from a scientific prediction (a testable hypothesis, usually stated in probabilistic terms, of the location, time, and magnitude of fault ruptures), and further distinguish a scientific prediction from a useful prediction (the advance warning of potentially destructive fault rupture with enough accuracy in space and time to warrant actions that may prepare communities for a potential disaster).
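As a concrete illustration of these distinctions, here is a minimal sketch of what a scientific prediction might record so that it can be tested after the fact: explicit space, time, and magnitude windows plus an attached probability. The class name, fields, and thresholds are assumptions made for this example, not a community standard.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Tuple

@dataclass
class ScientificPrediction:
    """Hypothetical record of a testable, probabilistic prediction statement."""
    lat_range: Tuple[float, float]   # spatial window: (min_lat, max_lat)
    lon_range: Tuple[float, float]   # spatial window: (min_lon, max_lon)
    t_start: datetime                # start of the time window
    t_end: datetime                  # end of the time window
    mag_min: float                   # minimum magnitude of qualifying ruptures
    probability: float               # P(at least one qualifying rupture in the window)

    def could_be_useful(self, max_window_days: float, min_probability: float) -> bool:
        """Crude stand-in for the 'useful prediction' distinction: a window
        narrow enough, and a probability high enough, to warrant action."""
        window_days = (self.t_end - self.t_start).total_seconds() / 86400.0
        return window_days <= max_window_days and self.probability >= min_probability

# Example: a one-year, M>=6.5 statement with a 2% probability is a perfectly
# testable scientific prediction, yet it fails an (arbitrary) usefulness test.
p = ScientificPrediction((33.5, 34.5), (-119.0, -117.5),
                         datetime(2006, 1, 1), datetime(2006, 12, 31),
                         mag_min=6.5, probability=0.02)
print(p.could_be_useful(max_window_days=30.0, min_probability=0.5))  # False
```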
Researchers conduct prediction experiments to test scientific hypotheses about earthquake predictability. These often concern aspects of earthquake behavior of little practical interest, such as the regional seismicity rate, which is dominated by very small earthquakes, or earthquakes in a remote area. However, when the hypotheses to be tested target large earthquakes in populated areas, prediction experiments intended to investigate earthquake predictability can easily be confused with operational predictions; i.e., officially sanctioned predictions intended to be useful for risk mitigation.
In this simple vocabulary, the central issues of earthquake prediction can be posed as three related questions: (1) How can scientific prediction experiments be conducted and rigorously evaluated? (2) What is the intrinsic predictability of fault ruptures within active fault systems? (3) Can predictions be made reliable and accurate enough to be useful for risk mitigation?
These questions constitute a hierarchy, in the sense that each can be addressed more effectively if answers to the preceding ones are available. From this perspective, setting up an infrastructure that responds to question (1) deserves very high priority.
This simple thesis is worth considering in more detail. The general public has always had high expectations that science will deliver reliable and useful predictions, and it still waits for a positive response to question (3). To meet these expectations, scientists have long sought a heroic solution: the discovery of a precursory phenomenon or pattern that can reliably signal when a fault is approaching a large earthquake. It would be premature to say such deterministic predictions are impossible, but this “silver bullet approach” has certainly not been successful so far. Of course the quest should continue—science should always be heroic!—but the immediate prospects of finding the silver bullet seem rather dim.
An alternative route to answering question (3) follows what I’ll call the “brick-by-brick approach” to question (2): building an understanding of earthquake predictability through interdisciplinary, physics-based investigations of active fault systems across a wide range of spatial and temporal scales. The ability to predict the behavior of an active fault system (or any other geosystem, for that matter) is an essential measure of how well we can model its dynamics. Let’s face it: no existing model adequately describes the basic features of dynamic fault rupture, nor is one available that fully explains the dynamical interactions among faults, because we do not yet understand the physics of how matter and energy interact under the extreme conditions of rock failure. As the National Research Council noted in its 2003 decadal study of earthquake science, “a fundamental understanding of earthquake predictability will likely come through a broad research program with the goals of improving knowledge of fault-zone processes, the nucleation, propagation, and arrest of fault ruptures, and stress interactions within fault networks.”
To understand earthquake predictability, we must be able to conduct scientific prediction experiments under rigorous, controlled conditions and evaluate them using accepted criteria specified in advance. Retrospective prediction experiments, in which hypotheses are tested against data already available, have their place in calibrating prediction algorithms, but only true (prospective) prediction experiments are adequate for testing predictability hypotheses. Therefore, scientific research on earthquake predictability would profit from a solution to the experimental problems posed by question (1). Attempts have been made over the years to structure earthquake prediction research on an international scale. For example, the International Association of Seismology and Physics of the Earth’s Interior (IASPEI) has maintained a Sub-Commission on Earthquake Prediction for almost two decades; the sub-commission has attempted to define standards for evaluating predictions, and IASPEI holds regular meetings and symposia on the issue. However, most observers would agree that our current capabilities for conducting scientific prediction experiments remain inadequate for at least four reasons.
Solving these problems requires an extraordinary degree of scientific collaboration. In 2001, the Southern California Earthquake Center (SCEC) and the U.S. Geological Survey established a Working Group on Regional Earthquake Likelihood Models—the RELM project—with the goal of prototyping and testing a variety of earthquake prediction algorithms, including seismicity-based forecasts, geodetically driven forecasts, pattern recognition algorithms, and stress interaction and rate-and-state models. The ensuing efforts to set common standards have led to an agreement among all RELM modelers to test their prediction algorithms in a fully prospective sense. Three contests are being initiated on January 1, 2006: one for daily predictions, evaluated in each 24-hour period; and two for one- and five-year predictions, evaluated yearly. The models are implemented on dedicated computers in the RELM testing center at the Swiss Federal Institute of Technology in Zürich, Switzerland, isolated from their authors, and the ongoing evaluations are posted on a common website.
The RELM project is limited in geographic and conceptual scope; all algorithms included in the testing program must specify seismicity likelihood in preset spatial and magnitude bins. The testing program does not accommodate alarm-based predictions, nor does it provide a venue for prediction experiments outside California. However, it has shown the way forward by prototyping a controlled environment in which scientists can conduct earthquake prediction experiments according to community standards and compare their results with reference predictions, such as the long-term forecast of the National Seismic Hazard Mapping Project. Comparative testing is particularly crucial, because it is the best way to bootstrap our understanding of earthquake predictability.
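To convey the flavor of such comparative scoring, here is a minimal sketch, assuming (as in the gridded approach described above) that each forecast states an expected number of earthquakes per space-magnitude bin and that observed bin counts are treated as independent Poisson variables; the function, bins, and numbers are simplified placeholders, not the RELM testing code.

```python
import math

def poisson_log_likelihood(expected_counts, observed_counts):
    """Joint log-likelihood of the observed bin counts under a gridded forecast,
    treating each space-magnitude bin as an independent Poisson variable whose
    mean is the forecast rate for that bin."""
    ll = 0.0
    for lam, n in zip(expected_counts, observed_counts):
        lam = max(lam, 1e-12)  # guard against bins with zero forecast rate
        ll += n * math.log(lam) - lam - math.lgamma(n + 1)
    return ll

if __name__ == "__main__":
    # Hypothetical expected counts per bin for two competing forecasts,
    # and the counts actually observed over the evaluation period.
    forecast_a = [0.10, 0.40, 0.05, 0.20]
    forecast_b = [0.20, 0.20, 0.20, 0.20]
    observed   = [0,    1,    0,    0]

    ll_a = poisson_log_likelihood(forecast_a, observed)
    ll_b = poisson_log_likelihood(forecast_b, observed)
    # The model with the higher joint log-likelihood is preferred; a long-term
    # reference forecast would be scored against the same observations.
    print(f"log-likelihood A = {ll_a:.3f}, B = {ll_b:.3f}")
```

The published RELM evaluation procedures are considerably more elaborate, but the comparison above captures the basic idea of scoring competing gridded forecasts against a common set of observed earthquakes.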
The time is right to extend RELM to a more ambitious level by creating a virtual, distributed laboratory with a cyberinfrastructure adequate to support a truly global program of research on earthquake predictability. Such a facility would be a “collaboratory” in the sense originally coined by Bill Wulf in 1989: “a center without walls, in which … researchers can perform their research without regard to geographical location, interacting with colleagues, accessing instrumentation, sharing data and computational resources, [and] accessing information in digital libraries.”
An appropriate name for this facility would be the Collaboratory for the Study of Earthquake Predictability (CSEP). Within SCEC, we have begun to develop plans for CSEP based on five objectives.
The last objective implies that SCEC cannot go it alone; CSEP must be developed through international partnerships with scientists who share an interest in earthquake prediction research.
Properly configured, CSEP will encourage research on earthquake predictability by supporting an environment for scientific prediction experiments that allows the predictive skill of proposed algorithms to be rigorously compared with standardized reference methods and data sets. It will thereby reduce the controversies surrounding earthquake prediction, and it will allow the results of prediction experiments to be communicated to the scientific community and general public in an appropriate research context. The standards set by CSEP and the results obtained on the performance of scientific prediction experiments should help the responsible government agencies, such as the USGS and California Office of Emergency Services, assess the utility of earthquake prediction and place prediction research in the appropriate context of risk reduction.
To be sure, CSEP is an ambitious proposition, requiring a robust, sustainable infrastructure and a commitment among researchers around the globe to work together more closely than we have in the past. Some prediction experiments, including prospective tests of long-term forecasting methods, will require run times of decades or longer. But we should begin a global program of comparative testing now, because it will help us build, brick by brick, a better system-level understanding of earthquake predictability. Who knows, maybe the next big one on the San Andreas Fault won’t come as such a surprise after all.
Thomas H. Jordan
Director, Southern California Earthquake Center
University of Southern California
Los Angeles, CA 90089-0742
tjordan@usc.edu
To send a letter to the editor regarding this opinion or to write your own opinion, contact the SRL editor by e-mail.
Posted: 02 March 2006