- © 2010 by the Seismological Society of America
The purpose of this article is to explore new Web-based resources for disseminating data in an interactive and intuitive way that is accessible and engaging to the public without sacrificing scientific utility. This article highlights these new opportunities and describes their synthesis with recently developed automated tremor monitoring methodologies. The resulting product is a website that the reader may find useful to explore in tandem with this article, http://www.pnsn.org/tremor.
In this article, the data dissemination example is specifically applied to Cascadia tremor, but the utilization of freely accessible Web applications to provide an interactive Web experience for the public and scientific community alike could easily be applied to many different aspects of seismology.
In the decade since these phenomena were discovered, advances in both instrumentation and methodology in subduction zones around the world have brought the causal connection between seismically observed tectonic tremor (Obara 2002) and geodetically observed slow slip (Dragert et al. 2001) into sharper focus. In addition to the strong spatio-temporal correlation between the separately observed phenomena (Wech et al. 2009), mounting evidence indicates that deep, non-volcanic tremor represents slow shear (Ide et al. 2007; Wech and Creager 2007) occurring at the interface (Shelly et al. 2006, 2007) between the subducting oceanic and overriding continental plates, suggesting the two phenomena are different manifestations of the same process. There is still ongoing debate over the depth and mechanism of tectonic tremor (Kao et al. 2005; McCausland et al. 2005), but what is clear is that tremor serves as a proxy for slow slip. Considering geodesy's limited spatial resolution and slow-slip detection threshold together with the abundance of low-level, ageodetic tremor, this connection makes tremor a key component for monitoring when, where, and how much slip is occurring. Because slow slip transfers stress to the updip seismogenic portion of the plate interface (e.g., Rogers and Dragert 2003; Mazzotti and Adams 2004), monitoring transient events may help forecast the threat of a megathrust earthquake by revealing temporal and spatial variations in the loading of the seismogenic zone.
Assuming the task of monitoring tremor is of some importance, the process of creating a complete system to continuously monitor tremor raises two key questions:
Considering the non-impulsive and enduring nature of seismic tremor, how does one quickly and efficiently sift through a well-instrumented subduction zone's worth of seismic data to search for tremor on a margin-wide scale?
Given our obligation as federally funded scientists to report to and connect with the public, how does one disseminate these results in a way that is accessible and engaging to the general population yet remains valuable as a tool for scientific synergy across institutions and disciplines?
WECC & GOOGLE APIs
Combining these pieces yields a system, currently applied to the majority of the Cascadia subduction zone from northern California to mid-Vancouver Island, that seamlessly and automatically turns raw waveforms into an interactive online catalog (Figure 1) and sends e-mail/text alerts with activity updates to interested parties. Of course, the individual processes composing this system are not new. Near-real-time tremor detection and location has been addressed before on Vancouver Island (Kao et al. 2008), and there are many examples of real-time earthquake catalogs overlaying earthquake activity on maps. However, Cascadia tremor monitoring has never been done on this scale, and earthquake Web maps tend not to be very interactive or give the user much control; often they display only a fixed time period of activity. So while the individual pieces aren't new, it is the combination of these processes and the utilization of freely available API resources that yields a useful and novel product of general public interest and accessibility, one that also addresses the growing collaborative efforts required by this interdisciplinary phenomenon.
Data Processing Details
Tremor is monitored across the subduction zone by piecing together seven overlapping subnetworks (seen by hovering the cursor over a region name in the “Region Options” pane on the Web site), each containing ∼20 stations, from northern California to mid-Vancouver Island. At the end of each GMT day, data from the Pacific Northwest Seismic Network (PNSN), Pacific Geoscience Center, Plate Boundary Observatory, and Northern California Regional Network are processed from 24-hour-long data files available at the PNSN. These data consist of approximately 100 short-period and broadband stations that span the up- and down-dip extent of the slow slip zone across the margin. Vertical-component waveforms are bandpass filtered from 1–6 Hz (the tremor frequency band), converted into envelope functions, low-pass filtered at 0.1 Hz, and decimated to 1 Hz. These preprocessed 1-sps envelopes are saved for the whole day and later read in by the location code, which parses out each region's subnetwork. A parallel process simultaneously generates envelopes low-passed at 0.05 Hz, decimated to 0.2 sps, and saved as seven separate PDF documents (linked from the Web site for data viewing).
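The preprocessing chain just described can be reduced to a few lines. The author's pipeline runs in MATLAB via CORAL, but a minimal Python/SciPy sketch of the same steps, assuming a hypothetical 100-sps vertical-component trace (the actual sampling rates and filter orders are not stated), would look roughly like this:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def preprocess_envelope(trace, fs=100.0):
    """Sketch of the envelope preprocessing chain; sampling rate and
    filter orders are assumptions (the original uses MATLAB/CORAL)."""
    nyq = fs / 2.0
    # Bandpass 1-6 Hz, the tremor frequency band
    b, a = butter(2, [1.0 / nyq, 6.0 / nyq], btype="band")
    x = filtfilt(b, a, trace)
    # Envelope function: magnitude of the analytic signal
    env = np.abs(hilbert(x))
    # Low-pass the envelope at 0.1 Hz
    b, a = butter(2, 0.1 / nyq, btype="low")
    env = filtfilt(b, a, env)
    # Decimate to 1 sps (the 0.1 Hz low-pass already prevents aliasing)
    return env[:: int(fs)]
```

A parallel call with a 0.05 Hz corner and a stride of `5 * fs` would yield the 0.2-sps envelopes saved for the PDF plots.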
Detecting & Locating Tremor
Detections and locations are automatically determined using WECC (waveform envelope correlation and clustering; Wech and Creager 2008). For a given region, I choose a subnet of about 20 stations based on geographic distribution and known station quality. One region at a time, the location code reads in the daily envelope data and parses out the appropriate subnet of stations as defined by a database text file. Tremor epicenters are then automatically detected and located by employing a cross-correlation method to generate potential epicenters before using the resulting epicenters to detect tremor (Wech and Creager 2008). By automatically analyzing network coherence through epicentral reliability and spatial repeatability, this method simultaneously detects tremor and produces robust estimates of tremor locations.
Using a reversed methodology, WECC locates before detecting tremor. For a given five-minute time window of vertical-component envelope data, WECC automatically obtains centroid location estimates by cross-correlating all station pairs and performing a 3-D grid search over potential source-location S-wave lag times that optimize the cross-correlations. Using bootstrap error estimates for every 50%-overlapping window, only those solutions with epicentral error estimates less than 5 km are kept as potential tremor sources. These potential locations are then tested for an enduring signal localized in space. Two or more locations within a 0.1 × 0.1 degree area in a day are considered a cluster. These clustered epicenters are counted, and a successful detection is obtained when more than one hour of tremor (spread throughout the region and day) is identified in a region (as determined by summing five-minute-windowed locations, accounting for overlap). If this is the case, the latitude, longitude, and beginning time of each tremor epicenter (defined as a five-minute window contributing to a successful detection) are appended to a text-file catalog for that region. See Wech and Creager (2008) for details on detection, location, weights, and error estimates.
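The clustering and detection bookkeeping described above can be sketched briefly (in Python rather than the author's MATLAB; function and parameter names are hypothetical, and the grid search and bootstrap error screening are assumed to have already been applied to the input):

```python
from collections import Counter

def detect_tremor(epicenters, win_min=5.0, overlap=0.5, bin_deg=0.1):
    """epicenters: list of (start_time, lat, lon) candidate solutions
    from 50%-overlapping five-minute windows that passed the 5-km
    bootstrap error cut.  Returns (detected, clustered_epicenters)."""
    def cell(lat, lon):
        # Bin candidates into 0.1 x 0.1 degree cells
        return (round(lat / bin_deg), round(lon / bin_deg))

    counts = Counter(cell(lat, lon) for _, lat, lon in epicenters)
    # A cluster is two or more locations in one cell during the day
    clustered = [e for e in epicenters if counts[cell(e[1], e[2])] >= 2]
    # With 50% overlap, each window contributes 2.5 unique minutes;
    # detection requires more than one hour in the region for the day
    duration_min = len(clustered) * win_min * (1.0 - overlap)
    return duration_min > 60.0, clustered
```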
WECC is independently applied to each subnetwork with a grid spanning 2.5 degrees latitude and 4 degrees longitude. Because of detection limitations on the fringes of each network, the networks share edge stations and latitudinally overlap each other by about 50%. Analysis of duplicated locations within 25 km of each other obtained by overlapping networks shows that adjacent networks obtain epicenters with differences typically less than 10 km. This result has two positive outcomes. 1) It means that I can safely average the epicenters from duplicated time windows in regions of overlap. 2) The redundancy gives us further confidence in the reliability of WECC results.
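Averaging the duplicate epicenters from overlapping networks might look like the following sketch (the data structures are hypothetical: each network's daily catalog is taken as a dict keyed by window start time, and the keep-one policy for pairs disagreeing by more than 25 km is my assumption, not stated in the text):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two epicenters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = p2 - p1, math.radians(lon2 - lon1)
    h = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def merge_networks(net_a, net_b, max_km=25.0):
    """Average epicenters located by both networks for the same time
    window when they agree to within 25 km."""
    merged = {}
    for t in set(net_a) | set(net_b):
        if t in net_a and t in net_b:
            (la1, lo1), (la2, lo2) = net_a[t], net_b[t]
            if haversine_km(la1, lo1, la2, lo2) <= max_km:
                merged[t] = ((la1 + la2) / 2, (lo1 + lo2) / 2)
            else:
                merged[t] = net_a[t]  # assumed policy for outlier pairs
        else:
            merged[t] = net_a[t] if t in net_a else net_b[t]
    return merged
```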
As each region's processing completes, the system checks how many regions have finished that day. Once this check yields seven, the system checks whether any of the seven regions detected more than one hour of tremor. If so, it composes an e-mail summarizing the region(s) and amount(s) of tremor and sends it to recipients interested in alerts for a detected region. Also, if the system breaks somewhere along the line (e.g., missing data), it sends an e-mail and text message alert to the person “overseeing” the process (me for the moment).
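The end-of-day alert logic can be sketched as follows (function and field names are hypothetical, and actual delivery, e.g. via `smtplib`, is omitted):

```python
def compose_alert(durations_hr, n_regions=7, threshold_hr=1.0):
    """durations_hr maps region name -> hours of tremor detected.
    Returns an alert message once all regions have reported and at
    least one exceeds the one-hour threshold; otherwise None."""
    if len(durations_hr) < n_regions:
        return None  # not every region has completed yet
    active = {r: h for r, h in sorted(durations_hr.items())
              if h > threshold_hr}
    if not active:
        return None  # nothing to report today
    lines = [f"{region}: {hours:.1f} hours of tremor"
             for region, hours in active.items()]
    return "Tremor activity detected:\n" + "\n".join(lines)
```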
XML Files for Web Site
Generating XML Files
Every step short of the Web site (Figure 2) is performed using MATLAB software from MathWorks (http://www.mathworks.com). Day-long waveforms are read into and preprocessed with MATLAB via CORAL (Creager 1997), completing in about 10–15 minutes. Depending on tremor activity and station quality, WECC takes between 5 and 40 minutes per region. Using a quad-core computer, four regions are processed simultaneously. Monitoring a whole day across the entire margin is therefore completed in roughly an hour, and daily results are available online around 02:00 UTC.
Google Maps API
Google Visualization API
As mentioned before, a summary XML file is generated that contains daily summaries of the number of epicenters and the tremor duration for each region. This file serves several functions.
With each request, daily summaries for the selected time windows are combined to provide a summary of what is being plotted.
The daily durations are used to create time series in the Annotated Time Line for each region selected.
The regions, dates, average latitudes, average longitudes, and durations are used to create the Motion Chart, which allows users to interactively customize and explore many variables in time.
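A summary file carrying these fields could be produced by something like the following sketch (the element and attribute names are hypothetical; the actual schema consumed by the site is not given here):

```python
import xml.etree.ElementTree as ET

def write_summary(day_records):
    """day_records: iterable of dicts with the per-region, per-day
    fields described above (key names assumed for illustration)."""
    root = ET.Element("tremor_summary")
    for rec in day_records:
        ET.SubElement(root, "day", {
            "date": rec["date"],
            "region": rec["region"],
            "lat": f'{rec["lat"]:.3f}',   # daily average latitude
            "lon": f'{rec["lon"]:.3f}',   # daily average longitude
            "epicenters": str(rec["n_epicenters"]),
            "duration_hr": f'{rec["hours"]:.2f}',
        })
    return ET.tostring(root, encoding="unicode")
```

Client-side JavaScript can then fetch this one small file to drive the calendar, the Annotated Time Line, and the Motion Chart without re-reading the full epicenter catalogs.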
Though slightly unorthodox, tremor is summarized by duration rather than amplitude because the emergent and enduring nature of tremor signals makes the definition of a tremor “event” or its size nebulous. Determining tremor size is a work in progress, but at least knowing what fraction of a day was active in a given region seems to have meaning. Both the calendar (which highlights and provides summaries for active days) and the timeline are regenerated with each toggle of a new region. This lets the user draw on the data presented in each to choose a date range. Furthermore, because of this connection between the timeline and date selection, a change in the timeline automatically populates the date fields with its current date range, and each time-range request automatically updates the timeline.
Some additional features worth mentioning:
Station overlay. This option will plot all the stations used for the most recent day's tremor detection attempt.
Regional station display. Hovering the cursor over the text of each region in the Region Options menus temporarily overlays the stations used for detection and location in that specific region.
Plate depth overlay. This option overlays isodepth lines for the subducting Juan de Fuca plate (McCrory 2006).
Envelope PDFs. For each region and each day, a 24-page PDF (one hour per page) of smoothed seismic envelope data is available for viewing.
Marker highlights. Hovering the cursor over each time in the epicenter list highlights the marker in question by temporarily overlaying a new, bigger marker.
Items in the epicenter list are linked to the markers such that a click triggers the popup information window for that epicenter.
Station markers can be clicked for an information window with links to generate webicorders for the past 12, 24, or 36 hours.
Through a combination of techniques, I have developed a near-real-time system that automatically turns raw waveforms into an interactive online catalog of tremor epicenters. Each of the two major aspects of this system, monitoring and dissemination, presents a new approach to a problem with potential efficacy beyond its current application in the Cascadia subduction zone. As tremor is found in more and more places and the quantity of data grows, the need for efficient systematic monitoring increases. Though I am not presenting it as a truly portable system, this detection and location approach provides a good option for systematic tremor searches and could be applied in other subduction zones. More importantly, regarding dissemination of information, the approach presented here provides the user with an intuitive and flexible experience that maintains the balance between general public accessibility and scientific utility, an experience that will only get better with the inevitable inclusion of more APIs and more server-side processing.
Ultimately, I hope to have brought to your attention a whole host of Web resources for data dissemination. This paper highlighted a few specific APIs and how to interface these with data, but has just barely scratched the surface of possibilities. There are many more, and even the implementation of the ones discussed here could be improved upon and/or used in other ways. These tools are freely available, intuitive for users, easily interfaced with many types of data, useful for the public and scientists alike, and capable of facilitating clean and efficient data dissemination with surprisingly little effort.
This work was funded by USGS grant nos. 08HQGR0034, G09AP00024, and G10AP00033. Data are provided by the Pacific Northwest Seismic Network, Pacific Geoscience Center, Plate Boundary Observatory, and Northern California Regional Network. This work greatly benefited from support by Stephen D. Malone. Thanks to Google and to Jon Connelly, Mike Williams, and Pamela Fox for insights related to Google Maps. The intimate company of Weston A. Thelen, Daniel J. Morgan, and The Macallan is responsible for inspiration at lofty, golden heights. Lastly, I want to give a shout out to Creags, baller extraordinaire.
University of Washington