© 2012
Online material: Sample SeisSound video products; MATLAB computer codes; sample data set.
INTRODUCTION TO SeisSound VIDEO PRODUCTS
The increased popularity of YouTube videos has changed how information is distributed and assimilated, highlighting the importance of including auditory information in videos. Videos that include sound also permeate the research community, as evidenced by their recent increase in online supplements to journal articles. Tapping into this new mode of information exchange, we are creating videos of seismic data that augment visual imagery with auditory counterparts. We term these “SeisSound” video products (Figure 1). We find that the richness and complexity of seismic data can be appreciated more easily with these SeisSound products than with the individual visual or auditory components alone.
Seismology encompasses a large number of processes that affect the spectral content of a seismogram, including the spatial extent, duration, and directivity of a source; path effects such as attenuation, near-surface geology, and basin resonance; and the differences between abrupt tectonic earthquakes and unusual sources such as volcanic and non-volcanic tremor. With training, we can learn to discern the seismic signatures of these different processes, which can be inferred from the spectral content of time series, spectra, and spectrograms; however, subtle differences in these signals can be difficult to convey to a less experienced audience.
Several of our senses can act as spectral analyzers. In the audible sound range we hear pitch, in the visible light range we see color, and in the low- and sub-audible range we can feel the difference between sudden and slow motions through our senses of motion and touch. For most people, the concepts of high or low pitch (frequency) and volume (amplitude) are innate. When we listen to a symphony orchestra, we can pick out the sound of individual instruments and decipher the unique spectral content of their tones even though a hundred musicians are playing simultaneously. Similarly, we can teach people to use these innate abilities to understand seismology by having them listen to the frequency content of a seismogram. Combining visual information with auditory information strengthens the connection between the heard pitch and the visually observed frequency content of seismograms and spectrograms (see the example in the companion paper by Peng et al. 2012, this issue, in the EduQuakes column). Introducing topics in seismology in this way extends our ability to communicate effectively with diverse audiences who have a variety of learning styles and levels.
The audible frequency range for humans is roughly 20 Hz–20 kHz, which is about two to three orders of magnitude (or seven to ten octaves) higher than the frequency content for most recorded earthquake signals. To bring the sub-audible frequency content of earthquake seismograms into the audible range, the seismic data need to be shifted to a higher pitch. To accomplish this, the simplest and purest method is to time compress the seismogram (e.g., Hayward 1994; Dombois 2001; Dombois and Eckel 2011) by increasing the playback speed relative to the recording rate. Time compression also allows us to play back a long record in a reasonable amount of time during a lecture or demonstration.
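The time-compression step amounts to declaring a playback rate higher than the recording rate. The paper’s codes are in MATLAB; as a language-neutral illustration, the Python sketch below (the 100-samples/s rate, 200× speedup, and file name are our assumptions, not values from the paper) writes a seismogram to a WAV file whose stated frame rate is the recording rate multiplied by a speedup factor, so a 1-Hz ground motion recorded at 100 samples/s and played back 200× faster is heard as a 200-Hz tone:

```python
import math
import struct
import wave

def audify(samples, fs, speedup, path):
    """Write samples (floats in [-1, 1]) as a 16-bit mono WAV file.

    Declaring the frame rate as speedup * fs makes any player read
    the data faster than it was recorded, raising every frequency
    in the seismogram by the same factor.
    """
    with wave.open(path, "wb") as w:
        w.setnchannels(1)                      # mono
        w.setsampwidth(2)                      # 16-bit samples
        w.setframerate(int(speedup * fs))      # time compression happens here
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        )
        w.writeframes(frames)

# Hypothetical example: 60 s of a 1-Hz sine sampled at 100 Hz,
# played back 200x faster, is heard as a 200-Hz tone lasting 0.3 s.
fs, speedup = 100, 200
samples = [math.sin(2 * math.pi * 1.0 * t / fs) for t in range(60 * fs)]
audify(samples, fs, speedup, "quake.wav")
```

Because only the declared frame rate changes, the waveform itself is untouched; the same factor also compresses a long record into a playback time suited to a lecture.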
Time compression as a method to convert seismic data to audible sound is an example of “audification,” a simple subset of the general field of “sonification,” which can involve more complex representation of data through transformations using various sound attributes (e.g., pitch, volume, and timbre). Early experiments with audification in seismology began with the advent of magnetic tape recording, which allowed playback at speeds higher than the recorded speed. One of the earliest examples of transforming seismic data into audible sounds was a recording by Benioff (1953), which includes various regional and teleseismic earthquakes recorded on an early tape system at Caltech’s Pasadena seismic station. Time compression using magnetic tape recording also was tested successfully as a method to discriminate between earthquakes and explosions (Speeth 1961; Frantti and Levereault 1965). Muirhead and Simpson (1972) used an ultra-slow speed (0.01 inch/sec) direct recording tape system to record a variety of earthquakes and explosions in Australia and incorporated time-shifted audio processing in their analysis. Some of these events were used as part of the “Murmurs of Earth” collection for the “Interstellar Record” launched on NASA’s Voyager spacecraft in 1977 (Sagan et al. 1978, 154). With the widespread use of digital recording in seismology, it is now possible to convert seismic waveforms to standard audio formats and apply simple filtering and time-compression techniques using widely available audio processing software. These types of auditory presentations of seismic data are now commonly used for educational purposes (e.g., Michael 1997; Simpson 2005; Michael 2011) and have recently regained popularity to highlight differences between typical earthquake recordings and tremor-like signals (Simpson et al. 2009; Fisher et al. 2010).
The SeisSound visual component includes the seismogram and corresponding spectrogram, presented in movie format to show how the data evolve with time. A time-compressed audio file (WAVE format) of the data accompanies the visual information so that the frequency content of the seismogram can easily be heard. Combining audio and visual information allows the user to both hear and see complexities in the frequency-time distribution of the seismogram that are often otherwise hidden in large-amplitude signals. These SeisSound video products provide a unique way to watch and listen to the vibration of the Earth, and they help introduce more advanced topics in seismology.
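The spectrogram panel tracks how spectral content evolves with time. As an illustration of what such a panel computes (a minimal Python sketch built from first principles, not the paper’s MATLAB implementation; the window length and hop are arbitrary choices), a short-time DFT magnitude can be assembled as follows:

```python
import cmath
import math

def spectrogram(x, fs, win_len, hop):
    """Naive short-time DFT magnitude spectrogram (illustrative only).

    Returns (frames, freqs): one magnitude spectrum per window
    position, and the frequency (Hz) of each spectral bin.
    """
    # Hann window to reduce spectral leakage at the frame edges
    w = [0.5 - 0.5 * math.cos(2 * math.pi * n / (win_len - 1))
         for n in range(win_len)]
    frames = []
    for start in range(0, len(x) - win_len + 1, hop):
        seg = [x[start + n] * w[n] for n in range(win_len)]
        # DFT magnitudes for the non-negative frequency bins
        mags = [abs(sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win_len)
                        for n in range(win_len)))
                for k in range(win_len // 2 + 1)]
        frames.append(mags)
    freqs = [k * fs / win_len for k in range(win_len // 2 + 1)]
    return frames, freqs

# Hypothetical example: a 10-Hz sine sampled at 100 Hz should place
# its energy in the 10-Hz bin of every frame.
fs = 100
x = [math.sin(2 * math.pi * 10 * t / fs) for t in range(200)]
frames, freqs = spectrogram(x, fs, win_len=50, hop=25)
```

In a production code a fast Fourier transform would replace the direct DFT sum, but the frame-by-frame structure is the same as what the video’s scrolling spectrogram displays.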
Our computer codes are written in MATLAB and are freely available (see the electronic supplement’s MATLAB codes and data bundle, which contains the programs and sample data). These codes produce an audio file and a sequence of static image files. The audio track is produced using the MATLAB function wavwrite, whose sample-rate argument can be scaled to speed up or slow down the playback. It typically takes only a few minutes for the code to process a standard seismogram. Figure 2 shows the recommended directory structure for the codes and data. The MATLAB programs and the parameters used to call the main program are listed in Tables 1 and 2.
For each seismogram, the MATLAB code reads seismic data in Seismic Analysis Code (SAC) format (Goldstein et al. 2003; Goldstein and Snoke 2005) and generates an audio WAVE file and a sequence of image files (∼200–500) showing the evolution of the seismogram and spectrogram with time. We use the software QuickTimePro to concatenate the images into a video and to add the corresponding audio file in sync with the video, creating a SeisSound video. The final SeisSound video file size (typically ∼1–15 MB) depends on the resolution and size of the images and the total number of frames in the video. In addition to the video product, we also include a stand-alone MATLAB code, sac2wav.m, to convert seismic data in SAC format directly to an audio WAVE file. The Incorporated Research Institutions for Seismology (IRIS) Data Management System (DMS) also provides a Webservice for extracting waveforms from the archive and converting them to WAVE format (http://www.iris.edu/ws/timeseries/).
Steps Required to Create SeisSound Products
There are two main steps to creating a SeisSound video. The first is running the SeisSound.m MATLAB code to produce an audio WAVE file and the sequence of images. As the code runs, it displays the data in three panels: the top panel shows the original seismogram, the middle panel a filtered version of the data, and the bottom panel a spectrogram of the data (e.g., see Figure 1). If you encounter an error message about a missing variable or function, check that you have all of the required routines (see Table 1) and that the codes and data are stored in the proper locations in the directory structure (Figure 2). If the code runs successfully, the message “Render Finished” appears in the MATLAB command window and two new subdirectories named “Audio” and “Images” are created (e.g., see Figure 2). In the second step, the audio and image files are merged to produce a SeisSound video (Figure 3 and Table 3).
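The merge in the second step is performed here with QuickTimePro. A scriptable freeware alternative (our suggestion, not part of the distributed codes) is ffmpeg, whose invocation could be assembled as below; the frame-name pattern, frame rate, and output name are hypothetical:

```python
def ffmpeg_merge_cmd(image_pattern, audio_path, out_path, fps=60):
    """Build an ffmpeg argument list that joins an image sequence
    with an audio track into a single video file."""
    return [
        "ffmpeg",
        "-framerate", str(fps),   # images consumed per second of video
        "-i", image_pattern,      # e.g. Images/frame_%04d.png
        "-i", audio_path,         # the time-compressed WAV track
        "-c:v", "libx264",        # widely supported video codec
        "-pix_fmt", "yuv420p",    # pixel format many players require
        "-shortest",              # stop when the shorter stream ends
        out_path,
    ]

cmd = ffmpeg_merge_cmd("Images/frame_%04d.png", "Audio/quake.wav",
                       "SeisSound.mov", fps=60)
```

Because the command is just an argument list, it can be driven in a loop over many events, which is the batch-mode capability discussed later in this article.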
Processing seismic waveform data with SeisSound is relatively straightforward. Currently, the input data must be in SAC format; standard SAC routines or the IRIS Data Management Center (DMC) time series Webservice (http://www.iris.edu/ws/timeseries) can be used to preprocess (e.g., filter and scale) the data. Station and waveform metadata (e.g., station code, start time, sample rate) are passed to the SeisSound program via the SAC header; additional information required to create the SeisSound products is supplied via parameters in the calling function, as described in Table 2.
SAMPLE SeisSound VIDEO PRODUCTS
An assortment of SeisSound examples, selected to demonstrate differences in the frequency and temporal characteristics of different seismic signals, can be found in our electronic supplement. Because of the wide dynamic range, the audio is best appreciated on a computer with good speakers, or with earbuds, so that the full effect of the lower frequencies can be heard. The amplitude of the low frequencies can sometimes be large, so start with the volume low to avoid damaging the speakers or, if you are using earbuds, your ears. With these SeisSound products, students can begin to decipher and understand complicated earthquake physics and earthquake triggering processes.
Notable signatures in the videos can be indicative of certain seismic processes. For example, multiple vertical streaks of red in the spectrogram, corresponding to popping sounds that begin at a fast rate and then ebb (e.g., Magnitude 8.1—Samoa Islands Region, 29 September 2009 17:48:10 UTC (station AFI); AFI_aftershock_movie60FPS.mov) are characteristic of a mainshock/aftershock sequence (e.g., Peng et al. 2006, 2007; Kilb et al. 2007). A uniform distribution of vertical streaks in the spectrogram accompanied by a repetitive pop-pop-pop tempo sound (e.g., Drumbeat earthquake swarms during the 2004 Mount St. Helens eruption; MtStHelen_Drumbeat.mov) is a characteristic of “drum beat” earthquake swarms during volcanic eruptions (Iverson et al. 2006). Another similar, yet distinctly different, signature is that of triggered deep non-volcanic tremor that typically occurs during the passage of the surface waves (e.g., Peng et al. 2008, 2009), which manifests as vertical streaks in the spectrogram during the surface wave portion of the seismic wavetrain accompanied by a relatively short-lived rat-a-tat-tat sound as from a snare drum (e.g., Triggered tremor in Parkfield, California, from the 2002 Mw 7.8 Denali, Alaska, earthquake; Denali_Triggered_Tremor.mov). The codes and example data in the electronic supplement bundled file are set up to create this example using the MATLAB script “main_tremor.m.”
AUDIENCE AND USES FOR SeisSound VIDEO PRODUCTS
We have used SeisSound video products in ∼50 educational settings and public lectures ranging from teaching kindergarteners to educating more advanced audiences, including graduate students and experienced seismologists. The SeisSound images and sounds immediately captivate audiences regardless of their age and expertise, and the complexity of the message can be adjusted for each group. The innovative combination of auditory and visual information is particularly useful for introducing seismic data to beginning researchers, including upper-level undergraduate and first-year graduate students in introductory geophysics or seismology courses.
SeisSound videos can be used to highlight differences in the amplitude, frequency, and duration of P, S, and surface waves and to teach how to discriminate between seismic signatures of teleseismic and local earthquakes. For more advanced audiences, SeisSound products can be used to explore details of the spectral content in seismograms. Concepts that can be more easily discussed and investigated by incorporating sound include: characterizing seismic wave attenuation with distance from the source, discriminating between large and small earthquakes, identifying aftershock rates, and recognizing site effects including reverberation in basins (e.g., Benioff 1953; Michael 1997; Simpson et al. 2009; Fisher et al. 2010). SeisSound products can also be useful in discriminating complicated seismic signals from multiple sources, such as aftershocks within the coda of large earthquakes (e.g., Peng et al. 2006, 2007; Kilb et al. 2007), remote triggering of earthquakes (Hill et al. 1993), and tremor (e.g., Peng et al. 2008, 2009).
We will be working with the IRIS DMC to create and distribute select SeisSound products through the IRIS product repository. To accomplish this, we plan to fully automate the process of creating SeisSound video products by replacing QuickTimePro with freeware alternatives, which will allow batch-mode processing. We will also design the end products to include associated metadata such as the recording station (station name and component), earthquake parameters (time, latitude, longitude, depth, magnitude), and the signature of interest in the waveform (e.g., aftershocks, tremor, volcanic drumbeats).
The design of SeisSound is modular enough to easily incorporate future enhancements. These might include but are not limited to: 1) An audio-video Webservice, similar to the IRIS time series Webservice (http://www.iris.edu/ws/timeseries/), which would allow users to either specify data at the IRIS DMC that they want to process or to upload their own data; 2) A customizable display that allows the spectrograms to be displayed in log-amplitude, or shows a series of the same seismogram with different filters applied; 3) A series of different seismograms played in sequence, presented either with true relative amplitude (volume represents differences in amplitude) or normalized amplitude (each record has the same maximum level, emphasizing the differences in pitch); 4) Aftershock/tremor location animations. Concurrent with the temporal evolution of the sound file and animated seismic data display, a map or cross-section of aftershock/tremor locations could be included where aftershocks/tremors in the current time window are marked as red or other colors, and then fade as time progresses (see example in companion paper Peng et al. 2012, this issue, in the EduQuakes column).
Although here we report on a relatively simple product that is primarily geared for use in educational settings, the concepts demonstrated by SeisSound can be expanded into more sophisticated research applications. A more advanced, and perhaps interactive, tool could include features such as zooming, filtering, and three-component rotation transformations. With these types of options available, the user could more efficiently search large quantities of seismic data for complicated and/or small nuances such as aftershock distribution characteristics, remotely triggered earthquakes, and tremor. We expect it will be easier to detect these key features using combined audio/visual techniques than with traditional or automated processing.
We thank an anonymous reviewer and SRL Associate Editor John N. Louie for their help and guidance. Integral to the success of this project was our participation in the Southern California Earthquake Center’s Summer Undergraduate Research Experience (SCEC SURE) program, which partnered undergraduate student MF in ZP’s lab in the summer of 2010.
SCEC is funded by National Science Foundation (NSF) Cooperative Agreement EAR-0106924 and U.S. Geological Survey (USGS) Cooperative Agreement 02HQAG0008. Support for this work included funding from IRIS sub-award 86-DMS funding 2011-3366 (DK) and NSF CAREER program EAR-0956051 (ZP). IRIS is funded by NSF under Cooperative Agreement EAR-0552316. The partnership with AM grew out of the online seminar series “Teaching Geophysics in the 21st Century: Visualizing Seismic Waves for Teaching and Research,” which was part of the “On the Cutting Edge—Professional Development for Geoscience Faculty” project.
1 Institute of Geophysics and Planetary Physics, University of California San Diego, La Jolla, California
2 School of Earth and Atmospheric Sciences, Georgia Institute of Technology, Atlanta, Georgia
3 Incorporated Research Institutions for Seismology, Washington, DC
4 United States Geological Survey, Menlo Park, California
5 Bryn Mawr College, Bryn Mawr, Pennsylvania