Which Site Proxy Should We Use in Ground-Motion Models?

Abstract:

The database is compiled from the KiK-net archive of Japan (kyoshin.bosai.go.jp) with the following criteria: shallow crustal events between 2000 and 2016, recorded within 300 km, with moment magnitudes between 3.5 and 7. Time-averaged shear-wave velocities of the velocity profiles down to depths of 10 m (VS10), 20 m (VS20), 30 m (VS30), and 50 m (VS50) are computed. Only stations with 200 < VS30 < 1500 m/s are used. These proxies are used as site parameters in surface ground-motion models (SGMMs), and the site variabilities of the SGMMs are compared. The VS50 model has the highest variability at short periods; the other models give comparable results. For longer periods, considering deeper depths leads to lower site variability. Borehole models (BGMM, using the borehole velocity) and surface-to-borehole ratio models (SBRM, using the impedance ratio) are also developed. Applying the BGMM and SBRM, estimated surface motions are obtained; the difference between observed and estimated motions is less than 3%, so the SBRM can be used as a site-amplification indicator. Modified impedance ratios (αm) are computed by replacing the top layers with the reference site condition; that is, if a depth of 20 m is considered, the layers above 20 m are replaced with 760 m/s. The BGMM and SBRM (with αm) are used to compute the reference surface motion, and the site amplification (observed motion/reference motion) is then computed. The differences between site amplifications are compared. The minimum misfit is obtained with the VS50 model at longer periods; at short periods the misfits are similar. The results reveal that considering deeper depths yields more reliable site-amplification estimates. Although the VS30 proxy is debatable in some respects, taking shallower depths misses information about site amplification, especially at longer periods. Moreover, in contrast to the Japanese stations, deeper VS profiles are not well documented in other regions; this limitation, together with over 40 years of use, is the main reason VS30 is chosen as the site proxy in GMMs.
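The time-averaged velocity used for these proxies is the travel-time average VSz = z / Σ(h_i / v_i) over the top z meters of a layered profile. A minimal sketch of that computation, using a hypothetical velocity profile rather than actual KiK-net data:

```python
# Sketch of the time-averaged shear-wave velocity VSz = z / sum(h_i / v_i):
# the depth z divided by the vertical S-wave travel time through the top
# z meters. The layer thicknesses and velocities below are illustrative.

def time_averaged_vs(layers, z):
    """layers: list of (thickness_m, vs_m_per_s), ordered from the surface down.
    Returns the time-averaged shear-wave velocity over the top z meters."""
    travel_time = 0.0
    depth = 0.0
    for thickness, vs in layers:
        h = min(thickness, z - depth)  # clip the last layer at depth z
        travel_time += h / vs
        depth += h
        if depth >= z:
            break
    return z / travel_time

# Hypothetical three-layer profile: soft soil over stiffer sediments over rock
profile = [(5.0, 180.0), (15.0, 350.0), (30.0, 600.0)]
vs30 = time_averaged_vs(profile, 30.0)  # roughly 344 m/s for this profile
```

The same function gives VS10, VS20, or VS50 by changing `z`, which is how the competing proxies in the abstract differ.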

Slidecast:

https://vimeo.com/277701678

Estimating Seismic Source Time Functions in Stochastic Earth Models

Abstract:

This work describes three current research efforts in estimating the mechanisms of explosive seismic sources using simulations and actual field data. In all of the work here, we use linear inverse methods to invert the seismic data for an equivalent linear source type. However, when seismic energy is created by an explosion, the deformation in the region of the source is highly nonlinear. In the first part of this presentation, we explore the effects of nonlinear source mechanisms on our linear-equivalent inverse methods. The second part explores the effects of unmapped geologic heterogeneity on our inversions. Specifically, we create realistic geologic models but include high-wavenumber stochastic heterogeneities that tomography cannot typically resolve. Using synthetic data, we explore the effects of stochastic heterogeneity on our ability to estimate seismic source time functions (STFs) using linearized inverse methods. Finally, we present preliminary results of a controlled field experiment in which we collected seismic data from a dense seismic network deployed to record a series of well-perforation shots at a geothermal injection well adjacent to Blue Mountain, NV, USA. We show the data and preliminary results of our linear-equivalent STF inversions, as well as the effects of near-source nonlinearities and stochastic Earth models on inversion certainty. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
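A generic illustration of the kind of linearized STF inversion described above: the data are modeled as the convolution of a Green's function with the source time function, d = G s, and s is recovered by damped least squares. The Green's function, data, and damping value below are synthetic illustrative choices, not the authors' actual setup:

```python
# Sketch of a linearized source-time-function (STF) inversion. Observed
# data d are modeled as the convolution of a Green's function g with the
# STF s (d = G s, with G the convolution matrix), and s is recovered by
# Tikhonov-damped least squares. All inputs here are synthetic.
import numpy as np

def invert_stf(green, data, n_stf, damping=1e-3):
    """Build the convolution matrix for `green` and solve
    (G^T G + damping * I) s = G^T d for the STF samples s."""
    n = len(data)
    G = np.zeros((n, n_stf))
    for j in range(n_stf):
        G[j:j + len(green), j] = green[: n - j]  # shifted copies of g
    A = G.T @ G + damping * np.eye(n_stf)
    return np.linalg.solve(A, G.T @ data)

# Synthetic test: a boxcar STF convolved with a decaying Green's function
green = np.exp(-0.3 * np.arange(20))
stf_true = np.zeros(30)
stf_true[5:10] = 1.0
data = np.convolve(green, stf_true)[:40]
stf_est = invert_stf(green, data, 30)  # closely recovers the boxcar
```

In the noise-free synthetic case the boxcar is recovered almost exactly; the abstract's point is that near-source nonlinearity and unmodeled stochastic heterogeneity make the real Green's functions wrong, degrading such linear-equivalent estimates.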

Slidecast:

https://vimeo.com/278011244

Robustness of κ0 Measurement: Insight from a Site-Specific Study in the Low-to-Moderate Seismicity Context of Southeastern France

Abstract:

The site component of κ (κ0) is used in engineering seismology to describe the high-frequency attenuation at a site. It is an important input parameter for various applications (stochastic modeling, ground-motion prediction equations, host-to-target adjustments, etc.). Its evaluation faces several issues, however, as it is difficult to properly isolate κ0 from the source and path terms of κ, and because its measurement is subject to operator subjectivity and to large uncertainties. This is particularly true in low-to-moderate seismicity areas, because the quantity and bandwidth of the usable data are generally limited; κ0 measurements might therefore have higher sensitivity to site amplification, frequency-dependent attenuation, and earthquake source properties. Here, the κDS (displacement spectrum) approach of Biasi and Smith (2001) is compared with the original κAS (acceleration spectrum) approach of Anderson and Hough (1984) for three sites in an industrial area in Provence (southeastern France). A semiautomatic procedure is developed to measure individual values of κr; it reduces inter-operator variability and provides the associated uncertainty. A good agreement is found between κ0_AS and κ0_DS for the two hard-rock sites, which yields κ0 ∼30 ms. The comparisons between these approaches are also used to infer the reliability of κ measurements by addressing their sensitivity to site amplification, frequency-dependent attenuation, and earthquake source properties. First, the impact of site amplification on κ0 estimates is shown to be very important and strongly frequency-dependent for stiff-soil sites, and non-negligible for hard-rock sites. Second, frequency-dependent attenuation cannot be ruled out for κ, as indicated by comparison with published quality factors (Q) for the Alps. Finally, the existence of a source component in κAS is questioned based on the comparison of κr_AS values evaluated for a cluster of events that shared the same path and site components.
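The κAS measurement referenced above rests on the Anderson and Hough (1984) model, in which the high-frequency acceleration spectrum decays as A(f) = A0 exp(−πκf), so κ is obtained from the slope of a straight-line fit to ln A(f) versus f. A minimal sketch with a synthetic spectrum and an illustrative frequency band (the band choice is exactly the operator-dependent step the abstract's semiautomatic procedure addresses):

```python
# Sketch of the Anderson & Hough (1984) kappa_AS measurement: above the
# source corner frequency, A(f) = A0 * exp(-pi * kappa * f), so a linear
# fit to ln A(f) versus f gives kappa = -slope / pi. The spectrum below
# is synthetic and noise-free; the [f1, f2] band is an illustrative pick.
import numpy as np

def measure_kappa(freqs, spectrum, f1, f2):
    """Fit ln(spectrum) vs. frequency over [f1, f2]; return kappa in seconds."""
    band = (freqs >= f1) & (freqs <= f2)
    slope, _ = np.polyfit(freqs[band], np.log(spectrum[band]), 1)
    return -slope / np.pi

# Synthetic acceleration spectrum with kappa = 0.030 s (i.e., 30 ms)
f = np.linspace(5.0, 30.0, 200)
a = 1e-3 * np.exp(-np.pi * 0.030 * f)
kappa = measure_kappa(f, a, 10.0, 25.0)  # recovers ~0.030 s
```

On real records, site amplification and noise distort the spectrum within the fitting band, which is why the picked band and the resulting κr carry the operator-dependent uncertainty the abstract quantifies.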

Poster:

WED.Monroe.1700.Perron

Operational Experience with Next-Generation Automatic Association Software NET-VISA

Abstract:

The NET-VISA software produces an automatic combined seismic, hydroacoustic, and infrasound bulletin resulting from the key step of assembling detections from multiple stations within the processing chain of the International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). The IDC waveform analysts are systematically evaluating the results of using it as a complement to the current operational software, Global Association (GA), which is nearing its 19th year of continuous operation at the IDC. Events that would otherwise have been missed by the standard processing are presented to the analysts from a processing pipeline running in parallel with the GA software. After just seven days of evaluation, the events added to the Reviewed Event Bulletin (REB) that originate with NET-VISA represent on average 12.3% of the total events in the REB. Of this total, the proportion of events with a valid body-wave (mb) or local magnitude (ML) greater than or equal to 4 is 3.5%, indicating that most added events fall below this threshold. This paper will present a more complete analysis based on multiple weeks of operational use.

Slidecast:

https://vimeo.com/277699087

Recent Findings and Recommendations for an Updated Hazard Characterization of the Eglington Fault in Las Vegas Valley, Nevada

Abstract:

The Las Vegas Valley fault system (LVVFS) is a complex set of north- to northeast-trending, intra-basin Quaternary fault scarps up to 30 m high that displace alluvial fan, fine-grained basin fill, and paleo-spring deposits in the densely populated Las Vegas metropolitan area. Characterizing the seismic hazard of the LVVFS is currently the focus of a multi-year collaborative study involving researchers from the Nevada Bureau of Mines and Geology, University of Nevada, Las Vegas, and the U.S. Geological Survey. The Eglington fault is the only LVVFS fault currently included on the National Seismic Hazard Map (NSHM), and is a priority focus in the early stages of the investigation. Substantial uncertainty remains regarding the seismogenic potential of the LVVFS. Two endmember hypotheses have been proposed regarding the mechanisms responsible for producing the scarps associated with the LVVFS, including the Eglington fault: 1) tectonic (e.g., coseismic surface rupture) and 2) non-tectonic (e.g., prehistoric differential sediment compaction). In this presentation, we will summarize existing geologic, geodetic, geophysical, and geochronologic data that provide insight into the mechanism(s) responsible for scarp formation within the LVVFS, and present unresolved problems with both endmember tectonic and non-tectonic scenarios. We will also discuss in-progress efforts to characterize the seismogenic potential of the Eglington fault, including: planned paleoseismic trenching, geologic mapping using lidar and predevelopment topography derived from historical aerial photographs, Optically Stimulated Luminescence (OSL) dating of the Las Vegas basin stratigraphy, and evaluation of the potential for differential sediment compaction across the fault scarps. In addition, we will present the recommendations from the 2018 Working Group on Nevada Seismic Hazards, including the details of a logic tree framework to address uncertainty in the LVVFS hazard assessment.

Slidecast:

https://vimeo.com/277703108

Dynamic Models of Earthquake Rupture along Branch Faults of the Eastern San Gorgonio Pass Region in California Using Complex Fault Structure

Abstract:

Compilations of geologic data have illustrated that the right-lateral Coachella segment of the southern San Andreas fault is past its average recurrence time. At its western end, this fault segment splits into two branches: the Mission Creek strand and the Banning fault strand of the San Andreas fault. Depending on how rupture propagates through this region, there is the possibility of a through-going rupture that could channel damaging seismic energy into the Los Angeles Basin. The fault structures and rupture scenarios on these two strands are potentially very different, so it is important to determine which strand provides the more likely rupture path, and the circumstances that control that path. In this study, we focus on the effect of different assumptions about fault geometry and initial stress pattern on the rupture process, to test those scenarios and thus investigate the most likely path of a rupture that starts on the Coachella segment. We consider three types of fault geometry based on the Southern California Earthquake Center (SCEC) Community Fault Model and the Third Uniform California Earthquake Rupture Forecast (UCERF3), and we create a 3D finite-element mesh for each. These three meshes are then incorporated into the finite-element code FaultMod to compute a physical model of the rupture dynamics. We use a slip-weakening friction law and consider different assumptions about the background stress, such as constant tractions and regional stress regimes with different orientations. Both the constant and regional stress distributions show that the rupture is more likely to branch from the Coachella segment to the Mission Creek strand than to the Banning fault strand, even though the closest connectivity is between the Coachella segment and the Banning fault. For the regional stress distribution, we encounter super-shear rupture for one of the SCEC fault geometries and sub-shear rupture for the other two. The fault connectivity at this branch system appears to have a significant impact on whether a through-going rupture is likely to occur.
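The linear slip-weakening friction law mentioned above prescribes that fault strength drops linearly from a static level τs to a dynamic level τd as slip accumulates up to a critical distance Dc. A minimal sketch with illustrative parameter values (not those used in the FaultMod simulations):

```python
# Sketch of the linear slip-weakening friction law used in dynamic
# rupture models: strength falls linearly from tau_s to tau_d as slip D
# grows from 0 to the critical slip distance Dc, and stays at tau_d
# thereafter. The stress and Dc values below are illustrative only.

def slip_weakening_strength(slip, tau_s, tau_d, d_c):
    """Shear strength (Pa) as a function of slip D (m)."""
    if slip >= d_c:
        return tau_d  # fully weakened: sliding at dynamic strength
    return tau_s - (tau_s - tau_d) * slip / d_c  # linear weakening

# Example: strength drops from 80 MPa to 60 MPa over Dc = 0.4 m of slip
for d in (0.0, 0.2, 0.4, 1.0):
    print(d, slip_weakening_strength(d, 80e6, 60e6, 0.4))
```

The difference between the initial shear traction and these strength levels, set either as constant tractions or resolved from a regional stress field, is what steers the rupture toward one branch or the other in the scenarios described above.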

Slidecast:

https://vimeo.com/277702346