Title Implications of the short earthquake record for hazard assessment and hazard map testing
Author(s) Miguel Merino, Seth Stein, John Adams, Bruce Spencer, Edward Brooks
Conference EGU General Assembly 2014
Media type Article
Language English
Digital document PDF
Published In: GRA - Volume 16 (2014)
Record number 250091386
Publication (no.) Full-text document available: EGU/EGU2014-5677.pdf
 
Abstract
A crucial limitation for earthquake hazard assessment is the short length of catalogs compared to the time between large earthquakes. As a result, many key parameters required for hazard assessments are poorly known, unknown, or unknowable. Despite many studies, we do not even know whether to assume that the probability of a major earthquake on a fault is constant with time, or follows a seismic cycle with lower probability shortly after the last major earthquake and higher probability later. We similarly have little ability to infer the assumed magnitude of the largest future earthquakes. Absent any theoretical basis, estimates are made using various methods, and often prove too low, because large earthquakes are infrequent compared to the length of the available earthquake history. Generating synthetic earthquake histories and sampling them over periods comparable to the available record shows that Mmax cannot be reliably estimated from earthquake catalogs, because the largest earthquake observed likely reflects the length of the history used, even if larger earthquakes occur. Similar challenges arise for assessing the performance of earthquake hazard maps. Given the short shaking record, maps predicting little shaking seem the most successful, even if on longer time scales they would not be. One method to avoid this effect is to aggregate areas with similar predicted hazard and compare predicted and observed levels of shaking over a given time interval. However, when the areas are not widely separated in space, the observed numbers for different areas may not be independent. A more powerful method is to order the areas and compute, for each area, the maximum predicted shaking for that area and all below or above it in the ordering.
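The Mmax point can be illustrated with a small Monte Carlo sketch. All parameters below (magnitude range, b-value, catalog sizes) are invented for illustration and are not taken from the abstract: magnitudes are drawn from a doubly truncated Gutenberg-Richter distribution with a true Mmax of 8.5, and the largest event seen in short versus long synthetic catalogs is compared.

```python
import random
from math import exp, log

def sample_gr_magnitude(rng, m_min=5.0, m_max=8.5, b=1.0):
    """Draw a magnitude from a doubly truncated Gutenberg-Richter law."""
    beta = b * log(10.0)
    c = 1.0 - exp(-beta * (m_max - m_min))  # truncation normalisation
    u = rng.random()
    return m_min - log(1.0 - u * c) / beta  # inverse-CDF sampling

def largest_in_catalog(rng, n_events):
    """Largest magnitude observed in a synthetic catalog of n_events shocks."""
    return max(sample_gr_magnitude(rng) for _ in range(n_events))

rng = random.Random(42)
trials = 1000
for n in (50, 500, 5000):  # short vs long synthetic histories
    mean_max = sum(largest_in_catalog(rng, n) for _ in range(trials)) / trials
    print(f"{n:5d} events: mean observed maximum = {mean_max:.2f}")
```

Short catalogs systematically report a largest observed magnitude well below the true Mmax of the generating distribution, and the observed maximum creeps upward as the catalog lengthens, which is the behaviour described above.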
These computations yield two non-decreasing functions similar to cumulative distribution functions (cdf’s), one for observed and one for predicted, and methods for comparing distribution functions can be applied to assess the performance of the predictions. For example, if areas are ordered by predicted hazard and the distribution functions cross once, then areas with predicted hazards lower or higher than some level are overpredicted or underpredicted, and conversely for the other areas. The functions can represent frequencies of events or relative frequencies (cdf’s), which may compare favorably even when the frequencies themselves are underpredicted or overpredicted. It is important to pay attention to both underprediction and overprediction, because each type of error has its own consequences. We discuss how to compare the empirical assessments of the accuracy of hazard maps with that attainable by alternative naive models, including predicting no shaking and predicting the same hazard for all areas, subject to neither overprediction nor underprediction overall. The empirical assessments ask not how well a map predicted shaking in one area, but what occurred in the combined areas where a given level of shaking was predicted.
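The ordering construction can be sketched as follows. The areas, shaking values, and variable names are invented purely for illustration: areas are sorted by predicted hazard, a running maximum is taken over both predicted and observed shaking to produce two non-decreasing, cdf-like curves, and the number of crossings between the curves is counted.

```python
# Hypothetical areas; shaking values (say, peak ground acceleration in g)
# are invented purely for illustration.
predicted = [0.05, 0.10, 0.20, 0.35, 0.60]
observed  = [0.10, 0.12, 0.15, 0.20, 0.30]

def running_max(values):
    """Non-decreasing curve: the maximum over an area and all areas below it."""
    out, cur = [], float("-inf")
    for v in values:
        cur = max(cur, v)
        out.append(cur)
    return out

# Order areas by predicted hazard, then build both cdf-like curves.
order = sorted(range(len(predicted)), key=lambda i: predicted[i])
pred_curve = running_max([predicted[i] for i in order])
obs_curve = running_max([observed[i] for i in order])

# Count where the sign of (predicted - observed) flips. A single crossing
# means areas below some hazard level are under-predicted and areas above
# it over-predicted, or vice versa.
signs = [p > o for p, o in zip(pred_curve, obs_curve) if p != o]
crossings = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
print(pred_curve)
print(obs_curve)
print("crossings:", crossings)
```

With these made-up numbers the curves cross once: the low-hazard areas are under-predicted and the high-hazard areas over-predicted, which is the single-crossing case discussed in the abstract.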