SAN LUIS OBISPO MOTHERS FOR PEACE’S MOTION TO FILE NEW CONTENTIONS REGARDING ADEQUACY OF SEVERE ACCIDENT MITIGATION ALTERNATIVES ANALYSIS FOR DIABLO CANYON LICENSE RENEWAL APPLICATION
I. INTRODUCTION
Pursuant to 10 C.F.R. §§ 2.309(c), 2.309(f)(1), and 2.309(f)(2), San Luis Obispo Mothers for Peace (“SLOMFP”) seeks leave to file two new contentions challenging the adequacy of Pacific Gas & Electric Company’s (“PG&E’s”) Severe Accident Mitigation Alternatives (“SAMA”) Analysis for the proposed renewal of the Diablo Canyon nuclear power plant (“DCNPP”) operating license to satisfy the National Environmental Policy Act (“NEPA”) and U.S. Nuclear Regulatory Commission (“NRC”) implementing regulations.[1]
In Contention C (Inadequate Consideration of Seismic Risks in SAMA Analysis), SLOMFP contends that PG&E’s SAMA Analysis is inadequate to satisfy NEPA because it proposes to rely on the “results” of PG&E’s recently submitted and seriously deficient post-Fukushima seismic risk analysis for its evaluation of the cost-effectiveness of measures to mitigate earthquake impacts. Contention C is supported by the attached Declaration of Dr. David D. Jackson, Professor of Geophysics Emeritus at the University of California at Los Angeles. Contention D (Inadequate Discussion of Flooding Risk in SAMA Analysis) asserts that the SAMA Analysis wrongly fails to consider flooding risks to safety equipment posed by “local intense precipitation” (“LIP”) flooding events. These LIP flooding risks were identified in PG&E’s own post-Fukushima flood hazard analysis.[2]
As discussed below in Section II, SLOMFP’s contentions meet the NRC’s requirements for admissibility of contentions. As further discussed in Section III, SLOMFP also has good cause for filing the contentions after the initial deadline, which passed in 2010.
II. CONTENTIONS
Contention C. Inadequate Consideration of Seismic Risk in SAMA Analysis
1. Statement of Contention
PG&E’s SAMA Analysis (Appendix F of PG&E’s Amended ER) is inadequate to satisfy the National Environmental Policy Act (“NEPA”) or NRC implementing regulation 10 C.F.R. § 51.53(c)(3)(ii)(L) because PG&E’s evaluation of potential mitigation measures is not based on a sufficiently rigorous or up-to-date analysis of seismic risks. As a result, PG&E’s evaluation of the comparative costs and benefits of measures to prevent or mitigate the effects of a severe earthquake does not sufficiently credit the cost-effectiveness of mitigation measures.
While PG&E claims that the “results and insights” of its 2014 “interim” probabilistic risk analysis (“PRA”) (labeled “DC03”) are “reasonable for the purposes of a SAMA analysis” (Amended ER at F-34), by PG&E’s own admission, DC03 is only an “interim” PRA. Id. In addition, it is not sufficiently rigorous or up to date to support the SAMA Analysis.
Nor does PG&E’s promise to “update” DC03 with the “results” of its 2015 seismic hazards analysis[3] cure the inadequacy of DC03 to support PG&E’s SAMA Analysis, because PG&E’s 2015 seismic hazards analysis is itself insufficiently rigorous and relies on outdated or unjustified methods and assumptions. Given those inadequacies, merely citing the “results” of the seismic hazards analysis in a revised SAMA Analysis would not be sufficient to ensure that the SAMA Analysis adequately evaluates potential mitigation measures for severe seismic accidents. Instead, PG&E must cure the significant defects in the underlying data and analyses.
2. Statement of Basis for the Contention
a. NEPA requirements
The “core” requirement of NEPA is that for any federal action with a significant adverse effect on the human environment, federal agencies must prepare an environmental impact statement (“EIS”) which includes a “detailed statement” regarding:
“(i) the environmental impact of the proposed action, (ii) any adverse environmental effects which cannot be avoided should the proposal be implemented, (iii) alternatives to the proposed action, (iv) the relationship between local short-term uses of man’s environment and the maintenance and enhancement of long-term productivity, and (v) any irreversible and irretrievable commitments of resources which would be involved in the proposed action should it be implemented.”
San Luis Obispo Mothers for Peace v. NRC, 449 F.3d 1016, 1020 (9th Cir. 2006) (citing Dep’t of Transp. v. Pub. Citizen, 541 U.S. 752, 756 (2004); 42 U.S.C. § 4332(2)(C)). NRC regulations also require that an application for operating license renewal be supported by an environmental report prepared by the applicant (10 C.F.R. § 51.53(c)) and a supplemental EIS prepared by the NRC Staff (10 C.F.R. § 51.95(c)).
An EIS must provide decision-makers with a reasonable array of alternatives for avoiding or mitigating the environmental impacts of the proposed action. Idaho Sporting Cong. v. Thomas, 137 F.3d 1146, 1151 (9th Cir. 1998) (citing Robertson v. Methow Valley Citizens Council, 490 U.S. 332, 352 (1989)). With respect to renewal of reactor operating licenses, NRC regulations specifically require a discussion of “alternatives to mitigate severe accidents.” 10 C.F.R. §§ 51.53(c)(3)(ii)(L), 51.95(c)(2). “Probability figures” used in a SAMA analysis must be adequate to support the reasoned consideration of alternatives. Duke Energy Corp. (McGuire Nuclear Station, Units 1 and 2; Catawba Nuclear Station, Units 1 and 2), CLI-02-17, 56 NRC 1, 7 (2002). In addition, the SAMA Analysis must consider relevant information from outside sources. Id. at 8; Pacific Gas and Electric Co. (Diablo Canyon Nuclear Power Plant, Units 1 and 2), CLI-11-11, 74 NRC 427, 443 (2011). Finally, an Environmental Report or EIS must address “new and significant information” that was not previously addressed in an EIS for the facility. 10 C.F.R. §§ 51.53(c)(3)(iv), 51.92(a)(2).
b. Inadequacy of DC03 to support SAMA Analysis
PG&E asserts that “the interim results and insights from the DC03 model are reasonable for the purposes of a SAMA analysis” because:
“At the time of the 2013 Seismic Peer Review, the Turbine Building Shear Wall fragility had been updated based on the latest hazard spectral information, which was updated with the Shoreline Fault. The fragility analysis for other SSCs was based on the LTSP. The LTSP fragility curves are acceptable for use in DC03 because no scaling is necessary for use with the updated hazard spectral information. The LTSP fragility curves are the same shape (+/- 10%) in the period of interest (3-8.5Hz) and there are no components in the PRA model in the 1-3Hz range. In addition, the 2014 Central Coastal California Seismic Imaging Project Report (Reference 53) was a deterministic analysis that did not change the inputs to the seismic PRA.”
Amended ER at F-34. While PG&E claims to have relied on the “latest hazard spectral information” for its 2013 update to the Turbine Building Shear Wall fragility, that information does not include the 2015 SHS Report or the 2015 SSC Report; therefore the spectral information is not, in fact, the “latest.” As PG&E implicitly acknowledges, the “latest” information consists of the probabilistic seismic hazards assessment prepared by PG&E in response to the NRC’s post-Fukushima Request for Information. PG&E must consider this information and ensure it is adequate to support the SAMA Analysis. Duke Energy Corp., 56 NRC at 7 (requiring adequate probability figures to support a SAMA analysis); Pacific Gas and Electric Co., 74 NRC at 443 (SAMA Analysis must consider relevant information from outside sources); 10 C.F.R. §§ 51.53(c)(3)(iv), 51.92(a)(2) (Environmental Reports and EISs must address “new and significant information” that was not previously addressed).
c. Inadequacy of seismic hazards assessment “results” to support SAMA Analysis
The “results” that PG&E proposes to use to update its SAMA Analysis come from a “site-specific probabilistic seismic hazard assessment” that PG&E prepared in response to the NRC’s March 12, 2012 Request for Information, issued pursuant to 10 C.F.R. § 50.54(f). As described by PG&E:
“The assessment used an updated seismic source characterization (SSC) model and an updated GMC [ground motion characterization] model as basic inputs. The SSC and GMC studies were undertaken to fulfill the NRC requirement that PG&E conduct a probabilistic seismic hazard assessment using SSHAC [Senior Seismic Hazards Analysis Committee] Level 3 procedures for DCPP, as specified by the NRC (NRC 2012). Thus, the SSC [seismic source characterization] and GMC models were developed using processes that are appropriate for a SSHAC Level 3 study, as described in NUREG/CR-6372 (NRC 1997), and the detailed implementation guidance provided in NUREG-2117 (NRC 2012b). Both the SSC and GMC models represent new or “replacement” models according to the definitions and instructions in NUREG-2117. The SSC model describes the future earthquake potential (e.g., magnitudes, locations, and rates) for the region surrounding the DCPP site, and the GMC model describes the distribution of the ground motion as a function of magnitude, style of faulting, source-to-site geometry and reference site condition.”
SHS Report at 15.
PG&E also states that it will address the effect of the seismic hazards assessment “results” on the SAMA Analysis. Amended ER at F-34. Incorporation of the “results” of the seismic hazards analysis would not satisfy NEPA or NRC implementing regulations, however, because the data and analyses underlying those results are faulty in two fundamental respects: failure to account for nearby earthquakes, and failure to account for potentially large earthquakes.
(i) Failure to account for nearby earthquakes
First, PG&E’s seismic hazards analysis fails to account for reasonably foreseeable earthquakes located nearer to the DCPP than PG&E has assumed. For instance, the seismic stations used to locate earthquakes on the Shoreline Fault are all onshore, east of the fault, so that the fault’s east-west location is highly uncertain. Thus the fault could be closer to or farther from DCPP than assumed in DC03. The most effective solution would be to install offshore seismic stations west of the fault and record earthquakes long enough to infer the fault location accurately. Barring that, PG&E must consider both nearer and farther locations of the Shoreline Fault, with realistic weights that reflect the fault location uncertainty.
In this context, PG&E also wrongly assumes that major earthquakes are located exactly on simplified versions of mapped fault traces. While PG&E considers alternative versions of fault geometry models (FGMs) for some faults (see SSC Report, section 6.3.1), PG&E makes no provision for large off-fault earthquakes. Smaller earthquakes are accounted for in the Areal Source Zones, but their magnitudes are limited to 6.8 or 6.9 in the Local Areal Zone (see SSC Report, Figure 14-1), the zone most relevant to seismic hazard.[4] PG&E disregards the fact that earthquakes larger than those limits could occur within the Local Areal Zone, causing strong ground motion not yet accounted for.
In addition, PG&E does not follow the well-established method for California earthquake mapping of showing earthquake locations in a broad zone that covers a few kilometers on either side of major faults (Hauksson et al., 2012; Powers and Field, 2013 (UCERF3 Appendix O); Simpson et al., 2006).[5] In the Uniform California Earthquake Rupture Forecast (Field et al., 2014, 2015), major faults are considered to have an associated zone extending 12 km on either side of the mapped trace, with the implication that earthquakes associated with those faults could occur anywhere in those broad zones. With this understanding, earthquakes associated with the Hosgri fault, for example, could occur much closer to the DCPP than the fault geometry assumed under any of the options considered by PG&E would place them.
As a result of PG&E’s outdated and non-conservative assumptions about the location of earthquakes, the hazard curves in the SSC Report underestimate the shaking that may be caused by nearby earthquakes. The amount of PG&E’s underestimate is potentially significant, and therefore must be evaluated by further study before PG&E may reasonably rely on the SSC Report’s results in its SAMA Analysis.[6]
(ii) Failure to account for potentially large earthquakes
Second, even for the faults that PG&E does recognize and analyze, PG&E fails to account for recent data and models showing that earthquakes on given faults may be much larger than previously assumed. For instance, PG&E estimates the most likely maximum magnitude for earthquakes in the Local Areal Zone to be 6.8 (SSC Report, Table 13-9). PG&E does allow that the Shoreline fault might participate as a branch of a larger earthquake on the Hosgri, but the resulting ground shaking is assumed to be less than that of a larger earthquake rupturing beyond the ends of the Shoreline fault, as could occur. Experience shows that earthquake magnitudes may be much larger.
PG&E’s understatement of magnitude stems from its reliance on “scaling relations,” which are equations relating magnitude to rupture length or rupture area. The most commonly cited equations, which PG&E uses heavily, are from Wells and Coppersmith (1994). PG&E uses them to estimate the maximum magnitude of a fault from its mapped length. But scaling of earthquake magnitudes from fault geometry has been demonstrated to be unsupportable, because the mapped fault length is no limit to the ultimate rupture length. Many earthquakes have ruptures exceeding the length of the faults on which they started. Even PG&E implicitly acknowledges this: examples listed in the SSC Report at page 6-6 include the 2002 Denali, AK (magnitude 7.9), the 1992 Landers, CA (magnitude 7.3), and the 1999 Hector Mine, CA (magnitude 7.1) earthquakes. Other major earthquakes have occurred on previously unknown faults, even in areas with extensive prior geological study: the 1989 Loma Prieta, CA (magnitude 7.1; Spudich, 1996), the 1994 Northridge, CA (magnitude 6.7), and the 2010 Darfield, NZ (magnitude 7.1) earthquakes. Perhaps the most astounding example was the 2012 magnitude 8.6 strike-slip earthquake off the coast of northern Sumatra (Ishii et al., 2013). Nevertheless, PG&E disregards this information and unjustifiably relies on the scaling relationships to estimate maximum magnitude from fault length.
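To make the flaw in this approach concrete, scaling relations of this type can be written in a generic symbolic form. The following sketch is offered for illustration only; the constants shown are placeholders and are not taken from PG&E’s documents or from Wells and Coppersmith’s published coefficients:

\[
M_{\max} = c_0 + c_1 \log_{10} L
\]

where c0 and c1 are empirical regression constants and L is the fault (or segment) length adopted by the analyst. Because the computed maximum magnitude depends entirely on the assumed length L, any understatement of the possible rupture length, for example treating a mapped or assumed fault length as a hard limit on rupture, translates directly into an understatement of the maximum magnitude.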
Perhaps the most significant example of PG&E’s unjustifiable reliance on the scaling assumption is in the Local Areal Zone, where PG&E has concluded that the “virtual faults” there must be confined to a length of 50 km, implying a most probable maximum magnitude of 6.8 for strike-slip earthquakes and 6.9 for reverse-slip (thrust) earthquakes (SSC Report, Table 13-9, page 13-29). PG&E’s assumption of a 50 km fault “length” is purely subjective and not based on any observed fault length. More importantly, it is irrelevant, because earthquake rupture length there could exceed the assumed fault length just as it has in the cases listed above. Similarly, the length assumed by PG&E for the Shoreline Fault is based only on circumstantial evidence, and there is no evidence that earthquakes could not rupture well beyond its boundaries. PG&E does include in its source model the possibility that the Shoreline Fault might rupture in combination with the Hosgri fault. However, strong ground motion at DCPP due to a simpler large (magnitude above 6.8) earthquake rupturing beyond the ends of the Shoreline Fault may be significantly different. Including such a possible source in a revised SSC model is needed to resolve that question.
In some cases, PG&E has underestimated maximum magnitude by applying scaling relationships not just to fault length, but to segment length. Segments are shorter lengths of faults that are assumed to contain the entire rupture; that is, a rupture starting within a segment is assumed to stop at the next segment boundary. An earthquake presumed to rupture one full segment is referred to as a “characteristic” earthquake. The characteristic earthquake model assumes that most large earthquakes on a fault are characteristic earthquakes. But both the characteristic earthquake model and the segmentation model have been discredited, most notably by the massive Tohoku, Japan earthquake of 2011 (magnitude 9.0), which destroyed the Fukushima Dai-ichi nuclear power plant and led to the very seismic hazard analysis that PG&E has reported on in the SSC Report and SHS Report. Before the Tohoku earthquake, the Japanese government estimated, based on a segmentation model, that the upper magnitude limit there was on the order of 8.0. The actual magnitude of 9.0 was a surprise to those who held on to the segmentation model (Kagan and Jackson, 2013; Stein et al., 2011). The catastrophic 2004 Banda Aceh (Sumatra) earthquake (M 9.1; Banerjee et al., 2007) also violated many so-called segment boundaries and exceeded the scale of magnitudes implied by the characteristic earthquake assumption.
One reason that the segmentation model has been discredited is that it assumes that hazard is greatest near the segment boundaries, because a site there would be shaken strongly by earthquakes on either of the adjoining segments. A corollary of that assumption is that sites near the middle of a segment are less hazardous, because they are shaken less by earthquakes on adjoining segments. But there is no direct evidence that ruptures repeatedly stop at the ends of segments, nor is there evidence of more frequent strong shaking near the boundaries. Instead, PG&E’s assumptions are purely subjective. In this context, it is important to recognize that the authors of the UCERF3 report took care to eliminate or reduce the effect of the segmentation model and segment boundaries (e.g., Field et al., 2014). As stated in the first two sentences of the abstract:
“The 2014 Working Group on California Earthquake Probabilities (WGCEP14) present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), which provides authoritative estimates of the magnitude, location, and time-averaged frequency of potentially damaging earthquakes in California. The primary achievements have been to relax fault segmentation and include multi-fault ruptures, both limitations of UCERF2.”
Some boundaries were retained in UCERF3 to minimize the changes from previous models, but not because of empirical evidence. As the authors also stated in the abstract:
“UCERF3 is still an approximation of the system, however, and the range of models is limited (e.g., constrained to stay close to UCERF2) . . . Although UCERF3 fits the data better than UCERF2 overall, there may be areas that warrant further site-specific investigation.”[7]
In the case of DCPP, however, the segments and boundaries assumed by PG&E, especially those on the Hosgri Fault, are close enough to the site to artificially increase or decrease the calculated ground motion. PG&E should have abandoned the segmentation model in order to model earthquake hazards more accurately.
The characteristic model implies a higher earthquake rate at “characteristic” magnitudes than predicted by the Gutenberg-Richter (“GR”) model, its prime competitor. The GR model posits that the frequency of earthquakes decreases exponentially with increasing magnitude, such that earthquakes of magnitude 7 and above, for example, would be about 10 times less frequent than those of magnitude 6 and above. The GR model is consistent with the magnitudes of earthquakes in most regional catalogs throughout the world. The characteristic model, which is generally assumed to apply to individual faults, implies that earthquakes at the characteristic magnitude should be up to 40 times more frequent than predicted by GR (Schwartz and Coppersmith, 1984). When both models are fit to the same amount of slip on a known fault, it follows that the characteristic model predicts fewer earthquakes both above and below the characteristic magnitude range. Wesnousky (1994), Kagan (1996), and many others have debated the issue at length. Page and Felzer (2015, in press) show that the GR model is quite consistent with earthquakes on the San Andreas fault. The failure of the segmentation model as noted above, and the very subjective nature of the way segment boundaries must be assigned, make the characteristic model and its implied magnitude distribution very dubious.
To address this deficiency, PG&E should take several measures: give more weight to the GR magnitude distribution, using realistic (higher) maximum magnitudes; randomly locate ruptures, with some extending off the ends of faults; and take a stochastic approach as in Hiemer et al. (2013, 2014). When both models are constrained to match the same slip rates and a realistic maximum magnitude greater than the “characteristic” magnitude is assigned, the GR model will imply lower rates of characteristic-magnitude events but higher rates at smaller and larger magnitudes. Thus replacing the characteristic assumption with a reasonable GR model could increase or decrease the calculated hazard at different spectral frequencies, depending on details that can be determined only with proper modeling. Similarly, the effect of replacing segmentation with randomized locations of earthquakes on faults may increase or decrease hazard at different spectral frequencies, depending on the locations of the assumed segment geometry relative to DCPP. Again, accurate modeling is required to know the effect. In any case, the assumptions of segmentation and characteristic earthquakes should be abandoned because they conflict with the observed earthquakes listed above.
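As a purely arithmetic illustration of the GR behavior described above (not a calculation drawn from PG&E’s or Dr. Jackson’s analyses), the GR relation is conventionally written as a frequency-magnitude law whose slope, the “b-value,” is close to 1 for most earthquake catalogs; the factor-of-ten statement above follows directly:

\[
\log_{10} N(\ge M) = a - bM, \qquad \frac{N(\ge 6)}{N(\ge 7)} = 10^{\,b(7-6)} \approx 10 \quad \text{for } b \approx 1,
\]

where N(≥M) is the rate of earthquakes of magnitude M or greater and a and b are constants fit to an earthquake catalog. The characteristic model, by contrast, concentrates additional earthquake rate in a narrow band around the assumed characteristic magnitude, which is the departure from GR discussed above.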
3. Demonstration that the contention is within the scope of the proceeding
This contention is within the scope of the Diablo Canyon license renewal proceeding because it challenges the adequacy, under NEPA and NRC implementing regulations, of the SAMA Analysis required for the re-licensing of Diablo Canyon.
4. Demonstration that the contention is material to the findings the NRC must make to re-license Diablo Canyon
The contention is material to the findings that the NRC must make in order to license this reactor because it challenges the environmental analysis that the NRC will rely on for its NEPA findings in the DCNPP license renewal proceeding.
5. Concise statement of the facts or expert opinion supporting the contention, along with appropriate citations to supporting scientific or factual materials
The facts and expert opinion on which SLOMFP relies are summarized in Section 2 (basis statement) above. They are supported by the Declaration of Dr. David D. Jackson, attached. The following is a list of references relied on in the contention:
(Banerjee et al., 2007)
Paramesh Banerjee, Fred Pollitz, B. Nagarajan, and Roland Bürgmann, Coseismic Slip Distributions of the 26 December 2004 Sumatra–Andaman and 28 March 2005 Nias Earthquakes from GPS Static Offsets, Bulletin of the Seismological Society of America, January 2007, v. 97, p. S86-S102, doi:10.1785/0120050609
(Bird and Kagan, 2004)
Peter Bird and Yan Y. Kagan, “Plate-tectonic analysis of shallow seismicity: apparent boundary width, beta, corner magnitude, coupled lithosphere thickness, and coupling in seven tectonic settings,” Bull. Seismol. Soc. Amer., 94(6), 2380-2399, 2004 (plus electronic supplement), doi:10.1785/0120030107
(Field et al., 2014)
Edward H. Field, Ramon J. Arrowsmith, Glenn P. Biasi, Peter Bird, Timothy E. Dawson,
Karen R. Felzer, David D. Jackson, Kaj M. Johnson, Thomas H. Jordan, Christopher Madden, Andrew J.Michael, Kevin R. Milner, Morgan T. Page, Tom Parsons, Peter M. Powers, Bruce E. Shaw, Wayne R. Thatcher, Ray J. Weldon II, and Yuehua Zeng,
Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3)—The Time-Independent Model, Bulletin of the Seismological Society of America, Vol. 104, No. 3, pp. 1122–1180, June 2014, doi: 10.1785/0120130164
(Field et al., 2015)
Edward H. Field, Glenn P. Biasi, Peter Bird, Timothy E. Dawson, Karen R. Felzer,
David D. Jackson, Kaj M. Johnson, Thomas H. Jordan, Christopher Madden,
Andrew J. Michael, Kevin R. Milner, Morgan T. Page, Tom Parsons, Peter M. Powers,
Bruce E. Shaw, Wayne R. Thatcher, Ray J. Weldon II, and Yuehua Zeng, Long-Term Time-Dependent Probabilities for the Third Uniform California Earthquake Rupture Forecast (UCERF3), Bulletin of the Seismological Society of America, Vol. 105, No. 2A, pp. 511–543, April 2015, doi: 10.1785/0120140093
(Hardebeck, 2013)
Jeanne L. Hardebeck, Geometry and Earthquake Potential of the Shoreline Fault, Central California, Bulletin of the Seismological Society of America, Vol. 103, No. 1, pp. 447–462, February 2013, doi: 10.1785/0120120175
(Hauksson et al., 2012)
Egill Hauksson, Wenzheng Yang, and Peter M. Shearer, Waveform Relocated Earthquake Catalog for Southern California (1981 to June 2011), Bulletin of the Seismological Society of America, Vol. 102, No. 5, pp. 2239–2244, October 2012, doi: 10.1785/0120120010
(Hiemer et al., 2013)
Stefan Hiemer, David D. Jackson, Qi Wang, Yan Y. Kagan, Jochen Woessner, Jeremy D. Zechar, and Stefan Wiemer, A Stochastic Forecast of California Earthquakes Based on Fault Slip and Smoothed Seismicity, Bulletin of the Seismological Society of America, Vol. 103, No. 2A, pp. 799–810, April 2013, doi: 10.1785/0120120168
(Hiemer et al., 2014)
S. Hiemer, J. Woessner, R. Basili, L. Danciu, D. Giardini, and S. Wiemer, A smoothed stochastic earthquake rate model considering seismicity and fault moment release for Europe, Geophys. J. Int. (2014) 198, 1159–1172, doi: 10.1093/gji/ggu186
(Ishii et al., 2013)
Miaki Ishii, Eric Kiser, and Eric L. Geist, Mw 8.6 Sumatran earthquake of 11 April 2012: Rare seaward expression of oblique subduction. Geology, March 2013, v. 41, p.319-322 (First published on January 17, 2013).
(Kagan, 1996)
Kagan, Y. Y. (1996), Comment on “The Gutenberg-Richter or characteristic earthquake distribution, which is it?,” by S. G. Wesnousky, Bull. Seismol. Soc. Am., 86, 275–284.
(Kagan and Jackson, 2013)
Kagan, Y. Y., and D. D. Jackson (2013), Tohoku earthquake: a surprise?, Bull. Seismol. Soc. Am.,103, 1181–1194.
(Powers and Jordan, 2010)
Peter M. Powers and Thomas H. Jordan, Distribution of seismicity across strike-slip faults in California, Journal of Geophysical Research, Vol. 115, B05305, doi:10.1029/2008JB006234, 2010
(Powers and Field, 2013)
Powers, P., and N. Field, UCERF3 Appendix O, Gridded Seismicity Sources,
http://pubs.usgs.gov/of/2013/1165/pdf/ofr2013-1165_appendixO.pdf
(Quigley et al., 2012)
M. Quigley, R. Van Dissen, N. Litchfield, P. Villamor, B. Duffy, D. Barrell, K. Furlong, T. Stahl, E. Bilderback and D. Noble, Surface rupture during the 2010 Mw 7.1 Darfield (Canterbury) earthquake: Implications for fault rupture dynamics and seismic-hazard analysis, Geology, January 2012, v. 40, p. 55-58 (First published on November 23, 2011)
(Schwartz and Coppersmith, 1984)
Schwartz, D. P., and K. J. Coppersmith (1984), Fault behavior and characteristic earthquakes – examples from the Wasatch and San Andreas fault zones, J. Geophys. Res., 89, 5681–5698.
(Simpson et al., 2006)
R. W. Simpson, M. Barall, J. Langbein, J. R. Murray, and M. J. Rymer, San Andreas Fault Geometry in the Parkfield, California Region, Bulletin of the Seismological Society of America, September 2006, v. 96, p. S28-S37, doi:10.1785/0120050824
(Spudich, 1996)
Paul Spudich, editor, The Loma Prieta, California, Earthquake of October 17, 1989—Main Shock Characteristics, U.S. Geological Survey Professional Paper 1550-A, 1996
(Stein et al., 2011)
Seth Stein, Robert Geller, and Mian Liu, Bad Assumptions or Bad Luck: Why Earthquake Hazard Maps Need Objective Testing, Seismological Research Letters, September/October 2011, v. 82, p. 623-626, doi:10.1785/gssrl.82.5.623
(Thurber et al., 2006)
Clifford Thurber, Haijiang Zhang, Felix Waldhauser, Jeanne Hardebeck, Andrew Michael, and Donna Eberhart-Phillips, Three-Dimensional Compressional Wavespeed Model, Earthquake Relocations, and Focal Mechanisms for the Parkfield, California, Region, Bulletin of the Seismological Society of America, September 2006, v. 96, p. S38-S49, doi:10.1785/0120050825
(Wesnousky, 1994)
Wesnousky, S. G. (1994), The Gutenberg-Richter or characteristic earthquake distribution, which is it?, Bull. Seismol. Soc. Am., 84, 1940–1959.
6. A genuine dispute exists with the applicant on a material issue of law or fact
As set forth in the contention and its basis statement, this contention raises a genuine dispute with the applicant regarding the adequacy of PG&E’s SAMA Analysis to support the proposed renewal of PG&E’s operating license for Diablo Canyon.
[1] The SAMA Analysis is presented in Appendix F to PG&E’s Amended Environmental Report (submitted February 25, 2015) (“ER”).
[2] Contentions C and D follow two contentions filed by SLOMFP on April 6, 2015: Contention A (Inadequate Consideration of Energy Alternatives) and Contention B (Failure to Conduct Cost-Benefit Analysis of Energy Alternatives).
[3] PG&E’s 2015 seismic hazards analysis consists of two documents: Pacific Gas and Electric Co., Seismic Hazard and Screening Report, Diablo Canyon Power Plant Units 1 and 2 (“SHS Report”), submitted by letter from Barry S. Allen, PG&E to NRC, re: Response to NRC Request for Information Pursuant to 10 CFR 50.54(f) Regarding the Seismic Aspects of Recommendation 2.1 of the Near-Term Task Force Review of Insights from the Fukushima Dai-ichi Accident: Seismic Hazard and Screening Report (Mar. 11, 2015); and PG&E’s Seismic Source Characterization for the Diablo Canyon Power Plant, San Luis Obispo County, California; report on the results of a SSHAC level 3 study (Rev. A, March 2015) (“SSC Report”).
[4] These imposed limitations on potential magnitude are arbitrary and unjustified, as discussed below in subsection ii.
[5] While PG&E estimates that most important faults, such as the Hosgri and Los Osos, have widths of 2 km or more, its hazard calculations do not include those widths.
[6] PG&E could, for instance, apply a “stochastic fault” model (Hiemer et al., 2013, 2014) to deal with these problems. The stochastic fault model is recent, but it overcomes the assumption that major quakes occur exactly on faults, and it has been adopted for use in the SHARE project in Europe (Hiemer et al., 2014). Alternatively, hazard calculations could include earthquake scenarios near, but not exactly on, mapped faults. This would be most important for nearby faults such as the Hosgri and Shoreline.
[7] UCERF3 maintained some vestiges of segmentation so as not to introduce changes too quickly. The effect of segmentation in the UCERF3 report was not profound, however, because UCERF3 did not intend its use for site-specific application.