1 Definition and Importance

Structural fragility assessment is a fundamental component of modern performance-based earthquake design and assessment. Major advances in the development and implementation of fragility functions have occurred over the past three decades.

Seismic reliability should be investigated probabilistically via Fragility Functions (FFs) that express the conditional probability of reaching or surpassing a specific damage state given an Intensity Measure (IM) of earthquake shaking. Although damage probability matrices can be used to express structural fragility, a FF is conventionally represented graphically so that an engineer, a stakeholder or a policy maker may readily visualise the vulnerability of different structural systems. A FF also depicts the degree of uncertainty associated with the damage limit state, which is reflected in the shape of the function compared with a vertical line passing through the IM.

FFs constitute an essential step in consequence-based engineering, whereby intervention measures are based on the consequences of reaching or exceeding a certain performance limit state. For example, FFs can be utilised prior to an earthquake to devise mitigation and emergency response plans, and in the aftermath of an earthquake to prioritise inspections and determine medium- and long-term response and recovery (Elnashai and Di Sarno 2015). Vulnerability functions that correlate the IM with economic losses can be further developed from structural fragility functions and utilised, for example, within insurance schemes at regional or global levels (Pitilakis et al. 2014). Seismic design guidelines could incorporate economic loss models within a life-cycle cost assessment framework, which can be used to decide whether the additional cost of structural strengthening is a more suitable choice than the losses induced by a seismic event (Calvi et al. 2006). Additionally, the evaluation of risk ensures the uninterrupted operation of a community and is assuming an increasingly important role given the growing complexity and inter-dependence of urban support systems. The main reasons for deriving seismic FFs are summarised in Fig. 9.1.

Fig. 9.1
figure 1

A diagram with the main reasons for deriving fragility functions for structural systems

After a brief description of the types of FFs used to capture the earthquake response of structural systems, a framework for deriving analytical FFs, which constitute the most widely used probabilistic method for characterising the probability of failure, is described. Dynamic analysis methods and Engineering Demand Parameters (EDPs) used to describe the response of buildings and bridges are examined by reviewing previous studies. Further insights are gained by reviewing two Case Studies (CSs).

2 Types of Fragility Functions

Different types of FFs exist depending on the way that data are collected (Schultz et al. 2010). First, empirical FFs are formed using observational data that are systematically monitored, controlled and stored. Judgemental data refer to expert opinion and are used as a last resort when observational data are not available; the data may include different modelling parameters, but the quality strongly depends on the consulting engineer's experience and the bias cannot be as easily controlled. Thus, the empirical method is considered more realistic, since it can account for several structural response factors observed during post-earthquake surveys. Empirical FFs nonetheless tend to be scarce due to the limited amount of data, which are concentrated primarily in the low seismic intensity range.

Analytical FFs are constructed through mathematical models and can encompass different structural configurations, built environments, and geotechnical and seismotectonic characteristics of a seismogenic area. Although analytical FFs can minimise bias arising from material and seismic uncertainty, include all possible failure modes, and yield a robust reliability assessment, due consideration should be given during modelling in view of software modelling limitations. Since the modelling process can be demanding and onerous, the validity of analytical FFs can be verified against other pertinent FFs. However, this is not always possible given dissimilarities in structural layout, soil properties and seismic input.

For the aforementioned reasons, a hybrid fragility investigation can be conducted to compensate for modelling difficulties, combine different data sources, and ensure the least possible modelling and seismic uncertainty (Elnashai et al. 2004). Due to the deficiency of observational data for different structural configurations, the empirical or hybrid method is not commonly adopted. However, there are cases where the reliability of observational data is strengthened through analytical studies. These cases primarily pertain to clusters of structures in seismic-prone areas (e.g. a collection of buildings and bridges for which excessive computational cost would otherwise be required). A summary of the advantages and shortcomings of each fragility type is presented in Table 9.1.

Table 9.1 Primary advantages and shortcomings of each fragility type

3 Framework for Analytical Fragility Derivation

The analytical approach, which is based on damage distributions derived from the analysis of structural models under incremental seismic intensity, is the most common method of risk assessment. A number of critical steps and assumptions should be carefully followed to analyze the seismic response of a structure, derive the damage distribution, and illustrate the fragility curve. A general framework that clearly encompasses all of the main steps required for evaluating analytical fragility functions is presented in Fig. 9.2.

Fig. 9.2
figure 2

Steps required for deriving analytical fragility functions

The type of structure under investigation affects the choice of analysis software, which should take several modelling parameters into account (e.g. material nonlinearity, linear and/or nonlinear geometry, as well as concentrated or distributed plasticity). Within a fragility analysis framework, it might be time-efficient to adopt simplified yet robust modelling, since the seismic analysis sometimes requires excessive computational capacity depending on the scale and complexity of the structural model. In addition, record-to-record randomness causes higher dispersion of response than epistemic/modelling uncertainty, particularly at lower damage states (Kwon and Elnashai 2006; Dolsek 2009; Vamvatsikos and Fragiadakis 2010). Therefore, the definition of limit states and the selection of seismic records should be made with due consideration towards estimating robust FFs. Before determining limit states, all possible failure modes should be identified (e.g. local buckling or failure under shear). Subsequently, the performance criteria may regard either the response at the local level (e.g. shear, moment or combined actions) or at the global level (e.g. chord rotation, Interstorey Drift Ratio (IDR) or peak floor acceleration) (ASCE 41-13 2013). Furthermore, a sufficient number of records should be considered to account for the geotechnical and seismotectonic characteristics of the site. The selection of an analysis method deliberately follows the collection of representative seismic records, because the appropriate method can only be chosen once the number of available records is known (Shome and Cornell 1999; Jalayer 2003; Bakalis and Vamvatsikos 2018; Di Sarno and Karagiannakis 2020). The selection also depends on the scale of the structure, the computational capacity and the performance target. This step can be considered the final one prior to running dynamic analyses.

The methods used to process the random variables of seismic response are categorised as analytical and numerical (Schultz et al. 2010; Elnashai and Di Sarno 2015). Analytical solutions commonly assume a normal or non-normal distribution of the variables together with a linear limit state equation. Numerical solutions are used when the limit state function cannot be expressed in closed form, to increase reliability, and in some cases to decrease computational time. Once the limit state function is known, the probability of failure is plotted as a function of the seismic IM for each LS.

4 Analytical Fragility Derivation

The modern performance-based engineering framework requires the evaluation of structural reliability in a robust manner. The ease and efficiency with which data are generated through dynamic analysis of structural models make analytical fragility functions an increasingly attractive method. Although these models can identify bias stemming from modelling and seismic record variability through sensitivity analysis, they might involve substantial computational effort. Limitations in the modelling capabilities of analysis software may also influence the reliability of this method. The key point of analytical methodologies is the verification of results, either against experimental data, which may be limited due to high cost, or against other analytical studies that use an identical, similar or simplified structural configuration. A challenging task pertains to the evaluation of uncertainties, both epistemic and aleatory, which can be important contributors to the overall dispersion of performance and raise issues regarding the validity of current assumptions for all limit states (e.g. combining both uncertainties through the square-root-sum-of-squares rule in performance evaluation). Both computationally demanding and simplified analysis methods are used to examine the dispersion of performance using different response parameters. In addition, the number of records is essential for sufficiently capturing the aleatory uncertainty, as highlighted in the following subsections.

4.1 Capacity and Demand Uncertainties

Generally speaking, aleatory uncertainty reflects the inherent variability of an outcome (e.g. the seismic response, which is explicitly recognised by a stochastic model and is inherently random), while epistemic uncertainty reflects lack of knowledge about the parameters of the structural model itself (e.g. floor mass, soil nonlinear behaviour, concrete or reinforcing steel strength). As previously mentioned, the former type of uncertainty is more considerable than the latter. To account for epistemic uncertainty, the modelling parameters of a structure should be treated as random variables following a certain distribution. In addition to the aleatory uncertainty that can be evaluated directly through Incremental Dynamic Analysis (IDA), Dolsek (2009) combined the IDA method with the Latin Hypercube Sampling (LHS) technique in order to define a set of structural models with varying structural parameters. The curves from IDA and modified IDA (modified in the sense that different structural properties were assumed for a RC frame) were compared, and it was deduced that the modelling parameters do not affect the response in the range far from the collapse LS. However, the median collapse capacity was reduced when epistemic uncertainties were considered in the model. The same outcome was found by Vamvatsikos and Fragiadakis (2010). It should be emphasised, however, that the random variables were properly sampled only when the number of structural models exceeded the number of random variables, and that the greatest influence on the collapse mechanism was observed for the random variables with the highest coefficient of variation, namely the initial stiffness and ultimate rotation of the plastic hinges of the columns. In addition, that research focused only on the epistemic variability without comparing the two types of uncertainty.
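As an illustration of how such a set of structural models could be generated, the following minimal Python sketch draws Latin Hypercube samples of a few modelling parameters. The parameter names, distributions and values are hypothetical placeholders chosen for illustration only, not the variables used by Dolsek (2009).

```python
import numpy as np
from scipy.stats import qmc, lognorm

# Hypothetical random variables (mean, coefficient of variation) -- illustrative only
params = {
    "concrete_strength_MPa": (25.0, 0.15),
    "steel_yield_MPa":       (440.0, 0.05),
    "ultimate_rotation_rad": (0.04, 0.40),   # high CoV -> strong influence on collapse
}

n_models = 30  # should exceed the number of random variables (see text)
sampler = qmc.LatinHypercube(d=len(params), seed=1)
u = sampler.random(n=n_models)               # stratified uniform samples in [0, 1)

models = np.empty_like(u)
for j, (mean, cov) in enumerate(params.values()):
    # lognormal parameters from the mean and CoV of each variable
    sigma_ln = np.sqrt(np.log(1.0 + cov**2))
    mu_ln = np.log(mean) - 0.5 * sigma_ln**2
    models[:, j] = lognorm.ppf(u[:, j], s=sigma_ln, scale=np.exp(mu_ln))

# Each row of `models` defines one structural model to be analysed (e.g. with IDA)
print(models[:3])
```

Each sampled row would then be analysed with the chosen dynamic method, so that the epistemic dispersion can be separated from the record-to-record dispersion.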

Furthermore, Porter et al. (2002) conducted a sensitivity analysis on the effect of both modelling and seismic uncertainty on the overall economic performance of a high-rise RC moment frame in California. The impact was measured in terms of a damage factor, which was shown to be influenced mostly by the uncertainty in structural capacity and in shaking intensity, measured in terms of spectral acceleration at the first-mode period, as also shown by Kwon and Elnashai (2006). In contrast with Dolsek (2009), uncertainty in the force-deformation relationship had less impact on the seismic response, because the refined plasticity model resulted in smaller coefficients of variation. Other modelling parameters (e.g. mass and damping) had only a slight impact on the performance.

The selection of a sufficient number of records to capture the record-to-record variability is a key component of a seismic reliability analysis, and every study on seismic fragility should report the confidence interval for each IM level. For instance, Shome and Cornell (1999) showed that the minimum number of records (typically 3-7) proposed for professional practice in American and European codes can introduce up to 30% standard error for the one-sigma confidence band of the normal distribution. It is generally acknowledged that 10-20 records are adequate for the analysis of low- and mid-rise buildings.
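A short, hedged illustration of how the standard error of the estimated median demand shrinks with the number of records is sketched below, assuming a lognormal demand with an illustrative dispersion β; the numbers are placeholders, not those of Shome and Cornell (1999).

```python
import numpy as np

beta = 0.4                                     # assumed record-to-record dispersion of ln(EDP)
for n in (3, 7, 10, 20, 40):
    se_ln_median = beta / np.sqrt(n)           # standard error of the estimated log-median
    rel_error = np.expm1(se_ln_median)         # approximate one-sigma relative error on the median
    print(f"n = {n:2d}: ~{100 * rel_error:.0f}% one-sigma error on the median demand")
```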

4.2 Dynamic Analysis Methods

The fragility analysis methods are categorised into narrow- and wide-range methods depending on the range of IM and displacement values for which they provide demand estimates. The former can predict the seismic demand, e.g. at an IM close to the tolerable probability of failure of a structure, and thus may not accurately capture the variability of the records at other performance levels (Jalayer 2003). Single-stripe and cloud analysis are two such methods. In the first, a number of records is scaled to the same intensity, which usually pertains to the exceedance of a predefined limit state. To improve the accuracy of the seismic demand prediction, another stripe of records can be formed close to the previous one (the initial IM is usually increased by ¼ or ½ of the seismic demand dispersion, generally termed β). The cloud analysis, in turn, is an easily implementable, time-efficient and accurate method that provides a cloud rather than a stripe of response values; the analysis is conducted either with unscaled (as-recorded) or scaled records at different IMs. The type, number and intensity of the records are decisive for the robustness of the cloud method. The accuracy of the method lies in estimating the dispersion of seismic demand at each performance level, so as to avoid assuming the same slope of the regression line at all levels. Since it has been shown that selecting unscaled records may underestimate the seismic demand, record scaling is preferable. According to Jalayer et al. (2017), the records should be scaled so that they cover a wide range of spectral acceleration values in the region of interest, with more than 30% of the records exceeding the target limit state and no more than 10% of the records pertaining to the same seismic event. While this method requires less computational cost, the number of records should be adequate to avoid wide confidence bands. Using fewer recorded motions is especially helpful in regions where few recorded events exist.

The most common analytical approach is IDA (also called dynamic pushover), in which a suite of records is incrementally amplified, resulting in response curves (IDA curves) that relate the intensity level to the EDP (Vamvatsikos and Cornell 2002). The IDA curves provide a clear picture of the seismic response at all performance levels, from yielding up to structural instability. IDA is the most popular method in fragility assessment, since it is simple to implement and can give a considerably accurate prediction of the structural response. However, the excessive computational effort required to perform hundreds of analyses is a deterrent to its use compared with more time-efficient alternatives. In addition, scaling low-intensity records to high intensity levels is debatable. To this effect, Baker (2015) suggested the truncated IDA method, in which records are scaled only up to a maximum IM, independently of whether or not they have caused collapse. With this method, the fragility is estimated with reduced computational cost while the records are scaled only up to practical levels.

In contrast with IDA, the Multiple-Stripes Analysis (MSA) method is performed at specific IMs, each of which has a unique set of ground motions (Jalayer and Cornell 2009). Both MSA and IDA can be characterised as wide-range assessment methods, since they can be conducted over a large range of IMs. A competitive edge of MSA over IDA is its accuracy due to the compatibility of the records with the conditional spectrum at different IMs, since the target properties of the records change at each IM. However, MSA is also computationally demanding, given that each stripe should include a considerable number of records, which may not always be possible to find.

The selection of one method over another depends on the case at hand. Comparing the accuracy and time-efficiency of the methods, Mackie and Stojadinovic (2005) obtained almost the same EDP-IM relationship from the IDA and cloud methods using the same number of records. The IDA method, for example, underestimated the median drift ratio by 12%, which may have been due to convergence issues caused by significant amplification (e.g. scale factors from 10 to 25). Additionally, a single stripe of records scaled to the IM pertaining to the same seismic hazard further underestimated the median. As a result, the cloud method proved to be the best choice, yielding the most conservative response. Evidently, the single stripe can be the best method only when the estimate at a specific IM is required, since it is independent of any error introduced by an assumed mathematical form.

A comprehensive study whose main goals were to compare the results from IDA, cloud and MSA, and to propose a new method that attains the same accuracy with considerably fewer seismic analyses, was presented by Miano et al. (2018). The concept behind this method, called "cloud-to-IDA", lies in accurately obtaining the spectral acceleration value that corresponds to the exceedance of a specific limit state. The regression line is formed from unscaled records, and subsequently the records are scaled so that the acceleration is close to that found from the regression line to exceed the limit state of interest. This scaling process is facilitated by forming a box area using the standard deviation of the IM and EDP. The largest set of records adopted for this method included 50 initial unscaled records, 19 of which were scaled twice (88 analyses in total) in order to find the acceleration that causes exceedance. The fragility was estimated using 8 times fewer records than the MSA, which was considered the "true estimate", since more stripes were located in the area where the exceedance of the limit state had been identified in advance by the classic cloud method. The limitation of the proposed method is that it applies to only one LS. However, extending the method to additional limit states is possible by increasing the number of records and the range of spectral acceleration values. In such cases, the total number of records compared to a wide-range method could still be smaller, although it remains to be examined whether the whole process would be less tedious and more time-effective. Finally, comparing the cloud and IDA methods between Mackie and Stojadinovic (2005) and Miano et al. (2018) highlights that the classic cloud method overestimates the capacity. In addition, the choice of the number of records is decisive to avoid bias from convergence problems, which caused twice as much error in the median estimate in the former case compared to the latter. The primary advantages and shortcomings of all methods are summarised in Table 9.2.

Table 9.2 Primary advantages and shortcomings of different analysis methods in the literature

4.3 Solution Methods

A deterministic scenario, in which the violation or not of a limit state is signified by a probability equal to unity or zero, is a simple example of the absence of uncertainty. In this case, the fragility becomes a step function with zero or unit probability (Fig. 9.3). The traditional design and assessment process incorporates uncertainty through safety factors that pertain to a specific seismic intensity; thus, no information is provided on the probability of exceedance at a different intensity level.

Fig. 9.3
figure 3

The step function (deterministic scenario) and S-shaped function with lower and higher probability (after Elnashai and Di-Sarno 2015)

In stark contrast, when uncertainty is considered at all levels of intensity, the well-known S-shaped function is formed (Fig. 9.3). The FF of an IM is actually a summation of all structural analysis results conditioned on an IM, and it is expressed as follows (Bakalis and Vamvatsikos 2018):

$$F\left(IM\right)=P[EDP>ED{P}_{C}|IM]$$
(9.1)

where the condition signifies the violation of a certain limit state (EDPC stands for the capacity of the engineering demand parameter). The simplest way of estimating the probability of exceedance of the condition in Eq. 9.1 is to consider one EDPC without uncertainty in its definition. Thus, by calculating the ratio of the number of records that violate the condition at each stripe (either from MSA or IDA) to the total number of records, it is possible to estimate the probability at each IM. In this case, it holds that:

$$F\left(IM\right)=\frac{\sum_{j=1}^{n}I\left[ED{P}^{j}>ED{P}_{C}|IM\right]}{{N}_{rec}}$$
(9.2)

This process can be considered a Monte Carlo Simulation (MCS) using n records for each stripe, where I(·) is an indicator function taking the value of 1 if the condition is true and zero otherwise. This is the so-called EDP-based fragility estimation, since the exceedance of a limit state relies on EDPC. The fragility estimation can also be derived based upon IMC, which is an inherently probabilistic quantity that can be found from hazard maps for a specific site and annual probability of seismic intensity exceedance. After defining the points of IM versus probability of limit state exceedance, it is possible to simply connect the points to form an empirical distribution estimate.
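To make the counting estimate of Eq. 9.2 concrete, the following minimal Python sketch evaluates the fraction of records exceeding a fixed EDP capacity at each stripe. The stripe results below are synthetic placeholders, not analysis results from this chapter.

```python
import numpy as np

# Synthetic stripe results: rows = IM levels, columns = records (EDP, e.g. maximum IDR)
im_levels = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])      # e.g. Sa(T1) in g
rng = np.random.default_rng(0)
edp = np.exp(np.log(0.004 * im_levels / 0.1)[:, None]
             + 0.4 * rng.standard_normal((im_levels.size, 20)))

edp_capacity = 0.025            # EDP_C, e.g. a 2.5% drift limit

# Eq. 9.2: fraction of records exceeding EDP_C at each IM (Monte Carlo counting)
prob_exceed = np.mean(edp > edp_capacity, axis=1)

for im, p in zip(im_levels, prob_exceed):
    print(f"IM = {im:.2f} g -> P(EDP > EDP_C) = {p:.2f}")
```

Connecting the resulting points gives the empirical distribution estimate mentioned above.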

Alternatively, an analytical distribution function can be used. The most common choice is the lognormal Cumulative Distribution Function (CDF), since it has been confirmed to be a reasonable assumption (Jalayer 2003; Ibarra and Krawinkler 2005, among others). It should be noted that the assumption of any particular distribution is an additional source of uncertainty, which can be addressed either with empirical data or by checking the mean annual frequency of exceedance. The lognormal CDF is expressed by:

$$P \left( EDP>ED{P}_{C}|IM\right)=\Phi \left(\frac{\mathrm{ln}EDP{\left(IM\right)}_{50\mathrm{\%}}-\mathrm{ln}ED{P}_{C,50\mathrm{\%}}}{{\beta }_{EDP|IM, tot}}\right)$$
(9.3)

where EDP(IM)50% is the median (50th percentile) demand at each IM from the IDA curves and EDPC,50% is the median EDP capacity. To account for the uncertainty in the capacity (βC) and in the damage level definition (βDL), the total dispersion βEDP|IM,tot is calculated as follows:

$${\beta }_{EDP|IM, tot}=\sqrt{{\beta }_{EDP|IM}^{2}+{\beta }_{C}^{2}+{\beta }_{DL}^{2}}$$
(9.4)

Typical values of the last two dispersions in Eq. 9.4 can be found in HAZUS (2010). The dispersion of the seismic demand, βEDP|IM, is estimated through a lognormal fit. In the case of the IDA method, for instance, the moments can be found by using the natural logarithm of the 16th, 50th and 84th fractiles of the EDP. An alternative way to estimate the moments is the maximum likelihood estimation method (Baker 2015), which applies to different types of distribution. If pj is the probability of observing a collapse, then according to the binomial distribution it holds that:

$$P\left({z}_{j} collapses\right)=\left(\begin{array}{c}{n}_{j}\\ {z}_{j}\end{array}\right){p}_{j}^{{z}_{j}}{\left(1-{p}_{j}\right)}^{{n}_{j}-{z}_{j}}$$
(9.5)

where P(·) is the probability of observing zj collapses out of nj records of a stripe. To account for m levels of IM, the product of all probabilities is calculated as follows:

$$total \; likelihood=\prod_{j=1}^{m}\left(\begin{array}{c}{n}_{j}\\ {z}_{j}\end{array}\right){p}_{j}^{{z}_{j}}{\left(1-{p}_{j}\right)}^{{n}_{j}-{z}_{j}}$$
(9.6)

The aim of this process is to maximise the total likelihood of Eq. 9.6. This can be done by substituting pj with a distribution function (e.g. lognormal) and estimating the moments of the function numerically. It is convenient to set the derivative of Eq. 9.6 (or of its logarithm) to zero when searching for the maximum likelihood.
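The following Python sketch illustrates the maximum-likelihood fit of a lognormal fragility function to collapse counts per stripe, in the spirit of Eqs. 9.5 and 9.6; in practice the (negative) log-likelihood is minimised numerically. The stripe counts below are illustrative placeholders, not results from the chapter.

```python
import numpy as np
from scipy import stats, optimize

# Illustrative stripe data: IM level, records per stripe, observed collapses
im  = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])   # e.g. Sa(T1) in g
n_j = np.array([20, 20, 20, 20, 20, 20])
z_j = np.array([0, 1, 4, 9, 15, 19])

def neg_log_likelihood(theta):
    """Negative logarithm of Eq. 9.6 with p_j given by a lognormal CDF."""
    median, beta = theta
    p_j = stats.norm.cdf(np.log(im / median) / beta)
    p_j = np.clip(p_j, 1e-12, 1 - 1e-12)          # numerical safeguard
    # binomial log-pmf; the combinatorial term does not affect the optimum
    return -np.sum(stats.binom.logpmf(z_j, n_j, p_j))

res = optimize.minimize(neg_log_likelihood, x0=[0.8, 0.4],
                        bounds=[(1e-3, None), (1e-3, None)])
median_hat, beta_hat = res.x
print(f"median IM = {median_hat:.2f} g, dispersion beta = {beta_hat:.2f}")
```

The fitted median and dispersion define the lognormal fragility curve directly, as in Eq. 9.3.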

In the case of a cloud analysis, a logarithmic linear regression is commonly performed to fit the EDP-IM relationship, which is characterised by the following properties:

$$\begin{gathered} EDP = a \cdot IM^{b} \cdot \varepsilon \hfill \\ \mu_{\varepsilon } = 1 \; \& \; \sigma_{\ln \varepsilon } = \beta_{EDP|IM} \hfill \\ \end{gathered}$$
(9.7)

where lna is an intercept and b is the slope in log-space. The lognormal random variable ε has median, με, equal to unity and its logarithmic standard deviation, σlnε, is equal to the standard deviation of natural logarithm of EDP for a given value of IM, βEDP|IM. As mentioned in Sect. 9.4.2, it is generally recommended that the cloud analysis should be conducted in the region of interest, around the EDPC, and not in a wide range of IMs. The closed-form solution of FF considering the linear regression of Eq. 9.7 becomes:

$$P\left(EDP > EDP_{C}|IM\right) = \Phi \left(\frac{\ln IM - \ln IM_{C,50\%}}{\beta_{IM|EDP,tot}}\right)$$
(9.8)

where IMC,50% is the median IM and βIM|EDP,tot is given by the value from Eq. 9.4 divided by the slope, b. Finally, Bakalis and Vamvatsikos (2018) found that the FFs derived from the closed-form solution of Eq. 9.8, the lognormal fit of Eq. 9.3 and the empirical curve from the MCS (Eq. 9.2) were coincident, which signifies the robustness of the solution methods, whether analytical or numerical. It was also deduced that an IM-based method based on IDA is more robust than an EDP-based method, because the dispersion βEDP|IM,tot becomes undefined when the first collapses appear.
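A minimal sketch of the cloud-analysis regression of Eq. 9.7 and the corresponding closed-form fragility of Eq. 9.8 is given below, using synthetic EDP-IM pairs; for simplicity only the demand dispersion is carried into Eq. 9.8, whereas the capacity and damage-level dispersions of Eq. 9.4 are omitted.

```python
import numpy as np
from scipy import stats

# Synthetic cloud of (IM, EDP) pairs, e.g. Sa(T1) [g] versus maximum IDR [-]
rng = np.random.default_rng(1)
im  = rng.uniform(0.1, 1.2, size=40)
edp = 0.03 * im**1.1 * np.exp(0.35 * rng.standard_normal(im.size))

# Eq. 9.7: log-log linear regression, EDP = a * IM^b * eps
b, ln_a = np.polyfit(np.log(im), np.log(edp), deg=1)
residuals = np.log(edp) - (ln_a + b * np.log(im))
beta_edp_im = residuals.std(ddof=2)             # dispersion of demand given IM

# Eq. 9.8: median IM causing EDP_C and the IM-based fragility
edp_capacity = 0.025
im_c50 = np.exp((np.log(edp_capacity) - ln_a) / b)
beta_im = beta_edp_im / b                       # demand dispersion mapped to IM terms

im_grid = np.linspace(0.05, 1.5, 50)
fragility = stats.norm.cdf(np.log(im_grid / im_c50) / beta_im)
print(f"IM_C,50% = {im_c50:.2f} g, beta_IM|EDP = {beta_im:.2f}")
```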

At this point, it is necessary to discuss two important assumptions that were previously mentioned. First, it should be kept in mind that only one EDP was considered sufficient to describe the global seismic response of a structure. Even though this consideration is sufficient in most cases, it might not be adequate for complex structures (e.g. pipe racks or tanks that may exhibit different failure modes). Second, the uncertainty in the EDP capacity should also be accounted for beyond the fixed values proposed in HAZUS (2010). For instance, this can be done by combining the IDA method with the LHS technique (Dolsek 2009).

Finally, there are additional analytical and numerical solution methods that can be used to derive FFs, including the First-Order Second-Moment (FOSM) method, the first-order reliability method and the response surface method. The interested reader can find more information in Iervolino et al. (2004), Schultz et al. (2010) and Elnashai and Di Sarno (2015).

5 Performance Parameters, Intensity Measures and Applications

The scope of this section is to provide a few examples of the aforementioned analyses and to present the main EDPs that have been adopted for various structural frames as a function of efficient IMs. It should be noted that the standard deviation of the regression residuals constitutes a metric of the efficiency of a tested IM in describing the seismic response of a structure: lower βEDP|IM values indicate reduced dispersion, i.e. a more efficient IM. For instance, Shome and Cornell (1999) addressed the efficiency of several IMs (e.g. PGA, first-mode spectral acceleration (Sa), or Sa averaged over a range of frequencies) as a function of the number of records and the scaling method, using the cloud method. According to the results, the standard deviation of the IDR can be halved by normalising the records to the median Sa, which in turn reduces the number of records required for a given confidence level by a factor of four. Additionally, 10-20 records proved sufficient to describe the seismic response of a mid-rise building. The study of Miano et al. (2018) is an extension of the previous one, since the authors managed to reduce the required number of records by scaling them close to the spectral acceleration corresponding to a target LS. This was also confirmed by Jalayer (2003), who investigated the nonlinear response of a RC frame using IDR and Sa: a single-stripe analysis was enough to estimate the seismic demand far from collapse, whereas in the near-collapse region two stripes were necessary to obtain the true estimates.

Apart from being efficient, an IM should also be sufficient, meaning that it is conditionally independent of seismological characteristics such as magnitude (M) and epicentral distance (R). The sufficiency of an IM can be quantified by deriving the p-values of the residuals, εEDP|IM, from the regression analysis of EDP with respect to IM; the p-values are computed with respect to M and R. Luco and Cornell (2001) examined the efficiency and sufficiency of Sa, spectral displacement (Sd), and a modified spectral acceleration that accounts for the second-mode contribution and inelasticity (SIa2). The IMs were examined with respect to IDR for moderate-to-long period structures. The main outcome of the study was that SIa2 was the most appropriate IM for describing the seismic response of 3-, 9- and 20-storey buildings; as such, the seismic record characteristics can be ignored when employing this IM.
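The sufficiency check described above can be sketched as follows: regress the cloud residuals εEDP|IM against magnitude and distance and inspect the p-values of the fitted slopes. The data below are synthetic and purely illustrative; large p-values would indicate no significant trend, i.e. a sufficient IM.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 40
im  = rng.uniform(0.1, 1.2, n)                 # e.g. Sa(T1) in g
mag = rng.uniform(5.5, 7.5, n)                 # magnitude of each record
rjb = rng.uniform(5.0, 60.0, n)                # source-to-site distance [km]
edp = 0.03 * im**1.1 * np.exp(0.35 * rng.standard_normal(n))

# Residuals of the EDP-IM regression (see Eq. 9.7)
b, ln_a = np.polyfit(np.log(im), np.log(edp), deg=1)
eps = np.log(edp) - (ln_a + b * np.log(im))

# Sufficiency: test the residuals against M and R
for name, x in (("M", mag), ("R", rjb)):
    result = stats.linregress(x, eps)
    print(f"residuals vs {name}: slope = {result.slope:+.3f}, p-value = {result.pvalue:.2f}")
```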

As mentioned in Sect. 9.4.1, epistemic uncertainty can influence the seismic demand close to the collapse LS. Dolsek (2009) analysed a 4-storey RC structure employing the LHS technique to create samples with different mechanical characteristics. To avoid problems in the definition of Sa due to the different period of the structure in each sample, PGA was adopted and correlated with the maximum IDR. The dispersions of PGA and drift demand due to randomness were estimated at 0.68 and 0.46, whereas they reached 0.79 and 0.56, respectively, when both types of uncertainty were considered. Instead of using PGA as the IM, Vamvatsikos and Fragiadakis (2010) considered Sa, since only the strength, and not the mass or stiffness, varied and thus the period remained the same. However, even under mass and stiffness uncertainties, it has been shown that Sa can still serve as a reliable reference IM (Vamvatsikos and Cornell 2005).

The fragility analysis of bridges has also been the subject of many studies during the last decade, and there has been a lack of agreement regarding the most suitable IM for bridges (e.g. spectral versus ground-motion measures). For example, Mackie and Stojadinovic (2003) addressed the probabilistic seismic demand of bridges with 23 different IMs. It was highlighted that structure-dependent IMs (e.g. Sa and Sd at the fundamental period of the bridge) reduced the seismic demand uncertainty, whereas ground measures (e.g. peak ground velocity or duration-dependent ones such as root-mean-square acceleration) were not useful. In addition, local, intermediate and global EDPs were examined, such as maximum material stresses (σ), column moment (M) and IDR. Conversely, the research of Padgett et al. (2008) on a portfolio of multi-span simply supported steel girder bridges demonstrated that PGA can be the best contender out of 10 typical IMs as a function of bearing deformation, ductility demand and abutment deformation. Apart from efficiency and sufficiency, the authors examined another criterion, namely proficiency, which combines efficiency and practicality; practicality is defined by the slope b of the IM-EDP relation. In this way, the different factors that affect decision-making can be combined for a proper selection of an IM. It was pointed out that PGA was the most proficient IM, followed by Sa-gm (the geometric mean of Sa at the two orthogonal principal periods), while the cumulative absolute velocity (CAV) and PGA proved to be the most sufficient measures. Finally, the study confirmed that differences among the IMs proposed in the literature for bridges cannot be attributed to the nature of the ground motions, synthetic or recorded, but rather to the specific characteristics of individual bridges and of portfolios of bridge classes. Other notable studies address the impact of near- and far-field conditions on bridges (De Risi et al. 2017), as well as the consideration of SSI effects (Kwon and Elnashai 2010), which need to be examined further as an epistemic source of uncertainty.

Two critical developments in fragility analysis pertain to the consideration of the residual capacity of structures subjected to aftershock events, and to the ageing of structures (e.g. due to corrosion). First, the damaging effects of aftershock events are overlooked by design codes, and fragility analysis usually addresses only mainshock earthquakes, although some structures can be prone to sequences of seismic events. Jeon et al. (2015) confirmed this by showing that the aftershock PGV required to cause severe damage to a RC frame was 30% lower than that required when the frame was undamaged. The study used the IDA approach for simulating the mainshock-induced damage and the cloud approach for computing the aftershock FFs. Although PGV was found to be the most proficient IM, CAV was the most practical. Apart from deterioration related to mainshock-aftershock events, structural degradation may occur due to corrosion, which affects the concrete cover and the steel reinforcement strength. Panchireddi and Ghosh (2019) introduced a novel study on aged RC bridges subjected to mainshock-aftershock sequences. The results showed that corrosion has a significant impact on the seismic vulnerability of RC bridges, which becomes even more critical when the bridge is subjected to both ground motion sequences and harsh corrosion conditions. The most important considerations of the aforementioned fragility studies are summarised in Table 9.3.

Table 9.3 Primary considerations of several fragility analyses on building structures and bridges

To illustrate the generation of analytical FFs and to further investigate the critical subjects of the reliability assessment of structures subjected to aftershocks and affected by corrosion, two analytical CSs are addressed in the ensuing sections, accounting for IDA and different IMs.

6 Aftershock Fragility Analysis of a Steel Frame (CS#1)

6.1 Description

The present case study (CS) demonstrates a simple framework for implementing aftershock fragility analysis of existing steel frames. The structure is an existing three-storey steel moment-resisting frame located in Central Italy, which has a trapezoidal floor plan and a storey height of approximately 3.6 m at all three storeys. Figure 9.4 shows the plan layout of the steel building. The external and internal beams are HEA160 and HEA300 sections, respectively, while the columns are HEA200. All beams are connected to the columns through full penetration welds. Lastly, the masonry infill walls consist of two layers of perforated bricks of 60 mm thickness.

Fig. 9.4
figure 4

Layout of the case study steel building

The numerical models of the bare and infilled frames were implemented in OpenSees (Mazzoni et al. 2006). Beams and columns were modelled as force-based elements with fibre sections, whose material behaviour was represented by the Giuffré-Menegotto-Pinto constitutive law (Menegotto and Pinto 1973; Filippou et al. 1983). Due to a lack of on-site material tests, the yield strength of steel considered for the numerical model was 215 MPa, assuming a standard deviation of 15 MPa and a confidence factor of 1.2, according to the knowledge levels defined in EC8-3 (EN 1998-3 2004). The elastic modulus of the steel was 210 GPa with a strain-hardening ratio of 0.02. Aside from the beams and columns, the column panel zones were also accounted for in both the bare and infilled frame models, following the approach developed by Gupta and Krawinkler (1999), which represents the rectangular shape of the panel zone through small rigid elements and uses a rotational spring to control its shear deformation. Masonry infill walls were modelled using the single-strut model due to its simplicity and acceptable accuracy. The infill struts had the same thickness as the real infill walls, and their width was determined based on the properties of the infill walls and the confining frame (Noh et al. 2017). The backbone curve of the masonry infill strut was represented by the multi-linear curve developed by Liberatore and Decanini (2011). Finally, the floor slab on each storey was simplified as two rigid struts placed diagonally in each column grid. Figure 9.5 shows a schematic view of the 3D model of the infilled frame in OpenSees.

Fig. 9.5
figure 5

3D model of the steel building in OpenSees (slab elements omitted for clarity)

Fig. 9.6
figure 6

Procedure for assessing steel frames with different levels of pre-existing damage

6.2 Methodology

First, a set of 20 bi-directional records of earthquake sequences was selected from worldwide ground motion databases, including PEER (2013), Luzi et al. (2019) and K-NET (2019), to be employed in the finite element model. Each earthquake sequence comprises two events (i.e. the mainshock and the aftershock, in the order of their actual time of occurrence). Table 9.4 summarises the PGAs of the selected earthquake records. It is noted that only 4 out of 20 records have a greater aftershock PGA than mainshock PGA; the lowest and highest ratios of aftershock PGA to mainshock PGA are 0.19 and 1.33, respectively.

Table 9.4 Selected mainshock-aftershock earthquake records for fragility analysis

Furthermore, the damage levels adopted in this CS are those recommended in the American code ASCE 41-06 (2005) for existing steel MRFs. The IDR limits are 0.7, 2.5 and 5% for Immediate Occupancy (IO), Life Safety (LS) and Collapse Prevention (CP), respectively. The IO level indicates slight damage to the structure, such as minor cracks in the infill walls, with negligible effects on the vertical load resisting systems. The LS level corresponds to moderate damage, with large cracks in the infills, significant yielding of steel components and permanent residual drifts; however, the structure still has adequate residual strength to sustain the gravity loads so that partial collapse is prevented. Finally, the CP level indicates that partial or total collapse occurs, with large permanent residual drifts and very limited vertical load carrying capacity.

The analysis framework described hereafter is able to assess the seismic vulnerability of structures with different levels of damage caused by mainshocks (i.e. the pre-existing damage of the structure before the aftershocks). The maximum inter-storey drift ratio was used as the EDP, while the PGA, Sa(T1) and CAV of the aftershocks were used as IMs. In this CS, three pre-existing damage levels caused by mainshocks were considered, namely 'no damage', 0.7% and 2.5% IDR. 'No damage' means that the steel frame was subjected to aftershocks only, 0.7% IDR represents the worst case of 'slight damage' caused by light mainshocks according to the limit states, and 2.5% IDR represents the worst case of 'moderate damage' caused by moderate mainshocks. The analysis procedure is summarised in the following steps (see also Fig. 9.6); a short post-processing sketch is given after the list:

  • Scale each mainshock individually based on the results in the first part such that the infilled steel MRF reaches the target maximum IDR after the mainshocks;

  • Perform IDA on the damaged infilled steel MRF based on the increasingly scaled aftershocks up to the CP limit state;

  • Derive aftershock fragility curves for the infilled steel MRF with each of the assumed pre-existing damage caused by mainshocks;

  • Examine the seismic vulnerability of the damaged steel MRF by comparing the fragility curves, using the case of no damage as a reference;

  • Use 16th and 50th percentile values to quantify the change of the steel MRF’s seismic vulnerability due to pre-existing damage.
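As a sketch of the final post-processing step, the percentile comparison could be carried out as follows in Python. The aftershock Sa(T1) values at which each record reaches the LS limit state are hypothetical placeholders, not the CS results.

```python
import numpy as np

# Hypothetical aftershock IDA results: for each pre-existing damage level, the
# aftershock Sa(T1) [g] at which each record first reaches the LS limit state
sa_ls = {
    "no damage": np.array([0.9, 1.1, 1.3, 1.0, 1.5, 1.2, 1.4, 1.1, 1.0, 1.3]),
    "0.7% IDR":  np.array([0.9, 1.0, 1.3, 1.0, 1.4, 1.2, 1.3, 1.1, 1.0, 1.2]),
    "2.5% IDR":  np.array([0.4, 0.6, 0.7, 0.5, 0.8, 0.6, 0.7, 0.5, 0.6, 0.7]),
}

ref_16, ref_50 = np.percentile(sa_ls["no damage"], [16, 50])
for label, values in sa_ls.items():
    p16, p50 = np.percentile(values, [16, 50])
    print(f"{label:10s}: Sa16 = {p16:.2f} g ({100*(p16/ref_16 - 1):+.0f}%), "
          f"Sa50 = {p50:.2f} g ({100*(p50/ref_50 - 1):+.0f}%)")
```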

6.3 Results and Discussion

Figure 9.7 shows the aftershock fragility curves with respect to the LS limit state, and Fig. 9.8 shows the comparisons between the obtained fragility curves, where the case of no damage was used as a reference. It is evident that the 0.7% IDR damage had a negligible impact on the seismic vulnerability of the CS steel MRF, suggesting that this pre-existing damage level is too slight to influence the capacity of the steel frame to resist aftershocks. The comparison of the 16th and 50th percentile values of the IMs in Fig. 9.9 also indicates the slight impact of the 0.7% IDR pre-existing damage. For example, when there is no damage resulting from the mainshocks, the steel MRF is expected to reach the LS damage level at an aftershock Sa(T1) of 1.15 g, while in the case of 0.7% IDR pre-existing damage the structure reaches the same limit state at an aftershock Sa(T1) of 1.11 g, which is 3.5% less than in the case of no damage. Similar observations are found using the other IMs considered in this CS, i.e. PGA and CAV. Conversely, the 2.5% IDR pre-existing damage has a more considerable impact on the seismic vulnerability of the steel frame. Since 2.5% IDR is also the onset of the LS damage level, the steel frame must experience an IDR smaller than 2.5% during the aftershock. In this case, the 16th percentile value is an ideal representative of the breakpoint beyond which the aftershocks cause a larger maximum IDR of the steel frame than the mainshocks.

Fig. 9.7
figure 7

Fragility curves of the steel frame with respect to the LS limit state

Fig. 9.8
figure 8

Changes in the probability of exceedance with respect to no pre-existing damage

Fig. 9.9
figure 9

Comparison of the 16th and 50th percentile values for the case of LS limit state

Figures 9.10 and 9.11 present the results of the fragility analysis with respect to the CP limit state. The findings are generally similar to those for the LS limit state. Firstly, the 0.7% IDR pre-existing damage has a very limited impact on the seismic vulnerability of the steel MRF: the reductions of the 16th and 50th percentile values of aftershock PGA, Sa(T1) and CAV in Fig. 9.12 are all less than 2%. It is therefore anticipated that the steel frame with slight pre-existing damage is able to exhibit its full capability of resisting aftershocks. When the pre-existing mainshock damage is raised to 2.5% IDR, the effects become more significant. The large changes in the 16th and 50th percentile values demonstrate that the steel frame with moderate damage may already have lost the majority of its capacity to sustain aftershocks; therefore, there is likely to be a large increase in the structure's seismic vulnerability (Fig. 9.12).

Fig. 9.10
figure 10

Fragility curves of the steel frame with respect to the CP limit state

Fig. 9.11
figure 11

Changes in the probability of exceedance with respect to no pre-existing damage

Fig. 9.12
figure 12

Comparisons of the 16th and 50th percentile values for the case of CP limit state

Based on the above assessment, it can be concluded that when the steel MRF is slightly damaged by a mainshock (e.g. the structure is identified between the no damage and IO limit states), the steel frame is able to maintain its full capacity to resist aftershocks. As a result, it is not necessary in this case to take the effects of the pre-existing damage into consideration when performing a code-based assessment of the steel frame. However, when the steel MRF is moderately damaged by a mainshock (e.g. the structure is identified between the IO and LS limit states), the steel frame is likely to lose most of its capacity, which makes it significantly more vulnerable to aftershocks. In this case, the effects of the pre-existing damage must be appropriately accounted for during the implementation of a code-based assessment procedure, for instance by reducing the criteria used to determine the capacity or by increasing the seismic action for the superior limit states. The amount of such reduction or increment may be determined based on the 16th percentile value of the IM to be on the safe side, or based on the 50th percentile value to be less conservative.

7 Seismic Fragility of a RC Building with Corrosion (CS#2)

There are two main aspects involved with existing RC buildings, namely poor seismic detailing (e.g. large stirrup spacing or small concrete cover thickness) (Pinto and Franchin 2010; Di Sarno et al. 2017), and deterioration due to exposure to aggressive environmental conditions (corrosion), which alters the most relevant mechanical properties and causes cover spalling, loss of bond between concrete and steel bars, and concrete and steel strength reduction, among other effects (Wang and Liu 2008; Liberatore and Decanini 2011).

Presently, technical codes focus strictly on the design level, and even when they deal with existing structures, they limit the checks to strength requirements at the local level without considering the interaction between elements that governs the structural behaviour and, ultimately, the structural failure. Although corrosion remains a difficult phenomenon to predict, many attempts have been made to incorporate such uncertainties into detailed models that allow researchers to account for the life-time deterioration of RC structures. Such an attempt is examined in the following CS by deriving FFs accounting for different corrosion rates.

7.1 Description

A non-linear finite element model (FEM) of an existing four-storey RC building was implemented in advanced software (SeismoSoft 2019) for the seismic simulations (Fig. 9.13). The model consists of 350 × 350 mm and 300 × 300 mm columns at the ground floor and the remaining floors, respectively, reinforced with 6 smooth Φ16 mm longitudinal rebars and Φ6 mm transverse stirrups with 150 mm spacing, while the beams had variable cross-sections, reinforced mainly with Φ10 mm and Φ14 mm longitudinal rebars and Φ6 mm transverse bars with 200 mm spacing. The concrete compressive strength was 16.7 MPa, as was typical of buildings designed in the 1960s, while the yield stress of the steel reinforcement was 440 MPa. To guarantee in-plane stiffness and to reduce the number of degrees of freedom, and thus the computational demand, the slabs were modelled as rigid diaphragms, exhibiting neither membrane deformation nor the associated forces. All end-joints were modelled as rigid connections. An accurate gravity loading analysis was conducted and the resulting loads were applied to the FE model (see Di Sarno and Pugliese 2020 for further details). Corrosion was applied to the perimeter beams and columns to simulate realistic exposure, since the internal components are commonly protected by the infills.

Fig. 9.13
figure 13

Finite element Model in Seismostruct

A new approach was shown to be efficient and reliable for the evaluation of the residual capacity of RC components exposed to corrosion. The method consists of splitting the RC cross-section into three layers: the concrete cover (CC), the ineffective core concrete (UCC) and the effective core concrete (ECC). The ineffective core concrete is taken as a band of thickness equal to twice the average diameter of the longitudinal steel rebars and is affected by corrosion, whereas the effective core concrete is considered pristine and unaffected by corrosion. The latter assumption has a physical basis, as experimental results on RC columns exposed to different levels of corrosion demonstrated that the core concrete is not subjected to corrosion effects (Andisheh et al. 2019). The above numerical method has the advantage of including the effects of corrosion for both full-sided and non-full-sided attack. This is useful when assessing RC buildings, where infills protect some faces of beams and columns, whereas bridge piers are more likely to experience full-sided corrosion penetration. As a result, the deterioration of the concrete compressive strength can be computed as follows:

$${f}_{c}^{*}=\frac{\beta {f}_{c}{A}_{CC}+\beta {f}_{cc}{A}_{UCC}+{f}_{cc}{A}_{ECC}}{{A}_{CC}+{A}_{UCC}+{A}_{ECC}}$$
(9.9)

where fc and fcc are the un-corroded compressive strengths of the cover and core concrete, respectively, and A is the area of each concrete layer. β is defined according to Coronelli and Gambarova (2004) using the modified compression field theory of Vecchio and Collins (1986) as follows:

$$\beta =\frac{{f}_{c}^{*}}{{f}_{c}}=\frac{1}{1+K\frac{2\pi X{n}_{bars}}{b{\varepsilon }_{c2}}}$$
(9.10)

where f*c represents the corroded compressive strength, K is a constant equal to 0.1 for medium-diameter rebars, X is the corrosion penetration, \(b\) is the width of the cross-section, εc2 is the strain at peak stress and nbars is the number of steel rebars in the area affected by corrosion.

The degradation effects of corrosion on the steel reinforcement are commonly considered by modifying the main parameters of the constitutive model, such as the yield and ultimate stresses and the ultimate strain. Many experimental campaigns have been conducted on the impact of corrosion on these mechanical properties and have demonstrated that the yield and ultimate stresses can be described by a linear relationship, while an exponential interpolation better fits the reduction of the ultimate strain. Moreover, it should be stressed that corrosion can be categorised as uniform, most likely due to carbonation of the concrete, or pitting, most likely due to chloride ingress. The two corrosion phenomena may have different impacts on the residual capacity of RC components: pitting corrosion has larger and more unpredictable effects on the steel diameter and its mechanical properties, while uniform corrosion can be modelled efficiently by uniformly modifying the parameters along the rebar. Results from the literature show that regression analyses produce broadly similar relationships, as follows:

$$\begin{aligned} f_{y}^{*} & = \left( {1 - \beta_{sy} CR\left[ \% \right]} \right)f_{y} \\ f_{u}^{*} & = \left( {1 - \beta_{su} CR\left[ \% \right]} \right)f_{u} \\ \varepsilon_{su}^{*} & = \varepsilon_{su} e^{{ - \beta_{\varepsilon u} CR\left[ \% \right]}} \\ \end{aligned}$$
(9.11)

(parameters with symbol * represent the corroded variables; βsy, βsu and βεu are the regression parameters). Some results for the regression parameters can be found in Wang and Liu (2008), Imperatore et al. (2017).
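A sketch of the corroded-property calculations of Eqs. 9.9-9.11 is given below in Python. The regression coefficients, corrosion penetration, cover thickness and layer split are placeholders for illustration only, not calibrated values from the cited studies.

```python
import numpy as np

def corroded_concrete_strength(fc, fcc, a_cc, a_ucc, a_ecc,
                               x_pen, n_bars, b, eps_c2=0.002, K=0.1):
    """Eqs. 9.9-9.10: reduced compressive strength of a corroded RC section."""
    beta = 1.0 / (1.0 + K * 2.0 * np.pi * x_pen * n_bars / (b * eps_c2))
    return (beta * fc * a_cc + beta * fcc * a_ucc + fcc * a_ecc) / (a_cc + a_ucc + a_ecc)

def corroded_steel(fy, fu, eps_su, cr_percent,
                   beta_sy=0.012, beta_su=0.011, beta_eu=0.06):
    """Eq. 9.11 with placeholder regression coefficients (not calibrated values)."""
    fy_c = (1.0 - beta_sy * cr_percent) * fy
    fu_c = (1.0 - beta_su * cr_percent) * fu
    eps_su_c = eps_su * np.exp(-beta_eu * cr_percent)
    return fy_c, fu_c, eps_su_c

# Illustrative 300 x 300 mm column, 30 mm cover, 6 Phi16 bars, 20% corrosion rate
a_cc, a_ucc, a_ecc = 300**2 - 240**2, 240**2 - 176**2, 176**2   # cover / ineffective / effective core
fc_star = corroded_concrete_strength(fc=16.7, fcc=20.0,
                                     a_cc=a_cc, a_ucc=a_ucc, a_ecc=a_ecc,
                                     x_pen=0.5, n_bars=6, b=300.0)
print(fc_star, corroded_steel(fy=440.0, fu=540.0, eps_su=0.08, cr_percent=20.0))
```

These degraded properties would then replace the pristine material parameters in the fibre-section model for each corrosion scenario.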

7.2 Methodology

As stated in Sect. 9.4.1, 10-20 seismic records should be considered to obtain an adequate and accurate average inelastic response of a low-rise building. Thus, a set of 20 natural ground motions was collected from international databases using the REXEL tool (Iervolino et al. 2010). These ground motions exhibit different features in terms of duration, PGA, fault rupture and frequency content (Table 9.5). Since each ground motion was applied to the model with its two horizontal components, the response parameters (RS) were computed using the square root of the sum of squares (SRSS):

Table 9.5 Selected earthquake records for fragility analysis
$${RS}_{tot}=\sqrt{{RS}_{x}^{2}+{RS}_{y}^{2}}$$
(9.12)

Robust fragility analysis requires an accurate selection of performance levels that account for the local and global response of RC structures. Such performance levels should lead to a reliable evaluation of force demands for potential brittle failures, quantification of the consequences of strength deterioration in single components, and estimation of the inter-storey drift to account for strength and stiffness discontinuities. Technical codes (i.e. EN 1998-1 (2004); NTC 2018) usually state that existing RC structures should comply with deformation capacity checks through chord rotation and cyclic shear resistance; however, these parameters do not take into account the deterioration of the materials, the components and, as a result, the global structure. In addition to the above performance levels, other response parameters could be included, such as the strains at cover concrete crushing, εCU,COVER, and core concrete crushing, εCU,CONFINED, the bending moment-axial load interaction domain (NY,COLs and MY,COLs) defining specific limits on the materials, and the flexural capacity of RC components, MU,BMs, expressed through the bending moments. Table 9.6 summarises the local performance levels.

Table 9.6 Performance criteria (DL–limited damage; SD–severe damage; NC–Near Collapse)

The values of strains for the structural materials were computed according to the studies of Biskinis and Fardis (2009), Razvi and Saatcioglu (1994) for the unconfined and confined concrete. The latter parameters were then used as limits for the calculation of the interaction domain of each RC component. The global response parameters were taken from non-linear static analyses performed considering the different levels of corrosion and picking the ultimate drift from the capacity curves (Table 9.7). Such performance values represent the capacity of the RC building.

Table 9.7 Limit States expressed as inter-storey drift ratio, IDR [%]

While performing the non-linear analyses, the first element reaching the limit conditions is identified in terms of drift. Among these parameters, the minimum is then chosen as the demand and checked against the corresponding global capacity parameter according to the limit state. The critical demand-to-capacity ratio, which combines the local EDPs into a single global decision variable, is herein defined as (De Risi et al. 2017):

$${Y}_{LS}={\mathrm{max}}_{i=1}^{{N}_{Mech}}{\mathrm{min}}_{j}^{{N}_{comp}}\frac{{D}_{ji}}{{C}_{ji}\left(LS\right)}$$
(9.13)

where Nmech is the number of considered potential failure mechanisms and Ncomp the number of components taking part in the ith mechanism. Dji is the demand evaluated for the jth component of the ith mechanism and Cji (LS) is the limit state capacity for the jth component of the ith mechanism.
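A minimal Python sketch of Eq. 9.13 follows; the demand and capacity values are placeholders representing two hypothetical mechanisms, not quantities from this CS.

```python
import numpy as np

def critical_dcr(demands, capacities):
    """Eq. 9.13: max over mechanisms of the minimum demand-to-capacity
    ratio among the components taking part in each mechanism."""
    y_ls = 0.0
    for d_i, c_i in zip(demands, capacities):
        y_ls = max(y_ls, np.min(np.asarray(d_i) / np.asarray(c_i)))
    return y_ls

# Placeholder values: two mechanisms (e.g. column chord rotation, beam flexure),
# each listing the components participating in that mechanism
demands    = [[0.021, 0.018, 0.030], [110.0, 95.0]]
capacities = [[0.025, 0.025, 0.025], [120.0, 120.0]]

print(f"Y_LS = {critical_dcr(demands, capacities):.2f}")   # Y_LS > 1 means the LS is exceeded
```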

In this study, the fragility assessment is based on three different IMs, namely PGA, Sa(T1), and the recently introduced Modified Acceleration Spectral Intensity (MASI), which is defined as follows:

$$M.A.S.I.= \underset{{T}_{1}}{\overset{{T}_{elongation}}{\int }}{S}_{a}\left(T\right)dT$$
(9.14)

The selection of an IM is challenging, as highlighted in Sect. 9.5, especially when degradation phenomena due to corrosion are considered, as in this CS. Thus, there is an even greater need to investigate different criteria (e.g. efficiency, proficiency and practicality).
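The integral of Eq. 9.14 can be evaluated numerically from a response spectrum, as sketched below; the spectral shape is a placeholder, and the elongated period is taken here as 2T1, consistent with the period range T1-2T1 discussed later in the results.

```python
import numpy as np

def masi(periods, sa, t1, t_elongation):
    """Eq. 9.14: integrate the Sa(T) spectrum between T1 and the elongated period."""
    mask = (periods >= t1) & (periods <= t_elongation)
    return np.trapz(sa[mask], periods[mask])

# Illustrative response spectrum (period [s], Sa [g]); T_elongation assumed equal to 2*T1
periods = np.linspace(0.05, 3.0, 200)
sa = 0.8 * np.exp(-1.2 * periods) + 0.1          # placeholder spectral shape
t1 = 0.6
print(f"MASI = {masi(periods, sa, t1, 2.0 * t1):.3f} g*s")
```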

The IDA method is adopted for deriving the FFs of the RC building. The scaling of the records up to collapse was performed via the hunt-fill algorithm (Vamvatsikos and Cornell 2004), with a first elastic run at 0.005 g, an initial step of 0.1 g and a step increment of 0.05 g. After running each record, the Sa(T1) is estimated based upon the scaling, and the IDA curves are formed using spline interpolation. The whole procedure for deriving the FFs of the RC building is summarised in Fig. 9.14.
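A simplified sketch of the hunting phase of the tracing algorithm, using the start and step values quoted above, is given below; the fill phase and the structural analyses themselves are omitted for brevity.

```python
def hunt_scaling_levels(start=0.005, first_step=0.1, step_increment=0.05, max_runs=12):
    """Generate the 'hunt' sequence of Sa(T1) scaling levels quoted in the text:
    an elastic start, then increasingly larger steps until collapse is bracketed."""
    levels, step = [start], first_step
    for _ in range(max_runs - 1):
        levels.append(levels[-1] + step)
        step += step_increment
    return levels

# e.g. 0.005, 0.105, 0.255, 0.455, 0.705, ... g; in a full IDA the hunt stops at the
# first non-converged run and a 'fill' phase refines the intensity gap below it
print([round(s, 3) for s in hunt_scaling_levels()])
```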

Fig. 9.14
figure 14

Procedure for deriving the fragility functions of the RC building

7.3 Results and Discussion

The results obtained from the non-linear dynamic analyses for all IMs are shown in Fig. 9.15. The inter-storey drift limit of 2% from FEMA 356 (2000) for the severe damage limit state is also included alongside the power-law interpolation (light blue line) as an indication of the safety level implied by technical codes when the RC structure is exposed to different levels of corrosion. MASI appears to be the most efficient and proficient of the seismic IMs examined here. Its high efficiency is reflected in the lowest standard deviation of the residuals, which probably stems from the inelastic effects of the higher modes included in the period range T1-2T1, allowing the degree of non-linearity of the structural response to be captured; the interested reader is also referred to Luco and Cornell (2001), who considered a similar IM. By contrast, PGA demonstrated the largest dispersion of the results and the lowest effectiveness in terms of Pearson's coefficient, which may be explained by its lack of correlation with both the structural parameters and the inelastic damage. Conversely, Sa(T1) still appears to be efficient, even if less practical than PGA.

Fig. 9.15
figure 15

EDP-IM for different corrosion rates a CR [%] = 0; b CR [%] = 10; c CR [%] = 20

Moreover, the inter-storey drift limit from technical codes appears to overpredict the safety level of existing structures over their lifetime, as can be clearly seen for corrosion rates between 10 and 20%. Corrosion increases the demand in terms of inter-storey displacement while decreasing the structural capacity obtained from the non-linear static analyses. This is primarily due to the reduction of the main mechanical properties of both concrete and steel reinforcement, which affects the local and global response of the building. The power interpolation highlights the lack of provisions in seismic codes for degradation and damage factors that alter the pristine condition of RC structures over time. Limit thresholds in current technical codes are defined solely in terms of inter-storey drift ratio (i.e. 2% for Severe Damage) for pristine structures; such codes do not account for the effects of deterioration over time. Environmental factors (i.e. corrosion) may cause additional damage and therefore lead to an overestimation of the actual performance of RC structures. As a result, the fragility curves are derived using the limits proposed in Di Sarno and Pugliese (2020) and reported in Table 9.7.

Figure 9.16 shows that corrosion has a significant impact on the seismic capacity of the RC building for the selected ground motion excitations. With increasing corrosion, the SD limit state is reached at lower values of the scaled records, which makes the impact of corrosion more evident. The range of intensities causing exceedance of the limit state decreased, while the damage probabilities for all IMs increased, compared with the structure subjected to the same excitations in the pristine condition (i.e. CR [%] = 0). The intensity at which the probability of failure reaches 100% shifts from 0.6 g to 0.25 g for PGA, from 0.55 g to 0.4 g for Sa(T1) and from 0.35 g to 0.15 g for MASI. These observations demonstrate that corrosion significantly reduces the safety level of the structure.
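In principle, fragility curves such as those of Fig. 9.16 can be obtained from the IDA results by fitting a lognormal distribution to the IM values at which each record first exceeds the SD limit state. The sketch below uses the method of moments on ln(IM) with hypothetical intensities; it is not the chapter's data or software.

```python
import numpy as np
from scipy.stats import norm

def fragility_from_ida(im_at_ls):
    """Lognormal fragility fitted to the IM values at which each scaled
    record first exceeds the limit state (method of moments on ln IM)."""
    ln_im = np.log(im_at_ls)
    mu, beta = ln_im.mean(), ln_im.std(ddof=1)
    median = np.exp(mu)
    poe = lambda im: norm.cdf((np.log(im) - mu) / beta)  # P(exceedance | IM)
    return median, beta, poe

# Hypothetical SD-exceedance intensities (g) for a handful of records
median, beta, poe = fragility_from_ida(np.array([0.22, 0.31, 0.27, 0.40, 0.35]))
print(median, beta, poe(0.25))
```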

Fig. 9.16
figure 16

Seismic fragility curves for different corrosion rates and IMs (SD)

Figure 9.17 shows the dramatic increase in seismic vulnerability for all the examined IMs. Under the same ground motions as in the pristine case, corrosion forces the building to undergo larger inter-storey displacements, to such an extent that even smaller earthquakes could cause its failure and collapse. Sa(T1) exhibited the largest increase in seismic vulnerability with increasing corrosion rate (e.g. probabilities of exceedance of 75% and 85% for CRs of 10% and 20%, respectively). Conversely, PGA gave the lowest failure probabilities, below 50% and 65% for the investigated corrosion rates. MASI produced results similar to Sa(T1) (e.g. an increase of the probability of exceedance to 75% and 85%, respectively). The similar trend for Sa(T1) and MASI may be explained by the small differences in the efficiency of their power interpolations (Fig. 9.15).

Fig. 9.17
figure 17

Seismic fragility curves for different corrosion rates and IMs (SD)

8 Conclusions

Fragility functions are an effective assessment tool that engineers, analysts and policy-makers can use to plan pre-emptive measures before an earthquake and the response in its aftermath. This chapter identified and explained the major steps required to undertake fragility analysis and presented two case studies that further illustrate the methodology. Analytical fragility functions are the most widely employed form due to recent advances in analytical methods and the dearth of actual performance data. Different analysis methods can be employed, as they are all robust; however, attention should be given to which method is the most applicable under the conditions of the application. For instance, it was demonstrated in Sect. 9.4.2 that the cloud method can yield the same results for a region of interest as computationally demanding methods such as IDA and MSA. Additionally, the resulting fragility estimates can vary depending on the analysis method and the types of uncertainties considered.

The two analytical CSs addressed the seismic reliability of a steel and an RC building, accounting for aftershock and corrosion effects using the IDA method. The approximate closed-form lognormal solutions are widely accepted for different structural configurations and were therefore adopted in the CSs. Finally, the most common indicators of failure, as well as efficient and sufficient IMs, were identified and demonstrated through the CSs. For example, the spectral acceleration (Sa) is the predominant measure for low- and mid-rise buildings; however, in CS#2 it was demonstrated that MASI was the most proficient IM compared with Sa and PGA. MASI is able to account for the period elongation due to inelasticity and has also proved efficient for high-rise buildings.

Overall, CS#1 demonstrated that RC buildings are likely to lose most of their capacity due to aftershocks when the structure is classified at or beyond the immediate occupancy damage level. In that case, the code assessment process should account for the existing damage, either through a reduction of the performance criteria thresholds or an increase of the seismic action. Moreover, CS#2 illustrated that corrosion can significantly affect the resistance of RC buildings regardless of the IM considered. Nevertheless, EN 1998-3 (2004) is not always applicable when the structure has experienced corrosion, which calls for an amendment in future revisions of the code.

9 Future Challenges

Although a solid foundation has been established for the fragility analysis of common buildings, further research is still needed in the following areas:

  • Consideration of ageing effects accounting for the time-dependence of the corrosion phenomenon (i.e. initiation and propagation phases). This will allow fragility to be derived as a function of service life rather than of discrete corrosion rates.

  • Examination of aftershock events considering near- and far-field conditions as well as soil-structure interaction.

  • Life-cycle assessment of structures accounting for different uncertainties (e.g. corrosion and aftershock effects using numerical analysis), since the cost of a posteriori interventions can exceed the cost of prudent design.

  • Estimation of the seismic fragility of non-structural components inside critical facilities and integration of the different failure modes that can contribute to the overall risk of the system. This integration is particularly critical for special structures (e.g. healthcare facilities or industrial plants).