Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.
Introduction
During the initial stages of the design of a system, the inputs to and parameters of that system are still very much uncertain, while at the same time the decisions made here have the largest effect on the risk, performance, and cost of the final design [1–3]. It is therefore of utmost importance to take these uncertainties into account, both when making design decisions and when considering the allocation of resources (people, time, financial budget, computer resources, experimental resources, etc.) throughout the design process. Techniques for the quantitative evaluation of the impacts of these uncertainties in engineering design are increasingly being studied and developed. Two significant aspects of uncertainty quantification in engineering design are uncertainty analysis and sensitivity analysis. Uncertainty analysis approaches, such as sampling-based techniques [4–7], surrogate modeling techniques [8–10], and surrogate uncertainty representation techniques [11–14], are employed to ensure the efficient propagation of uncertainty from model inputs to model outputs. Once this has been achieved, sensitivity analyses in the form of screening methods [15], global sensitivity analysis [4,16–18], entropy-based methods [4,19], and moment-independent methods [20–22] are often conducted to quantitatively evaluate the impact of a given input on output quantities of interest.
Sensitivity analysis plays an important role in engineering decisions under uncertainty, by quantifying the effects of uncertainties on the output quantities of interest. For example, sensitivity analysis informs a ranking of the most influential uncertain input parameters and it yields insight into how reducing uncertainty in those inputs might correspondingly reduce uncertainty in the output quantities of interest [16]. This paper explores these ideas and extends the so-called distributional sensitivity analysis (DSA) that targets decision-making for the allocation of resources throughout the early stages of engineering design.
One goal of sensitivity analysis of model output is “to ascertain how a given model (numerical or otherwise) depends on its input factors” [23]. One way of performing sensitivity analysis is to find the linearized sensitivity of an output with respect to an input around a baseline, the so-called local approach [16]. These methods are usually cheap and based on computing derivatives; however, they do not paint a complete picture, because they are inherently local. Instead, a global sensitivity analysis (GSA) explores the full space of the input factors [16]. A variance-based GSA apportions the variance of a quantity of interest among the contributions from each uncertain factor and from interactions among factors.
GSA results are often used in factor prioritization, which ranks the input parameters based on the expected influence that uncertainty reduction in those parameters has on the system output [17,18]. In this method, one ranks the factors based on which factor, if fixed to its true value, yields the largest expected reduction in output uncertainty; however, Oakley and O’Hagan [24] noted that it is rarely possible to fix a factor to its true value. Allaire and Willcox addressed this limitation of GSA by developing a distributional sensitivity analysis (DSA) [25], which computes the influence on the output variance as a function of the amount of variance reduction in the input parameters. This results in a variance-based sensitivity index function. This function can be used directly to inform decision-making, or alternatively, by considering the amount of variance reduction in each factor as a random variable, the mean value of the variance-based sensitivity index function provides an average main effect sensitivity index used for ranking purposes.
This paper expands and extends the DSA approach to create a sensitivity-based methodology to support resource allocation decisions in engineering design under uncertainty. Specifically, this paper contributes methods to improve the quality and efficiency of the sampling within the DSA, which is essential for application to computationally expensive models. Section 2 presents the problem setup. Section 3 describes the overall methodology. We apply the method to an engineering system, the conceptual design of a commercial jetliner, in Sec. 4. Section 5 concludes the paper.
Problem Setup
We consider a system model of the form
$$ y = f(\mathbf{x}), $$
where $y$ is the quantity of interest (QoI) and $\mathbf{x}$ is the vector of system input variables. For clarity of exposition, we consider a scalar quantity of interest, although the methodology is straightforward to apply when there are multiple quantities of interest. These quantities typically represent design objectives and constraints, such as the fuel consumption, maximum takeoff weight, and environmental performance of an aircraft. We consider $n$ input variables, $\mathbf{x} = [x_1, x_2, \ldots, x_n]^\top$, where $x_i$ denotes the $i$th input variable. The input parameters represent quantities within the designer's control, such as aircraft flight speed, aircraft geometry, density of the materials used, etc. We consider the model $f$ that maps the inputs to the output quantity of interest to be a black-box function that can be used to evaluate $y$ at any point in the input space.
We consider the input parameters x to be uncertain, which in turn induces uncertainty in the output quantity of interest y. In this paper, our focus is on the task of deciding how to allocate resources in the design process. That is, we pose the question: How should the designer allocate design effort to reduce uncertainty in the input parameters, so as to appropriately manage uncertainty in the output quantity of interest? For example, in the conceptual design of an aircraft, the uncertain input parameters represent different aspects of the aircraft design, including aerodynamic performance, structures, weights, stability, controls, propulsion, and environmental performance. At any current state of the design, uncertainty in the inputs for each of these disciplines results in uncertainty in the corresponding system-level metrics (the quantities of interest). Our goal is to provide quantitative guidance to the designer on which disciplines to focus design effort and by how much to reduce uncertainty in each discipline, so as to achieve desired confidence levels on the quantities of interest. This information is especially important in early-stage design where time and money resources are limited, yet critical decisions are made that lock in later options.
This paper builds on the work of Allaire and Willcox [25] on the distributional sensitivity analysis (DSA) methodology, which is proposed as a basis for providing this kind of quantitative guidance. A DSA provides not only an average main effect sensitivity index that can be used to prioritize efforts but also a function that estimates uncertainty reduction in y as a function of the uncertainty reduction in each component of x. This information can be translated directly into deciding how much effort to expend on reducing uncertainty in a given parameter (e.g., deciding what fidelity of analyses to run, whether to conduct an experiment, whether to task a disciplinary team with conducting more detailed design work, etc.).
A simple notional example to illustrate these ideas is presented in Fig. 1. On the left, a standard GSA computes the sensitivity indices that apportion the variance in the output y: 50% from input parameter x2, 40% from input parameter x1, and the remaining 10% from the interaction between x1 and x2. These sensitivity indices are based on the premise that all variance in each parameter is reduced. On the right, a DSA computes the sensitivity index function, which apportions the expected reduction in output variance as a function of the amount of input variance reduction. For illustration, consider the case that the goal is to reduce the uncertainty of the output y by 50%. A prioritization of effort based on the GSA would suggest focusing on x2: reducing all the variance in x2 is expected to yield the desired reduction in output variance. One could perhaps decide to allocate effort between x2 and x1, but the GSA results provide no guidance on how to do this and no indication of the expected result on uncertainty reduction in the output. Using the DSA results, however, we can combine partial variance reductions of each input parameter and choose our reductions accordingly. For example, the DSA results show that reducing variance in x1 by 45% and reducing variance in x2 by 22% together provide an expected reduction in output variance of 50%. From Sec. 3.2 onward, we further explain the methodology that permits these ideas to be applied in the engineering design setting.
Methodology
Global Sensitivity Analysis (GSA).
For the variance-based GSA, the main effect sensitivity index is defined as
$$ S_i = \frac{\operatorname{Var}_{X_i}\!\left(\mathbb{E}\left[Y \mid X_i\right]\right)}{\operatorname{Var}(Y)}, \qquad (2) $$
where $X_i$ denotes the random variable representing the uncertain input $x_i$.
For independent inputs, the output variance is fully apportioned as
$$ \sum_{i=1}^{n} S_i + \sum_{i<j} S_{ij} + \cdots + S_{12\cdots n} = 1, $$
where $S_i$ is the main effect sensitivity index of input variable $X_i$. These main effect sensitivity indices are usually computed using either the Fourier amplitude sensitivity test (FAST) or the Sobol' method [18,26,27]. The FAST method uses Fourier transforms, while the Sobol' method employs Monte Carlo simulation. The Sobol' method is computationally more intensive and its accuracy depends on the chosen sample size, but it can more easily compute the higher-order sensitivity indices, which are required when there are interactions between the parameters [28]. A different approach to computing a GSA, employed in this work, is to use a high-dimensional model representation (HDMR) and build up the ANOVA-HDMR.
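As a concrete illustration (a minimal sketch, not the implementation used in this work), the main effect indices can be estimated with the Sobol' pick-and-freeze Monte Carlo estimator; the test function and input sampler below are placeholders:

```python
# A minimal sketch of estimating S_i = Var(E[Y|X_i]) / Var(Y) with the
# Sobol' "pick-and-freeze" Monte Carlo approach (Saltelli-style estimator).
import numpy as np

def main_effect_indices(f, sample_inputs, n_samples=2**14, seed=0):
    """Estimate the main effect index of each input of f."""
    rng = np.random.default_rng(seed)
    A = sample_inputs(rng, n_samples)          # (N, n) matrix of inputs
    B = sample_inputs(rng, n_samples)          # independent second matrix
    yA, yB = f(A), f(B)
    var_y = np.var(np.concatenate([yA, yB]))
    indices = []
    for i in range(A.shape[1]):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                    # freeze column i from B
        yABi = f(ABi)
        Vi = np.mean(yB * (yABi - yA))         # estimator of Var(E[Y|X_i])
        indices.append(Vi / var_y)
    return np.array(indices)

# Example: Ishigami-type test function, uniform inputs on [-pi, pi]
f = lambda X: (np.sin(X[:, 0]) + 5*np.sin(X[:, 1])**2
               + 0.1*X[:, 2]**4*np.sin(X[:, 0]))
sampler = lambda rng, n: rng.uniform(-np.pi, np.pi, size=(n, 3))
print(main_effect_indices(f, sampler))         # approx [0.40, 0.29, 0.00]
```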
The ANOVA-HDMR expresses the model as a sum of component functions of increasing dimension,
$$ f(\mathbf{x}) = g_0 + \sum_{i=1}^{n} g_i(x_i) + \sum_{1 \le i < j \le n} g_{ij}(x_i, x_j) + \cdots + g_{12\cdots n}(x_1, \ldots, x_n). $$
Here, $g_0$ represents the mean response over the input space. The first-order component function $g_i(x_i)$ captures the independent contribution of the $i$th input variable alone. The component subfunction $g_{ij}(x_i, x_j)$ represents the effect of the interaction between the $i$th and $j$th input variables on $y$, and so on for the higher-order terms.
The decomposition is unique because the component functions satisfy the vanishing condition
$$ \int g_{i_1 \cdots i_k}(x_{i_1}, \ldots, x_{i_k})\, p_s(x_s)\, \mathrm{d}x_s = 0, \quad s \in \{i_1, \ldots, i_k\}, $$
where $p_s(x_s)$ is the probability density function of the $s$th input variable.
Because of this orthogonality, the sensitivity indices follow directly from the variances of the component functions:
$$ S_i = \frac{\operatorname{Var}\left(g_i(X_i)\right)}{\operatorname{Var}(Y)}, $$
where $S_i$ are the main effect sensitivity indices, as defined in Eq. (2). The second-order sensitivity index is $S_{ij} = \operatorname{Var}\left(g_{ij}(X_i, X_j)\right)/\operatorname{Var}(Y)$, and so on for the higher-order sensitivity indices.
Distributional Sensitivity Analysis.
In GSA, the factor prioritization is based on the main effect sensitivity indices in Eq. (2), and sometimes also on the total sensitivity indices—the sum of all sensitivity indices to which the ith variable contributes. Such a factor prioritization strategy relies on the assumption that all uncertainties of a particular input variable can be reduced. Distributional sensitivity analysis relaxes this assumption by treating the amount of variance reduction in a particular input variable as a random variable instead of assuming it can be reduced completely to zero [25]. For a given amount of variance reduction in the input variable Xi, DSA considers a family of reasonable distributions and calculates an average change in the variance of Y over this family. In the design setting this is particularly useful, because reducing the uncertainty in an input parameter to zero is generally an impractical option.
The variance-based sensitivity index function is defined as
$$ \zeta_i(\lambda_i) = \frac{S_i \operatorname{Var}(Y) - \mathbb{E}\left[\tilde{S}_i\, \widetilde{\operatorname{Var}}(Y)\right]}{\operatorname{Var}(Y)}. \qquad (9) $$
Here, $\operatorname{Var}(Y)$ and $S_i$ denote the variance of $Y$ and the main effect sensitivity index of $X_i$, respectively, corresponding to the original input distribution used to represent input variable $x_i$. $\tilde{X}_i$ is an updated random variable for input $x_i$, which results from design effort to reduce that variable's uncertainty (e.g., further research, experiments, higher-fidelity modeling, etc.). $\widetilde{\operatorname{Var}}(Y)$ is the updated variance of $Y$ and $\tilde{S}_i$ is the updated main effect sensitivity index, both associated with the updated random variable $\tilde{X}_i$. The quantity $\lambda_i = \operatorname{Var}(\tilde{X}_i)/\operatorname{Var}(X_i)$ is the ratio of the variance of $X_i$ that is not reduced to the variance of the original distribution. The variance-based sensitivity index function $\zeta_i$ is therefore the relative difference between the global sensitivity index and the expected value of the updated variance weighted by the updated sensitivity index when the uncertainty is only partially reduced. When $\lambda_i = 0$, this expected value is zero and the variance-based sensitivity index function reduces to the global sensitivity index.
Since it is not known how the uncertainty in the input variables can be reduced, we take the expected value of the reduction in variance of $Y$ over reasonable distributions: $\mathbb{E}[\tilde{S}_i\, \widetilde{\operatorname{Var}}(Y)]$ is the expected value of the product of the updated sensitivity index and the updated variance of $Y$, taken over reasonable distributions. Those reasonable distributions are further defined below for different distribution families. Because it is also unknown how much uncertainty can be reduced before doing further research, the parameter $\lambda_i$ is treated as a uniform random variable $\Lambda_i$ on $[0, 1)$. We note that further research could indicate that the uncertainty of the input variable increases, but that would mean the original input distribution was flawed. Therefore, $\lambda_i > 1$ is not considered in this work. Other distributions for $\Lambda_i$ could also be considered; for instance, one might wish to bias $\lambda_i$ toward smaller reductions in uncertainty. This work could easily accommodate such a model, but it is not included in this paper because choosing such a model is problem dependent.
While the variance-based sensitivity index function provides more insight about how reducing a factor's uncertainty impacts variance in the output, the average main effect sensitivity index, $\eta_i = \mathbb{E}\left[\zeta_i(\Lambda_i)\right]$, is useful because it permits us to perform factor prioritization. In particular, the average main effect sensitivity index can be used to rank input variables based on the average amount of variance of $Y$ that can be reduced by doing further research on a particular input variable. This is analogous to the typical ranking in a conventional GSA. Note that, as written, these average main effect sensitivity indices do not add up to one, whereas the variance-based sensitivity indices do. However, the values of these $\eta_i$ indices are important in an absolute sense, because they relate to the expected reduction of variance.
Figure 2 shows the overall DSA approach, where $N_\lambda$ is the number of samples used to compute $\eta_i$ and $N_d$ is the number of times a new distribution is drawn for a given $\lambda_i$; $\zeta_i(\lambda_i^{(j)})$ is the variance-based sensitivity index function for the $i$th input parameter corresponding to the $j$th sample of $\lambda_i$.
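The overall loop can be sketched as follows. This is a schematic reading of Fig. 2 under the form of Eq. (9) given above; `sample_updated_dist` is a hypothetical helper that returns the updated sensitivity index and output variance for one reasonable updated input distribution, and the symbols $N_\lambda$, $N_d$ are as reconstructed above:

```python
# Schematic DSA loop: sample lambda_i ~ U[0, 1), average S~_i * Var~(Y)
# over N_d reasonable updated distributions, form zeta_i via Eq. (9),
# and average over lambda to obtain the average index eta_i.
import numpy as np

def dsa_index(sample_updated_dist, var_y, s_i, n_lambda=100, n_d=50, seed=0):
    rng = np.random.default_rng(seed)
    lambdas = rng.uniform(0.0, 1.0, n_lambda)
    zetas = np.empty(n_lambda)
    for j, lam in enumerate(lambdas):
        prods = []
        for _ in range(n_d):
            # (S~_i, Var~(Y)) for one updated distribution with variance
            # ratio lam (hypothetical helper, e.g., backed by a surrogate)
            s_tilde, var_tilde = sample_updated_dist(lam, rng)
            prods.append(s_tilde * var_tilde)
        zetas[j] = (s_i * var_y - np.mean(prods)) / var_y   # Eq. (9)
    eta_i = np.mean(zetas)          # average main effect sensitivity index
    return lambdas, zetas, eta_i
```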
Modeling Variance Reduction.
This section defines the reasonable distributions over which we sample to obtain $\zeta_i$. We focus on distributions that are relevant in an engineering design setting. The uniform distribution is often used when limited information is available about the input parameter, but its range is available (or can be estimated). The normal distribution is often used for error modeling (e.g., model error in a simulation code). A triangular distribution is often used in an engineering setting when the designer has insight about a most-likely value in addition to the variable's range. The general DSA methodology is extensible to other distributions; here, we present the details for these three distributions. We build on the work in Refs. [25,36]. Algorithm 1 is taken directly from Ref. [25], whereas Algorithm 2 is a variation in which we compute the variance-based sensitivity index function with quadrature. For the normal distribution, Algorithms 3 and 4 deviate from Ref. [25] because we allow the mean of the updated distribution to vary as well. Finally, for the triangular distribution, Algorithm 5 uses the same idea as in Ref. [25], but uses a different implementation.
Uniform Distribution.
The variance of a uniformly distributed random variable $X_i \sim \mathcal{U}(a_i, b_i)$ is $(b_i - a_i)^2/12$. Consider the original distribution to be $\mathcal{U}(a_i, b_i)$ and the updated distribution to be $\mathcal{U}(\tilde{a}_i, \tilde{b}_i)$. Then we have $\lambda_i = (\tilde{b}_i - \tilde{a}_i)^2/(b_i - a_i)^2$. All updated distributions for a given $\lambda_i$ have the same width, as $\tilde{b}_i - \tilde{a}_i = \sqrt{\lambda_i}\,(b_i - a_i)$. In order to compute $\mathbb{E}[\tilde{S}_i\, \widetilde{\operatorname{Var}}(Y)]$, we sample over different distributions using Algorithm 1.
Algorithm 1: Computing the variance-based sensitivity index function for a uniform distribution using sampling (from Ref. [25]).
1 Sample $\lambda_i$ from a uniform distribution on the interval $[0, 1)$;
for $j = 1$ to $N_d$ do
2 Sample $\tilde{a}_i$ from a uniform distribution on the interval $[a_i,\, b_i - \sqrt{\lambda_i}(b_i - a_i)]$;
3 Let $\tilde{b}_i = \tilde{a}_i + \sqrt{\lambda_i}(b_i - a_i)$;
4 Compute $\tilde{S}_i^{(j)}$ and $\widetilde{\operatorname{Var}}(Y)^{(j)}$ for the distribution $\mathcal{U}(\tilde{a}_i, \tilde{b}_i)$;
end
5 Compute $\mathbb{E}[\tilde{S}_i\, \widetilde{\operatorname{Var}}(Y)] = \frac{1}{N_d}\sum_{j=1}^{N_d} \tilde{S}_i^{(j)}\, \widetilde{\operatorname{Var}}(Y)^{(j)}$;
6 Compute variance-based sensitivity index function $\zeta_i$ using Eq. (9) for $\lambda_i$.
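A minimal Python sketch of Algorithm 1 (ours, for illustration) might look as follows, assuming a hypothetical helper `sensitivity_and_variance(a_t, b_t)` that returns the updated index and updated output variance, e.g., by re-evaluating the HDMR surrogate of Sec. 3.4:

```python
# Sketch of Algorithm 1 for one uniformly distributed input X_i ~ U(a, b).
import numpy as np

def algorithm1(a, b, sensitivity_and_variance, s_i, var_y, n_d=50, seed=0):
    rng = np.random.default_rng(seed)
    lam = rng.uniform(0.0, 1.0)                # step 1: sample lambda_i
    width = np.sqrt(lam) * (b - a)             # updated width sqrt(lam)*(b-a)
    prods = []
    for _ in range(n_d):
        a_t = rng.uniform(a, b - width)        # step 2: sample lower bound
        b_t = a_t + width                      # step 3: matching upper bound
        s_t, var_t = sensitivity_and_variance(a_t, b_t)   # step 4
        prods.append(s_t * var_t)
    expected = np.mean(prods)                  # step 5
    zeta = (s_i * var_y - expected) / var_y    # step 6: Eq. (9)
    return lam, zeta
```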
A method for computing the variance-based sensitivity index function ζi using Gauss–Legendre quadrature is given in Algorithm 2.
Algorithm 2: Computing the variance-based sensitivity index function for a uniform distribution using Gauss–Legendre quadrature.
1 Sample $\lambda_i$ from a uniform distribution on the interval $[0, 1)$;
2 Let the lower bound for the mean be $\mu_l = a_i + \sqrt{\lambda_i}(b_i - a_i)/2$ and the upper bound be $\mu_u = b_i - \sqrt{\lambda_i}(b_i - a_i)/2$;
for $j = 1$ to $N_d$ do
3 Let $\mu_j$ be the $j$th Gauss–Legendre quadrature node on the interval $[\mu_l, \mu_u]$;
4 Let $\tilde{a}_i = \mu_j - \sqrt{\lambda_i}(b_i - a_i)/2$ and $\tilde{b}_i = \mu_j + \sqrt{\lambda_i}(b_i - a_i)/2$;
5 Compute $\tilde{S}_i^{(j)}$ and $\widetilde{\operatorname{Var}}(Y)^{(j)}$ for the distribution $\mathcal{U}(\tilde{a}_i, \tilde{b}_i)$;
end
6 Compute $\mathbb{E}[\tilde{S}_i\, \widetilde{\operatorname{Var}}(Y)] = \frac{1}{\mu_u - \mu_l}\sum_{j=1}^{N_d} w_j\, \tilde{S}_i^{(j)}\, \widetilde{\operatorname{Var}}(Y)^{(j)}$, where $w_j$ are the Gauss–Legendre quadrature weights on the domain $[\mu_l, \mu_u]$;
7 Compute variance-based sensitivity index function $\zeta_i$ using Eq. (9) for $\lambda_i$.
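The quadrature variant replaces the inner sampling loop with Gauss–Legendre nodes over the admissible means of the updated distribution; again a sketch with the same hypothetical helper as above:

```python
# Sketch of Algorithm 2: Gauss-Legendre quadrature over the updated mean.
import numpy as np

def algorithm2(a, b, sensitivity_and_variance, s_i, var_y, n_q=10, seed=0):
    rng = np.random.default_rng(seed)
    lam = rng.uniform(0.0, 1.0)                  # step 1
    half = 0.5 * np.sqrt(lam) * (b - a)          # half-width of updated dist.
    mu_l, mu_u = a + half, b - half              # step 2: admissible means
    x, w = np.polynomial.legendre.leggauss(n_q)  # nodes/weights on [-1, 1]
    mu = 0.5 * (mu_u - mu_l) * x + 0.5 * (mu_u + mu_l)  # map to [mu_l, mu_u]
    w = 0.5 * (mu_u - mu_l) * w                  # rescale weights
    prods = np.array([np.prod(sensitivity_and_variance(m - half, m + half))
                      for m in mu])              # steps 3-5
    expected = (w @ prods) / (mu_u - mu_l)       # step 6: mean over the means
    zeta = (s_i * var_y - expected) / var_y      # step 7: Eq. (9)
    return lam, zeta
```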
Normal Distribution.
The procedure for computing the variance-based sensitivity index function $\zeta_i$ for a normal distribution is similar to that for a uniform distribution. Let the original distribution be given by $\mathcal{N}(\mu_i, \sigma_i^2)$ and the updated distribution by $\mathcal{N}(\tilde{\mu}_i, \tilde{\sigma}_i^2)$, where $\tilde{\sigma}_i^2 = \lambda_i \sigma_i^2$. A method for computing $\zeta_i$ by sampling over possible distributions is given in Algorithm 3. Because the support of the normal distribution is infinite, we need to bound the mean of the updated distribution in order to sample over it. This is similar to the method for the uniform distribution, where we only allow the updated distributions to be confined to the support of the original distribution. We therefore only allow the mean of the updated distribution to vary within the interval $[\mu_{\min}, \mu_{\max}]$, where $\mu_{\min}$ is the minimum allowable mean and $\mu_{\max}$ is the maximum allowable mean of the updated distributions. Here we choose $\mu_{\min}$ and $\mu_{\max}$ such that $\mu_{\min} - 3\tilde{\sigma}_i = \mu_i - 3\sigma_i$ and $\mu_{\max} + 3\tilde{\sigma}_i = \mu_i + 3\sigma_i$. This is also illustrated in Fig. 3. One could choose wider bounds, but we do not want the updated distribution to extend too far into the tails of the original distribution, because these regions had a low probability in the original distribution.
Algorithm 3: Computing the variance-based sensitivity index function for a normal distribution using sampling.
1 Sample $\lambda_i$ from a uniform distribution on the interval $[0, 1)$;
for $j = 1$ to $N_d$ do
2 Sample $\tilde{\mu}_i$ from a uniform distribution on the interval $[\mu_{\min}, \mu_{\max}]$;
3 Compute $\tilde{S}_i^{(j)}$ and $\widetilde{\operatorname{Var}}(Y)^{(j)}$ for the distribution $\mathcal{N}(\tilde{\mu}_i, \lambda_i \sigma_i^2)$;
end
4 Compute $\mathbb{E}[\tilde{S}_i\, \widetilde{\operatorname{Var}}(Y)] = \frac{1}{N_d}\sum_{j=1}^{N_d} \tilde{S}_i^{(j)}\, \widetilde{\operatorname{Var}}(Y)^{(j)}$;
5 Compute variance-based sensitivity index function $\zeta_i$ using Eq. (9) for $\lambda_i$.
Again, instead of sampling over the distributions, we can also compute $\mathbb{E}[\tilde{S}_i\, \widetilde{\operatorname{Var}}(Y)]$ using Gauss–Legendre quadrature, as given in Algorithm 4.
Algorithm 4: Computing the variance-based sensitivity index function for a normal distribution using Gauss–Legendre quadrature.
1 Sample $\lambda_i$ from a uniform distribution on the interval $[0, 1)$;
2 Let the lower bound for the mean be $\mu_l = \mu_{\min}$ and the upper bound be $\mu_u = \mu_{\max}$;
for $j = 1$ to $N_d$ do
3 Let $\mu_j$ be the $j$th Gauss–Legendre quadrature node on the interval $[\mu_l, \mu_u]$;
4 Compute $\tilde{S}_i^{(j)}$ and $\widetilde{\operatorname{Var}}(Y)^{(j)}$ for the distribution $\mathcal{N}(\mu_j, \lambda_i \sigma_i^2)$;
end
5 Compute $\mathbb{E}[\tilde{S}_i\, \widetilde{\operatorname{Var}}(Y)] = \frac{1}{\mu_u - \mu_l}\sum_{j=1}^{N_d} w_j\, \tilde{S}_i^{(j)}\, \widetilde{\operatorname{Var}}(Y)^{(j)}$, where $w_j$ are the Gauss–Legendre quadrature weights on the domain $[\mu_l, \mu_u]$;
6 Compute variance-based sensitivity index function $\zeta_i$ using Eq. (9) for $\lambda_i$.
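The mean bounds used in Algorithms 3 and 4 can be computed as in the short sketch below, under our reconstruction above in which the $\pm 3\tilde{\sigma}$ range of the updated distribution is kept inside the $\pm 3\sigma$ range of the original one (an assumption, consistent with the description of Fig. 3):

```python
# Mean bounds for the updated normal distribution (our reconstruction).
import numpy as np

def normal_mean_bounds(mu, sigma, lam):
    sigma_t = np.sqrt(lam) * sigma           # updated std dev, Var~ = lam*Var
    mu_min = mu - 3.0 * (sigma - sigma_t)    # from mu_min - 3*sigma_t = mu - 3*sigma
    mu_max = mu + 3.0 * (sigma - sigma_t)    # from mu_max + 3*sigma_t = mu + 3*sigma
    return mu_min, mu_max

# Example: half the variance removed (lam = 0.5) for N(0, 1)
print(normal_mean_bounds(0.0, 1.0, 0.5))     # approx (-0.879, 0.879)
```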
Triangular Distribution.
For a triangular distribution with minimum $a_i$, maximum $b_i$, and most-likely value $c_i$, the variance is
$$ \operatorname{Var}(X_i) = \frac{a_i^2 + b_i^2 + c_i^2 - a_i b_i - a_i c_i - b_i c_i}{18}. \qquad (12) $$
We consider two strategies for defining the family of reasonable updated distributions. The first strategy considers distributions whose shape does not change. That is, if all distributions were mapped to the same domain, they would overlay one another. The maximum allowable shift in mean is then constrained by the fact that the support of the updated distribution should be within the support of the original distribution, i.e., $\tilde{a}_i \geq a_i$ and $\tilde{b}_i \leq b_i$. This strategy is illustrated in Figs. 4(a) and 4(b), and the algorithm for it is shown in Algorithm 5.
Algorithm 5: Computing the variance-based sensitivity index function for a triangular distribution using Gauss–Legendre quadrature where the shape of the distribution is kept constant.
1 Sample $\lambda_i$ from a uniform distribution on the interval $[0, 1)$;
2 Let the lower bound for $\tilde{a}_i$ be $a_i$ and the upper bound be $b_i - \sqrt{\lambda_i}(b_i - a_i)$;
for $j = 1$ to $N_d$ do
3 Let $(\tilde{a}_i, \tilde{b}_i, \tilde{c}_i)$ be $\left(x_j,\; x_j + \sqrt{\lambda_i}(b_i - a_i),\; x_j + \sqrt{\lambda_i}(c_i - a_i)\right)$, where $x_j$ is the $j$th Gauss–Legendre quadrature node on the interval $[a_i,\, b_i - \sqrt{\lambda_i}(b_i - a_i)]$;
4 Compute $\tilde{S}_i^{(j)}$ and $\widetilde{\operatorname{Var}}(Y)^{(j)}$ for the distribution $\mathcal{T}(\tilde{a}_i, \tilde{b}_i, \tilde{c}_i)$;
end
5 Compute $\mathbb{E}[\tilde{S}_i\, \widetilde{\operatorname{Var}}(Y)] = \frac{1}{(1 - \sqrt{\lambda_i})(b_i - a_i)}\sum_{j=1}^{N_d} w_j\, \tilde{S}_i^{(j)}\, \widetilde{\operatorname{Var}}(Y)^{(j)}$, where $w_j$ are the Gauss–Legendre quadrature weights on the domain $[a_i,\, b_i - \sqrt{\lambda_i}(b_i - a_i)]$;
6 Compute variance-based sensitivity index function $\zeta_i$ using Eq. (9) for $\lambda_i$.
In the second strategy, the most-likely value of the distribution is kept fixed and we sample distributions around it, as proposed by Allaire [36]. Thus, we consider $\tilde{c}_i = c_i$. At the same time, the support of the updated distribution has to be within the support of the original distribution, i.e., $\tilde{a}_i \geq a_i$ and $\tilde{b}_i \leq b_i$. The algorithm for computing the variance-based sensitivity index function using this strategy is shown in Algorithm 6.
Algorithm 6: Computing the variance-based sensitivity index function for a triangular distribution where the most-likely value is kept constant.
1 Sample $\lambda_i$ from a uniform distribution on the interval $[0, 1)$;
%Compute minimum $\tilde{a}_{i,\min}$
2 Solve for $\tilde{a}_{i,\min}$ using Eq. (12) with $\tilde{b}_i = c_i$ and $\tilde{c}_i = c_i$;
if $\tilde{a}_{i,\min} < a_i$ then
3 $\tilde{a}_{i,\min} = a_i$;
end
%Compute maximum $\tilde{a}_{i,\max}$
4 Solve for $\tilde{b}_i$ using Eq. (12) with $\tilde{a}_i = c_i$ and $\tilde{c}_i = c_i$;
if $\tilde{b}_i > b_i$ then
5 Solve for $\tilde{a}_{i,\max}$ using Eq. (12) with $\tilde{b}_i = b_i$ and $\tilde{c}_i = c_i$;
else
6 $\tilde{a}_{i,\max} = c_i$;
end
for $j = 1$ to $N_d$ do
7 Sample $\tilde{a}_i$ from a uniform distribution on the interval $[\tilde{a}_{i,\min}, \tilde{a}_{i,\max}]$ and set $\tilde{c}_i = c_i$;
8 Solve for $\tilde{b}_i$ using Eq. (12);
9 Compute $\tilde{S}_i^{(j)}$ and $\widetilde{\operatorname{Var}}(Y)^{(j)}$ for the distribution $\mathcal{T}(\tilde{a}_i, \tilde{b}_i, \tilde{c}_i)$;
end
10 Compute $\mathbb{E}[\tilde{S}_i\, \widetilde{\operatorname{Var}}(Y)] = \frac{1}{N_d}\sum_{j=1}^{N_d} \tilde{S}_i^{(j)}\, \widetilde{\operatorname{Var}}(Y)^{(j)}$;
11 Compute variance-based sensitivity index function $\zeta_i$ using Eq. (9) for $\lambda_i$.
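The Eq. (12) solves in Algorithm 6 are quadratic in the unknown bound. A small sketch (ours, not the paper's code) of the solve for the upper bound given a candidate lower bound:

```python
# With the most-likely value c fixed and a candidate lower bound a_t, find
# the upper bound b_t of the triangular distribution whose variance equals
# the target lam * Var(X_i). Eq. (12) is quadratic in b_t.
import numpy as np

def tri_var(a, b, c):
    # Eq. (12): variance of a triangular distribution T(a, b, c)
    return (a*a + b*b + c*c - a*b - a*c - b*c) / 18.0

def solve_b(a_t, c, target_var):
    # b^2 - (a_t + c)*b + (a_t^2 + c^2 - a_t*c - 18*target_var) = 0
    p = a_t + c
    q = a_t*a_t + c*c - a_t*c - 18.0*target_var
    disc = p*p - 4.0*q
    if disc < 0.0:
        raise ValueError("no real solution for this (a_t, lambda)")
    return 0.5 * (p + np.sqrt(disc))   # larger root keeps b_t >= c

# Usage: halve the variance while keeping c fixed
a, b, c = 0.0, 1.0, 0.3
b_t = solve_b(0.1, c, 0.5 * tri_var(a, b, c))   # b_t ~ 0.804, within [c, b]
```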
HDMR-Based Surrogate.
Computing the sensitivity indices for every new distribution using the full model is prohibitively expensive. Reference [25] addressed this challenge by reusing the original Monte Carlo samples from the global sensitivity analysis by means of rejection sampling. We instead use the ANOVA-HDMR both to reduce the initial cost of performing the GSA and to evaluate updated distributions where rejection sampling is inaccurate. One such example is the triangular distribution: when the most-likely value of the updated distribution is almost at the tail of the original distribution, there are very few samples one can use to compute the sensitivity indices.
The component functions in the ANOVA-HDMR are approximated using basis function expansions,
$$ g_i(x_i) \approx \sum_{r=1}^{k} \alpha_r^{i}\, \varphi_r(x_i), \qquad g_{ij}(x_i, x_j) \approx \sum_{p=1}^{l} \sum_{q=1}^{l'} \beta_{pq}^{ij}\, \varphi_{pq}(x_i, x_j), $$
where $g_i(x_i)$ is approximated using basis functions $\varphi_r$, and $\alpha_r^{i}$ is the coefficient for the $r$th basis function, corresponding to the $i$th input variable. In the same way, $\beta_{pq}^{ij}$ is the coefficient corresponding to the $pq$th basis function and the $ij$th second-order component function. Here, we take $\varphi_{pq}(x_i, x_j) = \varphi_p(x_i)\, \varphi_q(x_j)$.
Because we need basis functions that are orthonormal with respect to the input distribution, different basis functions are needed for every distribution considered. For the uniform distribution, we use normalized shifted Legendre polynomials, and the integrals in Eqs. (16) and (17) are then solved efficiently using Gauss–Legendre quadrature. For the normal distribution, we use the normalized probabilists' Hermite polynomials, and we solve the integrals for every $\alpha_r^{i}$ and $\beta_{pq}^{ij}$ in Eqs. (16) and (17) efficiently using (probabilists') Gauss–Hermite quadrature. In building up the ANOVA-HDMR for the triangular distributions, one has to deal with the discontinuous derivative of the probability density function. This complicates finding the correct basis functions and also requires splitting the integral into multiple parts. Furthermore, the basis functions now also depend on the shape of the probability density function directly. This is in contrast to the uniform and normal distributions, where we map the support to $[0, 1]$ and to the standard normal, respectively. Here, we map the support to $[0, 1]$, which means that the basis functions depend on $\mu$ (the mapped most-likely value) and therefore depend directly on the shape of the probability density function. The first three basis functions for the triangular distributions are derived by Wang et al. [34] and are used here.
Following the strategy outlined in Fig. 2, we build up the ANOVA-HDMR for the original input distributions and then reuse that surrogate, instead of the black-box model, to find the new standard deviation of the QoI for every updated input distribution. In order to find the sensitivity indices for an updated input distribution, we build up a new ANOVA-HDMR for the new distribution by computing an integral over the new input space using the original ANOVA-HDMR. In other words, we re-solve for $\alpha_r^{i}$ and $\beta_{pq}^{ij}$ using Eqs. (16) and (17) with the new distributions, replacing the black-box model $f$ with the original ANOVA-HDMR.
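As an illustration of these projections, the following sketch computes $g_0$ and the first-order coefficients for independent uniform inputs on the unit hypercube, assuming Eqs. (16) and (17) are the usual orthonormal-projection integrals (we do not reproduce the paper's code; all function names are ours):

```python
# First-order ANOVA-HDMR coefficients for independent U(0, 1) inputs, using
# normalized shifted Legendre polynomials and a tensor Gauss-Legendre grid.
import itertools
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def phi(r, u):
    # normalized shifted Legendre polynomial: orthonormal wrt U(0, 1)
    return np.sqrt(2*r + 1) * Legendre.basis(r)(2.0*u - 1.0)

def first_order_coeffs(f, n_dim, order=8, n_q=9):
    x, w = leggauss(n_q)
    u, w = 0.5*(x + 1.0), 0.5*w           # map nodes/weights to [0, 1]
    nodes = np.array(list(itertools.product(u, repeat=n_dim)))
    wts = np.prod(np.array(list(itertools.product(w, repeat=n_dim))), axis=1)
    y = f(nodes)
    g0 = wts @ y                          # mean response g_0
    alpha = np.zeros((n_dim, order))
    for i in range(n_dim):
        for r in range(1, order + 1):     # alpha_r^i = E[f(X) * phi_r(X_i)]
            alpha[i, r - 1] = wts @ (y * phi(r, nodes[:, i]))
    return g0, alpha

# By orthonormality, S_i follows as sum_r (alpha_r^i)^2 / Var(Y).
```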
We note that other choices of surrogate modeling technique are possible. Our choice of HDMR surrogate model is based on our goal of estimating variance-based sensitivity indices. An HDMR surrogate model approximates the very subfunctions needed to compute these sensitivities, thus it is a natural choice in this setting. In particular, a polynomial chaos expansion might be a suitable alternative choice [37,38], particularly when tailored for computation of global sensitivity indices [39]. One limitation of HDMR is that for higher-dimensional problems (e.g., greater than dimension five or six), evaluation of the integrals to estimate the coefficients in the HDMR expansion becomes computationally expensive, even with quadrature rules. One way to address this is to use the cut-HDMR [30], which we employ in a high-dimensional example in Sec. 4. A polynomial chaos expansion could address high-dimensional problems by using a sparse polynomial basis and evaluating the coefficients with regression.
Results
Test Function Analysis.
We first apply the method to the Ishigami function,
$$ f(\mathbf{x}) = \sin(x_1) + a \sin^2(x_2) + b\, x_3^4 \sin(x_1), \qquad (18) $$
with $a = 5$ and $b = 0.1$, as used in Ref. [40]. Here, the $X_i$'s are considered to be independent and uniformly distributed on $[-\pi, \pi]$. The main effect sensitivity indices for this function and these distributions are $S_1 = 0.4007$ and $S_2 = 0.2881$, and the only nonzero interaction term is $S_{13} = 0.3111$. These were computed using the analytical expressions in Ref. [16].
The variance-based sensitivity index function as computed using Algorithm 2 is shown in Fig. 5. These results were generated using an ANOVA-HDMR consisting of up to eighth-order basis functions and nine quadrature points in one dimension. With three input variables, we therefore need a total of 729 function evaluations of Eq. (18). This is a substantial reduction in the number of function evaluations required: in comparison, the sampling approach in Ref. [25] needed 4096 function evaluations. We compare the accuracy of our method and the method in Ref. [25] against the analytical average main effect sensitivity indices in Table 1. We see that our method provides more accurate results with considerably fewer samples.
Table 1: Main effect and average main effect sensitivity indices for the function in Eq. (18), computed with the Monte Carlo (MC) method of Ref. [25], the ANOVA-HDMR method of this work, and the analytical (exact) expressions.

| | X1, MC | X1, HDMR | X1, Exact | X2, MC | X2, HDMR | X2, Exact |
|---|---|---|---|---|---|---|
| Main effect sensitivity index | 0.4074 | 0.3999 | 0.4007 | 0.2870 | 0.2886 | 0.2881 |
| Average main effect sensitivity index | 0.06818 | 0.06861 | 0.06871 | 0.03211 | 0.03162 | 0.03085 |
The second test case considers a function with normally distributed inputs. To build up the ANOVA-HDMR, we use up to eighth-order basis functions and ten quadrature points in each dimension. For three input parameters, this results in a total of 1000 function evaluations required to build up the ANOVA-HDMR, a substantial reduction from the 65,536 function evaluations used in the rejection sampling method of Ref. [25]. One thousand samples are still a substantial number for a three-parameter problem, but we note that this is an inherently complicated function, and that the infinite support of the normal distribution requires us to approximate the function over a wide range. In practice, one could choose the number of samples and the order of the basis functions adaptively by monitoring changes in the approximation; here, we analyze the convergence for different numbers of samples and basis functions by estimating the error between the surrogate and the actual function. The error is defined as
$$ \varepsilon = \frac{1}{D} \int \left( g(\mathbf{x}) - \tilde{g}(\mathbf{x}) \right)^2 p(\mathbf{x})\, \mathrm{d}\mathbf{x}. \qquad (20) $$
Table 2: Distribution parameters for the 22 uncertain design variables (uniform distributions), taken from Refs. [43,44]. Parameter symbols and values lost in extraction are left blank.

| Parameter | Units | Definition | Lower bound | Upper bound |
|---|---|---|---|---|
| Sth | — | Turbine cooling Stanton number | 0.094 | 0.096 |
| | — | Turbine cooling film effectiveness factor | 0.315 | 0.325 |
| | — | Low pressure compressor efficiency | 0.936 | 0.937 |
| | — | High pressure compressor efficiency | 0.903 | 0.905 |
| | — | Low pressure turbine efficiency | 0.875 | 0.877 |
| | — | High pressure turbine efficiency | 0.870 | 0.872 |
| | K | Turbine metal temperature | 1 172 | 1 272 |
| | K | Takeoff turbine inlet total temperature | 1 783 | 1 883 |
| | K | Cruise turbine inlet total temperature | 1 541.5 | 1 641.5 |
| OPRD | — | Overall pressure ratio | 24.2 | 28.2 |
| | — | Fan pressure ratio | 1.609 | 1.611 |
| | psi | Maximum allowable fuselage skin pressurization stress | 14 250 | 15 750 |
| | psi | Maximum allowable fuselage shell bending stress | 28 500 | 31 500 |
| | psi | Maximum allowable wing and tail spar cap stress | 29 500 | 30 500 |
| | psi | Maximum allowable wing and tail spar web shear stress | 19 000 | 21 000 |
| | psi | Young's modulus wing and tail spar cap | | |
| | kg/m³ | Fuselage pressure-skin material density | 2 672 | 2 726 |
| | kg/m³ | Fuselage bending-material density | 2 672 | 2 726 |
| | kg/m³ | Wing and tail spar cap material density | 2 672 | 2 726 |
| | kg/m³ | Wing and tail spar web material density | 2 672 | 2 726 |
| | — | Maximum lift coefficient perpendicular to the chord of the wing | 2.2 | 2.3 |
| CL | — | Cruise aircraft lift coefficient | 0.56714 | 0.58714 |
Here, $g$ is the truth model, $\tilde{g}$ is the truncated ANOVA-HDMR model, and $D$ is the variance of the truth output quantity of interest $Y$. The integral in Eq. (20) is evaluated using quasi-Monte Carlo sampling with 65,536 samples (the same number of samples as used in Ref. [25]).
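A minimal sketch of this error estimate, using SciPy's Sobol' sequence as the quasi-Monte Carlo generator (function names are ours; shown for uniform inputs on a box, while other input densities would require mapping the low-discrepancy points through the inverse CDF):

```python
# Relative L2 error of a surrogate, Eq. (20), via quasi-Monte Carlo.
# 2**16 = 65,536 points matches the sample count quoted above.
import numpy as np
from scipy.stats import qmc

def surrogate_error(g, g_hdmr, lower, upper, m=16, seed=0):
    sampler = qmc.Sobol(d=len(lower), seed=seed)
    u = sampler.random_base2(m=m)               # 2**m points in [0, 1)^d
    x = qmc.scale(u, lower, upper)              # map to the input box
    y, y_hat = g(x), g_hdmr(x)
    return np.mean((y - y_hat)**2) / np.var(y)  # Eq. (20) with D = Var(Y)
```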
Using an ANOVA-HDMR surrogate with up to eighth-order basis functions and nine quadrature points in each dimension (which yields the error shown in Fig. 6), the main effect sensitivity indices are found, with $X_3$ having the largest index. GSA-based factor prioritization would then lead to the conclusion to focus research on uncertainty reduction in $X_3$. The distributional sensitivity analysis results in Fig. 7(a), however, show that it might be more worthwhile to invest in uncertainty reduction in $X_2$. This is also indicated by the average main effect sensitivity indices in Fig. 7(b), again found by taking the expectation over $\Lambda_i$. Those indicate a different ranking for factor prioritization compared to the GSA results. These results were generated using Algorithm 4.
Aircraft Conceptual Design.
The conceptual sizing of a commercial jetliner is considered using the transport aircraft sizing and optimization tool (TASOPT). This tool uses low-order physics-based aircraft sizing models with minimal reliance on empirical and historical data, making it appropriate over a wide range of possible designs [41]. In this work, the tool is used in sizing mode, not in optimization mode. That means that the overall configuration (e.g., sweep angle, material properties) is fixed, but other quantities, such as the subcomponent weights, overall weight, span, and surface area of the wings and tails, are allowed to vary. Two different aircraft configurations are considered and the different impacts of uncertainty on each are investigated.
Problem Setup.
Two different aircraft are considered: the Boeing 737-800 and the D8.6 double-bubble conceptual design. For the Boeing 737-800, we consider a sizing mission with a range of 2950 nautical miles, with 180 passengers, at a cruise altitude of 35,000 ft. The D8.6 flies in the same class as the Boeing 737-800, thus we consider the same mission: carrying 180 passengers over a range of 2950 nautical miles. The D8.6 is part of the D8 family developed as part of the NASA N+3 project [42]. The goal of that project was to produce a design with, among other attributes, 70% less fuel burn and 75% less landing and take-off NOx emissions, relative to the current Boeing 737-800. In order to achieve these ambitious goals, advanced new technologies are employed in the design, many of which are still in development. This implies uncertainties in the expected gains from these developments, and therefore in the performance of the aircraft.
In our study, two factors are considered to be uncertain: the overall pressure ratio of the engine, OPR, and the cruise Mach number, M. OPR represents engine performance, allowing for a trade-off between engine technology and mission parameters. For the Boeing 737-800, OPR and M are modeled as uniform random variables; see Refs. [43,44] for details on how those distributions were derived. For the D8.6, OPR and M are modeled as triangular random variables. These distributions are derived using a combination of historical data and expert opinion, as described in Refs. [44,45]. As the quantity of interest, we consider PFEI, the fuel energy consumption per payload-range.
This case study provides an opportunity to study the influence of nonlinear responses on the DSA results. The Boeing 737-800 flies at a high Mach number ($M = 0.78$), and we expect a nonlinear effect on the fuel efficiency as the Mach number increases. The D8.6, however, is designed to fly at a lower Mach number for higher efficiency, and we therefore expect a more linear response.
Sensitivity Analysis.
In order to perform the sensitivity analysis, an ANOVA-HDMR surrogate is created for each aircraft. For the Boeing 737-800, nine quadrature points per dimension and up to seventh-order basis functions are used, requiring a total of 81 TASOPT function evaluations. The reason we need a fairly large number of samples is that we expect the response of PFEI with respect to Mach number to be nonlinear. For the D8.6, we use a five-point quadrature scheme with up to third-order basis functions. This combination results in a low error relative to the true model (as shown in Fig. 10(b)) and ensures that the first-order sensitivity indices are converged. Because the domain is split into four parts to account for the discontinuous derivative of the triangular distribution, we need a total of 100 TASOPT function evaluations to create the surrogate for the D8.6.
A GSA on the Boeing 737-800 model gives main effect sensitivity indices of 49% for OPR and 51% for M. For the D8.6, we find that the main effect sensitivity index for OPR is 27% and that for M is 73%. The variance-based sensitivity index functions and the average main effect sensitivity indices for the Boeing 737-800 and the D8.6, generated using Algorithm 6, are shown in Figs. 8 and 9, respectively. The main observation in Fig. 8 is that the variance-based sensitivity index function for M is nonlinear. The global sensitivity indices indicate that OPR and M are equally important in uncertainty reduction of the QoI. However, the DSA sensitivity index function indicates that for partial variance reductions, OPR is more important. In this case, DSA allows a designer to make a more informed decision about where to reduce uncertainty. For the D8.6, the variance-based sensitivity index functions are linear and the ranking is therefore the same between GSA and DSA.
The nonlinear variance-based sensitivity index function for the Mach number of the Boeing 737-800 is the result of the nonlinear response surface of PFEI as a function of Mach number and OPR, as shown in Fig. 10. The response surface of the D8.6 is much smoother, resulting in more linear variance-based sensitivity index functions. The nonlinear response of PFEI as a function of Mach number can be explained by looking at the configuration of the Boeing 737-800, which is designed to have its highest efficiency at its design Mach number of M = 0.78. This means that the sweep angle of the wing is chosen such that enough lift can be provided while avoiding large drag increases due to high Mach number flow along the chord line of the airfoil. Therefore, when one sizes the aircraft for a different Mach number but does not change the configuration (e.g., CL or sweep angle), the drag and corresponding fuel consumption are expected to go up for both an increase and a decrease in Mach number. For a decrease in Mach number, the sweep angle is too large for that condition, resulting in more drag in the spanwise direction and increased fuel burn. For higher Mach numbers, the flow along the chord line of the airfoil is highly transonic, resulting in a drag increase and hence a fuel consumption increase. The response of PFEI as a function of Mach number would be closer to linear if CL and the Mach number were allowed to vary in the aircraft sizing.
High-Dimensional Aircraft Design Problem.
Finally, we apply the method to a 22-dimensional aircraft design problem. Specifically, we determine the influence of 22 uncertain design variables on quantities of interest for the conceptual design of the Boeing 737-800 from Sec. 4.2. However, building up a 22-dimensional ANOVA-HDMR using the methods described so far would be computationally intractable, because the number of required tensor-grid sample points grows exponentially with dimension. Instead, if we neglect higher-order terms in the ANOVA-HDMR using the so-called cut-HDMR [30], the number of required samples is reduced dramatically. For this problem, we build up a surrogate using a fixed number of quadrature points in each dimension and up to fifth-order basis functions.
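To illustrate the scaling argument, the following rough evaluation count compares a full tensor grid against a first- plus second-order cut-HDMR; the counting formula is our assumption of the standard cut-HDMR construction (lines and planes of grid points through a single anchor point), not the paper's reported counts:

```python
# Rough evaluation counts: full tensor grid vs. first- plus second-order
# cut-HDMR, with q quadrature points per dimension and n inputs.
from math import comb

def tensor_count(n, q):
    return q**n

def cut_hdmr_count(n, q):
    # 1 anchor + n*(q-1) points on 1D cuts + C(n,2)*(q-1)^2 on 2D cuts
    return 1 + n*(q - 1) + comb(n, 2)*(q - 1)**2

print(tensor_count(22, 9))     # ~9.8e20: intractable
print(cut_hdmr_count(22, 9))   # 14,961: tractable
```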
We use a uniform distribution to characterize the uncertainty in all 22 design variables, with the distribution parameters listed in Table 2, taken from Refs. [43,44]. In order to find which parameters we should focus uncertainty reduction efforts on, we perform a distributional sensitivity analysis on this problem using Algorithm 2. Our quantities of interest are MTOW, the maximum take-off weight of the aircraft; PFEI, the fuel energy consumption per payload-range; and L/D, the lift-to-drag ratio of the aircraft during cruise.
The average main effect sensitivity indices from the DSA are shown in Fig. 11. They provide an informative ranking of the influence of the input parameters on our quantities of interest. We find that the propulsion uncertainties (including OPRD), the structural uncertainties, and an aerodynamic uncertainty (CL) have the most influence on our three quantities of interest.
We find that the ranking between GSA and DSA does not change for the most important input parameters. However, rather than obtaining just a ranking of the input parameters, we can now quantify, using the variance-based sensitivity index functions, the reduction in uncertainty required in each design variable to achieve a desired reduction in uncertainty in the quantities of interest. These variance-based sensitivity index functions could then be combined with cost models for changes in the input distributions, together with requirements on uncertainty and cost, to identify new input distributions that meet those requirements.
Conclusion
A new formulation of the distributional sensitivity analysis method reduces its computational cost and makes it more widely applicable to probability distributions commonly used in engineering design. The paper formulates and illustrates different strategies for performing the sampling on these distributions. Application of the method to a case study in aircraft conceptual design demonstrates how the variance-based sensitivity index function provides useful information to the designer on where to target uncertainty reduction efforts. The results also show that when the output of interest depends nonlinearly on the uncertain input parameters, the distributional sensitivity analysis can lead to different conclusions about the relative importance of the inputs, compared to using a standard global sensitivity analysis. This is particularly important when design resources are limited and must be directed as effectively as possible.
A potential area of future work is to extend the methodology in this paper to perform distributional sensitivity analysis for correlated input variables. This would require use of an HDMR framework that incorporates correlated inputs, an example being the Generalized ANOVA-HDMR by Rahman [46]. Furthermore, the way in which we sample over new input distributions in the DSA framework would also need to be adapted to handle correlated inputs.
Acknowledgment
This work was supported in part by the NASA LEARN program through Grant No. NNX14AC73A, and by the United States Department of Energy Applied Mathematics Program, Awards DE-FG02-08ER2585 and DE-SC0009297, as part of the DiaMonD Multifaceted Mathematics Integrated Capability Center.
Following standard notation, a denotes the minimum value in the distribution, b the maximum value of the distribution, and c the most likely value of the distribution.