## Abstract

Micro- and nanomanufacturing capabilities have rapidly expanded over the past decade to include complex three-dimensional (3D) structure fabrication; however, the metrology required to accurately assess these processes via part inspection and characterization has struggled to keep pace. X-ray computed tomography (CT) is considered an ideal candidate for providing the critically needed metrology on the smallest scales, especially for internal features or inaccessible regions. X-ray CT supporting micro- and nanomanufacturing often pushes against the poorly understood resolution and variation limits inherent to the machines, which can distort or hide fine structures. In this work, we develop and experimentally verify a comprehensive analytical uncertainty propagation signal variation flow graph (SVFG) model for X-ray radiography to better understand resolution and image variability limits at the small scale. The SVFG approach captures, quantifies, and predicts variations occurring in the system that limit metrology capabilities, particularly in the micro/nanodomain. This work is the first step to achieving full uncertainty modeling of CT reconstructions and provides insight into improving X-ray attenuation imaging systems. The SVFG methodology framework is applied to generate a complete basis set of functions describing the major sources of variation in radiographs. Five models are identified, covering variation in energy, intensity, length, blur, and position. Radiographic system experiments are defined to measure the parameters required by the SVFGs. Best practices are identified for these measurements. The SVFG models are confirmed via direct measurement of variation and predict variation within 30% on average.

## 1 Introduction

Recent advances in micro-/nanomanufacturing [1–4] have significantly expanded designers' capabilities to access complex three-dimensional (3D) structures. This marks a major advance over mature lithographically based approaches and has provided new opportunities to access previously theorized structures like metamaterials [5–7]. However, these new processes and designs have struggled to deliver on their full promise, due in part to the difficulty of ensuring repeatable fabrication, a challenge also noted in the macroscale additive manufacturing field [8]. Metrology can provide answers to these challenges through feedback to understand fabrication physics and stabilize fabrication processes. Unfortunately, most existing precision micro-/nanometrology techniques are planar and surface limited, thus failing to capture the unique challenges posed by new microfabrication methods including internal features, spatially varying properties, and irregular shapes. X-ray computed tomography (CT) is theoretically well suited to provide metrology for a wide variety of such parts and provides a viable alternative to the existing techniques.

X-ray CT metrology offers great potential for metrology of complex micro-/nanoscale structures through its combination of 3D imaging and nondestructive evaluation of internal features and material distribution, all potentially at sizes below what is possible with optical techniques [9,10]. High performance systems (e.g., nano-CT [11]) have pushed the limits of X-ray CT down to the smallest scales and up against the resolution and variation limits inherent to the machines, which can distort or hide fine structures. The purpose of this work is to develop a complete, generalizable analytical uncertainty propagation model for X-ray radiography to better understand these system variation limits. This model is built using signal variation flow graph (SVFG) techniques, first described in Ref. [12]. The model and insights will allow users to capture, quantify, and predict variations occurring in the system, promoting the systems toward rigorous X-ray metrology with improved resolution and uncertainty bounds on CT reconstructions.

X-ray CT is an optimal technique to provide dimensional metrology of complex small-scale structures, such as those produced by additive manufacturing, due to its capability to nondestructively measure internal features and multimaterial components [9,10,13,14]. X-ray CT is mainly used in the following fields: medical imaging, material analysis, and, recently, dimensional metrology [9,10,13,15–19]. However, the use of X-ray CT in manufacturing metrology has been limited due to (1) a lack of international standards for metrological testing and uncertainty assessment [13] and (2) an often significant and poorly understood level of variability in the metrology results at the micro/nanoscale.

The lack of standards is partly due to the difficulty involved in calculating uncertainty for CT measurements compared to traditional measurement systems, such as coordinate measuring machines (CMMs). X-ray CT contains intrinsic uncertainties in its multiple complex components, experimental setup, and tomographic reconstruction algorithms. Some error sources during data acquisition and reconstruction include temperature, mechanical vibrations, stage alignment errors, source emission profile, and detector sensing variability. These sources come from the environment and from all system components, like the X-ray source and detector. Additionally, CT systems are used in many different measurement tasks, so uncertainties can vary widely with the task and object of interest.

This work seeks to develop a consistent analytical framework for formulating a total uncertainty budget for X-ray images, as a first of many steps toward a total uncertainty budget for CT reconstructions. Here, we introduce the theory, carry out first steps to the full system state measurement, and show some initial results demonstrating the feasibility of the method for uncertainty prediction. For the sake of brevity, only the motivation, overview, and results of the model are contained in the paper. The full details of the model and metrology process are to be found in the appendices. Future work will chart the process of completing the full system state measurements and extrapolating the uncertainty budget from the two-dimensional (2D) radiographic domain to the 3D CT reconstruction domain.

## 2 Background

### 2.1 Current Methods.

A standardized method for calculating uncertainty in X-ray CT measurements does not exist; instead several different approaches are used in practice: (i) analytical expressions for uncertainty budgets, (ii) theoretical methods using simulations, (iii) experimental methods, (iv) expert knowledge and assessment, and (v) combinations of these methods.

### 2.2 Uncertainty Budget.

Uncertainty budgets are the most common method for traditional instruments but are not widely used for X-ray CT; where applied, they are mostly adopted from the substitution method of the ISO 15530-3 guidelines in compliance with the Guide to the Expression of Uncertainty in Measurement (GUM) [20,21]. The following case studies primarily use the substitution method, which accounts only for variations of the measurement and any systematic errors/offsets, neglecting other physical contributions to the uncertainty of the CT measurements [16,22–26].

### 2.3 Simulation Methods.

In simulation methods, the measurement uncertainty is evaluated over a multitude of simulated samples, often via Monte Carlo simulation, in a virtual experimental setup. This method also requires knowledge of all influence quantities and their functional dependencies, but has the potential to reduce experimental efforts and include some of the physics-based contributions such as beam hardening and scattering [23,27,28]. Kasperl et al. developed deterministic software enabling quick simulations of the CT recording process [29]. Other similar simulation-based applications have been used as well [27,30].
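As a minimal illustration of the Monte Carlo idea behind these simulation methods, the sketch below draws influence quantities from assumed distributions, runs a toy virtual measurement many times, and reports the spread of the results. The measurement model, the influence quantities, and all numeric values are hypothetical placeholders, not parameters of any system studied here.

```python
import random
import statistics

random.seed(1)

def virtual_measurement():
    """Toy virtual CT length measurement with two influence quantities.

    Both error distributions are illustrative assumptions, not
    characterized properties of a real instrument.
    """
    magnification_error = random.gauss(0.0, 1e-4)     # relative scale error
    edge_detection_error_um = random.gauss(0.0, 0.3)  # localization error
    true_length_um = 500.0
    return true_length_um * (1.0 + magnification_error) + edge_detection_error_um

# Repeat the virtual experiment and take the spread as the uncertainty.
results = [virtual_measurement() for _ in range(20000)]
mc_uncertainty_um = statistics.stdev(results)
```

With independent Gaussian contributions, the spread should approach the quadrature sum of the two terms, here about 0.30 μm.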

### 2.4 Calibration Artifact.

Compared to the methods previously mentioned, calculating measurement uncertainty experimentally with calibrated workpieces under established CMM guidelines, ISO/TS 15530-3 [31], is regarded as the most promising method. Unfortunately, this method only analyzes a single point in the parameter space, so it is not widely generalizable. This method is applied with different workpieces in the following works [16,21,32,33].

### 2.5 Expert Knowledge.

Certified X-ray operators (radiographers) at minimum follow standardized guidelines for proper acquisition and for determining the system's performance [34–39]. In many cases, an estimate of the variability in a CT measurement can be obtained by running the system following predefined standards and observing image quality indicators (IQIs).

### 2.6 Combined Methods.

It is common to use a combination of methods to quantify X-ray CT measurement uncertainty. Bartscher et al. combine the standard and experimental methods following GUM and DIN 1319-3 [40]. Villarraga-Gómez et al. followed guidelines for CMMs (ISO 15530 series) to obtain the uncertainty of the CT measurements [41]. Most current methods capture only a small subset of the possible error terms; for example, Ferrucci et al. related geometrical misalignments, particularly angular misalignments, and error motions in the construction and operation of CT instruments to *x*, *y*, *z* coordinate errors in the tomographic volume, which is linked to measurement uncertainty in Ref. [42].

### 2.7 Signal Variation Flow Graph Method.

As stated by Kruth et al., there is a need for new procedures and standards for not only accuracy specification but also for identification of individual error components [19]. The work presented herein aims to meet this need by characterizing and quantifying uncertainty in the X-ray CT system's individual acquisition components via SVFG techniques similar to the one presented in Ref. [12], covering nearly the full parameter space of the system.

The most common methods generally map the uncertainties over a single point or small volume of the system parameter space, providing little to no generalizability or invertibility for use in design. As noted by Hornberger et al. [43] “it is still hardly possible to generalize CT results and to transfer results obtained for individual work-pieces to other measuring objects, which (slightly) differ in size and form. As a consequence, CT users are usually faced with time consuming experimental work and tests.” The newly developed SVFG analytically maps the main physical interaction effects through the parameterized models of the system to the captured radiograph which results in the capability to generate uncertainty predictions over nearly the entire system parameter space.

## 3 Principles

### 3.1 Summary.

This section describes the principles of the SVFG technique and the uncertainty models generated using it. We focus on capturing the variation between the ideal signal and the measured signal. The ideal signal is defined to represent the signal output of a system whose full state behavior is known, with no variation in environment or during operation, and whose photon trajectories are not modified by the object.

A full CT uncertainty analysis is the result of two main steps. First, the radiography regime extends from the source to the collected image, which is 2D in the case of standard CT. The radiography domain maps all upstream uncertainties into the variation observed on the radiograph and captures all variation in the physical domain. Second, the reconstruction regime extends from the captured images to the reconstructed 3D image. This regime is purely in the mathematical domain and accounts for reconstruction algorithm artifacts/anomalies. This work focuses on building the model through the radiography regime as a first step in developing the larger CT uncertainty analysis.

The signal variation occurs in several forms at the radiograph. These forms are gathered into a complete basis set of functions describing all variation on the radiograph. The scale of each basis function is calculated independently via a specific SVFG. This includes zero-dimensional (0D) intensity noise (0DI), 0D energy noise (0DE), one-dimensional (1D) blur (1DB), 1D length (1DL), and 2D position (2DP). These models are orthogonal in that they each explore a space in the signal variation domain that cannot be reached by the other basis functions. The variation basis functions and associated SVFG models are split by output dimensionality, a term loosely used for categorization, and explained further below and in a previously released technical report [44].

### 3.2 Analysis Approach.

This work expands on a signal variation flow graph approach for propagating variation across physical domains that was first described in Ref. [12] and builds on previous work for X-ray uncertainty modeling [19,26,43,45,46], to create a geometric model for understanding small signal perturbation during radiographic image generation. Variation sources are identified from experiments, observations of hardware, and previous literature investigations of error and variation [19,43,47–52]. The stochastic variations are defined in the frequency domain via the Laplace variable *s*, and all frequency-dependent expressions are defined as functions of *s*. The models are monochromatic, but the analysis can be applied to a polychromatic source, as was done for the measurements, by splitting the source spectrum into a finite number of monochromatic sources, running a model for each, and then summing the calculated variances at the output.
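The monochromatic-splitting step can be sketched as follows. The discretized spectrum and the per-bin variance model below are hypothetical stand-ins (a simple Poisson-like counting term), not the paper's actual SVFG expressions; the point is only that independent monochromatic bins contribute variances that sum at the output.

```python
# Hedged sketch: split a polychromatic spectrum into monochromatic bins,
# evaluate a per-bin variance model, and sum the variances at the output.

def per_bin_variance(energy_keV, weight):
    """Toy stand-in for one monochromatic SVFG evaluation.

    Assumes Poisson-like counting noise (variance equal to the expected
    bin count) -- a placeholder, not the full model of this work.
    """
    mean_counts = 1e6 * weight   # hypothetical flux scale
    return mean_counts           # Poisson: var == mean

# Hypothetical discretized source spectrum: (energy in keV, relative weight)
spectrum = [(20, 0.1), (40, 0.3), (60, 0.4), (80, 0.2)]

# Independent bins -> variances add at the output.
total_variance = sum(per_bin_variance(e, w) for e, w in spectrum)
total_sigma = total_variance ** 0.5
```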

The output of a SVFG model is the parameter set to fully describe the variation basis function at the specified location on the detector; typically, this output matches the dimensionality of the variation type. Scalar terms are used for the 0DI and 0DE models. 1D plots are output to describe the 1DB model. The presented 1DL model only needs a scalar term to capture the presently modeled effects that are proportional to length. Future analyses may expand to other orders (zeroth or second-order) effects, which might be best understood via a 1D plot of error versus length. Vector terms are used to describe the 2DP model.

Reduced order linearizations are used to understand the effects of small-scale variations on the main signal. The small-scale nature of the perturbations helps make the calculations tractable. The signal flow is graphically depicted with superimposed variation sources and scaling terms. These signal variation flow graphs are generally sorted into regimes of shared physics, such as the source or detector. The SVFG then becomes a chain of regime model blocks. The equations described by the SVFG can be gathered into a single expression defining the output measurement. The analysis considers the signal flow in the frequency domain. The sensitivities to each of the variation inputs can be calculated via the derivative of the single expression with all variation terms set to 0. The resulting sensitivities then describe a filter mapping from the variation source to the output parameter as a function of frequency.
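The sensitivity-as-derivative step can be illustrated numerically. The two-input output function below is a hypothetical example, not an actual SVFG expression; the central finite difference evaluated with all variation terms at zero plays the role of the analytical derivative described above.

```python
# Hedged sketch: compute the sensitivity of an output expression to each
# small variation input via a central finite difference evaluated at zero.

def output(d_source, d_detector):
    """Toy measurement model: a nominal value of 1.0 perturbed by two
    variation inputs with different gains (illustrative only)."""
    return (1.0 + 2.0 * d_source) * (1.0 - 0.5 * d_detector)

def sensitivity(f, arg_index, n_args, h=1e-6):
    """d f / d x_i with all other variation terms held at 0."""
    up = [0.0] * n_args
    dn = [0.0] * n_args
    up[arg_index] = +h
    dn[arg_index] = -h
    return (f(*up) - f(*dn)) / (2 * h)

s_source = sensitivity(output, 0, 2)    # gain on the source perturbation
s_detector = sensitivity(output, 1, 2)  # gain on the detector perturbation
```

In the frequency-domain formulation, the same derivative taken on the full expression yields a filter, i.e., a sensitivity as a function of *s*.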


### 3.3 Model Dimensionality

#### 3.3.1 Zero-Dimensional.

Zero-dimensional variations are captured in a pointwise analysis at the detector, corresponding to pixels. The physics of interest corresponds to the generation of unwanted variation signals in photon sensing, both counting (0D intensity) and energy (0D energy) discerning. Intensity variation is driven by system perturbations and photon statistics. Energy variation is due to electron generation and capture statistics. Each pixel has a unique set of parameters for the model which produces a unique scalar value for intensity and energy measurement variation.
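The photon-statistics contribution to the 0D intensity variation can be illustrated with a minimal counting model. The flux and time values are hypothetical; the point is the standard Poisson result that relative intensity noise falls as the inverse square root of the collected count.

```python
# Hedged sketch of the photon-counting term in 0D intensity noise:
# for Poisson arrivals the count variance equals its mean, so the
# relative standard deviation scales as 1/sqrt(N).

def counting_noise(photon_rate, t_a):
    """Return (mean count, std of count, relative std) for one pixel."""
    mean = photon_rate * t_a
    std = mean ** 0.5            # Poisson: var == mean
    return mean, std, std / mean

mean, std, rel = counting_noise(photon_rate=1e4, t_a=1.0)   # 10,000 counts
# Quadrupling acquisition time halves the relative noise.
_, _, rel_4x = counting_noise(photon_rate=1e4, t_a=4.0)
```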

#### 3.3.2 One-Dimensional.

One-dimensional variations are found in an analysis of the linear profile measurement. They comprise the two main variations on the measurement: edge blur (1D blur) and length variation (1D length). The edge blur is driven by the stochastic distribution of X-ray trajectories passing through a given point on the object. Due to the large number of photons, this stochastic distribution becomes effectively deterministic at the detector. This is the only stochastic model that can be treated deterministically in an attempt to reverse the signal variation. Length variation is driven by spatial variations in the system component locations affecting magnification. While length variation is most generally plotted as a function of length, the presently analyzed variation terms were found to be proportional to length, so they can be fully described by a scalar term.

#### 3.3.3 Two-Dimensional.

Two-dimensional variations are found in an analysis of the image centroid location and are analyzed together as a vector quantity (2D position). The combined analysis is used to capture complex cross-axis coupling motions. The position variation is due to spatial variations in the system components affecting the chief ray projection onto the detector.

## 4 Models

This section provides a summary overview of each of the five major SVFGs that cover the forms of variation at the radiograph. A detailed description of the models, all parameters, and associated terminology can be found in Ref. [44], and an abbreviated summary for each is included below.

### 4.1 Zero-Dimensional Intensity

#### 4.1.1 Overview.

The 0D intensity (0DI) model maps the sources of variation in transmission readings captured by the detector pixels. The output of the model is the variance in repeat measurements of attenuation at the same detector spot over time *t _{c}*, taken via measurements of acquisition time *t _{a}*, assuming an object of uniform thickness and infinite planar extent. The model covers source physics, emission geometry, scatter effects, detector capture physics, and the algorithmic modification of the signal. This is the most complex model as it covers nearly all the physics models explored in this paper.

#### 4.1.2 Layout.

The multidomain overview of the 0DI model is shown in Fig. 1, with the main types of variation identified.

The model starts with the emission from the source and captures the rate of photons arriving at a given point with a finite solid angle. The 0DI model first calculates the current nominal and variation scale, then passes this through an angle and energy-dependent source gain function to determine the radiant intensity and spectral composition of emission, as shown in the middle right section of Fig. 2. The model next compares the relative motion of the source and detector to find the orientation of the point of interest (on the detector) relative to the source, as shown in the left portion of Fig. 2. The solid angle of the point is used to generate a photon flux term, the output of the model in Fig. 2.

The signal is next passed through (1) any appropriate filtering material and/or focusing optics, (2) the measured object, where attenuation and scatter play a role in altering the signal, (3) any further filtering elements, (4) the detector, where the quantum efficiency, pixel cross talk, and pixel dynamics are incorporated and read out as the measured photon rate from the detector at the point of interest (i.e., pixel location at the detector plane), and finally (5) the acquisition domain, where algorithms act on the rate to produce a transmission reading. Only the source SVFG is shown here for compactness; all details and the following element models can be found in Ref. [44].
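The way such a chain of regime blocks combines independent variation sources can be sketched as a sensitivity-weighted sum of variances. The gains and source variances below are hypothetical placeholders, not values from the 0DI model; the structure (each source's variance scaled by the squared output sensitivity, then summed) is the generic linearized-propagation step.

```python
# Hedged sketch of variance propagation through a chain of regime blocks:
# for independent sources, var_out = sum_i (S_i^2 * var_i), where S_i is
# the sensitivity of the output to source i. All numbers are illustrative.

# (sensitivity of output to source, variance of source)
sources = {
    "source_current":   (0.8, 1e-4),
    "object_scatter":   (0.3, 4e-4),
    "detector_readout": (1.0, 1e-5),
}

output_variance = sum(s * s * v for s, v in sources.values())
output_sigma = output_variance ** 0.5
```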

### 4.2 Zero-Dimensional Energy

#### 4.2.1 Overview.

The 0DE model maps the sources of signal variation in energy readings of photons recorded by an energy discerning detector. The output of the model is the variance in repeat measurements of the energy of a photon of energy Ψ over time *t _{c}*. The model is tuned to a semiconductor diode detector; however, the terms could be adapted with little difficulty to other types of detectors. The 0DE model was the simplest model due to the single physical domain and was built on a signal variation model laid out in previous work [51].

#### 4.2.2 Layout.

The 0DE layout follows the path of photon energy from arrival at the detector, through electron generation via a thermally dependent material property. The electron capture and calibration to energy are next applied to generate the measured energy value, which is the output of the model as shown in Fig. 3.

### 4.3 One-Dimensional Blur

#### 4.3.1 Overview.

The 1DB model maps the sources of variation in photon arrival location in images captured by the detector. The output is the variance in photon arrival location for all photons passing through a point in the (assumed thin) object over time *t _{c}*. The blur can be understood via a point spread function (PSF) that is convolved with the unblurred ground truth image. Finite emission and sensing geometries as well as relative motion and scatter can be modeled via PSFs. The width of the PSFs can be compared to detector pixel geometry to determine the relative importance of the variations.

#### 4.3.2 Layout.

The 1DB layout follows the path of photons from a variable start location on the source, given a variable angle due to scatter, and recorded at a point on the detector varying from the actual arrival location. The model references a ray emitted from the center of the source, through a specific point on the object, and impinging on the detector at a referenced point. The positional variation is considered as a perturbation around the defined ray. The convolution of the PSFs from source size, object scatter, detector cross talk, and motion creates a net blurring PSF as a model output, shown in Fig. 4.
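When each contribution is approximated as a Gaussian PSF (an assumption; the cross-talk PSF measured later in this work is better fit by exponentials), the convolution is again Gaussian and the widths add in quadrature, which gives a quick way to judge each term against the pixel size. The FWHM and pixel values below are hypothetical examples.

```python
import math

# Hedged sketch: for Gaussian PSFs, the FWHM of the convolution is the
# quadrature sum of the individual FWHMs. Values are illustrative only.

def net_fwhm(fwhms_um):
    """FWHM of the convolution of independent Gaussian PSFs."""
    return math.sqrt(sum(f * f for f in fwhms_um))

contributions_um = {
    "source": 3.8,    # finite spot size
    "detector": 2.0,  # pixel cross talk
    "motion": 0.2,    # component wander during acquisition
}

total_um = net_fwhm(contributions_um.values())
pixel_um = 13.5                            # hypothetical pixel pitch
significant = total_um > 0.5 * pixel_um    # crude importance check
```

Note how the 0.2 μm motion term is negligible next to the 3.8 μm source term: quadrature addition strongly favors the largest contributor.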

The system motion is calculated via a geometric analysis of the relative locations of components, propagated to the detector screen. The 1DB motion SVFG in Fig. 5 turns the six degree-of-freedom (DOF) motion of the components into the x,y vector effective wander of the defined ray on the detector screen.

### 4.4 One-Dimensional Length

#### 4.4.1 Overview.

The 1DL model maps the sources of variation in length measurements in images captured by the detector. The output is the variance in repeat measurements of point-to-point separation centered at the same detector spot over time *t _{c}*, taken via measurements of acquisition time *t _{a}*.

#### 4.4.2 Layout.

The 1DL layout in Fig. 6 follows the signal path of a small, finite separation between two points on an object, as projected on the radiograph, as the projection is scaled by the magnification, *γ*, defined by the component location. The three columns of the model correspond from left to right to the: source, object, and detector. Each component can move in full 6DOF motion, but only the *z*-component is necessary to understand the effect on magnification. The differential variation of magnification generated by each component is passed to the main signal path, then modulated by the various filtering physics in the system, as well as the calibration.
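The magnification dependence on the *z*-locations can be made concrete with the cone-beam relation γ = z_sd / z_so (source-to-detector over source-to-object distance). The sketch below uses the example geometry quoted in Sec. 5 (65.3 mm and 71 mm) and a hypothetical 1 μm object drift; the finite-difference derivative stands in for the differential sensitivity passed to the main signal path.

```python
# Hedged sketch of the 1DL magnification sensitivity: small z-motions of
# the components perturb measured lengths in proportion to length.

def gamma(z_so_mm, z_sd_mm):
    """Cone-beam magnification: source-to-detector over source-to-object."""
    return z_sd_mm / z_so_mm

z_so, z_sd = 65.3, 71.0       # example geometry (mm)
g0 = gamma(z_so, z_sd)

# First-order sensitivity of gamma to object z-motion (finite difference).
h = 1e-6
dg_dzso = (gamma(z_so + h, z_sd) - gamma(z_so - h, z_sd)) / (2 * h)

# A hypothetical 1 um object drift then gives a relative length error of:
rel_length_error = abs(dg_dzso) * 1e-3 / g0   # 1 um = 1e-3 mm
```

Analytically dγ/dz_so = −z_sd/z_so², so the relative length error from an object drift dz is simply dz/z_so, independent of z_sd.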

### 4.5 Two-Dimensional Position

#### 4.5.1 Overview.

The 2DP model maps the sources of time variation in image centroid in images captured by the detector. The output is the variance in repeat measurements of an object feature location centered at the same detector spot over time *t _{c}*, taken via measurements of acquisition time *t _{a}*. This model examines the intersection of the detector surface with a ray emanating from the source origin and passing through a specific point on an object. This is generally presented as a vector variance, to account for the 2D wandering motion.

#### 4.5.2 Layout.

The 2DP layout in Fig. 7 uses the same topology as the 1DB model, as both focus on identifying projection motion. The key difference is found in the filtering, where the 1DB model focuses on the effects within a single acquisition period, while the 2DP model focuses on the effects occurring over many acquisition periods.

## 5 System State Measurements

### 5.1 Overview.

This section outlines the measurements taken to capture the full system state as required by the models described above. The measurements may be understood more generally as an attempt to make all pertinent parameters of the system both quantifiable and measurable. The use for this information extends well beyond uncertainty models. The measurements in Table 1 include all elements for completeness of best practice. Not all measurements were feasible due to system design, time, and budget limitations; rather, the measurements below represent first steps toward full system state measurement. The values not explicitly measured were set to 0 or estimated as noted in the measurements section below. The source map, dark field, temperature variation, thermomechanical sensitivity, blur, pixel dynamics, and mechanical vibration measurements were carried out on a commercial X-ray CT system, a Zeiss (Pleasanton, CA) Xradia 510 Versa.

| Table 1: System state measurements |
| --- |
| Source map measurement |
| Dark field measurement |
| Detector spectral sensitivity measurement |
| Temperature variation measurement |
| Thermomechanical sensitivity measurement |
| Electrical voltage and current variation measurement |
| Blur measurement |
| Pixel dynamics measurement |
| Mechanical vibration measurement |
| Alignment error measurement |

Only the completed measurements will be described below. Further detail and description on all the measurements including suggested methods and best practices can be found in Ref. [44].

### 5.2 Measurements

#### 5.2.1 Source Map.

The source map measurement captures the spectrum and intensity of the source emission cone as a function of angle (*θ _{x}* and *θ _{y}*) around the chief ray of the X-ray system, for which both a full definition and a standard procedure for data collection are described in Ref. [44]. A single pixel detector was scanned over the detector area and used to capture a grid of photon flux measurements (Fig. 8).

The source was mapped to be a generally flat-topped profile with some minor variation across the high intensity region. An interesting trend is noted of higher intensity around the periphery, particularly on the high *θ _{y}*-axis side. This may be due to slight misalignment in the detector at the center, biasing one side over the other by about 1%. The dropoff in the *θ _{x}*-axis is consistent with approaching the edge of the emission cone; however, the range of the motion stage limited the extent of study.

#### 5.2.2 Dark Field.

The dark field measurement captures the sensing noise in the detector, known as the darkfield readings. The pixel output was captured with no illumination and converted to a power spectral density. A 49-pt (7 × 7) section at the center of the detector was used to minimize the effect of any damaged pixels; this was normalized out to find the single pixel intensity noise in the analysis below.

The measurements show that the darkfield noise can approximately be considered a white noise random walk (characteristic of the integration of a random carrier flow rate), with standard deviation growing as a constant times the square root of *t _{a}*. Normalizing to a rate by dividing by *t _{a}* leaves a factor of 1/*t _{a}* dependence in the power spectral density (PSD), as shown in Fig. 9(a). The acquisition time dependence has been removed and the noise PSD clearly plotted in Fig. 9(b) by dividing each acquisition total by √*t _{a}* rather than *t _{a}*. The normalization shows that the random walk model captures most of the noise scale; however, it leaves behind a slight offset between the 284 s acquisition measurements and the two shorter measurements. The longer acquisition times may be better captured by introducing higher order terms to the denominator.
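The random walk scaling can be demonstrated with a small simulation. The white carrier-rate model, step counts, and trial counts below are illustrative assumptions; the sketch only shows that dividing acquisition totals by √*t _{a}* collapses the spread across acquisition times, as used in the normalization above.

```python
import random
import statistics

random.seed(0)

# Hedged sketch: integrating a white carrier-rate noise over the
# acquisition time gives a total whose standard deviation grows as
# sqrt(t_a), so sqrt(t_a) normalization collapses the noise scale.

def dark_total(t_a_steps):
    """Integrate a white random carrier rate over t_a_steps samples."""
    return sum(random.gauss(0.0, 1.0) for _ in range(t_a_steps))

def normalized_std(t_a_steps, trials=3000):
    """Spread of sqrt(t_a)-normalized totals over repeated acquisitions."""
    samples = [dark_total(t_a_steps) / t_a_steps ** 0.5 for _ in range(trials)]
    return statistics.stdev(samples)

# After normalization the spread is comparable across a 16x time change.
s_short = normalized_std(16)
s_long = normalized_std(256)
```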

#### 5.2.3 Temperature Variation.

The temperature variation measurement captures the scale of thermal variation throughout the equipment and provides a rough estimate of the transmission filters linking ambient to component temperatures. Temperature sensors were placed throughout the CT system and the temperature time sequence was recorded.

The sensor noise removal was aided by the sensor noise profile, which was a white noise easily discernible from the colored thermal section of the spectrum. The complexities of low frequency effects were covered by a simple model of a low frequency gain and a single pole, scaled to an arbitrary order, as shown in Fig. 10. The fit model allowed adjustment of the amplitude, drop-off frequency, and falling slope. The total variance was retained via energy conservation scaling of the fit curve. The use of the low pass filter form also limited the complexity of the ambient-to-component filters.
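The fit form (a low frequency gain with a single pole raised to an adjustable order) can be written compactly as a PSD model. The parameter values below are hypothetical, not the fitted values of this work; they only show the model's flat floor below the pole and its adjustable falling slope above it.

```python
# Hedged sketch of the thermal-noise fit model: flat amplitude at low
# frequency, a drop-off (pole) frequency, and a falling slope set by an
# adjustable order. Parameter values are illustrative.

def thermal_psd_model(f_hz, amplitude, f_pole_hz, order):
    """PSD fit: ~amplitude below the pole, ~f^(-2*order) above it."""
    return amplitude / (1.0 + (f_hz / f_pole_hz) ** (2 * order))

A, fp, n = 1e-3, 0.01, 1.5                  # hypothetical fit parameters
low = thermal_psd_model(1e-4, A, fp, n)     # well below the pole: ~A
high = thermal_psd_model(1.0, A, fp, n)     # well above: strongly suppressed
```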

#### 5.2.4 Blur.

The blur measurement captures the sources of variation in photon arrival location. These variation sources include the X-ray source, detector, object scatter, and component relative motion. The scale of scatter is drawn from published literature [53]. The artifact is measured at several locations from near to the detector all the way to near to the source. The measurement technique developed for this work, and described previously in Refs. [54] and [55], uses a forward propagation approach to predict images from known material distributions, then modifies the scale of the variation source PSFs to improve the match with the measured image.

The recommended procedure to capture the full extent of the scatter is shown in Fig. 11. Best practice combines a radiopaque roll bar with multiple rolled edges and a scatter source (preferably about 50%) to generate a sharp edge where scatter occurs on the bright side of the edge as well as into the dark side. The scatter plate provides a means to generate scatter at the sharp edge, deflecting scatter into the dark region behind the roll bar where the small-scale and large-angle effects can be more clearly resolved.

The process was applied to the Zeiss Xradia 510 Versa system, using the roll bar and scatter plate artifact noted above. The initial round of testing in this work focused on evaluating the source and detector sources via resolving the edge of the roll bar as shown in Fig. 12 while applying a predicted scatter evaluated from literature parameters [53]. Later work will apply the algorithm to extract scatter as well as source and detector parameters. The source PSF predicted a FWHM of 3.8 *μ*m, consistent with the manufacturer specification of 4 *μ*m [54]. The detector cross-talk PSF was found to best fit to a sum of two exponential distributions. The component motion PSF was calculated to be a FWHM of about 200 nm. The scatter PSF was generated as a predicted spectrum weighted function for the given material (tungsten) and thickness (50 *μ*m). It was converted to a displacement map by setting the object location to correspond to the conditions in Fig. 12(b), source-to-object = 65.3 mm, source-to-detector = 71 mm. The process is able to extract order of magnitude differences in PSFs including the very long tails in scatter and cross-talk. The assumptions used to predict scatter include allowing only coherent scatter through thin objects, and using literature derived scatter distributions, all of which are covered in more detail in Ref. [53]. The motion blur is predicted from the 1DB SVFG and modeled as a Gaussian distribution.
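The forward-propagation fitting loop can be illustrated with a one-dimensional edge. The "measured" profile below is synthesized from a known width, and the Gaussian-PSF edge model and coarse scan stand in for the actual propagation model and optimizer of Refs. [54] and [55]; all values are hypothetical.

```python
import math

# Hedged sketch of the forward-propagation blur fit: predict an edge
# profile by convolving an ideal step with a candidate Gaussian PSF
# (giving an erf edge), then pick the width best matching the data.

def edge_profile(x_um, sigma_um):
    """Ideal step convolved with a Gaussian PSF of width sigma."""
    return 0.5 * (1.0 + math.erf(x_um / (sigma_um * math.sqrt(2.0))))

xs = [0.5 * i for i in range(-20, 21)]           # positions in um
true_sigma = 1.6                                  # hypothetical ground truth
measured = [edge_profile(x, true_sigma) for x in xs]

def mismatch(sigma_um):
    """Sum-of-squares difference between prediction and measurement."""
    return sum((edge_profile(x, sigma_um) - m) ** 2 for x, m in zip(xs, measured))

# Coarse scan over candidate widths, standing in for the optimizer.
candidates = [0.8 + 0.1 * k for k in range(17)]  # 0.8 .. 2.4 um
best_sigma = min(candidates, key=mismatch)
```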

#### 5.2.5 Pixel Dynamics.

The pixel dynamics measurement captures the time dynamics of the detector pixels, also known as afterglow or latency behavior. The measurement is carried out by pulsing the source and then quickly shuttering it with a tungsten plate, while the brightly lit region of the detector is compared to a fully occluded region. The occluded region of the detector acts as a baseline for pixel intensity comparison. The baseline value from the occluded region is subtracted from the lit region to zero the measurement.

The results from the measurement on the Zeiss Xradia 510 Versa are shown in Fig. 13. The scale of the effect is around 2 × 10^{−3} compared to the main signal, so it is easily hidden by the noise found in a typical radiograph. Binning was found to be useful for increasing the visibility of the effect, as it suppressed the stochastic background noise while retaining the shared signal. The time dynamics are dependent on the system magnification due to the use of different scintillator setups. The 0.4× magnification shows by far the longest time decay constant of the order of 5 min. The exposure time and binning appear to largely change noise without showing a systematic effect on the parameters of interest (initial amplitude and decay constant).
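The afterglow parameters of interest (initial amplitude and decay constant) can be extracted from the baseline-subtracted lit-region signal with a simple exponential fit. The sketch below uses synthetic data at the scale reported above (about 2 × 10^{−3} relative amplitude, ~5 min decay); the fitting approach is illustrative, not the exact procedure used in this work:

```python
import numpy as np

def fit_afterglow(t, signal):
    """Fit A*exp(-t/tau) via a log-linear least-squares fit
    (valid while the baseline-subtracted signal stays positive)."""
    mask = signal > 0
    slope, intercept = np.polyfit(t[mask], np.log(signal[mask]), 1)
    return np.exp(intercept), -1.0 / slope        # amplitude, decay constant

# Hypothetical data at the reported scale: ~2e-3 relative amplitude,
# ~5 min (300 s) decay constant for the 0.4x magnification setup
t = np.linspace(0.0, 1800.0, 200)                 # seconds after shuttering
lit = 2e-3 * np.exp(-t / 300.0)                   # lit-region relative signal
occluded = np.zeros_like(t)                       # occluded-region baseline
amp, tau = fit_afterglow(t, lit - occluded)
```

With real radiographs, binning would first be applied as described above to pull the small afterglow signal out of the stochastic background noise.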

#### 5.2.6 Mechanical Vibration.

The mechanical vibration measurement captures the scale of vibrational position variation throughout the equipment and provides a rough estimate of the transmission filters linking ambient to component vibration. The measurement was carried out using the same framework as the thermal analysis. Acceleration readings were transformed into displacements and mapped to 6DOF motion. The 6DOF signals were transformed into the frequency domain, then a fit function was combined with a sensor pink noise model and fit to the data. The vibration data were found to consistently fit an *n*-order low pass filter topology as shown in Fig. 14.

Unambiguous motion measurement via the accelerometers proved difficult given the high stability of the Zeiss Xradia 510 Versa. As can be seen in Fig. 14(a), the sensor 1/*f* noise (the smooth downward slope in the low frequency section of the spectrum) is only slightly below the actual vibrational spectrum. The low frequency section of the vibrational spectrum is lost under sensor noise, so several assumptions must be made about this section of the spectrum. The low frequency section of the spectrum is assumed to have a slope rising with increasing frequency up to a peak at around 1–10 Hz, followed by a negative slope. The knee location was generally observed to be about 10 Hz for these measurements, marking the point where the vibration signal transitioned to a separate regime. Measurements of vibration from other sources [56] have shown a consistent trend for displacement noise with a low frequency rising slope of 2–4. The slope of 2 is used to ensure the vibrational PSD is fully bounded by the model.
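The fit topology described above, an *n*-order low pass response combined with a sensor 1/*f* noise floor, can be sketched as below; all parameter values here are hypothetical:

```python
import numpy as np

def lowpass_psd(f, a0, fc, order):
    """PSD of broadband drive passed through an n-order low pass filter."""
    return a0 / (1.0 + (f / fc) ** (2 * order))

def pink_psd(f, b0):
    """Sensor 1/f (pink) noise floor model."""
    return b0 / f

def model_psd(f, a0, fc, order, b0):
    """Combined vibration + sensor-noise fit function."""
    return lowpass_psd(f, a0, fc, order) + pink_psd(f, b0)

# Hypothetical parameters: ~10 Hz knee, 2nd-order rolloff
f = np.logspace(-1, 3, 400)                       # Hz
psd = model_psd(f, a0=1e-12, fc=10.0, order=2, b0=1e-15)
```

In a fit, the pink-noise term absorbs the sensor-limited low frequency region so that the filter parameters are driven by the part of the spectrum where real vibration dominates.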

## 6 Model Evaluation

### 6.1 Method of Evaluation.

Two tests are used to carry out parallel evaluation of the uncertainty models. These tests can be carried out on a measured X-ray system to confirm that uncertainties are fully captured. An edge measurement with a scatter plate allows for simultaneous evaluation of both 0D intensity and 1D blur. A sphere measurement allows for evaluation of 1D length and 2D position. The two tests could be consolidated into a single measurement if desired by including a sphere with the edge artifact. The 0D energy model would require a separate test based on a known isotopic source to provide a high resolution measurement. The 0D energy model could not be evaluated on the Zeiss Xradia 510 Versa, and so is not included below.

### 6.2 Edge Measurement.

The edge measurement uses a scatter plate to generate an area of partial transmission. A roll bar can be placed in front of the scatter plate to provide a sharper edge. In the case of the measurements below, as noted previously, the edge of the roll bar was used. The edge can then be used to compare edge blur predictions, while the scatter induced by the partial transmission area can be used to study low intensity tails behind the roll bar. The calibrated blur PSFs for the source, detector cross-talk, component motion, and scatter effects were applied to the known edge to generate the predicted edge blur, as shown in Fig. 15, and showed a normalized root-mean-square error of 0.89%. This demonstrates the predictive capability of the 1DB model, as the *z _{so}* = 24.75 mm measurement set shown in Fig. 15 was not part of the training data for the optimization. The 1DB model can be applied over nearly the full parameter space of the system once correctly calibrated, as the blur contributions from the source, detector, scatter, and component motion are all considered and are independent parameters. The extracted source blur measurement matched to within 5% of the manufacturer specified 4 *μ*m FWHM, indicating consistency with other measurement techniques.
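For reference, a normalized root-mean-square error of the kind quoted above can be computed as follows; normalization by the measured signal range is one common convention and is an assumption here, as the exact normalization is not restated in this section:

```python
import numpy as np

def nrmse(measured, predicted):
    """Root-mean-square error normalized by the measured signal range."""
    rmse = np.sqrt(np.mean((measured - predicted) ** 2))
    return rmse / (measured.max() - measured.min())

# Hypothetical measured vs predicted edge profiles
x = np.linspace(-10.0, 10.0, 500)
measured = 0.5 * (1.0 + np.tanh(x / 2.0))
predicted = 0.5 * (1.0 + np.tanh(x / 2.1))
err = nrmse(measured, predicted)   # fractional error over the edge profile
```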

A first pass estimate of scatter was used, as described in the experimental process section. This provided rough bounds on the scale of the scatter blur, from which the source and detector effects could be separated. Further work described in later publications will enable further clarity in mapping the scatter effects and replacing scatter prediction with measurement.

The uniform intensity section of an image of the scatter plate was next analyzed for time variation in intensity. Repeat measurements of the same object were aligned and overlaid to capture intensity time sequence data at each pixel. An example set of 121 pixels near the center of the image was chosen to minimize distortion from possibly damaged pixels. The binning term in the model was adjusted to account for the multipixel set. The PSD of the time variation in intensity reading is compared to the model predictions in Fig. 16(a).

The SVFG model predicted that the X-ray detector variation dominates the overall variation. Figure 16(b) shows how the mechanical and thermal noise PSD components are around 10–15 orders of magnitude below the detector dark noise. The X-ray detector variation includes the binomial variation of the quantum efficiency, the dark noise, and the pixel cross talk. An overall predicted uncertainty map was generated for each pixel in the image and is shown in Fig. 16(b), while the measured variation is shown in Fig. 16(c). After using the PSD chart to identify the dominant noise term and fit that term, the resulting predicted standard deviation in variation is 0.52% and the measured standard deviation in variation is 0.34%, an overprediction of about 53%. Several effects could be contributing. First, the relatively high uncertainty on the pixel cross talk term may contribute to the overestimate; the wide tails in the cross talk were found to have a noticeable effect on the predicted noise. Additionally, the dark noise in the detector showed a distinct difference between the higher frequencies (greater than 1 × 10^{−3} Hz) and lower frequencies (less than 1 × 10^{−3} Hz). The overall fit for the dark noise was a least squares fit over the full spectrum, and so it overpredicts in the high frequency regime where the 0DI model measurements were carried out.
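The per-pixel intensity PSD comparison described above can be sketched with a simple periodogram; the frame count, frame spacing, and noise level below are hypothetical:

```python
import numpy as np

def intensity_psd(series, dt):
    """One-sided periodogram of a detrended pixel-intensity time series."""
    x = series - series.mean()
    n = len(x)
    spec = np.abs(np.fft.rfft(x)) ** 2 * dt / n   # two-sided scaling
    freqs = np.fft.rfftfreq(n, dt)
    return freqs[1:], spec[1:]                    # drop the DC bin

# Hypothetical repeat-frame series: 512 frames, 10 s apart, ~0.34% noise
rng = np.random.default_rng(0)
frames = 1.0 + 0.0034 * rng.standard_normal(512)
f, p = intensity_psd(frames, dt=10.0)

# Integrating the PSD (x2 for one-sided) recovers the intensity variance
sigma_est = np.sqrt(2.0 * p.sum() * (f[1] - f[0]))
```

Comparing such a measured spectrum against the model PSD components is how the dominant noise term (here, detector dark noise) is identified and fit.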

The model assumes uniform material, so the most accurate prediction is to be found around the center of the material. Large scatter blur tails can slightly distort the variance near the edges. Future versions of the code could be modified to account for nonuniform material distribution.

### 6.3 Sphere Measurement.

The second test uses a ruby sphere mounted on a post. The results can be compared to predictions from the 1DL and 2DP variation models. The diameter was first analyzed to capture length variations, as shown in Fig. 17.

The diameter of the sphere was extracted from a least-squares best fit to the circle. The length variation measurement is relatively insensitive to the edge detection method as it is looking for variation rather than accuracy. A large number of fit points (3548) were used as shown in Fig. 17(a) in order to drive the center and diameter estimation variation well below the pixelization limit on the image. Repeat measurements of the same object were used to capture diameter time sequence data as shown in Fig. 17(c). The PSD of the diameter time sequence is plotted against the 1D length SVFG as shown in Fig. 17(d).
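The least-squares circle fit used to extract the center and diameter can be sketched with the algebraic (Kåsa) formulation; the point count matches the 3548 fit points noted above, while the radius, center, and jitter values are hypothetical:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares circle fit (Kasa method): solve
    x^2 + y^2 = 2*a*x + 2*b*y + c for center (a, b) and radius r."""
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a ** 2 + b ** 2)

# Hypothetical silhouette edge points: 3548 points with sub-pixel jitter
rng = np.random.default_rng(1)
theta = np.linspace(0.0, 2.0 * np.pi, 3548, endpoint=False)
cx, cy, r_true = 512.0, 480.0, 250.0          # pixels (hypothetical)
x = cx + r_true * np.cos(theta) + 0.3 * rng.standard_normal(theta.size)
y = cy + r_true * np.sin(theta) + 0.3 * rng.standard_normal(theta.size)
cx_fit, cy_fit, r_fit = fit_circle(x, y)
```

With thousands of edge points, the center and radius estimates average down well below the single-pixel quantization limit, which is why the measurement is sensitive to variation rather than limited by pixelization.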

One key element of the model was adjusted using the experimental data: the source position thermal sensitivity. The warm up profile curve for the diameter measurement shown in Fig. 17(c) was compared to the thermal profile for the warm up shown in Fig. 18. The thermal offset value and the measured length change value were fed into the 1D length model in offset analysis mode. The dominant variation was assumed to be *z*-axis motion of the source relative to the detector. While the *z*-axis motion could be distributed between any of the three components, for simplicity it was assumed to be entirely due to source motion. This was believed to be likely given the significant thermal variation found around the source during warm up. The source position *z*-axis thermal sensitivity to *T _{s}*, the term *α _{PszTs}*, was adjusted until the 1D length model correctly predicted the measured asymptotic change in length shown in Fig. 17(c). The source position thermal sensitivities to both ambient and source temperature were assumed to be of the same scale for this analysis, given that both source and ambient variation were able to drive up the internal enclosure temperature and thus couple to the source frame in a similar fashion. This assumption results in values of *α _{PszTs}* and *α _{PszTa}* of 5.7 × 10^{−3} °C^{−1}. The model was updated with the derived source position thermal sensitivity values. The data were detrended to remove the nonequilibrium drift. The predicted PSD accurately captures the scale and profile of the measured PSD. The measurements show spikes at around 1 mHz, 0.5 mHz, 0.25 mHz, and 0.125 mHz, consistent with 15 min periods of thermal cycling and multiples of this period. The model and the thermal spikes strongly indicate that the machine length measurement stability is dominated by thermal effects. The predicted PSD matches the measured spectrum closely in contour and scale at the high frequencies. The measured 1DL standard deviation for the lowest frequency data run (116 nm) was compared against predictions (120 nm), showing agreement within 3%.

The sphere's centroid can be analyzed to capture positional variation as defined by the 2DP SVFG. The centroid of the sphere is also extracted from the least-squares best fit to the circle, as shown in Fig. 19(a), so that it is similarly insensitive to edge detection methods. The centroid position data shown in Fig. 19(b) were detrended to remove the nonequilibrium drift. The same data sets and procedure were used for the centroid as for the diameter to generate the full PSD spectrum, as shown in Figs. 19(c) and 19(d). The predicted PSD matches the measured spectrum closely in scale at higher frequencies, indicating the model is working correctly to propagate the variation in the realm of <12 h runs. However, the PSD deviates at lower frequencies (<1 × 10^{−4} Hz), possibly because of the very large multiday scale thermal variations occurring during the 2D measurement test. The ambient temperature time sequence and power spectral density used as an input to the models were taken in a low diurnal variability season (early winter in California), which led to the consistent day/night measurements shown in Fig. 20.

The final 2D measurements were taken during a high diurnal variability season (midsummer), which appears to have exposed the measurements to a thermal PSD with increased power at low frequencies. The different shape mainly appears in frequencies <1 × 10^{−4} Hz, where instead of leveling off as shown in Fig. 10(a), the true ambient temperature PSD input appears to continue rising as 1/*f* type noise up to a knee at approximately 1 × 10^{−4.5} Hz. An estimate of the high variability season PSD is shown in Fig. 21.

The model was modified to use the high variability PSD for predictions to correctly match the season. The time sequence data were detrended and the average diurnal variation extracted for each axis (*X* 0.39 *μ*m, *Y* 0.63 *μ*m). The ambient diurnal temperature variation was previously measured to be approximately 0.08 °C, as shown in Fig. 20; however, the new high diurnal variability seasonal PSD estimate shows approximately 40× higher energy at the 24 h period frequency (1 × 10^{−5} Hz), meaning that the high diurnal variability season temperature shift should be about √40 times the low season diurnal variability, or about 0.5 °C. The thermal expansion coefficients were derived from the measured output motion and known input temperature variation, resulting in estimated values for *α _{PsxTa}* of 4.2 × 10^{−6} °C^{−1} and *α _{PsyTa}* of 1.3 × 10^{−5} °C^{−1}. The model was updated with the derived source position thermal sensitivity values. The high diurnal variability input correctly captures the shape and scale of the measured PSD. The 2DP standard deviation calculated between 1 × 10^{−5} Hz and 1 × 10^{−3} Hz for the measured data (*X* 12 nm, *Y* 36 nm) was compared against predictions (*X* 21 nm, *Y* 28 nm), showing respective errors of (*X* 75%, *Y* 20%).
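The seasonal scaling argument above is a short calculation: since amplitude scales as the square root of PSD power, 40× more energy at the 24 h frequency implies roughly a √40 ≈ 6.3× larger diurnal temperature swing. A sketch follows; the final normalization to the dimensionless *α* values is not reproduced here, so the sensitivities below are left in units of motion per °C as hypothetical intermediate quantities:

```python
import math

# Measured low-variability season diurnal temperature amplitude (Fig. 20)
dT_low = 0.08                  # deg C
psd_ratio = 40.0               # high/low season PSD energy at 1e-5 Hz (24 h)

# Amplitude scales as the square root of PSD power
dT_high = dT_low * math.sqrt(psd_ratio)       # approx 0.5 deg C

# Measured average diurnal motion for each axis
dX, dY = 0.39e-6, 0.63e-6                     # m

# Raw sensitivities in motion per deg C (intermediate values only;
# the paper's dimensionless alpha terms use a further normalization)
sens_x = dX / dT_high
sens_y = dY / dT_high
```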

## 7 Applications

The approach developed in this work marks the first step of a multistep approach to predicting the uncertainty in CT reconstructions. The techniques and models developed in this effort not only support the effort to quantify uncertainty but also provide a number of other potential benefits. A list of potential uses is provided below.

### 7.1 Quantitative Uncertainty.

The SVFG models enable analytical prediction of uncertainty over the full measurement parameter space, which can be used to provide error bars to radiography measurements.

### 7.2 Metrology Optimization.

The SVFG models can be used to carry out optimization studies to search for system or metrology parameters that optimize combinations of uncertainties. This enables rapid prediction of ideal measurement methods to reach targets such as minimal noise, or maximum accuracy.

### 7.3 System Improvement.

The SVFG models can be used to identify the dominant sources of variation in X-ray radiography systems and determine the scale of improvement. The net effect is a clear picture of how to upgrade radiography performance via hardware improvement.

### 7.4 System Health.

The measurements required to capture all the necessary parameters for the SVFG provide insight into the internal state of the radiography system, and so can guide maintenance and repair decisions. The measurements of the source emission profile and detector quantum efficiency profile can be used to check for alignment and remaining life. Thermal, vibrational, and alignment variation all provide information about the system frame and servo status.

### 7.5 State Feedback.

The internal state measurements and SVFG-derived sensitivities provide several of the elements needed to implement feedback into the X-ray radiography system in order to cancel variations including thermal and mechanical distortions.

### 7.6 Modeling Improvement.

The multiple internal state measurements captured for the SVFG provide experimental results to tune X-ray radiography predictive models. Models that rely on predicted or semi-empirical descriptions of pixel dynamics, detector spectral response, and the component contributions to blur could also be supported with direct measurements of the kind gathered for the full state measurements.

### 7.7 Image Processing.

The 1DB model provides an opportunity to remove blur variation due to its largely deterministic nature. Such deblurring would require knowledge of the rough material distribution in the object, such as might be provided by an initial reconstruction. The deblurred images could then be used to perform an improved clarity reconstruction. This model could also potentially provide an accurate blur model, which can be incorporated into the system model within the reconstruction algorithm.
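As an illustration of the deblurring idea, a known blur PSF can be inverted with Wiener deconvolution; this is a generic sketch, not the method of this work, and the PSF, noise-to-signal ratio, and edge profile are all hypothetical:

```python
import numpy as np

def wiener_deblur(image, psf, nsr=1e-3):
    """Frequency-domain Wiener deconvolution with a known 1D blur PSF."""
    n = len(image)
    H = np.fft.rfft(np.roll(psf, -np.argmax(psf)), n)  # move PSF peak to index 0
    G = np.fft.rfft(image)
    F = G * np.conj(H) / (np.abs(H) ** 2 + nsr)        # Wiener filter
    return np.fft.irfft(F, n)

# Hypothetical test case: Gaussian-blurred step edge
x = np.arange(256) - 128
psf = np.exp(-0.5 * (x / 3.0) ** 2)
psf /= psf.sum()
edge = (x >= 0).astype(float)
blurred = np.convolve(edge, psf, mode="same")
restored = wiener_deblur(blurred, psf)
```

The noise-to-signal ratio term regularizes the inversion where the PSF transfer function approaches zero; a calibrated 1DB model would supply the PSF in practice.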

### 7.8 Technique Generalization.

The methods generated in this work are fundamentally based on understanding variation in a conic transmission measurement system, which is adaptable to other electromagnetic or wave based NDE technologies. Only minor adaptations would be necessary to enable use on other system geometries, e.g., parallel beam systems.

## 8 Conclusion

X-ray CT metrology offers great potential for metrology of complex micro- and nanoscale structures; however, this metrology routinely operates around the resolution and variation limits in X-ray systems. The purpose of this work is to develop a complete generalizable analytical uncertainty propagation model for cone beam X-ray radiography to better understand these limits. The SVFG model generated in this work allows users to capture, quantify, and predict variations occurring in the system, moving the systems toward rigorous X-ray metrology. This work is the first step in a multistep approach to achieving full uncertainty modeling of CT reconstructions and provides insight into improving X-ray transmission imaging systems to help drive both metrology and machine improvements. Future work will focus on completing the full system state measurements and extrapolating the uncertainty budget from the 2D radiographic domain to the 3D CT reconstruction domain.

The SVFG methodology framework was developed in this work and applied to generate a complete basis set of functions describing all sources of variation in radiographs. Five models were identified, covering variation in energy (0DE), intensity (0DI), length (1DL), blur (1DB), and position (2DP). X-ray radiography system experiments were defined to measure the parameters required by the SVFGs. Best practices were identified for these measurements. The SVFG models were confirmed via direct measurement of variation to predict variation within 30% on average.

The SVFG framework serves well for small variations, but its essential assumptions of operation around an equilibrium, linearity, and uncorrelated inputs can limit its utility for certain phenomena. The model and approach work well when the system conditions are consistent with small perturbations. Large-scale nonzero mean perturbations or cascading effects can move the system outside of the modeled range. One example is the drift in source performance. While this source life could be captured and parameterized, at present the source properties are captured in a single snapshot and assumed to be constant.

The methods and models in this work were developed to help move toward a deterministic understanding of metrology using X-ray radiography. They also prove to be a rich source of information for improvement in the system, image, and measurement process. A range of applications were identified for the models and procedures, including (i) quantitative uncertainty maps, (ii) metrology optimization, (iii) system performance improvement, (iv) state feedback, (v) maintenance, (vi) modeling fidelity improvement, (vii) deblurring, and (viii) generalized framework to other transmissive techniques.

## Acknowledgment

This work was supported by LLNL LDRD funds 16-ERD-006 and was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344. LLNL-JRNL-758624.

## Funding Data

Lawrence Livermore National Laboratory (16-ERD-006; Funder ID: 10.13039/100006227).