Abstract

Additive manufacturing (AM) is an advanced manufacturing process that enables fast prototyping to realize personalized products in complex shapes. However, quality defects in AM products can directly lead to significant failures (e.g., cracking caused by voids) in practice. Thus, various inspection techniques have been investigated to evaluate the quality of AM products, where X-ray computed tomography (CT) serves as one of the most accurate techniques to detect geometric defects (e.g., voids inside an AM product). Taking a selective laser melting (SLM) process as an example, voids can be detected by investigating CT images after the fabrication of products with limited disturbance from noise. However, limited by the sensor size and scanning speed, CT is difficult to use for online (i.e., layer-wise) void detection, monitoring, and process control to mitigate the defects. As an alternative, optical cameras can provide layer-wise images to support online void detection. However, the intricate texture of the layer-wise image restricts the accuracy of void detection in AM products. Therefore, we propose a new method called the pyramid ensemble convolutional neural network (PECNN) to efficiently detect voids and predict the texture of CT images using layer-wise optical images. The proposed PECNN can efficiently extract informative features based on the ensemble of the multiscale feature-maps (i.e., image pyramid) from optical images. Unlike deterministic ensemble strategies, this ensemble strategy is optimized by training a neural network in a data-driven manner to learn the fine-grained information from the extracted feature-maps. The merits of the proposed method are illustrated by both simulations and a real case study of an SLM process.

1 Introduction

Additive manufacturing (AM) provides tremendous promise to quickly realize personalized products with complex geometries, different materials, and various functionalities [2–4]. However, quality issues in the AM process impede product realization in a timely manner. One major issue is the voids located inside the AM product [5]. Specifically, voids can significantly influence the mechanical performance of the product (e.g., tensile strength, elastic modulus, etc.) and should be detected during the manufacturing process [6].

The formation mechanism of the void is complex in AM. Take a selective laser melting (SLM) process as an example. SLM is a metal powder bed fusion AM process, and voids can be caused by the balling effect, improper thermal distribution, and process variations during the recoating or laser scanning [5,7]. In practice, to detect voids inside an SLM product, X-ray computed tomography (CT) is usually employed after the fabrication of the product. Based on the CT image, voids can be easily and accurately located [8]. However, due to the sensor size and the scanning speed, CT is difficult to use for online (i.e., layer-wise) void detection [9], quality monitoring [10], and process control [11] to mitigate the flaw. As an alternative, an optical camera can provide layer-wise images of the product during the process [12–14]. As shown in Fig. 1(a), the optical camera can be instrumented on the roof of the machine to capture the layer-wise images after the laser scanning process for an SLM product [1]. Examples of an online layer-wise image and the corresponding offline CT image for a cubic SLM product built with Inconel 718 are presented in Figs. 1(b) and 1(c). The area in the red box indicates a void defect on the image. Based on this sensing system, the online layer-wise images of all layers can provide rich information regarding the physical fabrication process, which potentially contributes to detecting voids in an SLM product [9]. It can be found that the locations of the void in both the layer-wise image and the CT image are almost the same on the product geometry. This makes it possible to detect the void in real time via the online layer-wise image, instead of relying on the offline CT image after the fabrication in an SLM process. Moreover, for non-void areas on the CT image in Fig. 1(c), it can be found that there is a spatial trend for the brightness of pixels in grayscale. These characteristics can also potentially help to reconstruct the CT image based on the layer-wise image accordingly.

Fig. 1
Layer-wise images from a SLM process: (a) schematic diagram of a SLM machine with an optical camera system (redrawn from Ref. [1] with authors’ permission); (b) online layer-wise image from the optical camera; and (c) offline CT image

Therefore, it is desirable to use the online layer-wise images to efficiently detect voids in the products and further predict the corresponding CT images, serving as a virtual CT (i.e., layer-wise approximations of offline CT images) in SLM processes. However, there are many challenges to achieving this objective. The first challenge is how to extract informative features that can effectively represent the texture of the layer-wise image with acceptable computation workloads. Many statistical and machine learning methods (e.g., logistic regression [15] and the support vector machine (SVM) [16]) have been proposed for image-based anomaly detection. The detection accuracy of these methods relies heavily on feature engineering based on domain knowledge or trial and error. As shown in Fig. 1(b), the characteristics of pixels in void and non-void areas are very similar to each other. Moreover, due to the hatch distance between two laser scanning paths during the SLM process, it is difficult to implement the spatio-temporal registration of the laser scanning path with the layer-wise image and extract useful features to detect the void. Therefore, with these intricate image textures, the commonly used image features, such as wavelet coefficients and the summary statistics of pixel values (e.g., mean, variance, skewness, etc.), are insufficient to represent the characteristics of the image. Even if we can obtain effective features from the images, it is also a challenge to properly ensemble these features (i.e., to integrate the features from different scales, types, or sources to jointly detect the void and predict the virtual CT image), because the intricate textures on images might violate the assumptions of ensemble structures (e.g., the linear relation assumption [17], spatial relation assumption [18], tensor relation assumption [19], etc.), which are the prerequisites to effectively model the underlying relations among features. Second, due to the flexibility of AM product design, the geometries, scanning patterns, and textures of the layer-wise images are usually heterogeneous. This heterogeneity in geometry among layer-wise images reduces the universality of many existing machine learning methods, such as regularized regression [17], which usually assume that the dimensions of the input layer-wise images and features are comparable. Third, artificial hollow areas, such as the strut-based lattice design in SLM, can also introduce disturbances in model estimation. This is because the characteristics of pixels in hollow and void areas are similar to each other on the layer-wise image [20].

In order to tackle the challenges discussed above, we propose a method called the pyramid ensemble convolutional neural network (PECNN). The PECNN method detects voids on layer-wise images by ensembling the informative feature-maps generated from the image pyramid [21] via a deep learning framework and further predicts the corresponding virtual CT image based on the void detection result. Specifically, each layer-wise image is first partitioned into multiple windows of the same size (a minimal sketch of this step is given below). Then, the image pyramid method is employed to extract multiscale informative feature-maps from the layer-wise image. The image pyramid is an effective multiscale image analysis technique that can smooth and subsample images with a customized kernel function to emphasize the important features and filter the noise at the same time. Next, a deep learning ensemble framework is proposed to learn efficient information from the feature-maps and further integrate these informative feature-maps to support the void detection and CT image texture reconstruction efforts. Finally, based on the characteristics (i.e., the brightness of pixels, spatial gradient of pixels, etc.) of both the layer-wise image and the CT image, the virtual CT image can be predicted for each window.
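To make the partitioning step concrete, the following is a minimal numpy sketch. The 25 × 25 window size follows the simulation study in Sec. 4, and the zero-padding of edge windows mirrors the blank-area fill-up described in the case study; both are illustrative choices here, not a prescribed implementation:

```python
import numpy as np

def partition_into_windows(layer_image: np.ndarray, win: int = 25) -> np.ndarray:
    """Split a grayscale layer-wise image into non-overlapping win x win
    window samples; edge windows are zero-padded (blank) to the full size."""
    h, w = layer_image.shape
    # Pad so both dimensions become multiples of the window size
    pad_h, pad_w = (-h) % win, (-w) % win
    padded = np.pad(layer_image, ((0, pad_h), (0, pad_w)), mode="constant")
    rows, cols = padded.shape[0] // win, padded.shape[1] // win
    # Reshape into a (num_windows, win, win) stack of samples
    windows = (padded.reshape(rows, win, cols, win)
                     .swapaxes(1, 2)
                     .reshape(rows * cols, win, win))
    return windows

# Example: a 1000 x 1000 layer-wise image yields 1600 windows of 25 x 25 pixels
windows = partition_into_windows(np.zeros((1000, 1000)), win=25)
```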

There are several advantages of the proposed method. First, the image pyramid is a computationally efficient method that can extract informative feature-maps from an image at multiple scales. Through a Gaussian kernel function, these multiscale feature-maps can provide different levels of detail to mitigate the noise (i.e., high-frequency features) and keep the void-related features (i.e., low-frequency features) on images with an intricate texture [21]. Therefore, these multiscale feature-maps can potentially improve the void detection accuracy. Moreover, to properly ensemble these feature-maps, the proposed PECNN method uses a deep learning structure to represent the ensemble structure and model the intricate textures on images. Therefore, the proposed method can learn the ensemble structure among feature-maps by optimizing the weights of the hidden layers (i.e., the combination of nonlinear transformation functions) in a data-driven manner. This is different from traditional methods, which simply concatenate the feature-maps or give a fixed weight to each feature-map. Second, the proposed method is applicable to varied layer-wise images with different geometries, scanning patterns, and textures. By partitioning the layer-wise images into individual windows and further treating one window as one sample, the influence of heterogeneous layer-wise geometry, scanning patterns, and textures can be alleviated. Moreover, it helps to localize the void within the scope of a specific window, instead of the whole layer-wise image. In addition, each layer-wise image is partitioned into multiple windows (i.e., samples) rather than treated as one sample, hence reducing the chance of overfitting. Third, the proposed method is robust to designed hollow areas in the product. This is because the proposed PECNN is trained to detect voids from four types of windows: (1) windows that have voids but no hollow areas, (2) windows that have neither voids nor hollow areas, (3) windows that have both voids and hollow areas, and (4) windows that have hollow areas but no voids. By considering types (1) and (3) as void samples and types (2) and (4) as non-void samples, the significant features that characterize the void are automatically learned and identified via the supervised learning framework of the proposed PECNN method.

In order to comprehensively validate the proposed PECNN method, a simulation study and a real case study are implemented. In the simulation study, layer-wise images with simulated voids for two types of products (i.e., products with and without hollow areas) are generated. Moreover, a physical experiment is conducted to fabricate a defective product via an SLM machine. The void detection accuracy and the CT image prediction accuracy of the proposed method are evaluated and compared with three benchmarks: logistic regression [15], SVM [16], and DenseNet [22].

The rest of the paper is organized as follows. Section 2 summarizes the state-of-the-art statistical and machine learning methods for quality modeling and void detection in AM. Section 3 introduces the proposed pyramid ensemble convolutional neural network method in detail. Section 4 validates the performance of the proposed method via the simulation study. Section 5 validates the PECNN in an SLM case study. Lastly, Sec. 6 concludes the paper and discusses future work.

2 Literature Review

2.1 Quality Modeling in Additive Manufacturing.

In the literature, a series of statistical and machine learning methods have been investigated for quality modeling in AM. For example, Sun et al. proposed a functional quantitative and qualitative model to predict two types of quality responses (i.e., the number of voids and the surface roughness) via offline setting variables and in situ process variables [23]. Huang et al. developed a series of models to predict product deviations based on engineering knowledge and experiments [24–26]. According to the predicted deviation for a specific computer-aided design, the optimal compensation plan can be implemented to improve the geometric accuracy of the product in AM. Li et al. introduced a three-dimensional thermo-mechanical coupling model to simulate the temperature distribution and the corresponding residual stress of the product in the SLM process [27]. Alizadeh et al. proposed an optimization framework to model the geometric deviation of the product and optimize the energy consumption of the process at the same time. For physics-based models in AM, finite element analysis (FEA)-based simulation has been proposed to predict the elastic response, residual stress, and distortion of the product [28,29]. Olleak and Xi presented a hybrid model that integrates the FEA simulation with a data-driven model to improve the accuracy and efficiency of the simulation based on limited experiment data. Li et al. proposed a non-parametric surrogate model to efficiently estimate the thermal distribution of the AM process based on the FEA simulation results [30].

On the other hand, from the process monitoring and anomaly detection perspective, Rao et al. presented an advanced Bayesian non-parametric model for in situ sensing data [31]. It can identify failures and the types of failures in the fused filament fabrication (FFF) process in real time. Khanzadeh et al. proposed a statistical process control strategy to detect process changes via thermal images through multilinear principal component analysis [32]. Icten et al. presented a surrogate model based on polynomial chaos expansion to relate the important process parameters to product morphology; a control strategy was proposed based on the model to mitigate product variation [33]. Grasso et al. presented an in situ monitoring method for the SLM process by extracting and learning informative features from the infrared image of each layer [34].

However, most of these works are based on traditional run-to-run studies that quantify the quality of the product through the design of experiments. For personalized products, which usually have heterogeneous product designs and process settings, it is inefficient and expensive to collect sufficient samples to estimate an accurate model.

Moreover, to overcome the limitation of sample size in AM, Sabbaghi et al. proposed a Bayesian transfer learning framework to efficiently predict the geometric deviation of a new product design based on limited deviation profiles from other products [25,35]. Cheng et al. developed a statistical parametric transfer learning model to predict the deviation profile among different designs [36–38]. However, negative knowledge transfer can occur in transfer learning, especially when the similarity between the source domain and the target domain is ambiguous [38].

2.2 Image-Based Void Detection in Additive Manufacturing.

Various image-based void detection studies have been proposed in AM. For example, Liu et al. proposed an augmented spatial log Gaussian Cox process model to detect voids in the AM product based on offline CT data [20]. Seifi et al. presented a layer-wise void detection system based on melt pool images to detect flawed layers [39]. Ye et al. proposed a deep learning framework to detect geometry shifts in the AM process based on 3D point clouds collected by a light scanner [40]. Imani et al. proposed a deep neural network model to detect voids in the product based on the layer-wise image by partitioning the images into subregions with the same number of pixels and generating the feature-map based on semiparametric models [9]. However, these methods cannot predict the texture of a CT image based on the layer-wise image. In addition, restricted by the region of interest for void detection defined in the above methods, it is difficult to estimate a universal model for heterogeneous designs and image textures. Therefore, it is vital to improve the applicability of the model with respect to product design, so that the model can provide a universal feature extraction framework to efficiently characterize the void regardless of design variation.

3 Methodology

The schematic of the proposed PECNN model is shown in Fig. 2. In general, the PECNN method can be divided into three steps: (1) feature extraction from each window via an image pyramid, (2) deep learning-based feature-map ensemble for each window, and (3) void detection and virtual CT image prediction. As shown in Fig. 2, to classify the sample Hq from a layer-wise image H and to predict the corresponding virtual CT image Cq, first, a Gaussian low-pass image pyramid with level L is employed to generate multiscale feature-maps (Hq,0, …, Hq,L). Next, to integrate the important information from different levels of detail, a recursive ensemble structure is applied via a convolutional neural network. Specifically, after 2D convolution, average pooling, and dimension transformation, the feature-map from the current pyramid level (i.e., Hq,L) is concatenated with the pyramid feature-map from the next level (i.e., Hq,L−1). Moreover, this concatenated feature-map is further used to generate a new feature-map $\tilde{H}_{q,L-1}$, which contains all information transferred from the previous feature-maps. It is worth mentioning that the dimension transformation for each feature-map before concatenation with the next level is realized by the 2D convolution. The parameters of this convolutional transformation are based on the window size selected in practice. It keeps the feature-map from the previous level (e.g., Hq,L) at the proper dimension to concatenate with the feature-map from the next level (e.g., Hq,L−1). Finally, based on the feature-maps generated from the ensemble structure, the classification result is predicted through the sigmoid function. Moreover, conditional on the classification label, the corresponding virtual CT image is also predicted through another fully connected layer, as shown at the bottom of Fig. 2. To prevent overfitting in the model estimation, the dropout technique with p = 0.5 is also included in the PECNN [41]. Dropout is a widely used regularization technique that randomly drops neurons from the neural network during training in each iteration. By employing the hidden layers in the neural network, the feature-maps from different levels can be effectively ensembled. The hyperparameters in the PECNN model can be tuned based on the data collected from the real SLM process to optimize the prediction performance. The weight of each hidden layer in the neural network is estimated by minimizing the sum of the classification error (i.e., in terms of void detection) and the CT image prediction error. The details of each step are comprehensively discussed in this section. The assumptions for the data in the PECNN model include (1) the resolution of the original optical layer-wise image is sufficient to recognize the void areas in the products, and (2) the samples in the PECNN model are collected from AM processes with the same setting variable combinations.

Fig. 2
Schematic of the proposed PECNN model

3.1 Gaussian Image Pyramid.

Denote Hq as the qth window sample from a layer-wise image. In order to identify whether there is any void inside the window and further predict the virtual CT image, the corresponding multiscale image feature-maps (Hq,0, …, Hq,l, …, Hq,L) are generated from the image pyramid [42]:
$H_{q,l}(x,y)=\sum_{m=-2}^{2}\sum_{n=-2}^{2} w(m,n)\, H_{q,l-1}(2x+m,\, 2y+n), \quad l>0$
(1)
where $H_{q,l}(x, y)$ is the pixel value at (x, y) of the lth level of the image representation, and w(m, n) is the weighting function (i.e., pyramid kernel) used to generate Hq,l from Hq,l−1. Specifically, Hq,0 = Hq. In this study, a Gaussian low-pass pyramid is employed with the separable kernel $w(m,n)=\hat{w}(m)\,\hat{w}(n)$, where $\hat{w}=\frac{1}{16}[1\ 4\ 6\ 4\ 1]$ [43]. The Gaussian low-pass pyramid can efficiently extract the low-frequency features from the original image. Therefore, it can partially eliminate the high-frequency intricate textures while maintaining the low-frequency features (i.e., voids). A minimal sketch of this pyramid generation is given below.
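The following is a minimal numpy/scipy sketch of the pyramid generation in Eq. (1). The border-handling mode is an implementation choice not specified in the text:

```python
import numpy as np
from scipy.ndimage import convolve

# Separable 5-tap Gaussian kernel w(m, n) = w_hat(m) * w_hat(n),
# with w_hat = [1, 4, 6, 4, 1] / 16
W_HAT = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
KERNEL = np.outer(W_HAT, W_HAT)

def reduce_level(image: np.ndarray) -> np.ndarray:
    """One REDUCE step of Eq. (1): low-pass filter with the Gaussian kernel,
    then subsample every other pixel in both directions."""
    smoothed = convolve(image, KERNEL, mode="nearest")  # border mode assumed
    return smoothed[::2, ::2]

def gaussian_pyramid(window: np.ndarray, levels: int) -> list:
    """Return [H_0, ..., H_L]; H_0 is the original window sample."""
    maps = [window]
    for _ in range(levels):
        maps.append(reduce_level(maps[-1]))
    return maps
```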

Then, the feature-maps generated from the image pyramid are resized to the same size as the original image to make them comparable with it [22]. The number of levels of the image pyramid can be considered an adjustable parameter in the PECNN model. On one hand, the resolution of subsequent images generated from a pyramid continuously decreases, and a representation with limited pixels might not be informative. On the other hand, more pyramid levels incur higher computation intensity. Therefore, the number of levels of the image pyramid can be selected case by case based on a preliminary study, balancing accuracy and computation intensity in practice.

3.2 Pyramid Ensemble Convolutional Neural Network.

The multiscale feature-maps generated from the image pyramid can provide high-dimensional image features related to the layer-wise quality of AM products and the underlying process. Based on these feature-maps, we further learn informative features in windows through the proposed PECNN model to support the void detection and virtual CT prediction efforts. Different from traditional deep learning methods, which simply concatenate feature-maps together as the input of a hidden layer, the proposed PECNN method uses an ensemble structure to properly integrate the informative features. As shown in Fig. 2, a recursive structure is used to ensemble the feature-maps. The feature-map extracted at the (l − 1)th level of the image representation not only contains the information from the (l − 1)th level itself but is also concatenated with the feature-maps extracted from the Lth level down to the lth level. Moreover, the feature-map transferred to the next level is learned and trained in a data-driven manner (i.e., by maximizing the void detection accuracy and minimizing the CT image prediction error). Therefore, instead of potentially undermining important features on individual pyramid feature-maps, the important features on each level of feature-maps can be learned individually before being concatenated with the others in the PECNN method.

Based on this ensemble structure, the input feature-map $\tilde{H}_{q,l}$ for the lth level of the qth window can be determined by
$\tilde{H}_{q,l}=\left[H_{q,l},\, T_{l+1}\!\left(\tilde{H}_{q,l+1}\right)\right], \quad \tilde{H}_{q,L}=H_{q,L}$
(2)
where Tn(·) is the nth hidden layer. Specifically, the hidden layer structure designed in the PECNN method includes 2D convolution [44], batch normalization [45], rectified linear units (ReLU) [46], and average pooling [47]. As shown in Fig. 2, to give the void detection confidence (i.e., the estimated probability that the window contains a void area) for Hq based on the feature-maps, a fully connected layer, the ReLU function, and the sigmoid function are employed as
$p_{H}=S\!\left(R\!\left(F\!\left(\tilde{H}_{q,l}\right)\right)\right)$
(3)
where pH is the void detection confidence probability of the window Hq, F(·) is the fully connected layer, R(·) is the ReLU function, and S(·) is the sigmoid transformation layer. Moreover, conditional on the void detection confidence probability, the corresponding virtual CT image $\hat{C}_q$ can be predicted via the fully connected layers as
$\hat{C}_q=\begin{cases}F_2^{N}\!\left(R\!\left(F_1^{N}\!\left(\tilde{H}_{q,l}\right)\right)\right) & \text{if } p_H<0.5\\ F_2^{V}\!\left(R\!\left(F_1^{V}\!\left(\tilde{H}_{q,l}\right)\right)\right) & \text{if } p_H\geq 0.5\end{cases}$
(4)
where $F_1^{N}(\cdot)$, $F_2^{N}(\cdot)$, $F_1^{V}(\cdot)$, and $F_2^{V}(\cdot)$ are the corresponding fully connected layers shown in Fig. 2.
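A simplified PyTorch sketch of the recursive ensemble of Eq. (2) and the conditional heads of Eqs. (3) and (4) is given below. The channel counts, hidden widths, and the classification head (FC, ReLU, FC, then sigmoid, so that the output covers the full (0, 1) range) are illustrative assumptions on our part; only the overall structure follows the equations and Fig. 2, and for brevity a single classification head is attached rather than one per pyramid level:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HiddenBlock(nn.Module):
    """One hidden layer T_n(.): 2D convolution, batch normalization, ReLU,
    and average pooling (stride 1 so every level keeps the window size)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.pool = nn.AvgPool2d(kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        return self.pool(F.relu(self.bn(self.conv(x))))

class PECNNSketch(nn.Module):
    """Recursive pyramid ensemble with conditional CT prediction branches."""
    def __init__(self, levels=3, ch=8, win=16):
        super().__init__()
        # T_L takes the coarsest map alone; the rest take [H_l, T(.)]
        self.blocks = nn.ModuleList(
            [HiddenBlock(1, ch)] + [HiddenBlock(1 + ch, ch) for _ in range(levels)])
        feat = ch * win * win
        self.drop = nn.Dropout(p=0.5)                       # dropout, p = 0.5
        self.fc1_c, self.fc2_c = nn.Linear(feat, 64), nn.Linear(64, 1)
        self.fc1_n, self.fc2_n = nn.Linear(feat, 256), nn.Linear(256, win * win)
        self.fc1_v, self.fc2_v = nn.Linear(feat, 256), nn.Linear(256, win * win)

    def forward(self, pyramid):
        # pyramid: list [H_0, ..., H_L] of (batch, 1, win, win) tensors,
        # each level already resized back to the window size
        h = self.blocks[0](pyramid[-1])                     # start at level L
        for level, block in zip(reversed(pyramid[:-1]), self.blocks[1:]):
            h = block(torch.cat([level, h], dim=1))         # Eq. (2)
        flat = self.drop(h.flatten(1))
        p = torch.sigmoid(self.fc2_c(F.relu(self.fc1_c(flat))))   # Eq. (3)
        ct_n = self.fc2_n(F.relu(self.fc1_n(flat)))               # non-void branch
        ct_v = self.fc2_v(F.relu(self.fc1_v(flat)))               # void branch
        ct = torch.where(p < 0.5, ct_n, ct_v)                     # Eq. (4)
        return p, ct
```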
To enable the ensemble structure, the loss function of the PECNN is defined as the summation of the void detection error for each pyramid level (i.e., the classification task) and the pixel-wise prediction error for the CT image (i.e., the regression task). Therefore, the proposed model can simultaneously select the important features that identify the void area and predict the CT image. Specifically, the binary cross-entropy loss [48] is employed for the void classification problem, and the pixel-wise root-mean-squared error is used for the CT image prediction problem. Therefore, the loss function of the PECNN can be formulated as
$\text{Loss}=-\sum_{q=1}^{Q}\sum_{l=1}^{L}\left(D_q\log\!\left(S'\!\left(F\!\left(\tilde{H}_{q,l}\right)\right)\right)+\left(1-D_q\right)\log\!\left(1-S'\!\left(F\!\left(\tilde{H}_{q,l}\right)\right)\right)\right)+\sum_{q=1}^{Q}\left\|C_q-\hat{C}_q\right\|_2$
(5)
where Q is the total number of window samples, Dq is the actual label for the qth window sample, $\hat{C}_q$ is the predicted virtual CT image, and S′(·) is the sigmoid output of the corresponding fully connected classification layer in the neural network.

To balance the convergence speed and the optimization reliability when minimizing Eq. (5), the stochastic gradient descent (SGD) method is employed with a learning rate of 0.01. SGD is an optimization method with a smaller memory requirement and faster computation in the deep learning framework. Moreover, benefiting from its frequent updates based on random subsets of the original data, the optimization steps have oscillations that can help escape local minima of the loss function. After the optimization based on the training samples, the void detection result from the 0th level of the image representation is treated as the final result of the PECNN model. This is because the input feature-map $\tilde{H}_{q,0}$ already contains informative features from all previous feature-maps.
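A minimal sketch of one training update is given below, using the PECNNSketch module from the previous listing. For brevity it uses a single classification term rather than one per pyramid level as in Eq. (5), so it is a simplified stand-in for the full loss rather than the exact objective:

```python
import torch
import torch.nn.functional as F

model = PECNNSketch(levels=3)                              # sketch module above
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # SGD, learning rate 0.01

def train_step(pyramid_batch, void_labels, ct_targets):
    """One SGD update on a simplified Eq. (5) loss: binary cross-entropy for
    void detection plus pixel-wise RMSE for the virtual CT prediction."""
    optimizer.zero_grad()
    p, ct_pred = model(pyramid_batch)
    bce = F.binary_cross_entropy(p.squeeze(1), void_labels.float())
    rmse = torch.sqrt(F.mse_loss(ct_pred, ct_targets))
    loss = bce + rmse
    loss.backward()
    optimizer.step()
    return loss.item()
```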

4 Simulation

A simulation study is implemented to evaluate the PECNN, since it is expensive to create such defective products in reality. The scope of the simulation is focused on void detection classification, not virtual CT image prediction. There are a total of eight simulation settings, summarized in Table 1. The sample size for each simulation case represents how many window samples are generated in total. The sample size should be large enough to adequately support the estimation of the proposed PECNN model. Specifically, the resolution of the simulated layer-wise image is 1000 × 1000 pixels, the pixel values are on a grayscale, and the size of the window is 25 × 25 pixels, which is determined by the average size of voids. Moreover, inspired by Fig. 1, pixel intensity differences are generated for the simulated void areas on the simulated image. Specifically, the pixel intensity differences for an individual void in the simulation are generated based on a two-dimensional Gaussian distribution with $\boldsymbol{\mu}=\begin{bmatrix}\mu_x=0.1\\ \mu_y=0.1\end{bmatrix}$ and $\boldsymbol{\Sigma}=\begin{bmatrix}\sigma_x^2 & \mathrm{Cov}(x,y)\\ \mathrm{Cov}(x,y) & \sigma_y^2\end{bmatrix}$. This is because the voids studied in this research usually have a sphere structure, and the brightness of pixels on the void has a spatial trend that can be roughly represented by the Gaussian distribution. The σ value affects the size of the void, and 1.5σ is defined as the average radius of the void for each simulation case shown in Table 1 [49]. In particular, the radius of the void areas simulated in this study is forced to be larger than 1 pixel. Next, the non-void/void ratio is defined as the ratio of the number of pixels in non-void and void areas. Since it is not reasonable to have too many voids in one layer, the non-void/void ratio cannot be too small. On the other hand, to reduce the computational intensity of the image simulation and improve the diversity of the void area geometry, we select the ratio as 20 in this simulation study as a compromise. Finally, in order to validate the proposed method with both a normal product design (infill ratio = 1) and a lattice product design (infill ratio = 0.5), two types of layer-wise images are generated in the simulation study.

Table 1

Simulation settings

Case no. | Sample size | Average radius of void | Non-void/void area ratio | Design type
1 | 80,000 windows (50 layers) | 0.25 mm (7 pixels) | 20 | Normal
2 | 240,000 windows (150 layers) | 0.25 mm (7 pixels) | 20 | Normal
3 | 80,000 windows (50 layers) | 0.125 mm (3.5 pixels) | 20 | Normal
4 | 240,000 windows (150 layers) | 0.125 mm (3.5 pixels) | 20 | Normal
5 | 80,000 windows (50 layers) | 0.125 mm (3.5 pixels) | 20 | Lattice
6 | 240,000 windows (150 layers) | 0.125 mm (3.5 pixels) | 20 | Lattice
7 | 80,000 windows (50 layers) | 0.25 mm (7 pixels) | 20 | Lattice
8 | 240,000 windows (150 layers) | 0.25 mm (7 pixels) | 20 | Lattice

Three steps are used to generate the simulated layer-wise images, as shown in Fig. 3. First, a background image without any void area is simulated (i.e., a powder bed image without voids). Due to the complexity of the image texture from the SLM process, the layer-wise image is difficult to simulate with ordinary distributions. Therefore, texture synthesis [50] is employed to generate the simulated background images based on real layer-wise images collected from the SLM process with the same process parameters. The texture synthesis method can efficiently simulate the texture of the input image and further generate a new texture image accordingly. Moreover, if a lattice structure is employed for the simulated layer-wise image, the corresponding cutout areas are removed from the background. Specifically, a body-centered cubic unit lattice structure [51,52] is employed in this simulation with an infill ratio of 0.5. Second, void areas are randomly generated based on the Gaussian distribution, and their locations are randomly assigned on the background image. Specifically, the xy coordinates of the center of each void area are determined by a uniform distribution from 1 to 1000 (i.e., the image size). Since the void areas are allowed to superimpose on each other, the void areas can also have irregular geometries. The total void area on each layer-wise image is determined by the non-void/void area ratio of the simulation study shown in Table 1. Moreover, since the location of each void area is random, the location and geometry of the void areas on each simulated layer-wise image also vary. As a result, voids are randomly distributed on a blank image until the corresponding non-void/void area ratio meets the requirement. It is worth mentioning that overlaps among several voids are allowed to generate complex void geometries, instead of spheres only. Finally, the simulated layer-wise image is generated as the superimposition of the background image, the void areas, and a normally distributed noise image with a signal-to-noise ratio equal to 25 [53]. Since there is no validated method to accurately simulate the CT image based on the corresponding layer-wise image, this simulation study concentrates on void sample classification; the validation of the CT image prediction is performed based on the real CT images and layer-wise images in Sec. 5. A minimal sketch of the void-generation steps is given below.
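The following numpy sketch covers steps 2 and 3 under stated assumptions: a circular Gaussian (zero covariance), a peak intensity difference of 0.1, a 0.05 threshold for counting a pixel as void, and noise scaled by the background standard deviation as one interpretation of a signal-to-noise ratio of 25. The texture-synthesis background of step 1 is taken as given:

```python
import numpy as np

rng = np.random.default_rng(0)
IMG, RATIO = 1000, 20          # image size (pixels), non-void/void area ratio

def add_gaussian_void(voids, sigma, peak=0.1):
    """Superimpose one void: a 2D-Gaussian-shaped pixel intensity difference
    at a uniformly random center (1.5 * sigma is the nominal void radius)."""
    cx, cy = rng.uniform(1, IMG, size=2)
    y, x = np.mgrid[0:IMG, 0:IMG]
    bump = peak * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
    return voids + bump

def simulate_layer(background, sigma=7 / 1.5, snr=25):
    """Add random, possibly overlapping voids until the non-void/void pixel
    ratio is met, then superimpose Gaussian noise (steps 2-3 of Fig. 3)."""
    voids = np.zeros((IMG, IMG))
    while np.count_nonzero(voids > 0.05) < IMG * IMG / (RATIO + 1):
        voids = add_gaussian_void(voids, sigma)
    noise = rng.normal(0.0, background.std() / snr, size=(IMG, IMG))
    return background + voids + noise
```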

Fig. 3
The procedures to generate the simulated layer-wise image

To evaluate the void detection performance, the overall accuracy, type I error, and type II error are used [54] (a minimal sketch of these metrics is given below). Moreover, three benchmark models are employed in the simulation: (1) logistic regression [55], (2) SVM [16], and (3) a deep learning method based on DenseNet [22]. Benchmarks (1) and (2) are traditional machine learning methods for classification problems that use summary statistics of the samples as predictors. Benchmark (3) is a widely adopted deep learning method in the image processing domain. Other machine learning methods, such as smooth-sparse image decomposition [56] and tensor-on-tensor regression [57], are not selected because they cannot provide a classification label for the void detection effort; these methods are instead used as benchmark methods to predict the virtual CT image in the real case study. Moreover, other deep learning methods, such as very deep convolutional networks, are not selected because their computation intensity is not comparable with the proposed method and the DenseNet [58]. In order to evaluate the void detection accuracy of the proposed method and the benchmark methods, for each simulation case, 80% of the samples are randomly selected as training samples and the remaining 20% are testing samples. Moreover, to evaluate the effect of the pyramid representations on the prediction accuracy, results both with and without pyramid feature-maps (i.e., with and without summary statistics of pyramid feature-maps as additional predictors) are calculated for benchmarks (1) and (2). For the proposed PECNN method, as discussed in Sec. 3, to balance the computation intensity and the model accuracy of the PECNN model, the pyramid level is selected as three based on a preliminary study (for simulation case 6, the overall accuracy of the proposed method is 94.492% for level = 2, 96.429% for level = 3, and 96.478% for level = 4). Moreover, the model was trained on a computer with an Intel i7-8850H CPU and an NVIDIA Quadro P3200 GPU. The training and testing times for each simulation are shown in Table 2.
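A minimal sketch of the three metrics follows, under the usual convention that a void window is the positive class, so the type I error is the false alarm rate on non-void windows and the type II error is the miss rate on void windows:

```python
import numpy as np

def detection_metrics(y_true, y_pred):
    """Overall accuracy, type I error (non-void windows flagged as void),
    and type II error (void windows missed), all in percent."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    accuracy = 100 * (tp + tn) / len(y_true)
    type_i = 100 * fp / (fp + tn)    # false positive rate
    type_ii = 100 * fn / (fn + tp)   # false negative rate
    return accuracy, type_i, type_ii
```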

Table 2

Simulation time consumption for training and testing

Case no. | Training time consumption | Testing time consumption
Case 1 | 1 h 16 min | <1 min
Case 2 | 3 h 42 min | <1 min
Case 3 | 1 h 37 min | <1 min
Case 4 | 3 h 51 min | <1 min
Case 5 | 1 h 53 min | <1 min
Case 6 | 4 h 16 min | <1 min
Case 7 | 1 h 48 min | <1 min
Case 8 | 4 h 5 min | <1 min

The complete simulation results for all eight cases are shown in Table 3. It can be observed that the PECNN model yields the best void detection accuracy, the lowest type I error, and the lowest type II error for all eight simulation cases. Even with a limited sample size (i.e., 50 layers), the proposed PECNN model can still efficiently learn the useful information from each pyramid feature-map and integrate the feature-maps through the recursive ensemble structure. The performance of the PECNN model is robust to the size of the voids. For the logistic regression and the SVM, it can be observed that both methods are very sensitive to the size of the void and the lattice structure. It is also worth mentioning that with the additional predictors from the pyramid feature-maps, the performance of these two methods is marginally improved. These two methods do not have promising prediction performance because the capability of their predictors (i.e., summary statistics) is not sufficient to distinguish non-void and void samples. Without a reasonable way to extract useful features from the original image, it is difficult for the logistic regression and the SVM to accurately detect the void. The DenseNet has comparable results with the PECNN model when the product does not have a lattice structure (cases 1–4). However, for the simulation cases with lattice products, the accuracy of the DenseNet is significantly worse than that of the proposed method, especially for case 5. The DenseNet is also more sensitive to the size of the void. This is because, in the DenseNet, the pyramid feature-maps are simply concatenated together, which might eliminate the informative features from each level.

Table 3

Simulation results

Case no. | Model name | Overall accuracy (%) | Type I error (%) | Type II error (%)
1 | Logistic regression | 81.770 | 8.550 | 64.732
  | SVM | 82.500 | 7.949 | 63.723
  | Logistic regression (pyramid) | 84.787 | 7.768 | 51.241
  | SVM (pyramid) | 85.100 | 7.660 | 48.828
  | DenseNet | 97.050 | 2.094 | 7.306
  | PECNN (proposed) | 98.688 | 0.723 | 4.194
2 | Logistic regression | 82.195 | 8.257 | 64.776
  | SVM | 82.533 | 7.867 | 64.702
  | Logistic regression (pyramid) | 86.250 | 6.467 | 50.087
  | SVM (pyramid) | 86.892 | 6.082 | 48.166
  | DenseNet | 98.013 | 0.937 | 7.426
  | PECNN (proposed) | 98.748 | 0.747 | 4.258
3 | Logistic regression | 72.462 | 22.540 | 39.980
  | SVM | 71.775 | 20.145 | 47.937
  | Logistic regression (pyramid) | 75.725 | 20.465 | 35.311
  | SVM (pyramid) | 73.213 | 19.539 | 44.941
  | DenseNet | 96.263 | 1.482 | 9.216
  | PECNN (proposed) | 98.179 | 0.766 | 4.902
4 | Logistic regression | 73.225 | 21.539 | 39.960
  | SVM | 72.113 | 20.969 | 46.776
  | Logistic regression (pyramid) | 76.225 | 19.241 | 35.130
  | SVM (pyramid) | 73.850 | 18.647 | 43.539
  | DenseNet | 97.317 | 1.103 | 6.625
  | PECNN (proposed) | 98.213 | 0.577 | 4.173
5 | Logistic regression | 68.800 | 23.012 | 51.732
  | SVM | 68.912 | 21.298 | 55.634
  | Logistic regression (pyramid) | 70.013 | 22.574 | 48.576
  | SVM (pyramid) | 70.712 | 20.144 | 52.214
  | DenseNet | 86.412 | 10.151 | 22.715
  | PECNN (proposed) | 95.175 | 3.944 | 7.159
6 | Logistic regression | 68.321 | 24.344 | 50.169
  | SVM | 71.179 | 19.741 | 51.709
  | Logistic regression (pyramid) | 69.263 | 23.989 | 47.748
  | SVM (pyramid) | 73.213 | 18.117 | 48.643
  | DenseNet | 93.612 | 3.887 | 12.762
  | PECNN (proposed) | 96.429 | 2.408 | 6.585
7 | Logistic regression | 76.663 | 12.823 | 64.713
  | SVM | 78.925 | 11.146 | 60.148
  | Logistic regression (pyramid) | 80.150 | 9.312 | 61.321
  | SVM (pyramid) | 79.438 | 10.582 | 59.840
  | DenseNet | 92.925 | 4.215 | 18.778
  | PECNN (proposed) | 96.562 | 2.647 | 6.758
8 | Logistic regression | 79.813 | 10.255 | 68.173
  | SVM | 81.283 | 9.239 | 64.504
  | Logistic regression (pyramid) | 81.788 | 8.037 | 67.371
  | SVM (pyramid) | 83.717 | 6.724 | 62.464
  | DenseNet | 94.200 | 4.146 | 13.489
  | PECNN (proposed) | 97.742 | 1.328 | 6.749

5 A Real Case Study

In order to evaluate the performance of the proposed PECNN model, we apply the proposed model to an SLM product in practice. The product design is shown in Fig. 4, and the size of the cubic block is 2 cm × 2 cm × 1 cm. Due to the limited budget, one product was fabricated on an EOS M 290 SLM machine with Inconel 718 metal powder. In total, there are 200 layers in this product. Intentional spherical voids are generated with radii from 0.16 mm to 1 mm. Specifically, the radius of each intentional spherical void is drawn from a uniform distribution. Moreover, the Cartesian coordinates of the center of each void are also generated from a uniform distribution within the cubic block to randomly assign the voids. The embedded voids represent the lack-of-fusion problem in SLM (i.e., a zone of material that is not scanned in a product), which is caused by multiple root causes such as improper laser intensity, powder bed quality, hatch distance, and scanning speed [9,12,59]. The experimental settings for the case study are shown in Table 4, based on engineering experience.

Fig. 4
The isometric view of the cubic product with random voids
Table 4

Experimental settings

Setting parameter name | Value
Laser power | 200 W
Scanning speed | 1100 mm/s
Hatch distance | 0.1 mm
Layer thickness | 50 μm

In order to collect layer-wise images during the fabrication process, an optical camera (i.e., a Canon T6i SLR camera) and a data acquisition system (an NI-GPIO system) were installed on the SLM machine. Since the optical camera is too big to point perpendicularly at the build area, an angle (around 45 deg) exists between the camera lens and the build area (as shown in Fig. 1(a)). Camera calibration [60] and image rotation [61] are performed to accurately obtain the top-view layer-wise image of the product after the laser scanning process with acceptable distortion. The resolution of the layer-wise image after this processing is 961 × 961 pixels. In order to identify whether the voids were successfully fabricated in the block, a CT scan is conducted for the product after the SLM process. According to the design information, the layer-wise image is registered with the corresponding CT image.

To evaluate the performance of the proposed PECNN model, logistic regression, SVM, and DenseNet models are used as the benchmark methods for void detection. Moreover, smooth-sparse image decomposition and tensor-on-tensor regression are employed as the benchmark methods for virtual CT prediction. A three-level image pyramid is selected based on the preliminary study to generate representations from the window. Based on the size of voids in the product, the window size is determined as 16 × 16 without overlap. The windows on the edge of the optical images contain a blank area to fill up the 16 × 16 window. Since there are limited void samples in the original dataset (i.e., fewer than one void window per 500 non-void windows), we employed a data augmentation method [62] to boost the number of void samples based on the real void samples collected from the experiment. Specifically, we randomly permute the factors in the image augmentation subroutine to generate the extra void samples [63] (a minimal sketch is given below). The parameters and the corresponding value ranges are the rotation angle factor (0–180 deg), scaling factor (0–1), width shift factor (0–0.3), height shift factor (0–0.3), and shear range factor (0–0.3). On the other hand, since the number of non-void samples is much larger than that of void samples, only a part of the non-void samples is used in model estimation to reduce the computational intensity. This also helps to balance the ratio between void and non-void samples in the model training. In total, we collected 15,000 non-void windows and 5000 void windows (with windows generated from data augmentation) in the training stage from the cubic product. Tenfold cross-validation is employed to evaluate the performance of the model. As in the simulation study, the model was trained on a computer with an Intel i7-8850H CPU and an NVIDIA Quadro P3200 GPU. The training time is 34 min, and the testing time is less than 1 min. The overall void detection accuracy, the type I error, the type II error, and the pixel-wise normalized root-mean-squared error (NRMSE) between the approximated virtual CT image (based on the original windows, instead of windows generated by data augmentation) and the actual CT image are used as performance measurements.
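The following is a minimal sketch of such an augmentation subroutine, assuming the Keras ImageDataGenerator API; the mapping of the listed "scaling factor (0–1)" to zoom_range, and the exact subroutine of Ref. [63], are assumptions on our part:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation with the factor ranges listed above (values are assumptions
# about how the listed factors map onto this particular API)
augmenter = ImageDataGenerator(
    rotation_range=180,       # rotation angle factor (0-180 deg)
    zoom_range=(0.0, 1.0),    # scaling factor (0-1), assumed to be zoom
    width_shift_range=0.3,    # width shift factor (0-0.3)
    height_shift_range=0.3,   # height shift factor (0-0.3)
    shear_range=0.3,          # shear range factor (0-0.3)
    fill_mode="nearest",
)

def augment_void_windows(void_windows, n_extra):
    """Generate n_extra synthetic void windows from real 16 x 16 void samples."""
    x = void_windows[..., np.newaxis].astype("float32")   # (N, 16, 16, 1)
    flow = augmenter.flow(x, batch_size=1, shuffle=True)
    return np.concatenate([next(flow) for _ in range(n_extra)], axis=0)
```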

The results of the void detection accuracy and the prediction accuracy for the virtual CT image are shown in Table 5. It can be concluded that the PECNN model yields the best void detection accuracy compared with the benchmark models. This is because the traditional machine learning methods only utilize summary statistics of the image as predictors, which cannot comprehensively reflect the information in the image texture. On the other hand, by adopting the deep learning framework to extract and learn informative features, the DenseNet and the proposed model have much higher void detection accuracy than the traditional machine learning methods. Moreover, by properly ensembling the feature-maps, the PECNN leads to better performance than the DenseNet.

Table 5

Results for the void detection accuracy

Model name | Overall accuracy (%) | Type I error (%) | Type II error (%)
Logistic regression (pyramid) | 92.0 | 5.5 | 18.0
SVM (pyramid) | 90.8 | 6.5 | 20.0
DenseNet | 97.6 | 1.0 | 8.0
PECNN (proposed) | 98.4 | 1.0 | 4.0

Moreover, the virtual CT prediction normalized root-mean-squared errors and an example of the predicted virtual CT images based on the PECNN model are shown in Table 6 and Fig. 5, respectively. It can be observed that the proposed PECNN model can accurately predict the offline CT image based on the layer-wise image and reflect the void area on the virtual CT image. On the other hand, the benchmark methods cannot effectively predict the virtual CT image. This is because the spatio-temporal relationship among feature-maps can be significantly disturbed by the scanning pattern and the intricate powder texture on the image [19]. Therefore, it is difficult to form informative tensors from the image to support the tensor regression method. Specifically, for the smooth-sparse decomposition, the smooth spline basis cannot model the intricate texture on the layer-wise image. Similarly, limited by the regression model structure, the tensor-on-tensor regression cannot effectively model the spatial relationship among pixels on the layer-wise image.

Fig. 5
Examples of layer-wise images, CT images, and predicted virtual CT image from PECNN
Table 6

Results for the virtual CT image prediction

Metric | Smooth-sparse decomposition | Tensor-on-tensor regression | PECNN (proposed)
Pixel-wise NRMSE | 43.57% | 21.28% | 12.62%

6 Conclusion

SLM is a design-driven manufacturing process that can efficiently realize personalized products in a timely manner. However, voids inside SLM products can significantly affect the quality and reliability of the product. To accurately detect these anomalies, offline X-ray CT is employed to generate CT images of products after fabrication. However, due to the limitation of its sensing capability, CT cannot be used for online void detection, which could potentially mitigate the flaw during the process. As an alternative, online optical layer-wise images of the product can be obtained. However, due to the intricate texture of the layer-wise image, statistical and machine learning methods might not accurately detect the void. Therefore, in this research, we proposed a new model called the PECNN. It can efficiently detect voids and further predict the corresponding CT images based on the layer-wise optical images. The proposed PECNN model provides a way to detect voids in AM products during the fabrication process and makes it possible to compensate for or fix the defective product in real time [64]. Moreover, the proposed pyramid ensemble method can also be extended to other domains, such as healthcare applications, which usually involve personalized cases and signal-based anomaly detection problems [65,66].

This research also leads to several future research directions. First, we will consider extending the proposed PECNN model to other AM processes, such as the FFF and binder jetting processes, which also suffer from void anomalies. Moreover, the process parameters can be considered as covariates in the PECNN model to identify the relationship between process parameters and voids. In addition, the proposed PECNN model can potentially be extended to other image-detectable defects in AM, such as an uneven powder surface, geometric deviation, etc. Next, a real-time anomaly monitoring system [67,68] for the SLM process can be developed based on the proposed model. Lastly, more data sources, such as the thermal distribution of the product, can be integrated into the PECNN model to better study the interaction between the physical process and the anomaly during the process [69,70].

Acknowledgment

This work was supported in part by the National Science Foundation (Grant No. CMMI-1436592). The authors would like to thank Mr. Jeffrey Burdick from CCAM (Commonwealth Center for Advanced Manufacturing) for his efforts on data collection and sensor installation of this research.

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The authors attest that all data for this study are included in the paper. Data provided by a third party are listed in the Acknowledgment.

References

1. Wang, L., Jin, R., and Henkel, D., 2018, "Data Fusion for In Situ Layer-Wise Modeling and Feedforward Control of Selective Laser Melting Processes," Proceedings of the IISE Annual Conference 2018, Orlando, FL, pp. 1084–1089.
2. Frazier, W. E., 2014, "Metal Additive Manufacturing: A Review," J. Mater. Eng. Perform., 23(6), pp. 1917–1928.
3. Gibson, I., Rosen, D. W., and Stucker, B., 2014, Additive Manufacturing Technologies, Springer, New York.
4. Yap, C. Y., Chua, C. K., Dong, Z. L., Liu, Z. H., Zhang, D. Q., Loh, L. E., and Sing, S. L., 2015, "Review of Selective Laser Melting: Materials and Applications," Appl. Phys. Rev., 2(4), p. 041101.
5. Liverani, E., Toschi, S., Ceschini, L., and Fortunato, A., 2017, "Effect of Selective Laser Melting (SLM) Process Parameters on Microstructure and Mechanical Properties of 316L Austenitic Stainless Steel," J. Mater. Process. Technol., 249(1), pp. 255–263.
6. Mishurova, T., Artzt, K., Haubrich, J., Requena, G., and Bruno, G., 2019, "New Aspects About the Search for the Most Relevant Parameters Optimizing SLM Materials," Addit. Manuf., 25(1), pp. 325–334.
7. Stojanov, D., Wu, X., Falzon, B. G., and Yan, W., 2017, "Axisymmetric Structural Optimization Design and Void Control for Selective Laser Melting," Struct. Multidiscipl. Optim., 56(5), pp. 1027–1043.
8. Pantělejev, L., Koutný, D., Paloušek, D., and Kaiser, J., 2017, Mechanical and Microstructural Properties of 2618 Al-Alloy Processed by SLM Remelting Strategy, Vol. 891, Trans Tech Publ, Stafa-Zurich, Switzerland, pp. 343–349.
9. Imani, F., Chen, R., Diewald, E., Reutzel, E., and Yang, H., 2019, "Deep Learning of Variant Geometry in Layerwise Imaging Profiles for Additive Manufacturing Quality Control," ASME J. Manuf. Sci. Eng., 141(11), p. 111001.
10. Jiang, Q., and Yan, X., 2019, "Multimode Process Monitoring Using Variational Bayesian Inference and Canonical Correlation Analysis," IEEE Trans. Autom. Sci. Eng., 16(4), pp. 1814–1824.
11. Kehoe, B., Patil, S., Abbeel, P., and Goldberg, K., 2015, "A Survey of Research on Cloud Robotics and Automation," IEEE Trans. Autom. Sci. Eng., 12(2), pp. 398–409.
12. Tapia, G., and Elwany, A., 2014, "A Review on Process Monitoring and Control in Metal-Based Additive Manufacturing," ASME J. Manuf. Sci. Eng., 136(6), p. 060801.
13. Luan, H., Post, B. K., and Huang, Q., 2017, "Statistical Process Control of In-Plane Shape Deformation for Additive Manufacturing," 2017 13th IEEE Conference on Automation Science and Engineering (CASE), Xi'an, China, Aug. 20–23.
14. Chen, X., Wang, L., Wang, C., and Jin, R., 2018, "Predictive Offloading in Mobile-Fog-Cloud Enabled Cyber-Manufacturing Systems," 2018 IEEE Industrial Cyber-Physical Systems (ICPS), Saint Petersburg, Russia, May 15–18.
15. Hosmer Jr., D. W., Lemeshow, S., and Sturdivant, R. X., 2013, Applied Logistic Regression, Vol. 398, John Wiley & Sons, Hoboken, NJ.
16. Suykens, J. A., and Vandewalle, J., 1999, "Least Squares Support Vector Machine Classifiers," Neural Process. Lett., 9(3), pp. 293–300.
17. Tibshirani, R., 1996, "Regression Shrinkage and Selection Via the Lasso," J. R. Stat. Soc. Ser. B (Methodol.), 58(1), pp. 267–288.
18. Gramacy, R. B., 2020, Surrogates: Gaussian Process Modeling, Design, and Optimization for the Applied Sciences, CRC Press, Boca Raton, FL.
19. Zhou, H., Li, L., and Zhu, H., 2013, "Tensor Regression With Applications in Neuroimaging Data Analysis," J. Am. Stat. Assoc., 108(502), pp. 540–552.
20. Liu, J., Liu, C., Bai, Y., Rao, P., Williams, C. B., and Kong, Z., 2019, "Layer-Wise Spatial Modeling of Porosity in Additive Manufacturing," IISE Trans., 51(2), pp. 109–123.
21. Adelson, E. H., and Burt, P. J., 1980, "Image Data Compression With the Laplacian Pyramid," Citeseer.
22. Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K. Q., 2017, "Densely Connected Convolutional Networks," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, July 21–26.
23. Sun, H., Rao, P. K., Kong, Z. J., Deng, X., and Jin, R., 2017, "Functional Quantitative and Qualitative Models for Quality Modeling in a Fused Deposition Modeling Process," IEEE Trans. Autom. Sci. Eng., 15(1), pp. 393–403.
24. Song, L., Huang, W., Han, X., and Mazumder, J., 2016, "Real-Time Composition Monitoring Using Support Vector Regression of Laser-Induced Plasma for Laser Additive Manufacturing," IEEE Trans. Ind. Electron., 64(1), pp. 633–642.
25. Sabbaghi, A., and Huang, Q., 2018, "Model Transfer Across Additive Manufacturing Processes Via Mean Effect Equivalence of Lurking Variables," Ann. Appl. Stat., 12(4), pp. 2409–2429.
26. Luan, H., Grasso, M., Colosimo, B. M., and Huang, Q., 2019, "Prescriptive Data-Analytical Modeling of Laser Powder Bed Fusion Processes for Accuracy Improvement," ASME J. Manuf. Sci. Eng., 141(1), p. 011008.
27. Li, Y., Zhou, K., Tan, P., Tor, S. B., Chua, C. K., and Leong, K. F., 2018, "Modeling Temperature and Residual Stress Fields in Selective Laser Melting," Int. J. Mech. Sci., 136(1), pp. 24–35.
28. Bhandari, S., and Lopez-Anido, R., 2018, "Finite Element Analysis of Thermoplastic Polymer Extrusion 3D Printed Material for Mechanical Property Prediction," Addit. Manuf., 22(1), pp. 187–196.
29. Chen, Q., Liang, X., Hayduke, D., Liu, J., Cheng, L., Oskin, J., Whitmore, R., and To, A. C., 2019, "An Inherent Strain Based Multiscale Modeling Framework for Simulating Part-Scale Residual Deformation for Direct Metal Laser Sintering," Addit. Manuf., 28(1), pp. 406–418.
30. Li, J., Jin, R., and Hang, Z. Y., 2018, "Integration of Physically-Based and Data-Driven Approaches for Thermal Field Prediction in Additive Manufacturing," Mater. Des., 139(5), pp. 473–485.
31. Rao, P. K., Liu, J. P., Roberson, D., Kong, Z. J., and Williams, C., 2015, "Online Real-Time Quality Monitoring in Additive Manufacturing Processes Using Heterogeneous Sensors," ASME J. Manuf. Sci. Eng., 137(6), p. 061007.
32. Khanzadeh, M., Tian, W., Yadollahi, A., Doude, H. R., Tschopp, M. A., and Bian, L., 2018, "Dual Process Monitoring of Metal-Based Additive Manufacturing Using Tensor Decomposition of Thermal Image Streams," Addit. Manuf., 23(1), pp. 443–456.
33. Içten, E., Nagy, Z. K., and Reklaitis, G. V., 2015, "Process Control of a Dropwise Additive Manufacturing System for Pharmaceuticals Using Polynomial Chaos Expansion Based Surrogate Model," Comput. Chem. Eng., 83(5), pp. 221–231.
34. Grasso, M., Demir, A., Previtali, B., and Colosimo, B., 2018, "In Situ Monitoring of Selective Laser Melting of Zinc Powder Via Infrared Imaging of the Process Plume," Rob. Comput. Int. Manuf., 49(1), pp. 229–239.
35. Francis, J., Sabbaghi, A., Ravi Shankar, M., Ghasri-Khouzani, M., and Bian, L., 2020, "Efficient Distortion Prediction of Additively Manufactured Parts Using Bayesian Model Transfer Between Material Systems," ASME J. Manuf. Sci. Eng., 142(5), p. 051001.
36. Cheng, L., Tsung, F., and Wang, A., 2017, "A Statistical Transfer Learning Perspective for Modeling Shape Deviations in Additive Manufacturing," IEEE Rob. Autom. Lett., 2(4), pp. 1988–1993.
37. Cheng, L., Wang, K., and Tsung, F., 2020, "A Hybrid Transfer Learning Framework for In-Plane Freeform Shape Accuracy Control in Additive Manufacturing," IISE Trans., 53(3), pp. 1–15.
38. Kontar, R., Raskutti, G., and Zhou, S., 2020, "Minimizing Negative Transfer of Knowledge in Multivariate Gaussian Processes: A Scalable and Regularized Approach," IEEE Transactions on Pattern Analysis and Machine Intelligence.
39. Seifi, S. H., Tian, W., Doude, H., Tschopp, M. A., and Bian, L., 2019, "Layer-Wise Modeling and Anomaly Detection for Laser-Based Additive Manufacturing," ASME J. Manuf. Sci. Eng., 141(8), p. 081013.
40. Ye, Z., Liu, C., Tian, W., and Kan, C., 2020, "A Deep Learning Approach for the Identification of Small Process Shifts in Additive Manufacturing Using 3D Point Clouds," Proc. Manuf., 48(1), pp. 770–775.
41. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R., 2014, "Dropout: A Simple Way to Prevent Neural Networks From Overfitting," J. Mach. Learn. Res., 15(1), pp. 1929–1958.
42. Burt, P., and Adelson, E., 1983, "The Laplacian Pyramid as a Compact Image Code," IEEE Trans. Commun., 31(4), pp. 532–540.
43. Toet, A., 1989, "Image Fusion by a Ratio of Low-Pass Pyramid," Pattern Recognit. Lett., 9(4), pp. 245–253.
44. Krizhevsky, A., Sutskever, I., and Hinton, G. E., 2012, "Imagenet Classification With Deep Convolutional Neural Networks," Advances in Neural Information Processing Systems, Stateline, NV, Dec. 3–8, pp. 1097–1105.
45. Ioffe, S., and Szegedy, C., 2015, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift," International Conference on Machine Learning, Lille, France, July 6–11.
46. Glorot, X., Bordes, A., and Bengio, Y., 2011, "Deep Sparse Rectifier Neural Networks," Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, Apr. 11–13, pp. 315–323.
47. Hariharan, B., Arbeláez, P., Girshick, R., and Malik, J., 2015, "Hypercolumns for Object Segmentation and Fine-Grained Localization," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, June 7–12, pp. 447–456.
48. Zhang, Z., and Sabuncu, M., 2018, "Generalized Cross Entropy Loss for Training Deep Neural Networks With Noisy Labels," Advances in Neural Information Processing Systems, Montréal, Canada, Dec. 2–8, pp. 8778–8788.
49. Goodman, N. R., 1963, "Statistical Analysis Based on a Certain Multivariate Complex Gaussian Distribution (An Introduction)," Ann. Math. Stat., 34(1), pp. 152–177.
50. Efros, A. A., and Freeman, W. T., 2001, "Image Quilting for Texture Synthesis and Transfer," Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, Aug. 12–17, pp. 341–346.
51. Tao, W., and Leu, M. C., 2016
, “
Design of Lattice Structure for Additive Manufacturing
,”
2016 International Symposium on Flexible Automation (ISFA)
,
Cleveland, OH
,
Aug. 1–3
, IEEE, pp.
325
332
.
52.
Al-Saedi
,
D. S.
,
Masood
,
S.
,
Faizan-Ur-Rab
,
M.
,
Alomarah
,
A.
, and
Ponnusamy
,
P.
,
2018
, “
Mechanical Properties and Energy Absorption Capability of Functionally Graded f2bcc Lattice Fabricated by SLM
,”
Mater. Des.
,
144
(
15
), pp.
32
44
.
53.
Heermann
,
D. W.
,
1990
,
Computer-Simulation Methods
,
Springer
,
New York
, pp.
8
12
.
54.
Hastie
,
T.
,
Tibshirani
,
R.
, and
Friedman
,
J.
,
2009
,
The Elements of Statistical Learning: Data Mining, Inference, and Prediction
,
Springer Science & Business Media
,
Secaucus, NJ
.
55.
Menard
,
S.
,
2002
,
Applied Logistic Regression Analysis
, Vol.
106
,
Sage
,
Newbury Park, CA
.
56.
Yan
,
H.
,
Paynabar
,
K.
, and
Shi
,
J.
,
2017
, “
Anomaly Detection in Images With Smooth Background Via Smooth-Sparse Decomposition
,”
Technometrics
,
59
(
1
), pp.
102
114
.
57.
Gahrooei
,
M. R.
,
Yan
,
H.
,
Paynabar
,
K.
, and
Shi
,
J.
,
2020
, “
Multiple Tensor-on-Tensor Regression: An Approach for Modeling Processes With Heterogeneous Sources of Data
,”
Technometrics
,
63
(
2
), pp.
1
23
.
58.
Simonyan
,
K.
, and
Zisserman
,
A.
,
2015
, “
Very Deep Convolutional Networks for Large-Scale Image Recognition
,”
3rd International Conference on Learning Representations
,
San Diego, CA
,
May 7–9
.
59.
Imani
,
F.
,
Gaikwad
,
A.
,
Montazeri
,
M.
,
Rao
,
P.
,
Yang
,
H.
, and
Reutzel
,
E.
,
2018
, “
Process Mapping and In-Process Monitoring of Porosity in Laser Powder Bed Fusion Using Layerwise Optical Imaging
,”
ASME J. Manuf. Sci. Eng.
,
140
(
10
), p.
101009
.
60.
Zhang
,
Z.
,
2000
, “
A Flexible New Technique for Camera Calibration
,”
IEEE Trans. Pattern. Anal. Mach. Intell.
,
22
(
11
), pp.
1330
1334
.
61.
Cox
,
R. W.
, and
Jesmanowicz
,
A.
,
1999
, “
Real-Time 3d Image Registration for Functional MRI
,”
Magn. Reson. Med.
,
42
(
6
), pp.
1014
1018
.
62.
Shorten
,
C.
, and
Khoshgoftaar
,
T. M.
,
2019
, “
A Survey on Image Data Augmentation for Deep Learning
,”
J. Big Data
,
6
(
1
), p.
60
.
63.
Gulli
,
A.
, and
Pal
,
S.
,
2017
,
Deep learning with Keras
,
Packt Publishing Ltd
,
Birmingham, UK
.
64.
Afazov
,
S.
,
Denmark
,
W. A.
,
Toralles
,
B. L.
,
Holloway
,
A.
, and
Yaghi
,
A.
,
2017
, “
Distortion Prediction and Compensation in Selective Laser Melting
,”
Addit. Manuf.
,
17
(
1
), pp.
15
22
.
65.
Bandettini
,
P. A.
,
Jesmanowicz
,
A.
,
Wong
,
E. C.
, and
Hyde
,
J. S.
,
1993
, “
Processing Strategies for Time-Course Data Sets in Functional MRI of the Human Brain
,”
Magn. Reson. Med.
,
30
(
2
), pp.
161
173
.
66.
Rampil
,
I. J.
,
1998
, “
A Primer for Eeg Signal Processing in Anesthesia
,”
Anesthesiol.: J. Am. Soc. Anesthesiolog.
,
89
(
4
), pp.
980
1002
.
67.
MacGregor
,
J. F.
, and
Kourti
,
T.
,
1995
, “
Statistical Process Control of Multivariate Processes
,”
Control Eng. Pract.
,
3
(
3
), pp.
403
414
.
68.
Grasso
,
M.
, and
Colosimo
,
B. M.
,
2017
, “
Process Defects and In Situ Monitoring Methods in Metal Powder Bed Fusion: A Review
,”
Meas. Sci. Technol.
,
28
(
4
), p.
044005
.
69.
Bartel
,
T.
,
Guschke
,
I.
, and
Menzel
,
A.
,
2019
, “
Towards the Simulation of Selective Laser Melting Processes Via Phase Transformation Models
,”
Comput. Math. Appl.
,
78
(
7
), pp.
2267
2281
.
70.
Wang
,
X.
, and
Chou
,
K.
,
2019
, “
Microstructure Simulations of Inconel 718 During Selective Laser Melting Using a Phase Field Model
,”
Int. J. Adv. Manuf. Technol.
,
100
(
9–12
), pp.
2147
2162
.