Abstract

We present a deep learning model for internal forced convection heat transfer problems. Conditional generative adversarial networks (cGAN) are trained to predict the solution based on a graphical input describing the fluid channel geometry and initial flow conditions. Without iteratively solving the physical governing equations, a trained cGAN model rapidly approximates the flow temperature, Nusselt number (Nu), and friction factor (f) of a flow in a heated channel over Reynolds numbers (Re) ranging from 100 to 27,750. For effective training, we optimize the dataset size, training epoch, and a hyperparameter λ. The cGAN model exhibited an accuracy of up to 97.6% when predicting the local distributions of Nu and f. We also show that the trained cGAN model can predict unseen fluid channel geometries, such as narrowed, widened, and rotated channels, if the training dataset is properly augmented. A simple data augmentation technique improved the model accuracy by up to 70%. This work demonstrates the potential of the deep learning approach to enable cost-effective predictions of thermofluidic processes.

1 Introduction

Deep learning is a class of machine learning methods that automatically discovers features from raw data for pattern analysis or classification [1,2]. Unlike conventional machine learning techniques, deep learning algorithms allow us to readily discover features from high-dimensional data, e.g., images, and have been impacting various areas that deal with large amounts of data, such as image recognition [3], speech recognition [4], science [5–7], business, and government [8,9]. Recently, interest in deep learning has been growing in the fields of fluid mechanics and heat transfer, where nonlinear patterns of data are frequently encountered due to complex physics. Efforts are underway to develop deep learning techniques that can infer the patterns of thermofluidic processes from provided conditional information, e.g., system geometry, boundary conditions, and initial conditions. Previous studies show that deep learning techniques are able to predict the flow patterns [10–13] or temperature distributions [12,14–16] of thermofluidic processes if the physical conditions are prescribed. When predicting the solutions of physics problems, deep learning techniques approximate the output without iteratively solving the governing physical equations; thus, they demand lower computational costs than conventional numerical simulation techniques, e.g., the finite difference, finite volume, and finite element methods. Although the conventional numerical approaches can offer accurate solutions to intricate problems, e.g., transient, two-dimensional (2D), three-dimensional (3D), or conjugate problems, the computational costs are often tremendous, particularly when high-resolution, large-scale, or long-period solutions are required. Thus, researchers have been investigating deep learning techniques as an alternative modeling approach for thermofluidic processes.

Several recent publications explored how to cost-effectively infer thermofluidic processes using deep learning techniques [10–12,14,15,17]. Some studies employed conditional generative adversarial networks (cGAN) [12], a fully convolutional encoder-decoder network [14], or an auto-encoder [15] to generate the solutions of steady-state 2D heat conduction problems. When the temperature along the boundary of the model domain was given in a 2D image format, the deep learning model inferred the corresponding temperature distribution within the domain, similar to conventional numerical techniques, without solving the heat diffusion equation. Another demonstration used a cGAN model to predict the cooling effectiveness distributions of an effusion cooling technique while varying the design of a porous plate [17]. Deep learning techniques based on generative adversarial networks (GAN) [11], cGAN [12], or convolutional neural networks [11] have also succeeded in approximating the solutions of fluid mechanics problems. When the input data were three-channel images describing the 2D velocity vectors and pressure fields along the 2D domain boundary, a cGAN model was able to predict the corresponding flow and pressure fields within the domain [12]. For unsteady flow over a cylinder, GAN and convolutional neural network models predicted the flow fields around the cylinder over a short period of time if the flow fields at past moments were provided [11]. More recently, a physics-informed deep neural network was developed that is capable of inferring the flow and pressure fields from several snapshots of solute concentration fields [10]. While these early works demonstrate the potential of deep learning for heuristic modeling, many questions still remain regarding how to leverage that potential to solve complex thermofluidic problems. Some of these questions concern the size of training data required for different problems, algorithm settings that avoid underfitting and overfitting, and methods of generating a sufficient amount of training data.

In this article, we present a heuristic model for forced convection heat transfer problems based on cGAN that rapidly infers the convection properties and temperature fields from boundary conditions. The cGAN model learns forced convection heat transfer from provided numerical simulation solutions and predicts the solutions for unseen boundary conditions. To improve the accuracy of the cGAN model, we investigate the influence of important factors such as the size of the training set, the number of training epochs, and hyperparameters, e.g., the trade-off parameter λ and the learning rate.

2 Convection Problem

We consider a simple internal forced convection problem occurring in a 2D straight channel where the channel width (w) is 66.6 mm and the length (l) is 153 mm, as shown in Fig. 1(a). Water at 20 °C enters the inlet with a uniform velocity distribution, while the wall temperature is held constant at 60 °C. At the inlet, the Reynolds number (Re) varies from 100 to 27,750.
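For reference, the sketch below backs out the uniform inlet velocity corresponding to a target Re. The characteristic length (taken here as the hydraulic diameter of a parallel-plate channel, D_h = 2w) and the water properties at 20 °C are assumptions made for illustration; the paper does not state which values were used.

```python
# Minimal sketch, assuming Re = rho * u * D_h / mu with D_h = 2*w and
# water properties at 20 degC (both assumptions, not stated in the paper).
RHO = 998.2      # kg/m^3, water at 20 degC (assumed)
MU = 1.002e-3    # Pa*s, water at 20 degC (assumed)
W = 0.0666       # m, channel width from Sec. 2

def inlet_velocity(re, d_h=2 * W, rho=RHO, mu=MU):
    """Uniform inlet velocity u such that Re = rho * u * d_h / mu."""
    return re * mu / (rho * d_h)

if __name__ == "__main__":
    for re in (100, 10_875, 27_750):
        print(f"Re = {re:>6}: u = {inlet_velocity(re):.4f} m/s")
```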

Fig. 1: (a) Structures of input and output images and (b) conditional generative adversarial networks architecture

3 Deep Learning Methodology

3.1 Conditional Generative Adversarial Networks.

We train a modern deep learning model, cGAN, to generate the property fields of interest from graphical inputs. Owing to its ability to generate data that share the characteristics of the training data, cGAN has become one of the most popular deep learning methods. Figure 1(a) illustrates the structures of the input and output images employed for our cGAN convection model. Conditional inputs are two-dimensional (2D), 256 × 256-resolution, three-channel images representing the boundary and initial conditions of the convection problem. The pixel values in the first channel represent Re. In the wall region (top and bottom areas without a label), Re is set to an arbitrary constant, i.e., 0, while in the fluid region (middle area with a label), Re is calculated from the flow properties. The pixel values in the second channel correspond to the Prandtl number (Pr) distribution. In the wall region, Pr is set to an arbitrary constant of 50,000, while in the fluid region, Pr is that of liquid water, 7. The pixel values in the third channel are the temperature distribution at the initial moment. The trained cGAN model approximates an output image by statistically learning the group of possible outputs associated with the input images. The outputs are 2D, 256 × 256-resolution, three-channel images. The pixel values along the fluid domain boundary (labeled interface areas) are the friction factor (f) in the first channel and the Nusselt number (Nu) in the second channel. The pixel values of the third channel represent the temperature distribution at steady-state. Figure 1(b) shows the cGAN architecture. The cGAN algorithm uses two neural networks: a generator network (G) and a discriminator network (D). The generator creates an output image (Y) when a random noise vector (z) and a conditional input (c) are provided.
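As an illustration of this input encoding, the following sketch assembles one conditional input array. Only the channel layout and the placeholder constants follow the description above; the pixel-row extent of the wall strips is an assumption.

```python
import numpy as np

# Minimal sketch of the 256 x 256, three-channel conditional input described
# in the text: channel 0 = Re, channel 1 = Pr, channel 2 = initial temperature.
H = W_PX = 256
WALL_ROWS = 33   # assumed thickness of each wall strip, in pixels

def build_input(re, t_init_c=20.0, t_wall_c=60.0):
    x = np.zeros((H, W_PX, 3), dtype=np.float32)
    fluid = slice(WALL_ROWS, H - WALL_ROWS)   # middle (fluid) rows

    x[..., 0] = 0.0          # channel 0: Re, arbitrary constant in the walls
    x[fluid, :, 0] = re      # Re of the flow in the fluid region

    x[..., 1] = 50_000.0     # channel 1: Pr, arbitrary constant in the walls
    x[fluid, :, 1] = 7.0     # Pr of liquid water in the fluid region

    x[..., 2] = t_wall_c     # channel 2: initial temperature field
    x[fluid, :, 2] = t_init_c
    return x
```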

The discriminator learns to distinguish the ground truth images from the generator outputs. The ground truth images contain numerically calculated Nu, f, and the temperature. The discriminator receives batches of both ground truth images (X) and generated images (Y), and classifies the images into real and fake classes. During the training, the discriminator learns to maximize the probability that it correctly classifies the images, while the generator tries to minimize the probability by generating realistic samples. Thus, the objective of a cGAN is formulated as
$$L_{cGAN}(G,D)=\mathbb{E}_{c,X}\left[\log D(c,X)\right]+\mathbb{E}_{c,Y}\left[\log\left(1-D(c,Y)\right)\right],\quad Y=G(c,z) \tag{1}$$

where D(c, X or Y) is the probability that the discriminator classifies X or Y as ground truth for a given c, and E_{c,X or Y} is the expected value over the entire group of X or Y. The generator attempts to minimize L_cGAN while the adversarial discriminator tries to maximize it. The cGAN algorithm also considers a traditional loss function such as the L1 distance, which estimates the error of Y against X,

$$L_{L1}(G)=\mathbb{E}_{c,X,z}\left[\left\|X-G(c,z)\right\|_{1}\right] \tag{2}$$

Thus, the final objective becomes

$$G^{*}=\arg\min_{G}\max_{D}\;L_{cGAN}(G,D)+\lambda L_{L1}(G) \tag{3}$$

where the hyperparameter λ is a weight for L_{L1}(G).
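A minimal PyTorch-style sketch of this composite objective is shown below. The generator G and a conditional discriminator called as D(c, ·) with a sigmoid output are assumptions; only the loss bookkeeping of Eqs. (1)–(3) is illustrated.

```python
import torch
import torch.nn as nn

# Minimal sketch of the composite generator loss of Eq. (3): the adversarial
# term of Eq. (1) plus the lambda-weighted L1 term of Eq. (2).
bce = nn.BCELoss()   # assumes the discriminator ends with a sigmoid
l1 = nn.L1Loss()

def generator_loss(D, c, y_fake, x_real, lam=1e5):
    """c: conditional input, y_fake = G(c, z), x_real: ground-truth image."""
    d_fake = D(c, y_fake)                          # D's "real" probability
    adv = bce(d_fake, torch.ones_like(d_fake))     # try to fool the discriminator
    return adv + lam * l1(y_fake, x_real)          # adversarial + lambda * L1

def discriminator_loss(D, c, y_fake, x_real):
    d_real = D(c, x_real)
    d_fake = D(c, y_fake.detach())
    return bce(d_real, torch.ones_like(d_real)) + \
           bce(d_fake, torch.zeros_like(d_fake))
```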

To train the neural networks, a finite volume model (FVM) was used to prepare the dataset. Commercial software, ANSYS 19.0, was used to develop the FVM, which simulated the steady-state flow and heat transfer of the internal forced convection problem. A dataset consisting of N pairs of conditional inputs and outputs was prepared by changing the Re of the flow. Because Re was varied linearly from 100 to 27,750, the dataset contained five times more samples in the transition and turbulent flow regimes than in the laminar flow regime. The dataset was then split into training and testing sets at a ratio of 9:1.
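The sketch below illustrates this indexing and 9:1 split for N = 180. The random permutation and seed are assumptions; the paper does not describe how the split was drawn.

```python
import numpy as np

# Minimal sketch: linearly varied Re values split 9:1 into training/testing.
# The FVM solutions themselves are assumed to be stored elsewhere, keyed by Re.
def split_dataset(n=180, re_min=100, re_max=27_750, seed=0):
    re_values = np.linspace(re_min, re_max, n)   # linearly varied Re
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(0.9 * n)                       # 9:1 split
    return re_values[idx[:n_train]], re_values[idx[n_train:]]

train_re, test_re = split_dataset()
print(len(train_re), len(test_re))   # 162 training pairs, 18 testing pairs
```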

3.2 Optimization.

To optimize the networks, we studied the effects of the dataset size N, the training epoch, and the hyperparameter λ. Generally, deep neural networks require N to be greater than 1000. However, if generating such a large dataset is not practical, it is necessary to determine an appropriate N that ensures a sufficient accuracy level. Figure 2 compares the temperature maps inferred by the cGAN with a ground truth image calculated by the FVM for Re = 300. The input and output pair for Re = 300 was not provided to the cGAN during training, so Fig. 2 shows the test result for an unseen input. When N = 60, the cGAN did not correctly approximate the wall temperature, exhibiting dark spots near the channel exit and an overly thin thermal boundary layer. However, as N increased to 180, the cGAN accurately reproduced the thermal boundary layers as well as the temperature in the other regions. To quantitatively examine the effect of N, we evaluated the root-mean-square error (RMSE), i.e., the standard deviation of the prediction error of the cGAN model against the ground truth, and the maximum absolute error (MAX), i.e., the maximum absolute temperature difference between the ground truth and the cGAN output. The RMSE and MAX were calculated in the thermal boundary layer region, i.e., the fluid domain spanning 33 vertical pixels from both the top and bottom walls, since the difference between the ground truth and the cGAN output was most pronounced in this region due to the large temperature gradient. The RMSE and MAX were 3.71 °C and 22.3 °C, respectively, with N = 60, but they dropped sharply to 0.31 °C and 2.74 °C with N = 180.
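A minimal sketch of these two metrics, restricted to the boundary-layer strips, is given below; the pixel-row indices of the wall and boundary-layer regions are assumptions.

```python
import numpy as np

# Minimal sketch of RMSE and MAX evaluated only in the thermal-boundary-layer
# strips (33 fluid-pixel rows adjacent to the top and bottom walls).
# t_true and t_pred are 256 x 256 temperature maps in degC.
def boundary_layer_errors(t_true, t_pred, wall_rows=33, bl_rows=33):
    err = np.asarray(t_pred, dtype=float) - np.asarray(t_true, dtype=float)
    top = err[wall_rows:wall_rows + bl_rows]       # BL below the top wall
    bot = err[-wall_rows - bl_rows:-wall_rows]     # BL above the bottom wall
    bl_err = np.concatenate([top.ravel(), bot.ravel()])
    rmse = float(np.sqrt(np.mean(bl_err ** 2)))
    max_abs = float(np.max(np.abs(bl_err)))
    return rmse, max_abs
```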

Fig. 2: Temperature maps predicted by (a) FVM and (b–d) cGAN. The dataset size N was selected as (b) 60, (c) 120, and (d) 180.

Figures 3(a) and 3(b) show the local Nu generated by the cGAN along with the ground truth for Re = 300 and 23,500. The input and output pairs for Re = 300 and 23,500 were not provided to the cGAN during training, so Fig. 3 presents the test results for unseen inputs. When N = 60 and Re = 300, the cGAN predicted the Nu distribution with an accuracy of 32.4%. Here, we define the accuracy as one minus the relative error averaged over all channel locations, i.e., accuracy = Σ(1 − |Nu_X − Nu_Y|/Nu_X)/n, where Nu_X is the true Nu obtained from the FVM, Nu_Y is the Nu inferred by the cGAN model, and n is the total number of nodes along the channel. As N increased to 180, the accuracy improved to 95.9% for Re = 300. In the turbulent regime (Re > 10,000), the cGAN exhibited higher accuracies, probably because there were five times more training data than for the laminar flow regime. When N = 60 and Re = 23,500, the accuracy was 97.2%; when N = 180 and Re = 23,500, the accuracy was 97.6%. Figures 3(c) and 3(d) present f predicted by the cGAN for Re = 300 and 23,500. As with the Nu prediction, the cGAN was more accurate in the turbulent regime than in the laminar regime.
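The same accuracy metric in code form (a minimal sketch; the profile arrays are placeholders for the FVM and cGAN Nu or f profiles):

```python
import numpy as np

# accuracy = (1/n) * sum(1 - |Nu_X - Nu_Y| / Nu_X) over the n channel nodes.
# nu_true and nu_pred are 1D profiles of local Nu (or f) along the channel.
def accuracy(nu_true, nu_pred):
    nu_true = np.asarray(nu_true, dtype=float)
    nu_pred = np.asarray(nu_pred, dtype=float)
    rel_err = np.abs(nu_true - nu_pred) / nu_true
    return float(np.mean(1.0 - rel_err))   # e.g., 0.976 corresponds to 97.6%
```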

Fig. 3: Local Nusselt number and friction factor predicted by cGAN when the dataset size N was varied from 60 to 180: (a) Nu at Re = 300, (b) Nu at Re = 23,500, (c) f at Re = 300, and (d) f at Re = 23,500

The number of training epochs is increased until the loss functions and errors become sufficiently small. With 500 epochs, MAX was 3.8–13.9 °C and the training duration was 1.5 h. With 2000 epochs, MAX reduced to 3.3–6.3 °C, but the training duration increased to 9.6 h. To balance the error and the training duration, we selected 1000 epochs for the subsequent cGAN trainings, resulting in a training duration of 3.3 h.

The hyperparameter λ balances the mismatch in the orders of magnitude of L_cGAN and L_L1. Figures 4(a) and 4(b) show L_cGAN and L_L1 as functions of epoch for λ = 10⁵. In our trainings, L_cGAN was of the order of 10 while L_L1 was of the order of 10⁻². Thus, when λ was varied from 10⁴ to 5 × 10⁵, the total generator loss G* was adjusted to the order of 10²–10³ (Fig. 4(c)).

Fig. 4: Losses as a function of epoch: (a) generator loss, (b) L1 distance loss, and (c) total generator loss

To understand the influence of the dataset size N and the hyperparameter λ, MAX and the accuracies of Nu and f were evaluated as functions of both parameters. Tables 1 and 2 list the cGAN model accuracies for laminar flow data (Re = 300, Table 1) and turbulent flow data (Re = 23,500, Table 2). Overall, the cGAN model exhibited greater accuracy for the turbulent flow than for the laminar flow. In general, a larger N improves the model accuracy. However, we did not observe a simple relation between λ, N, and the accuracies; it appears that λ should be tuned for a specific N. Considering MAX and the accuracies of Nu and f together, we selected N = 180 and λ = 10⁵ to minimize MAX and maximize the accuracies in both flow regimes.

Table 1: The MAX (°C) of temperature maps and accuracies of Nu and f at Re = 300 as a function of N and λ

                    λ = 10⁴    5 × 10⁴    10⁵      5 × 10⁵
  N = 60    MAX     24.2       22.55      22.34    17.8
            Nu      0.67       0.4        0.32     0.93
            f       0.15       0.6        0.4      0.55
  N = 120   MAX     12.14      11.04      10.73    7.67
            Nu      0.87       0.71       0.9      0.74
            f       0.81       0.6        0.6      0.74
  N = 180   MAX     13.71      7.1        2.74     10.99
            Nu      0.92       0.84       0.96     0.79
            f       0.66       0.62       0.93     0.63
Table 2: The MAX (°C) of temperature maps and accuracies of Nu and f at Re = 23,500 as a function of N and λ

                    λ = 10⁴    5 × 10⁴    10⁵      5 × 10⁵
  N = 60    MAX     3.24       2.08       2.45     1.5
            Nu      0.99       0.99       0.97     0.99
            f       0.95       0.95       0.94     0.96
  N = 120   MAX     1.58       1.75       0.81     1.44
            Nu      0.99       0.99       0.96     0.96
            f       0.88       0.98       0.97     0.84
  N = 180   MAX     16.65      1.14       1.4      1.27
            Nu      0.99       0.99       0.98     0.93
            f       0.95       0.97       0.99     0.91

3.3 Test and Cross-Validation.

The cGAN model trained with the optimally selected parameters is tested and validated with data that were unseen during training. Figures 5(a) and 5(b) compare the temperature maps for a developing laminar flow at Re = 300 obtained by the FVM (denoted as ground truth) and by the cGAN. Despite a large temperature variation across the thermal boundary layer region, the RMSE and MAX of the cGAN prediction are merely 0.36 °C and 2.74 °C, respectively. Figures 5(c) and 5(d) depict the ground truth image and the cGAN prediction for a transition flow at Re = 10,875. Owing to the flow mixing in the transition flow, the temperature is much more uniform than in the laminar flow, with significantly reduced RMSE (0.17 °C) and MAX (0.79 °C). Figures 5(e) and 5(f) illustrate the temperature maps for a turbulent flow at Re = 23,500, for which the RMSE is 0.19 °C and MAX is 1.4 °C.

Fig. 5: Comparison of the temperature maps obtained by cGAN and FVM (ground truth): (a, b) Re = 300 (MAX = 2.74 °C, RMSE = 0.36 °C), (c, d) Re = 10,875 (MAX = 0.79 °C, RMSE = 0.17 °C), and (e, f) Re = 23,500 (MAX = 1.4 °C, RMSE = 0.19 °C)

The optimally trained cGAN model accurately predicts the local distributions of the convection properties. Figure 6(a) shows the predicted Nu at three Re. Note that the cGAN model directly infers the distribution of Nu from the provided input without requiring additional calculations based on the predicted temperature distribution. Although the Nu distribution changes dramatically and nonlinearly with Re and the location along the channel, the predicted Nu is highly accurate. Figure 6(b) shows the predicted f. Across the flow regimes and along the channel, f changes by more than an order of magnitude, but the cGAN was able to infer these trends.
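Since Nu and f are carried directly in the output channels, reading the local profiles amounts to sampling the interface rows of the generated image. A minimal sketch follows; the interface row indices and the averaging of the two interfaces are assumptions.

```python
import numpy as np

# Minimal sketch: read local f (channel 0) and Nu (channel 1) profiles from a
# generated 256 x 256 x 3 output image along the fluid-wall interface rows.
def extract_profiles(output_img, interface_rows=(33, 222)):
    top, bot = interface_rows
    # Averaging the top and bottom interface rows is an assumption.
    f_local = 0.5 * (output_img[top, :, 0] + output_img[bot, :, 0])
    nu_local = 0.5 * (output_img[top, :, 1] + output_img[bot, :, 1])
    return nu_local, f_local   # 1D profiles along the channel length
```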

Fig. 6: (a) Local Nusselt number and (b) friction factor predicted by an optimized cGAN at unseen Re = 300, 10,875, and 23,500

To further validate the accuracy of the cGAN model, a 10-fold cross-validation was performed. The total dataset of 180 images was divided into ten subsets, each containing the same numbers of laminar-flow, transition-flow, and turbulent-flow samples. For each round of validation, one of the subsets was retained as the testing data, and the other nine subsets were used as the training data [18–22]. Figure 7(a) shows the variation of MAX for the different test datasets. The box-and-whisker plot provides the median (50%), the lower and upper whisker values, and the first-quartile (25%) and third-quartile (75%) values of MAX. The red symbol indicates the mean MAX for each test dataset. Although the maximum values of MAX vary from 0.52 °C to 10.12 °C, the average of the mean MAX is merely 1.6 °C. Figures 7(b) and 7(c) present the variation of the accuracies of Nu and f. The average of the mean accuracy is 0.975 for Nu and 0.958 for f. The cross-validation shows that the accuracy of the cGAN model may vary with different training data, but the accuracy should be sufficient as long as an adequately sized training dataset is provided.
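A minimal sketch of this regime-stratified 10-fold split is shown below, using scikit-learn's StratifiedKFold. The Re thresholds used to label the laminar, transition, and turbulent regimes are assumptions made only to build the stratification labels.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Minimal sketch: 180 samples stratified by flow regime so that every fold
# holds the same mix of laminar, transition, and turbulent cases.
re_values = np.linspace(100, 27_750, 180)
regime = np.digitize(re_values, bins=[2_300, 10_000])   # 0/1/2: assumed regime labels

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(re_values.reshape(-1, 1), regime)):
    # train the cGAN on train_idx, evaluate MAX and accuracy on test_idx
    print(f"fold {fold}: {len(train_idx)} train, {len(test_idx)} test samples")
```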

Fig. 7: Cross-validations for (a) MAX, (b) accuracy of inferred Nu, and (c) accuracy of inferred f with different test datasets

4 Model Testing With Unseen Geometries

The trained cGAN is able to infer the solution for unseen input geometries to a certain extent. To probe this capability, we tested the cGAN with several modified input geometries, including narrowed and widened channels and a 90 deg rotated channel (Fig. 8). The narrowed channel width is 70% of the original width, and the widened channel width is 130% of the original width.

Fig. 8: Inputs, ground truths, and outputs for unseen channel geometries at Re = 300: (a) non-rotated narrowed channel (MAX = 15.05 °C, RMSE = 0.7 °C), (b) non-rotated widened channel (MAX = 13.54 °C, RMSE = 2.99 °C), (c) rotated narrowed channel (MAX = 13.32 °C, RMSE = 0.77 °C), and (d) rotated widened channel (MAX = 12.22 °C, RMSE = 2.98 °C)

To facilitate the prediction of unseen geometries, we amplified the training data for the cGAN model via a simple image transformation technique, as sketched in the code below. Data augmentation for GAN has been widely employed in previous research [22–24] as an auxiliary method to enrich training datasets in classification tasks. Figure 9 illustrates how an original input image can be transformed into new data in three possible ways. At each training epoch, a randomly selected original input and output image pair may be rotated by 90 deg with a certain probability. An image may instead be cropped and combined with its mirror-image segment, and the mirrored image may also be rotated by 90 deg. If all the images in the training dataset are transformed and added to the dataset, 162 new samples with different channel widths, rotated orientations, or both can be created at every training epoch. Thus, the total number of training samples may increase up to 162,000 over 1000 epochs. Through this data augmentation, we were able to amplify the number of training samples without additional FVM data preparation. Note that the computation time for the image transformations is less than 1 s, which is significantly shorter than the FVM runtime. However, randomly mirrored images may contain physically incorrect information, depending on how the image is cropped and on the actual flow field. Thus, to further improve the cGAN accuracy, the data augmentation method must be refined.
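A minimal sketch of these random pair transformations is given below, assuming numpy arrays of shape (256, 256, 3); the probabilities and the allowed crop positions are assumptions made for illustration.

```python
import numpy as np

# Minimal sketch of the simple random augmentation: a crop combined with its
# mirror image, a 90-deg rotation, or both, applied identically to an
# (input, output) image pair at each epoch.
def crop_and_mirror(img, cut):
    left = img[:, :cut]                  # keep the left segment
    mirror = left[:, ::-1]               # its mirror image
    return np.concatenate([left, mirror[:, :img.shape[1] - cut]], axis=1)

def augment_pair(x, y, rng):
    if rng.random() < 0.5:               # crop + mirror (same cut for both images)
        cut = int(rng.integers(128, 224))
        x, y = crop_and_mirror(x, cut), crop_and_mirror(y, cut)
    if rng.random() < 0.5:               # rotate the pair by 90 deg
        x, y = np.rot90(x), np.rot90(y)
    return x, y
```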

Fig. 9: Data augmentation process

Figure 8 shows the temperature maps inferred by the cGAN for the unseen channel geometries, i.e., narrowed, widened, and 90 deg rotated channels at Re = 300. Overall, the cGAN predictions are close to the ground truth, exhibiting RMSE ≤ 3 °C and MAX ≤ 15 °C for the narrowed and widened channels and RMSE ≤ 3 °C and MAX ≤ 13 °C for the rotated channels. When the cGAN was trained with the original dataset, MAX was as large as 44.5 °C; after employing the dataset augmented by the simple image transformations, the cGAN reduced MAX by 66%.

Figure 10 compares Nu and f of the narrowed channels produced by the cGAN with the ground truth data. The accuracies of the Nu and f predictions are 63.6% and 65%, respectively. If the narrowed channel is rotated, the accuracy for Nu reduces to 47.8%, but the accuracy for f remains at a similar level (68.52%). Figure 11 shows the predicted Nu and f of the widened channels. The accuracies of the Nu and f approximations are 87.3% and 54%, respectively. If the widened channel is rotated, the accuracy for Nu reduces to 75%, whereas the accuracy for f increases to 63.79%.

Fig. 10: (a) Local Nusselt number and (b) friction factor predicted for unseen narrowed channels
Fig. 11: (a) Local Nusselt number and (b) friction factor predicted for unseen widened channels

Even with the simple data augmentation employed here, the cGAN was able to approximate the convection properties for unseen geometrical inputs with an accuracy greater than 50%. In particular, the augmentation by image rotation enabled the cGAN to predict even channels with arbitrary angular orientations. Although the accuracy for new geometries is considerably lower than that for the trained geometry, the cGAN predictions still seem useful for rough and rapid estimations that do not require solving a numerical model.

5 Conclusions

We developed a deep learning model for forced convection heat transfer problems based on cGAN. The cGAN was trained with a set of graphical inputs containing the geometric and flow conditions and graphical outputs representing the convection properties. A single trained cGAN model successfully predicted the distributions of temperature, Nu, and f of a heated internal channel flow over a wide range of Re (Re = 100–27,750). To achieve high accuracy, we optimized the dataset size, the training epoch, and the hyperparameter λ of the cGAN. The optimized cGAN model exhibited an accuracy of up to 97.6% for the Nu estimation and RMSE < 0.3 °C and MAX ≤ 2.7 °C for the temperature approximation. The inference ability of the cGAN model was further validated through a 10-fold cross-validation test. We also demonstrated the capability of the cGAN model for unseen channel geometries when combined with a data augmentation technique. After being trained with the amplified dataset, the cGAN was able to predict unseen channel geometries such as widened, narrowed, and rotated channels. For these new channel geometries, the cGAN inferred Nu and f with an accuracy of up to 87.3% and the temperature distribution with RMSE ≤ 3 °C and MAX ≤ 13 °C.

The presented cGAN convection model enables rapid approximation of the spatial distributions of convection properties, e.g., temperature, Nu, and f, in a 2D domain when the input information is provided in a 2D image format. Although our method was demonstrated for a simple 2D steady-state convection problem, the approach can be readily extended to a variety of problems involving complex surfaces, such as rough surfaces and extended surfaces. Moreover, when rapid and repetitive estimations of convection properties in geometrically complex systems are needed over a wide range of flow conditions, the cGAN convection model can serve as a good alternative to traditional numerical simulation techniques.

Funding Data

  • National Science Foundation (Grant No. 2053413; Funder ID: 10.13039/100000001).

Acknowledgment

This work was supported by the National Science Foundation under Grant No. 2053413.

References

1. LeCun, Y., Bengio, Y., and Hinton, G., 2015, "Deep Learning," Nature, 521(7553), pp. 436–444. 10.1038/nature14539
2. Deng, L., and Yu, D., 2014, "Deep Learning: Methods and Applications," Found. Trends Signal Process., 7(3–4), pp. 197–387. 10.1561/2000000039
3. Krizhevsky, A., Sutskever, I., and Hinton, G. E., 2017, "ImageNet Classification With Deep Convolutional Neural Networks," Commun. ACM, 60(6), pp. 84–90. 10.1145/3065386
4. Mikolov, T., Deoras, A., Povey, D., Burget, L., and Černocký, J., 2011, "Strategies for Training Large Scale Neural Network Language Models," Proceedings of the IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU), Waikoloa, HI, Dec. 11–15, INSPEC Accession No. 12577641, pp. 196–201.
5. Brunton, S. L., Noack, B. R., and Koumoutsakos, P., 2020, "Machine Learning for Fluid Mechanics," Annu. Rev. Fluid Mech., 52(1), pp. 477–508. 10.1146/annurev-fluid-010719-060214
6. Gawehn, E., Hiss, J. A., and Schneider, G., 2016, "Deep Learning in Drug Discovery," Mol. Inform., 35(1), pp. 3–14. 10.1002/minf.201501008
7. Kang, S., and Cho, K., 2019, "Conditional Molecular Design With Deep Generative Models," J. Chem. Inf. Model., 59(1), pp. 43–52. 10.1021/acs.jcim.8b00263
8. Valter, P., Lindgren, P., and Prasad, R., 2018, "Advanced Business Model Innovation Supported by Artificial Intelligence and Deep Learning," Wirel. Pers. Commun., 100(1), pp. 97–111. 10.1007/s11277-018-5612-x
9. Tien Bui, D., Hoang, N. D., Martínez-Álvarez, F., Ngo, P. T. T., Hoa, P. V., Pham, T. D., Samui, P., and Costache, R., 2020, "A Novel Deep Learning Neural Network Approach for Predicting Flash Flood Susceptibility: A Case Study at a High Frequency Tropical Storm Area," Sci. Total Environ., 701, p. 134413. 10.1016/j.scitotenv.2019.134413
10. Raissi, M., Yazdani, A., and Karniadakis, G. E., 2020, "Hidden Fluid Mechanics: Learning Velocity and Pressure Fields From Flow Visualizations," Science, 367(6481), pp. 1026–1030. 10.1126/science.aaw4741
11. Lee, S., and You, D., 2019, "Data-Driven Prediction of Unsteady Flow Over a Circular Cylinder Using Deep Learning," J. Fluid Mech., 879, pp. 217–254. 10.1017/jfm.2019.700
12. Farimani, A. B., Gomes, J., and Pande, V. S., 2017, "Deep Learning the Physics of Transport Phenomena," arXiv:1709.02432.
13. McClure, E. R., and Carey, V. P., 2021, "Genetic Algorithm and Deep Learning to Explore Parametric Trends in Nucleate Boiling Heat Transfer Data," ASME J. Heat Transfer-Trans. ASME, 143(12), p. 121602. 10.1115/1.4052435
14. Sharma, R., Farimani, A. B., Gomes, J., Eastman, P., and Pande, V., 2018, "Weakly-Supervised Deep Learning of Heat Transport Via Physics Informed Loss," arXiv:1807.11374.
15. Edalatifar, M., Tavakoli, M. B., Ghalambaz, M., and Setoudeh, F., 2021, "Using Deep Learning to Learn Physics of Conduction Heat Transfer," J. Therm. Anal. Calorim., 146(3), pp. 1435–1452. 10.1007/s10973-020-09875-6
16. Cai, S., Wang, Z., Wang, S., Perdikaris, P., and Karniadakis, G. E., 2021, "Physics-Informed Neural Networks for Heat Transfer Problems," ASME J. Heat Transfer-Trans. ASME, 143(6), p. 060801. 10.1115/1.4050542
17. Yang, L., Dai, W., Rao, Y., and Chyu, M. K., 2019, "Optimization of the Hole Distribution of an Effusively Cooled Surface Facing Non-Uniform Incoming Temperature Using Deep Learning Approaches," Int. J. Heat Mass Transfer, 145, p. 118749. 10.1016/j.ijheatmasstransfer.2019.118749
18. Bowles, C., Gunn, R., Hammers, A., and Rueckert, D., 2018, "GANsfer Learning: Combining Labelled and Unlabelled Data for GAN Based Data Augmentation," arXiv:1811.10669.
19. Kiyasseh, D., Tadesse, G. A., Nhan, L. N. T., Van Tan, L., Thwaites, L., Zhu, T., and Clifton, D., 2020, "PlethAugment: GAN-Based PPG Augmentation for Medical Diagnosis in Low-Resource Settings," IEEE J. Biomed. Heal. Inf., 24(11), pp. 3226–3235. 10.1109/JBHI.2020.2979608
20. Cirillo, M. D., Abramian, D., and Eklund, A., 2020, "Vox2Vox: 3D-GAN for Brain Tumour Segmentation," arXiv:2003.13653.
21. Maleki, F., Muthukrishnan, N., Ovens, K., Reinhold, C., and Forghani, R., 2020, "Machine Learning Algorithm Validation: From Essentials to Advanced Applications and Implications for Regulatory Certification and Deployment," Neuroimaging Clin., 30(4), pp. 433–445. 10.1016/j.nic.2020.08.004
22. Ghassemi, N., Shoeibi, A., and Rouhani, M., 2020, "Deep Neural Network With Generative Adversarial Networks Pre-Training for Brain Tumor Classification Based on MR Images," Biomed. Signal Process. Control, 57, p. 101678. 10.1016/j.bspc.2019.101678
23. Cheng, J., Huang, W., Cao, S., Yang, R., Yang, W., Yun, Z., Wang, Z., and Feng, Q., 2015, "Enhanced Performance of Brain Tumor Classification Via Tumor Region Augmentation and Partition," PLoS One, 10(10), p. e0140381. 10.1371/journal.pone.0140381
24. Frid-Adar, M., Diamant, I., Klang, E., Amitai, M., Goldberger, J., and Greenspan, H., 2018, "GAN-Based Synthetic Medical Image Augmentation for Increased CNN Performance in Liver Lesion Classification," Neurocomputing, 321, pp. 321–331. 10.1016/j.neucom.2018.09.013