Catalogue Search | MBRL
26 result(s) for "Heifetz, Alexander"
Physics-informed neural network with transfer learning (TL-PINN) based on domain similarity measure for prediction of nuclear reactor transients
by Prantikos, Konstantinos; Chatzidakis, Stylianos; Tsoukalas, Lefteri H.
in 639/4077/4091/4093; 639/705/117; Deep learning
2023
Nuclear reactor safety and efficiency can be enhanced through the development of accurate and fast methods for prediction of reactor transient (RT) states. Physics-informed neural networks (PINNs) leverage deep learning methods to provide an alternative approach to RT modeling. Application of PINNs to monitoring of RTs for operator support requires near real-time model performance. However, as with all machine learning models, development of a PINN involves time-consuming model training. Here, we show that a transfer learning (TL-PINN) approach achieves significant performance gain, as measured by a reduction in the number of iterations for model training. Using a point kinetics equations (PKEs) model with six neutron precursor groups, constructed with experimental parameters of the Purdue University Reactor One (PUR-1) research reactor, we generated different RTs with an experimentally relevant range of variables. The RTs were characterized using Hausdorff and Fréchet distances. We have demonstrated that pre-training a TL-PINN on one RT results in up to two orders of magnitude acceleration in prediction of a different RT. The mean error of both conventional PINN and TL-PINN predictions of neutron densities is smaller than 1%. We have developed a correlation between TL-PINN performance acceleration and the similarity measure of RTs, which can be used as a guide for application of TL-PINNs.
Journal Article
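The transient similarity measure named in the abstract is the Hausdorff distance between trajectories. A minimal NumPy sketch of that measure, with hypothetical transient curves standing in for the PUR-1 data:

```python
import numpy as np

def directed_hausdorff(a, b):
    """Directed Hausdorff distance from point set a to point set b."""
    # For each point in a, distance to its nearest neighbour in b; take the max.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).max()

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two transient curves."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

# Two hypothetical reactor transients sampled as (time, neutron density) curves.
t = np.linspace(0.0, 10.0, 200)
rt1 = np.column_stack([t, np.exp(0.05 * t)])   # slower power rise
rt2 = np.column_stack([t, np.exp(0.08 * t)])   # faster power rise
d = hausdorff(rt1, rt2)
```

A small distance between two transients would suggest that a PINN pre-trained on one is a good starting point for the other.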
Physics-Informed Neural Network Solution of Point Kinetics Equations for a Nuclear Reactor Digital Twin
by Prantikos, Konstantinos; Tsoukalas, Lefteri H.; Heifetz, Alexander
in Big Data; Differential equations; digital twin
2022
A digital twin (DT) for nuclear reactor monitoring can be implemented using either a differential equations-based physics model or a data-driven machine learning model. The challenge of a physics-model-based DT consists of achieving sufficient model fidelity to represent a complex experimental system, whereas the challenge of a data-driven DT consists of extensive training requirements and a potential lack of predictive ability. We investigate the performance of a hybrid approach, which is based on physics-informed neural networks (PINNs) that encode fundamental physical laws into the loss function of the neural network. We develop a PINN model to solve the point kinetic equations (PKEs), which are time-dependent, stiff, nonlinear, ordinary differential equations that constitute a nuclear reactor reduced-order model under the approximation of ignoring spatial dependence of the neutron flux. The PINN model solution of PKEs is developed to monitor the start-up transient of Purdue University Reactor Number One (PUR-1) using experimental parameters for the reactivity feedback schedule and the neutron source. The results demonstrate strong agreement between the PINN solution and the finite difference numerical solution of PKEs. We investigate PINN performance in both data interpolation and extrapolation. For the test cases considered, the extrapolation errors are comparable to those of interpolation predictions. Extrapolation accuracy decreases with increasing time interval.
Journal Article
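The finite-difference baseline the abstract compares against can be sketched directly: the PKEs couple neutron density to six delayed-neutron precursor groups. A minimal explicit-Euler sketch with illustrative parameters; the kinetics constants, generation time, and step reactivity below are assumptions, not the PUR-1 values:

```python
import numpy as np

# Illustrative six-group delayed-neutron parameters (assumed, not PUR-1 data).
beta_i = np.array([0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273])
lam = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])  # decay constants, 1/s
beta = beta_i.sum()
Lam = 1.0e-4        # neutron generation time, s (assumed)
rho = 0.5 * beta    # step reactivity insertion of half a dollar (assumed)

def solve_pke(t_end=1.0, dt=1.0e-4):
    """Explicit-Euler finite-difference solution of the point kinetics equations."""
    n = 1.0                        # neutron density, normalized to initial value
    C = beta_i / (Lam * lam)       # equilibrium precursor concentrations
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lam) * n + np.dot(lam, C)
        dC = (beta_i / Lam) * n - lam * C
        n += dt * dn
        C += dt * dC
    return n

n_final = solve_pke()
```

Because the PKEs are stiff, the explicit step size must stay well below the prompt time scale Λ/(β − ρ); the PINN approach in the record avoids that step-size restriction by construction.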
Unsupervised learning-enabled pulsed infrared thermographic microscopy of subsurface defects in stainless steel
by Zhang, Xin; Bakhtiari, Sasan; Saniie, Jafar
in 639/301/930/2735; 639/624/1107/510; 639/705/117
2024
Metallic structures produced with the laser powder bed fusion (LPBF) additive manufacturing (AM) method frequently contain microscopic porosity defects, with a typical approximate size distribution from 1 to 100 microns. The presence of such defects could lead to premature failure of the structure. In principle, structural integrity assessment of LPBF metals can be accomplished with nondestructive evaluation (NDE). Pulsed infrared thermography (PIT) is a non-contact, one-sided NDE method that allows for imaging of internal defects in metallic structures of arbitrary size and shape using heat transfer. PIT imaging is performed with compact instrumentation consisting of a flash lamp for deposition of a heat pulse and a fast-frame infrared (IR) camera for measuring surface temperature transients. However, limitations of PIT imaging resolution include blurring due to heat diffusion and the sensitivity limit of the IR camera. We demonstrate enhancement of PIT imaging capability with unsupervised learning (UL), which enables PIT microscopy of subsurface defects in high-strength, corrosion-resistant stainless steel 316 alloy. PIT images were processed with a UL spatial–temporal separation-based clustering segmentation (STSCS) algorithm, refined by morphology image processing methods to enhance the visibility of defects. The STSCS algorithm starts with wavelet decomposition to spatially de-noise thermograms, followed by UL principal component analysis (PCA), fine-tuning optimization, and neural learning-based independent component analysis (ICA) algorithms to temporally compress the de-noised thermograms. The compressed thermograms were further processed with a UL-based graph thresholding K-means clustering algorithm for defect segmentation. The STSCS algorithm also includes an online learning feature for efficient re-training of the model with new data.
For this study, metallic specimens with calibrated microscopic flat-bottom-hole defects, with diameters ranging from 203 µm down to 76 µm, were produced using electro-discharge machining (EDM) drilling. While the raw thermograms do not show any material defects, processing the PIT images with the STSCS algorithm reveals defects as small as 101 µm in diameter. To the best of our knowledge, this is the smallest reported size of a subsurface defect in a metal imaged with PIT, which demonstrates the capability of PIT to detect defects in the size range relevant to quality control requirements of LPBF-printed high-strength metals.
Journal Article
Multi-Task Learning of Scanning Electron Microscopy and Synthetic Thermal Tomography Images for Detection of Defects in Additively Manufactured Metals
by Scott, Sarah; Chen, Wei-Ying; Heifetz, Alexander
in 3D printing; Additive manufacturing; additive manufacturing of metals
2023
One of the key challenges in laser powder bed fusion (LPBF) additive manufacturing of metals is the appearance of microscopic pores in 3D-printed metallic structures. Quality control in LPBF can be accomplished with non-destructive imaging of the actual 3D-printed structures. Thermal tomography (TT) is a promising non-contact, non-destructive imaging method, which allows for the visualization of subsurface defects in arbitrary-sized metallic structures. However, because imaging is based on heat diffusion, TT images suffer from blurring, which increases with depth. We have been investigating the enhancement of TT imaging capability using machine learning. In this work, we introduce a novel multi-task learning (MTL) approach, which simultaneously performs classification of synthetic TT images and segmentation of experimental scanning electron microscopy (SEM) images. Synthetic TT images are obtained from computer simulations of metallic structures with subsurface elliptical defects, while experimental SEM images are obtained from imaging of LPBF-printed stainless steel coupons. The MTL network is implemented with a U-Net encoder shared between the classification and segmentation tasks. Results of this study show that the MTL network performs better on both tasks, classification of synthetic TT images and segmentation of SEM images, compared to the conventional approach in which the individual tasks are performed independently of each other.
Journal Article
Anomaly Detection in Liquid Sodium Cold Trap Operation with Multisensory Data Fusion Using Long Short-Term Memory Autoencoder
by Akins, Alexandra; Kultgen, Derek; Heifetz, Alexander
in Algorithms; anomaly detection; artificial intelligence
2023
Sodium-cooled fast reactors (SFRs), which use high-temperature fluid near ambient pressure as coolant, are one of the most promising types of GEN IV reactors. One of the unique challenges of SFR operation is purification of high-temperature liquid sodium with a cold trap to prevent corrosion and the obstruction of small orifices. We have developed a deep learning long short-term memory (LSTM) autoencoder for continuous monitoring of a cold trap and detection of operational anomalies. Transient data were obtained from the Mechanisms Engineering Test Loop (METL) liquid sodium facility at Argonne National Laboratory. The cold trap purification at METL is monitored with 31 variables, which are sensors measuring fluid temperatures, pressures and flow rates, and controller signals. A loss-of-coolant-type anomaly in the cold trap operation was generated by temporarily choking one of the blowers, which resulted in temperature and flow rate spikes. The input layer of the autoencoder consisted of all the variables involved in monitoring the cold trap. The LSTM autoencoder was trained on data corresponding to the cold trap startup and normal operation regime, with the loss function calculated as the mean absolute error (MAE). The loss during training was determined to follow a log-normal density distribution. During monitoring, we investigated the performance of the LSTM autoencoder for different loss threshold values, set at a progressively increasing number of standard deviations from the mean. The anomaly signal in the data was gradually attenuated, while preserving the noise of the original time series, so that the signal-to-noise ratio (SNR) averaged across all sensors decreased below unity. Results demonstrate detection of anomalies with sensor-averaged SNR < 1.
Journal Article
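The thresholding step described in the abstract can be sketched independently of the network itself. The per-window reconstruction errors below are synthetic stand-ins for the LSTM autoencoder MAE values, drawn log-normally to mirror the training-loss distribution the record reports:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in reconstruction losses: in the study these come from the LSTM
# autoencoder; here they are assumed log-normal per-window MAE values.
train_mae = rng.lognormal(mean=-3.0, sigma=0.3, size=2000)  # normal operation
test_mae = np.concatenate([rng.lognormal(-3.0, 0.3, 200),   # normal windows
                           rng.lognormal(-1.5, 0.3, 20)])   # anomalous spike

# Flag windows whose loss exceeds the mean training loss by k standard deviations.
k = 3.0
threshold = train_mae.mean() + k * train_mae.std()
flags = test_mae > threshold
```

Sweeping k trades false alarms against missed detections, which is the threshold study the abstract describes.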
Transformers and Long Short-Term Memory Transfer Learning for GenIV Reactor Temperature Time Series Forecasting
by Tsoukalas, Lefteri H.; Pantopoulou, Stella; Heifetz, Alexander
in Forecasting; High temperature; LSTM
2025
Automated monitoring of the coolant temperature can enable autonomous operation of generation IV reactors (GenIV), thus reducing their operating and maintenance costs. Automation can be accomplished with machine learning (ML) models trained on historical sensor data. However, the performance of ML usually depends on the availability of a large amount of training data, which is difficult to obtain for GenIV, as this technology is still under development. We propose the use of transfer learning (TL), which involves utilizing knowledge across different domains, to compensate for this lack of training data. TL can be used to create pre-trained ML models with data from small-scale research facilities, which can then be fine-tuned to monitor GenIV reactors. In this work, we develop pre-trained Transformer and long short-term memory (LSTM) networks by training them on temperature measurements from thermal hydraulic flow loops operating with water and Galinstan fluids at room temperature at Argonne National Laboratory. The pre-trained models are then fine-tuned and re-trained with minimal additional data to predict the time series of high-temperature measurements obtained from the Engineering Test Unit (ETU) at Kairos Power. The performance of the LSTM and Transformer networks is investigated by varying the size of the lookback window and the forecast horizon. The results of this study show that LSTM networks have lower prediction errors than Transformers, but LSTM errors increase more rapidly with increasing lookback window size and forecast horizon than Transformer errors.
Journal Article
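The lookback-window/forecast-horizon setup that both networks share reduces to a simple windowing routine over the time series. A sketch, with a sine wave as a stand-in for the flow-loop temperature data:

```python
import numpy as np

def make_windows(series, lookback, horizon):
    """Build (input, target) pairs: `lookback` past samples are used to
    predict the next `horizon` samples."""
    X, Y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])
        Y.append(series[i + lookback:i + lookback + horizon])
    return np.array(X), np.array(Y)

temps = np.sin(np.linspace(0.0, 20.0, 500))  # stand-in temperature time series
X, Y = make_windows(temps, lookback=32, horizon=8)
```

Varying `lookback` and `horizon` here is exactly the sensitivity sweep the abstract reports; the same windows would feed either an LSTM or a Transformer.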
Dynamic Control of Sodium Cold Trap Purification Temperature Using LSTM System Identification
2024
This study investigates the dynamic regulation of the sodium cold trap purification temperature at Argonne National Laboratory's liquid sodium test facility, employing long short-term memory (LSTM) system identification techniques. The investigation introduces an innovative hybrid approach that integrates model predictive control (MPC) based on first-principles dynamic models with a multi-step time–frequency LSTM model to predict the temperature profiles of a sodium cold trap purification system. The long short-term memory model predictive controller (LSTM-MPC) employs a sliding window scheme to gather training samples for multi-step prediction, leveraging historical data to construct predictive models that capture the non-linearities of the complex system dynamics without explicitly modeling the underlying physical processes. The performance of the LSTM-MPC and MPC was evaluated through simulation experiments, in which both models were assessed on their capacity to maintain the cold trap temperature within predefined set-points while minimizing deviations and overshoots. The results show that the data-driven LSTM-MPC model demonstrates stability and adaptability. In contrast, the traditional MPC model exhibits irregularities, particularly evident as overshoots around set-point limits, which can compromise its effectiveness over long prediction time intervals. The findings offer valuable insights into integrating data-driven techniques for enhancing real-time monitoring systems.
Journal Article
Optical Methodology for Detecting Histologically Unapparent Nanoscale Consequences of Genetic Alterations in Biological Cells
2008
Recently, there has been a major thrust to understand biological processes at the nanoscale. Optical microscopy has been exceedingly useful in imaging cell microarchitecture. Characterization of cell organization at the nanoscale, however, has been stymied by the lack of practical means of cell analysis at these small scales. To address this need, we developed a microscopic spectroscopy technique, single-cell partial-wave spectroscopy (PWS), which provides insight into the statistical properties of the nanoscale architecture of biological cells beyond what conventional microscopy reveals. Coupled with mesoscopic light transport theory, PWS quantifies the disorder strength of intracellular architecture. As an illustration of the potential of the technique, in experiments with cell lines and an animal model of colon carcinogenesis, we show that an increase in the degree of disorder in cell nanoarchitecture parallels genetic events in the early stages of carcinogenesis in otherwise microscopically/histologically normal-appearing cells. These data indicate that this advance in single-cell optics represented by PWS may have significant biomedical applications.
Journal Article
Monitoring of Temperature Measurements for Different Flow Regimes in Water and Galinstan with Long Short-Term Memory Networks and Transfer Learning of Sensors
2022
Temperature sensing is one of the most common measurements of a nuclear reactor monitoring system. The coolant fluid flow in a reactor core depends on the reactor power state. We investigated the monitoring and estimation of thermocouple time series using machine learning for a range of flow regimes. Measurement data were obtained, in two separate experiments, in a flow loop filled with water and with liquid metal Galinstan. We developed long short-term memory (LSTM) recurrent neural networks (RNNs) for sensor predictions by training on the sensor's own prior history, and transfer learning LSTM (TL-LSTM) networks by training on a correlated sensor's prior history. Sensor cross-correlations were identified by calculating the Pearson correlation coefficient of the time series. The accuracy of LSTM and TL-LSTM predictions of temperature was studied as a function of Reynolds number (Re). The root-mean-square error (RMSE) for the test segment of each sensor's time series was shown to increase linearly with Re for both water and Galinstan fluids. Using linear correlations, we estimated the range of values of Re for which the RMSE is smaller than the thermocouple measurement uncertainty. For both water and Galinstan fluids, we showed that both LSTM and TL-LSTM provide reliable estimations of temperature for typical flow regimes in a nuclear reactor. The LSTM runtime was shown to be substantially shorter than the data acquisition interval, which allows for performing estimation and validation of sensor measurements in real time.
Journal Article
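Sensor pairing for the TL-LSTM is driven by the Pearson correlation of the time series. A sketch of that selection step, with hypothetical thermocouple records in place of the flow-loop data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical thermocouple records: sensors 0 and 1 track the same coolant
# channel (strongly correlated); sensor 2 follows an independent random walk.
base = np.cumsum(rng.standard_normal(1000))
sensors = np.stack([base + 0.1 * rng.standard_normal(1000),
                    base + 0.1 * rng.standard_normal(1000),
                    np.cumsum(rng.standard_normal(1000))])

# Pearson correlation matrix; the most correlated partner of sensor 0 would
# serve as the source series when training the TL-LSTM.
corr = np.corrcoef(sensors)
np.fill_diagonal(corr, 0.0)          # mask self-correlation
partner = int(np.argmax(np.abs(corr[0])))
```

The TL-LSTM for sensor 0 would then be trained on `sensors[partner]` before fine-tuning on sensor 0's own history.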