Asset Details
Efficient training sets for surrogate models of tokamak turbulence with Active Deep Ensembles
by Zanisi, L; Pamela, S; Casson, F; Citrin, J; Buchanan, J; Gopakumar, V; Ho, A; Madula, T; Barr, J; and JET contributors
Subjects: Active learning / Aircraft engines / Data points / Datasets / Design optimization / Flight simulators / Iterative methods / Machine learning / Mathematical models / Nuclear electric power generation / Nuclear power plants / Parameters / Plasma turbulence / Random sampling / Simulation / Workflow
2023
Paper
Overview
Model-based plasma scenario development lies at the heart of the design and operation of future fusion power plants. Including turbulent transport in integrated models is essential for delivering a successful roadmap towards the operation of ITER and the design of DEMO-class devices. Given the highly iterative nature of integrated models, fast machine-learning-based surrogates of turbulent transport are fundamental to fulfil the pressing need for faster simulations, opening up pulse design, optimization, and flight simulator applications. A significant bottleneck is the generation of suitably large training datasets covering a large volume in parameter space, which can be prohibitively expensive to obtain for higher-fidelity codes. In this work, we propose ADEPT (Active Deep Ensembles for Plasma Turbulence), a physics-informed, two-stage Active Learning strategy to ease this challenge. Active Learning queries a given model by means of an acquisition function that identifies regions where additional data would improve the surrogate model. We provide a benchmark study using available data from the literature for the QuaLiKiz quasilinear transport model. We demonstrate quantitatively that the physics-informed nature of the proposed workflow reduces the need to perform simulations in stable regions of the parameter space, resulting in significantly improved data efficiency. We show up to a factor-of-20 reduction in the training dataset size needed to achieve the same performance as random sampling. We then validate the surrogates on multichannel integrated modelling of ITG-dominated JET scenarios and demonstrate that they recover the performance of QuaLiKiz to better than 10%. This matches the performance obtained in previous work, but with two orders of magnitude fewer training data points.
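The abstract's core loop — an ensemble surrogate queried through an acquisition function that selects new simulations where the model is most uncertain — can be sketched in a minimal toy form. This is an illustration only, not the authors' ADEPT implementation: the bootstrapped polynomial "ensemble", the 1-D stand-in oracle, the pool of candidate inputs, and the variance-based acquisition are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def oracle(x):
    # Stand-in for the expensive simulator (assumption: a cheap 1-D function).
    return np.sin(x) + 0.5 * x

def fit_ensemble(X, y, n_members=5, degree=3):
    # Ensemble analogue: each member fits a bootstrap resample of the data.
    members = []
    for _ in range(n_members):
        i = rng.integers(0, len(X), len(X))
        members.append(np.polyfit(X[i], y[i], degree))
    return members

def predict(members, X):
    preds = np.stack([np.polyval(c, X) for c in members])
    # Ensemble spread acts as an (epistemic) uncertainty estimate.
    return preds.mean(axis=0), preds.std(axis=0)

pool = np.linspace(-2.0, 2.0, 200)   # candidate inputs the simulator could run at
X = rng.uniform(-2.0, 2.0, 10)       # small random seed set
y = oracle(X)

for _ in range(15):                  # active-learning iterations
    members = fit_ensemble(X, y)
    _, std = predict(members, pool)
    x_new = pool[np.argmax(std)]     # acquisition: query where members disagree most
    X = np.append(X, x_new)
    y = np.append(y, oracle(x_new))

mean, _ = predict(fit_ensemble(X, y), pool)
rmse = float(np.sqrt(np.mean((mean - oracle(pool)) ** 2)))
print(len(X), round(rmse, 3))
```

In the paper's setting the oracle is a QuaLiKiz run, the ensemble is a deep ensemble of neural networks, and the acquisition is additionally physics-informed so that stable (zero-flux) regions of parameter space are rarely queried; the loop structure, however, is the same.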
Publisher
Cornell University Library, arXiv.org