Catalogue Search | MBRL
48 result(s) for "Cartier, Thomas"
Explainable machine learning for breakdown prediction in high gradient rf cavities
by Millar, William; Wollmann, Daniel; Cartier-Michaud, Thomas
in Artificial intelligence; Breakdown; Explainable artificial intelligence
2022
The occurrence of vacuum arcs or radio frequency (rf) breakdowns is one of the most prevalent factors limiting the high-gradient performance of normal conducting rf cavities in particle accelerators. In this paper, we search for the existence of previously unrecognized features related to the incidence of rf breakdowns by applying a machine learning strategy to high-gradient cavity data from CERN’s test stand for the Compact Linear Collider (CLIC). By interpreting the parameters of the learned models with explainable artificial intelligence (AI), we reverse-engineer physical properties for deriving fast, reliable, and simple rule-based models. Based on 6 months of historical data and dedicated experiments, our models identify fractions of the data with a high influence on the occurrence of breakdowns. Specifically, it is shown that the field-emitted current following an initial breakdown is closely related to the probability of another breakdown occurring shortly thereafter. Results also indicate that the cavity pressure should be monitored with increased temporal resolution in future experiments to further explore the vacuum activity associated with breakdowns.
Journal Article
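The workflow this abstract describes, training a model on historical pulse data and then distilling simple rules via explainability, can be illustrated with a minimal sketch in Python. This is not the authors' code: the synthetic features, the gradient-boosted classifier, and permutation importance as the explainability step are all assumptions for illustration.

# Sketch of the explain-then-distill workflow described above.
# Feature names, model choice, and the use of permutation importance
# are illustrative, not taken from the paper.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical features per rf pulse: field-emitted current, cavity
# pressure, input power. Label: did a breakdown follow shortly after?
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.1 * rng.normal(size=n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# Explainability step: rank features by permutation importance.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
top = int(np.argmax(imp.importances_mean))

# Distill a rule-based model: threshold on the single dominant feature.
threshold = np.median(X_tr[y_tr == 1, top])
rule_pred = (X_te[:, top] > threshold).astype(int)
print("rule accuracy:", (rule_pred == y_te).mean())

Thresholding on the single most influential feature mirrors the paper's stated goal of fast, reliable, and simple rule-based models.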
The European primary care monitor: structure, process and outcome indicators
2010
Background: Scientific research has provided evidence on the benefits of well-developed primary care systems. The relevance of some of this research for the European situation is limited. There is currently a lack of up-to-date, comprehensive, and comparable information on variation in the development of primary care, and a lack of knowledge of structures and strategies conducive to strengthening primary care in Europe. The EC-funded project Primary Health Care Activity Monitor for Europe (PHAMEU) aims to fill this gap by developing a Primary Care Monitoring System (PC Monitor) for application in 31 European countries. This article describes the development of the indicators of the PC Monitor, which will make it possible to create an alternative model for holistic analyses of primary care.
Methods: A systematic review of the primary care literature published between 2003 and July 2008 was carried out. This resulted in an overview of: (1) the dimensions of primary care and their relevance to outcomes at (primary) health system level; (2) essential features per dimension; (3) applied indicators to measure the features of primary care dimensions. The indicators were evaluated by the project team against criteria of relevance, precision, flexibility, and discriminating power. The resulting indicator set was evaluated on its suitability for Europe-wide comparison of primary care systems by a panel of primary care experts from various European countries, representing a variety of primary care systems.
Results: The developed PC Monitor approaches primary care in Europe as a multidimensional concept. It describes the key dimensions of primary care systems at three levels: structure, process, and outcome. At the structure level, it includes indicators for governance, economic conditions, and workforce development. At the process level, indicators describe access, comprehensiveness, continuity, and coordination of primary care services. At the outcome level, indicators reflect the quality and efficiency of primary care.
Conclusions: A standardized instrument for describing and comparing primary care systems has been developed based on scientific evidence and consensus among an international panel of experts. It will be tested across all configurations of primary care in Europe to produce comparable information. Widespread use of the instrument has the potential to improve the understanding of primary care delivery in different national contexts and thus to create opportunities for better decision making.
Journal Article
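The three-level structure of the PC Monitor described above maps onto a small data model. A minimal sketch, using only the dimensions named in the abstract; the nesting and the helper function are illustrative assumptions, not the project's actual indicator set.

# Sketch of the PC Monitor's three levels, with the dimensions named
# in the abstract. The nesting and helper are illustrative only.
PC_MONITOR_DIMENSIONS = {
    "structure": ["governance", "economic conditions", "workforce development"],
    "process": ["access", "comprehensiveness", "continuity", "coordination"],
    "outcome": ["quality", "efficiency"],
}

def count_dimensions(monitor: dict) -> int:
    """Total number of primary care dimensions across all levels."""
    return sum(len(dims) for dims in monitor.values())

print(count_dimensions(PC_MONITOR_DIMENSIONS))  # 9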
Optimization of the Gyroaverage operator based on Hermite interpolation
by Rozar, Fabien; Cartier-Michaud, Thomas; Mehrenberger, Michel
in Algorithms; Computer Science; Data Structures and Algorithms
2016
Gyrokinetic modeling is appropriate for describing Tokamak plasma turbulence, and the gyroaverage operator is a cornerstone of this approach. In a gyrokinetic code, the gyroaveraging scheme needs to be accurate enough to avoid spoiling the data, but it also requires a low computational cost because it is applied frequently to the main unknown, the 5D guiding-center distribution function, and to the 3D electric potentials. In the present paper, we improve a gyroaverage scheme based on Hermite interpolation used in the Gysela code. The initial implementation accounted for too large a fraction of the total execution time. The gyroaverage operator has been reformulated as a matrix-vector product, and a cache-friendly algorithm has been set up. Different techniques have been investigated to speed up the computations by more than a factor of two. A description of the algorithms is given, together with an analysis of the achieved performance.
Journal Article
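The reformulation of the gyroaverage as a matrix-vector product can be sketched as follows: precompute a sparse matrix whose rows average a function over quadrature points on each Larmor circle, then apply it as a single matvec. The sketch below simplifies to a 1D periodic grid with linear (rather than Hermite) interpolation; the grid size, Larmor radius, and quadrature count are illustrative assumptions.

# Sketch: gyroaverage as a sparse matrix-vector product on a 1D
# periodic grid (the real operator acts on a poloidal plane with
# Hermite interpolation; 1D geometry and linear weights simplify).
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix

def build_gyroaverage_matrix(n, dx, rho, n_quad=8):
    """Rows average f over n_quad points offset by rho*cos(theta)."""
    A = lil_matrix((n, n))
    for k in range(n_quad):
        theta = 2.0 * np.pi * k / n_quad
        shift = rho * np.cos(theta) / dx       # offset in grid units
        for i in range(n):
            x = i + shift
            j = int(np.floor(x)) % n           # periodic wrap
            w = x - np.floor(x)                # linear interpolation weight
            A[i, j] += (1.0 - w) / n_quad
            A[i, (j + 1) % n] += w / n_quad
    return csr_matrix(A)

n, dx = 128, 0.1
A = build_gyroaverage_matrix(n, dx, rho=0.3)
f = np.sin(2 * np.pi * np.arange(n) / n)
f_bar = A @ f   # the gyroaverage is now a single cache-friendly matvec

Precomputing the matrix pays off because, as the abstract notes, the same operator is applied many times per time step; each application is then a single memory-friendly sparse matvec.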
Collisions in magnetised plasmas
by Esteve, Damien; Ghendrih, Philippe; Cartier-Michaud, Thomas
in Approximation; Collision dynamics; Debye length
2015
Approximations for closing the kinetic equation for the one-particle distribution function are calculated by using propagators. These provide the formal structure of the collision term in the Landau approximation. The method allows one to investigate the effect of inhomogeneities at the Debye scale and to analyse magnetised collisions, when the Larmor radius is smaller than the Debye length. This method also allows one to develop a simple renormalisation scheme to derive the Lenard-Balescu collision operator.
Journal Article
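For reference, the collision term in the Landau approximation, whose formal structure the paper derives via propagators, is commonly written as follows (one standard convention; normalisation prefactors vary across the literature):

C_{ab}(f_a, f_b) = \frac{\partial}{\partial v_i} \int \mathrm{d}^3 v'\, U_{ij}(\mathbf{v}-\mathbf{v}') \left[ f_b(\mathbf{v}')\, \frac{\partial f_a(\mathbf{v})}{\partial v_j} - \frac{m_a}{m_b}\, f_a(\mathbf{v})\, \frac{\partial f_b(\mathbf{v}')}{\partial v'_j} \right],
\qquad
U_{ij}(\mathbf{u}) = \frac{2\pi e_a^2 e_b^2 \ln\Lambda}{m_a^2}\, \frac{u^2 \delta_{ij} - u_i u_j}{u^3}.

The Lenard-Balescu operator mentioned in the abstract replaces the bare tensor U_{ij} with a dynamically screened one, which is what the renormalisation scheme derives.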
Clinical and Survival Impact of FDG PET in Patients with Suspicion of Recurrent Ovarian Cancer: A 6-Year Follow-Up
2015
The aim of this retrospective study was to evaluate the contribution of fluorine-18-fluoro-deoxyglucose (FDG) positron emission tomography (PET) to the clinical management and survival outcome of patients (pts) suspected of recurrent ovarian carcinoma, with the hypothesis that early diagnosis of recurrent ovarian cancer may improve overall survival (OS).
Fifty-three FDG PET/CT scans were retrospectively analyzed for 42 pts. CT and PET/CT findings were confirmed by imaging and clinical follow-up and/or pathology, which were considered the gold standard diagnosis. The treatment plan based on CT staging was compared with that based on PET/CT findings. Medical records were reviewed for pts' characteristics, progression-free survival (PFS), and OS. PFS and OS were analyzed using the Cox proportional hazards regression model.
The final diagnosis of recurrence was established pathologically (n = 16) or by a median clinical follow-up of 6.5 years (range 0.5-7.5) after the PET/CT (n = 37). PET/CT provided a higher detection sensitivity (92.2%, 47/51) than CT (60.8%, 31/51) (p < 0.001). Globally, PET/CT modified the treatment plan in 56.6% of cases (30/53), and in 65.2% (15/23) when the CT prior to PET/CT was negative. In the 30 cases that benefited from a modified treatment plan, the changes led to intensification of the previous treatment procedure in 83.3% (25/30) and to a reduction of it in 16.6% (5/30). The Cox regression multivariate analysis showed that the number of lesions visualized by CT and the presence of lung lesions detected by PET/CT were significantly associated with PFS (p = 0.002 and p = 0.035, respectively).
Given its impact on treatment planning, and especially its value in predicting patient outcome, FDG PET is a valuable diagnostic tool in cases of suspected ovarian cancer recurrence.
Journal Article
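The survival analysis above relies on Cox proportional hazards regression. A minimal sketch of such an analysis using the lifelines library follows; the column names and synthetic data are hypothetical, not the study's records.

# Cox proportional hazards sketch; columns and data are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 42
df = pd.DataFrame({
    "pfs_months": rng.exponential(24, n),      # time to progression
    "progressed": rng.integers(0, 2, n),       # event indicator (0/1)
    "n_ct_lesions": rng.integers(0, 6, n),     # lesions seen on CT
    "lung_lesion_pet": rng.integers(0, 2, n),  # lung lesion on PET/CT
})

cph = CoxPHFitter()
cph.fit(df, duration_col="pfs_months", event_col="progressed")
cph.print_summary()  # hazard ratios and p-values per covariate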
Verification of turbulent simulations using PoPe: quantifying model precision and numerical error with data mining of simulation output
by Ghendrih, Philippe; Cartier-Michaud, Thomas; Serre, Eric
in Amplitudes; Classical Physics; Computational Physics
2018
Verification of a 1D-1V kinetic code with the PoPe method [1] is presented. The impact of reducing the precision of the numerical scheme is analysed by following three indicators of the physics solved by the code, namely the plasma response to an external high-frequency electric field wave. The response of the distribution function in the vicinity of the particle-wave resonance is found to be the most sensitive to the resolution. Consistently, a rapid growth of the error indicator determined with PoPe is observed. However, no critical value of this indicator below which the physics is retained under degraded precision could be identified. The response of the amplitude of the electric potential fluctuations is characterised by a transient growth followed by a plateau. It is found that the loss of this plateau is governed by the resolution in v-space, but due to the generation of a symmetry in the problem rather than to errors in the numerical scheme. The analysis of the transient indicates that the growth rate of the amplitude of the electric potential is very robust down to very low resolution (a velocity step of 2 thermal velocities). However, a transition prior to this resolution, at a step of 0.5 thermal velocities, can be identified, corresponding to a PoPe indicator of order zero, namely errors of order 100%.
Journal Article
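In the spirit of PoPe, verification proceeds by regressing the observed time derivative of the simulation output onto the terms of the model equation; the gap between the recovered and theoretical coefficients serves as the error indicator. A minimal sketch for a generic 1D advection equation follows; the equation, scheme, and parameters are illustrative stand-ins, not the paper's 1D-1V kinetic model.

# PoPe-style verification sketch: recover the coefficient of
# d_t f = -a * d_x f from simulation output by least squares; the gap
# between recovered and theoretical values measures numerical error.
import numpy as np

a_true, n, dx, dt = 1.0, 256, 0.1, 0.01
x = dx * np.arange(n)
f0 = np.sin(2 * np.pi * x / (n * dx))

# Crude upwind step standing in for the simulation code under test.
def step(f):
    return f - a_true * dt / dx * (f - np.roll(f, 1))

f1 = step(f0)
dfdt = (f1 - f0) / dt                                # observed d_t f
dfdx = (np.roll(f0, -1) - np.roll(f0, 1)) / (2 * dx) # model term d_x f

# Project the output onto the model: d_t f ~ -a_eff * d_x f.
a_eff = np.linalg.lstsq(-dfdx[:, None], dfdt, rcond=None)[0][0]
print("recovered a:", a_eff, "error indicator:", abs(a_eff - a_true))

With the diffusive upwind scheme, the recovered coefficient deviates from the theoretical one; tracking that deviation as resolution degrades is the kind of indicator the abstract discusses.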
An approach to increase reliability of HPC simulation, application to the Gysela5D code
by Rozar, Fabien; Passeron, Chantal; Cartier-Michaud, Thomas
in Computer simulation; Defects; Reproducibility
2016
Reproducibility of results is a strong requirement in most fields of research for experimental results to be considered scientific. For results obtained through simulation software using high performance computing (HPC), this translates into code quality requirements. While there are many works focusing on software quality, they typically do not take the specificities of HPC scientific simulation software into account. This paper presents an approach to introduce quality procedures in HPC scientific simulation software while remaining as unobtrusive as possible, so as to ease its adoption. The approach relies on quality procedures including human code review and automated testing, and offers a dedicated procedure to help correct defects found this way. These procedures are integrated in a development workflow designed to improve the traceability of defects. By implementing this approach for the development of the Gysela code, we show that it is indeed viable and that the return on investment is positive. We also identify multiple reusable elements developed for this experiment that should reduce the cost of adopting the approach for other codes, as well as some aspects that can still be improved to ensure widespread propagation of the approach in the community.
Journal Article
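Automated testing in such a quality workflow typically includes non-regression tests that compare run output against stored reference data within a tight tolerance. A minimal sketch follows; the file format, paths, and tolerance are assumptions for illustration, not the Gysela procedure.

# Non-regression test sketch: compare simulation output to a stored
# reference within a tolerance. Paths, format, and tolerance are
# illustrative; this is not the Gysela test suite.
import numpy as np

def check_non_regression(output_file, reference_file, rtol=1e-10):
    out = np.load(output_file)
    ref = np.load(reference_file)
    if out.shape != ref.shape:
        raise AssertionError("output shape changed")
    if not np.allclose(out, ref, rtol=rtol):
        worst = np.max(np.abs(out - ref) / np.maximum(np.abs(ref), 1e-300))
        raise AssertionError(f"relative deviation {worst:.3e} exceeds {rtol}")

# Typical use inside a CI job triggered alongside code review:
# check_non_regression("run/diagnostics.npy", "refs/diagnostics.npy")

Running such checks automatically on every proposed change is what ties the automated-testing and defect-traceability pieces of the workflow together.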
Evaluating Kernels on Xeon Phi to accelerate Gysela application
by Rozar, Fabien; Cartier-Michaud, Thomas; Latu, Guillaume
in Architecture; Central processing units; Computer memory
2016
This work describes the challenges presented by porting parts of the Gysela code to the Intel Xeon Phi coprocessor, as well as techniques used for optimization, vectorization, and tuning that can be applied to other applications. We evaluate the performance of some generic micro-benchmarks on the Phi versus Intel Sandy Bridge. Several interpolation kernels useful for the Gysela application are analyzed and their performance is reported. Some memory-bound and compute-bound kernels are accelerated by a factor of 2 on the Phi device compared to the Sandy Bridge architecture. Nevertheless, it is hard, if not impossible, to reach a large fraction of the peak performance on the Phi device, especially for real-life applications such as Gysela. A collateral benefit of this optimization and tuning work is that the execution time of Gysela (using 4D advections) has decreased on a standard architecture such as Intel Sandy Bridge.
Journal Article
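Whether a kernel is memory-bound or compute-bound, the distinction this abstract draws, can be probed with a micro-benchmark that times a streaming kernel against an arithmetically heavy one on the same data. The sketch below is illustrative: NumPy stands in for the vectorized C/Fortran kernels, and the array sizes are arbitrary.

# Micro-benchmark sketch: time a streaming (memory-bound) kernel
# against an arithmetically heavy (compute-bound) one on the same
# arrays. Sizes and kernels are illustrative.
import time
import numpy as np

def bench(fn, repeats=5):
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - t0)
    return best

n = 10_000_000
a, b, c = (np.random.rand(n) for _ in range(3))

stream = lambda: a + 2.0 * b                          # ~1 flop per element
heavy = lambda: np.sin(a) * np.cos(b) + np.sqrt(c)    # many flops per element

print(f"memory-bound kernel:  {bench(stream):.4f} s")
print(f"compute-bound kernel: {bench(heavy):.4f} s")

If the heavy kernel's runtime grows far more than its flop count would suggest, the machine is compute-limited for it; if both kernels take similar time, bandwidth dominates, which is the regime where wide-vector devices like the Phi struggle to show their peak.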