Catalogue Search | MBRL
1,885 result(s) for "Golubkov, D."
Tensor polarizability of the vector mesons from SU(3) lattice gauge theory
by Luschevskaya, E. V., Ishkuvatov, R. A., Teryaev, O. V.
in Classical and Quantum Gravitation, Elementary Particles, Gauge theory
2018
Abstract
The magnetic dipole polarizabilities of the vector ρ⁰ and ρ± mesons in SU(3) pure gauge theory are calculated. Based on these results, the authors explore the contribution of the dipole magnetic polarizabilities to the tensor polarization of the vector mesons in an external Abelian magnetic field. The tensor polarization leads to the dilepton asymmetry observed in non-central heavy-ion collisions and can also be estimated in lattice gauge theory.
Journal Article
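For context, lattice studies of this kind typically extract the magnetic dipole polarizability from the weak-field expansion of the meson ground-state energy. The following is a minimal sketch of one common convention (sign and 4π normalization factors differ between papers, and for the charged ρ± the lowest-Landau-level and magnetic-moment terms must be handled first); it is not quoted from the article above.

```latex
% Weak-field expansion of a neutral vector meson's ground-state energy
% in a constant background magnetic field B; beta_m is the magnetic
% dipole polarizability (normalization is convention-dependent).
E(B) = m - \frac{1}{2}\,\beta_m\,B^{2} + O(B^{4})
```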
The ATLAS Production System Evolution: New Data Processing and Analysis Paradigm for the LHC Run2 and High-Luminosity
2017
The second generation of the ATLAS Production System, called ProdSys2, is a distributed workload manager that runs hundreds of thousands of jobs daily, from dozens of different ATLAS-specific workflows, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based on many criteria, such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies, and by supporting different kinds of computational resources, such as Grid, clouds, supercomputers and volunteer computers. The system dynamically assigns a group of jobs (a task) to a group of geographically distributed computing resources. Dynamic assignment and resource utilization is one of the major features of the system; it did not exist in the earliest versions of the production system, where the Grid resource topology was predefined using national and/or geographical patterns. The Production System has a sophisticated job fault-recovery mechanism, which allows multi-terabyte tasks to run efficiently without human intervention. We have implemented a "train" model and open-ended production, which allow tasks to be submitted automatically as soon as a new set of data is available and allow physics-group data processing and analysis to be chained with central production by the experiment. We present an overview of the ATLAS Production System and the features and architecture of its major components: task definition, web user interface and monitoring. We describe the important design decisions and lessons learned from operational experience during the first year of LHC Run2. We also report the performance of the designed system and how various workflows, such as data (re)processing, Monte Carlo and physics-group production, and user analysis, are scheduled and executed within one production system on heterogeneous computing resources.
Journal Article
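The abstract above describes dynamic assignment of tasks to heterogeneous resources based on criteria such as memory and CPU requirements. Purely as a rough illustration, here is a minimal Python sketch of that kind of matchmaking; the data structures and the scoring policy are hypothetical, not ProdSys2's actual logic.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    mem_gb: float      # memory required per job
    cpu_hours: float   # estimated CPU consumption

@dataclass
class Resource:
    name: str
    kind: str          # "grid", "cloud", "hpc", "volunteer"
    mem_gb: float      # memory available per slot
    free_slots: int

def assign(task: Task, resources: list[Resource]) -> Resource | None:
    """Pick the eligible resource with the most free slots (hypothetical policy)."""
    eligible = [r for r in resources if r.mem_gb >= task.mem_gb and r.free_slots > 0]
    return max(eligible, key=lambda r: r.free_slots, default=None)

resources = [
    Resource("GRID-SITE-1", "grid", mem_gb=2.0, free_slots=500),
    Resource("HPC-1", "hpc", mem_gb=4.0, free_slots=120),
]
best = assign(Task("mc-simulation", mem_gb=3.0, cpu_hours=10.0), resources)
print(best.name if best else "no resource available")  # -> HPC-1
```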
Predictive analytics tools to adjust and monitor performance metrics for the ATLAS Production System
2018
Having information such as an estimate of the processing time or of the possibility of a system outage (abnormal behaviour) helps to monitor system performance and to predict its next state. The current cyber-infrastructure of the ATLAS Production System presents computing conditions in which contention for resources among high-priority data analyses happens routinely, and this contention may lead to significant workload and data-handling interruptions. The inability to monitor and predict the behaviour of the analysis process (its duration) and the state of the system itself motivates the design of built-in situational-awareness analytic tools.
Journal Article
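The abstract above is about predicting task duration from system conditions. A minimal sketch of such a predictor, assuming hypothetical per-task features (input size, site queue length, priority) and a generic scikit-learn regressor; the actual analytics tools are not described at this level of detail in the abstract.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical training data: one row per finished task.
# Features: [input size (GB), site queue length, task priority]
X = np.array([[100, 20, 1], [500, 5, 2], [50, 80, 1], [800, 10, 3]], dtype=float)
y = np.array([3.0, 9.5, 2.1, 14.0])  # observed wall time (hours)

model = GradientBoostingRegressor(n_estimators=200, random_state=0)
model.fit(X, y)

# Predict the duration of a new task; a large deviation between the
# prediction and the eventual observation could be flagged as abnormal.
print(model.predict([[300, 15, 2]]))
```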
Scaling up ATLAS production system for the LHC Run 2 and beyond: project ProdSys2
by Vaniachine, A, Klimentov, A, Garcia, J
in Applications programs, Big Data, Data base management systems
2015
The Big Data processing needs of the ATLAS experiment grow continuously, as more data and more use cases emerge. For Big Data processing the ATLAS experiment adopted the data transformation approach, in which software applications transform input data into outputs. In the ATLAS production system, each data transformation is represented by a task, a collection of many jobs, submitted by the ATLAS workload management system (PanDA) and executed on the Grid. Our experience shows that the rate of task submission has grown exponentially over the years. To scale up the ATLAS production system for new challenges, we started the ProdSys2 project. PanDA has been upgraded with the Job Execution and Definition Interface (JEDI). Patterns in ATLAS data transformation workflows composed of many tasks provided a scalable production-system framework based on template definitions of many-task workflows. These workflows are implemented in the Database Engine for Tasks (DEfT), which generates individual tasks for processing by JEDI. We report on the ATLAS experience with many-task workflow patterns in preparation for the LHC Run 2.
Journal Article
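The abstract above describes template definitions of many-task workflows from which DEfT generates individual tasks. Purely as an illustration, a minimal Python sketch of template expansion; the template fields, step names and naming scheme are hypothetical.

```python
# Hypothetical workflow template: each step becomes one task per input dataset,
# with each task consuming the output of the previous step.
TEMPLATE = {
    "workflow": "mc-production",
    "steps": ["evgen", "simul", "recon"],
}

def expand(template: dict, datasets: list[str]) -> list[dict]:
    """Generate concrete task definitions from a workflow template."""
    tasks = []
    for dataset in datasets:
        for i, step in enumerate(template["steps"]):
            previous = dataset if i == 0 else f"{template['workflow']}.{template['steps'][i-1]}.{dataset}"
            tasks.append({
                "name": f"{template['workflow']}.{step}.{dataset}",
                "input": previous,
            })
    return tasks

for t in expand(TEMPLATE, ["mc16.ttbar.001"]):
    print(t["name"], "<-", t["input"])
```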
Calibration system of the LHCb hadronic calorimeter
2017
The Hadron Calorimeter (HCAL) of LHCb is one of the four sub-detectors of the experiment's calorimetric system, which also includes the Scintillator Pad Detector (SPD), the Pre-Shower Detector (PS), and the electromagnetic calorimeter (ECAL). The main purpose of the HCAL is to provide data to the Level-0 trigger for selecting events with high-transverse-energy hadrons. It is important to have a precise and reliable calibration system that produces results immediately after the calibration run. The LHCb HCAL is equipped with a calibration system based on a ¹³⁷Cs radioactive source embedded in the calorimeter structure. It allows an absolute calibration to be obtained with good precision and the technical condition of the detector to be monitored.
Journal Article
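As a rough illustration of the idea behind source-based calibration, a minimal Python sketch in which the gain correction for each cell is taken as the ratio of a reference response to the measured ¹³⁷Cs response. The cell names, units and numbers are hypothetical; the actual LHCb procedure is considerably more involved.

```python
# Hypothetical Cs-137 scan responses (e.g. anode currents, in nA) per HCAL cell.
reference_response = {"cell_01": 50.0, "cell_02": 50.0, "cell_03": 50.0}
measured_response  = {"cell_01": 48.2, "cell_02": 51.5, "cell_03": 44.9}

# A cell reading below the reference gets a correction factor > 1.
corrections = {cell: reference_response[cell] / measured_response[cell]
               for cell in measured_response}

for cell, c in sorted(corrections.items()):
    print(f"{cell}: correction factor {c:.3f}")
```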
Task Management in the New ATLAS Production System
by Vaniachine, A, Klimentov, A, Potekhin, M
in Concrete construction, Data base management systems, Data processing
2014
This document describes the design of the new Production System of the ATLAS experiment at the LHC [1]. The Production System is the top-level workflow manager which translates physicists' needs for production-level processing and analysis into actual workflows executed across over a hundred Grid sites used globally by ATLAS. As the production workload has increased in volume and complexity in recent years (the ATLAS production task count is above one million, with each task containing hundreds or thousands of jobs), there is a need to upgrade the Production System to meet the challenging requirements of the next LHC run while minimizing operating costs. In the new design, the main subsystems are the Database Engine for Tasks (DEFT) and the Job Execution and Definition Interface (JEDI). Based on users' requests, DEFT manages inter-dependent groups of tasks (Meta-Tasks) and generates the corresponding data processing workflows. The JEDI component then dynamically translates the task definitions from DEFT into actual workload jobs executed in the PanDA Workload Management System [2]. We present the requirements, design parameters, basics of the object model and concrete solutions utilized in building the new Production System and its components.
Journal Article
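DEFT, as described above, manages inter-dependent groups of tasks (Meta-Tasks). A minimal sketch of how such dependencies might be resolved into an execution order, using the standard-library topological sorter; the Meta-Task contents here are hypothetical.

```python
from graphlib import TopologicalSorter

# Hypothetical Meta-Task: each task maps to the set of tasks it depends on.
meta_task = {
    "evgen": set(),
    "simul": {"evgen"},
    "digi":  {"simul"},
    "recon": {"digi"},
    "merge": {"recon"},
}

# static_order() yields the tasks in a dependency-respecting execution order.
print(list(TopologicalSorter(meta_task).static_order()))
# -> ['evgen', 'simul', 'digi', 'recon', 'merge']
```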
Numerical Simulation of Turbulent Flow in a Rotating Rectangular 90° Bend Channel
by Golubkov, V. D., Garbaruk, A. V.
in Accuracy, Atoms and Molecules in Strong Fields, Classical and Continuum Physics
2023
The article presents a numerical simulation of turbulent flow in a rotating rectangular 90° bend channel using the WMLES method and studies the effect of rotation on the flow structure. The article also assesses the accuracy of various semi-empirical turbulence models for closing the Reynolds equations for flows of this type, by comparison with the WMLES results for the cases with and without rotation.
Journal Article
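For reference, in a frame rotating with angular velocity Ω the Reynolds-averaged momentum equation acquires a Coriolis term, which is what makes closure models behave differently in this flow. This is the standard textbook form (with the centrifugal force absorbed into a modified pressure), not an equation quoted from the article.

```latex
% Reynolds-averaged momentum equation in a rotating frame; overbars
% denote averages, primes fluctuations, and the centrifugal force is
% absorbed into the modified pressure p*.
\frac{\partial \bar{u}_i}{\partial t}
  + \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j}
  = -\frac{1}{\rho}\frac{\partial \bar{p}^{*}}{\partial x_i}
  + \nu \nabla^{2} \bar{u}_i
  - \frac{\partial \overline{u_i' u_j'}}{\partial x_j}
  - 2\,\varepsilon_{ijk}\,\Omega_j\,\bar{u}_k
```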
Reliability Engineering Analysis of ATLAS Data Reprocessing Campaigns
2014
During three years of LHC data taking, the ATLAS collaboration completed three petascale data reprocessing campaigns on the Grid, with up to 2 PB of data being reprocessed every year. In reprocessing on the Grid, failures can occur for a variety of reasons, and Grid heterogeneity makes failures hard to diagnose and repair quickly. As a result, Big Data processing on the Grid must tolerate a continuous stream of failures, errors and faults. While ATLAS fault-tolerance mechanisms improve the reliability of Big Data processing on the Grid, their benefits come at a cost and result in delays, making performance prediction difficult. Reliability Engineering provides a framework for a fundamental understanding of Big Data processing on the Grid, and such understanding is not a desirable enhancement but a necessary requirement. In ATLAS, cost monitoring and performance prediction became critical for the success of the reprocessing campaigns conducted in preparation for the major physics conferences. In addition, our Reliability Engineering approach supported continuous improvements in data reprocessing throughput during LHC data taking: the throughput doubled in 2011 vs. 2010 reprocessing, then quadrupled in 2012 vs. 2011 reprocessing, an eightfold increase overall relative to 2010. We present the Reliability Engineering analysis of ATLAS data reprocessing campaigns, providing the foundation needed to scale up Big Data processing technologies beyond the petascale.
Journal Article
A detailed map of Higgs boson interactions by the ATLAS experiment ten years after the discovery
2022
The standard model of particle physics [1–4] describes the known fundamental particles and forces that make up our Universe, with the exception of gravity. One of the central features of the standard model is a field that permeates all of space and interacts with fundamental particles [5–9]. The quantum excitation of this field, known as the Higgs field, manifests itself as the Higgs boson, the only fundamental particle with no spin. In 2012, a particle with properties consistent with the Higgs boson of the standard model was observed by the ATLAS and CMS experiments at the Large Hadron Collider at CERN [10,11]. Since then, more than 30 times as many Higgs bosons have been recorded by the ATLAS experiment, enabling much more precise measurements and new tests of the theory. Here, on the basis of this larger dataset, we combine an unprecedented number of production and decay processes of the Higgs boson to scrutinize its interactions with elementary particles. Interactions with gluons, photons, and W and Z bosons (the carriers of the strong, electromagnetic and weak forces) are studied in detail. Interactions with three third-generation matter particles (bottom (b) and top (t) quarks, and tau leptons (τ)) are well measured and indications of interactions with a second-generation particle (muons, μ) are emerging. These tests reveal that the Higgs boson discovered ten years ago is remarkably consistent with the predictions of the theory and provide stringent constraints on many models of new phenomena beyond the standard model.
Ten years after the discovery of the Higgs boson, the ATLAS experiment at CERN probes its kinematic properties with a significantly larger dataset from 2015–2018 and provides further insights into its interactions with other known particles.
Journal Article
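The coupling measurements summarized above are conventionally reported in the κ (coupling-strength modifier) framework; the sketch below shows the generic idea, with κ = 1 corresponding to the standard model, while the precise definitions and uncertainties belong to the article itself.

```latex
% Coupling-strength modifiers: each kappa scales a production cross-section
% or partial decay width relative to its standard-model prediction.
\kappa_j^{2} = \frac{\sigma_j}{\sigma_j^{\mathrm{SM}}}
  \quad\text{or}\quad
\kappa_j^{2} = \frac{\Gamma_j}{\Gamma_j^{\mathrm{SM}}},
\qquad
\sigma(i \to H \to f)
  = \frac{\sigma_i(\vec{\kappa})\,\Gamma_f(\vec{\kappa})}{\Gamma_H(\vec{\kappa})}
```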
Study of the doubly charmed tetraquark Tcc⁺
2022
Quantum chromodynamics, the theory of the strong force, describes interactions of coloured quarks and gluons and the formation of hadronic matter. Conventional hadronic matter consists of baryons and mesons, made of three quarks and of quark-antiquark pairs, respectively. Particles with an alternative quark content are known as exotic states. Here a study is reported of an exotic narrow state in the D⁰D⁰π⁺ mass spectrum just below the D*⁺D⁰ mass threshold, produced in proton-proton collisions collected with the LHCb detector at the Large Hadron Collider. The state is consistent with the ground isoscalar Tcc⁺ tetraquark with a quark content of ccūd̄ and spin-parity quantum numbers Jᴾ = 1⁺. Study of the DD mass spectra disfavours interpretation of the resonance as the isovector state. The decay structure via intermediate off-shell D*⁺ mesons is consistent with the observed D⁰π⁺ mass distribution. To analyse the mass of the resonance and its coupling to the D*D system, a dedicated model is developed under the assumption of an isoscalar axial-vector Tcc⁺ state decaying to the D*D channel. Using this model, resonance parameters including the pole position, scattering length, effective range and compositeness are determined to reveal important information about the nature of the Tcc⁺ state. In addition, an unexpected dependence of the production rate on track multiplicity is observed.
The existence and properties of tetraquark states with two heavy quarks and two light antiquarks have been widely debated. Here, the authors use a unitarized model to study the properties of an exotic narrow state compatible with a doubly charmed tetraquark.
Journal Article
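The scattering length and effective range quoted above enter through the standard effective-range expansion of the near-threshold scattering amplitude; the generic S-wave form is sketched below, while the paper's dedicated unitarized model adds further structure.

```latex
% Near-threshold S-wave expansion: k is the D*D relative momentum,
% a the scattering length and r the effective range; the resonance
% pole sits where the amplitude's denominator vanishes.
k \cot\delta(k) = -\frac{1}{a} + \frac{r}{2}\,k^{2} + O(k^{4}),
\qquad
\mathcal{A}(k) \propto \frac{1}{k \cot\delta(k) - i\,k}
```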