Catalogue Search | MBRL
Explore the vast range of titles available.
29 result(s) for "Arrabito, L"
Influence of uncertainty in hadronic interaction models on the sensitivity estimation of Cherenkov Telescope Array
2020
Very-high-energy (VHE) interactions between cosmic-ray protons and nuclei in the atmosphere are still not perfectly understood, and efforts to improve the interaction models used in simulations are ongoing, with feedback from various collider and air-shower experiments. Imaging Atmospheric Cherenkov Telescopes (IACTs) are ground-based, indirect detectors of VHE gamma rays, and cosmic-ray protons are a major background to gamma-ray measurements in these systems. The power to reject background protons determines most of the gamma-ray sensitivity curve of IACTs, and for an IACT system in the design phase, simulated proton events are used to estimate the residual background level. We investigated the influence of the uncertainty in current hadronic interaction models on the estimated gamma-ray sensitivity of the Cherenkov Telescope Array, using several interaction models available in CORSIKA. (A back-of-envelope illustration of how a background shift propagates to sensitivity follows this record.)
Journal Article
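As a back-of-envelope illustration (an editorial sketch, not taken from the paper) of why proton rejection dominates the sensitivity: assuming a background-limited regime in which the minimum detectable flux scales as the square root of the residual background rate (simple Poisson statistics), a model-dependent shift in the proton background propagates at roughly half its fractional size. The numbers below are invented for illustration.

```python
# Illustrative sketch (not from the paper): how a hadronic-model-dependent
# shift in the residual proton background propagates to a background-limited
# sensitivity estimate. Assumes minimum detectable flux ~ sqrt(background).
import math

baseline_bkg = 1.0   # residual proton background rate, arbitrary units
model_shift = 0.30   # hypothetical 30% difference between interaction models

sensitivity_ratio = math.sqrt(baseline_bkg * (1.0 + model_shift) / baseline_bkg)
print(f"sensitivity degrades by ~{100 * (sensitivity_ratio - 1):.0f}%")
# -> ~14%: roughly half the fractional background shift, to first order
```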
DIRAC in Large Particle Physics Experiments
2017
The DIRAC project is developing interware to build and operate distributed computing systems. It provides a development framework and a rich set of services for both the Workload and Data Management tasks of large scientific communities. A number of High Energy Physics and Astrophysics collaborations have adopted DIRAC as the base for their computing models. DIRAC was initially developed for the LHCb experiment at the LHC, CERN; later, the Belle II, BES III and CTA experiments, as well as the linear collider detector collaborations, started using DIRAC for their computing systems. Some of the experiments built their DIRAC-based systems from scratch, while others migrated from previous solutions, either ad hoc or based on different middleware. Adaptation of DIRAC to a particular experiment is enabled through extensions that meet its specific requirements. Each experiment has a heterogeneous set of computing and storage resources at its disposal, which DIRAC aggregates into a coherent pool. Users from different experiments can interact with the system in different ways depending on their tasks, expertise level and previous experience, using command-line tools, Python APIs or Web Portals (a minimal job-submission sketch follows this record). In this contribution we summarize the experience of using DIRAC in particle physics collaborations. The problems of migrating to DIRAC from previous systems, and their solutions, are presented, along with an overview of specific DIRAC extensions. We hope that this review will be useful for experiments considering an update, or for those designing their computing models.
Journal Article
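As an illustration of the Python-API interaction mode mentioned in the abstract, here is a minimal job-submission sketch. The module paths and calls follow the standard DIRAC client interfaces, but exact signatures vary between DIRAC versions and experiment extensions, so treat this as indicative rather than authoritative.

```python
# Minimal DIRAC job submission via the Python API. Assumes a configured
# DIRAC client environment with a valid grid proxy; call names follow the
# standard DIRAC interfaces but may differ across versions/extensions.
from DIRAC.Core.Base import Script
Script.parseCommandLine()  # initialize the DIRAC client configuration

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

job = Job()
job.setName("hello_dirac")
job.setExecutable("/bin/echo", arguments="Hello from DIRAC")

result = Dirac().submitJob(job)
if result["OK"]:
    print("Submitted job with ID", result["Value"])
else:
    print("Submission failed:", result["Message"])
```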
The Cherenkov Telescope Array production system for Monte Carlo simulations and analysis
2017
The Cherenkov Telescope Array (CTA), an array of many tens of Imaging Atmospheric Cherenkov Telescopes deployed on an unprecedented scale, is the next-generation instrument in the field of very-high-energy gamma-ray astronomy. An average data stream of about 0.9 GB/s for about 1300 hours of observation per year is expected, resulting in about 4 PB of raw data per year and a total of 27 PB/year including archive and data processing (a back-of-envelope check of the raw-data figure follows this record). The start of CTA operation is foreseen in 2018 and it will last about 30 years. The installation of the first telescopes at the two selected sites (Paranal, Chile and La Palma, Spain) will start in 2017. In order to select the best candidate sites to host CTA telescopes (one in the Northern and one in the Southern hemisphere), massive Monte Carlo simulations have been performed since 2012. Once the two sites were selected, we started new Monte Carlo simulations to determine the array layout that is optimal with respect to the obtained sensitivity. Given that CTA may ultimately comprise seven different telescope types in three different sizes, many different combinations of telescope position and multiplicity as a function of telescope type have been proposed. This last Monte Carlo campaign represented a huge computational effort, since several hundred telescope positions were simulated, whereas future instrument-response-function simulations will consider only the operating telescopes. In particular, during the last 18 months, about 2 PB of Monte Carlo data have been produced and processed with different analysis chains, with a corresponding overall CPU consumption of about 125 M HS06 hours. In these proceedings, we describe the computing model employed, based on grid resources, as well as the production system setup, which relies on the DIRAC interware. Finally, we present the envisaged evolution of the CTA production system for off-line data processing during CTA operations and for instrument-response-function simulations.
Journal Article
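The quoted raw-data volume can be sanity-checked directly from the abstract's own numbers (0.9 GB/s over 1300 observation hours per year):

```python
# Sanity check of the raw-data volume using only the abstract's own figures.
rate_gb_per_s = 0.9      # average data stream
hours_per_year = 1300    # observation time per year

raw_gb = rate_gb_per_s * hours_per_year * 3600
print(f"{raw_gb / 1e6:.1f} PB/year of raw data")  # ~4.2 PB, matching "about 4 PB"
```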
Prototype of a production system for Cherenkov Telescope Array with DIRAC
2015
The Cherenkov Telescope Array (CTA), an array of many tens of Imaging Atmospheric Cherenkov Telescopes deployed on an unprecedented scale, is the next-generation instrument in the field of very-high-energy gamma-ray astronomy. CTA will operate as an open observatory providing data products to the scientific community. An average data stream of about 10 GB/s for about 1000 hours of observation per year is expected, producing several PB per year. Substantial CPU time is required for data processing as well as for the massive Monte Carlo simulations needed for detector calibration purposes. The current CTA computing model is based on a distributed infrastructure for the archive and the off-line data processing. In order to manage the off-line data processing in a distributed environment, CTA has evaluated the DIRAC (Distributed Infrastructure with Remote Agent Control) system, a general framework for the management of tasks over distributed heterogeneous computing environments. In particular, a production system prototype has been developed, based on the two main DIRAC components, i.e. the Workload Management and Data Management Systems. After three years of successful exploitation of this prototype for simulations and analysis, we have shown that DIRAC provides the functionalities needed for CTA data processing. Based on these results, the CTA development plan aims for an operational production system, based on the DIRAC Workload Management System, ready for the start of the CTA operation phase in 2017-2018. A further important challenge is the development of fully automated execution of the CTA workflows. For this purpose, we have identified a third DIRAC component, the so-called Transformation System, which offers very interesting functionalities to achieve this automation. The Transformation System is a 'data-driven' system that automatically triggers data-processing and data-management operations according to predefined scenarios (a conceptual sketch of this mechanism follows this record). In this paper, we present a brief summary of the DIRAC evaluation done so far, as well as the future developments planned for the CTA production system. In particular, we focus on the development of CTA automatic workflows based on the Transformation System. We also propose some design optimizations of the Transformation System, so that it fully supports the most complex workflows envisaged in CTA processing.
Journal Article
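The 'data-driven' behaviour of the Transformation System described in the abstract can be sketched conceptually: newly registered files are matched against a transformation's predefined filter, and matching files are grouped into processing tasks. The code below is a hypothetical illustration of the idea only; the real DIRAC Transformation System exposes a different, service-based interface.

```python
# Conceptual sketch of a "data-driven" transformation: newly registered
# files matching a predefined filter automatically generate processing
# tasks. Hypothetical code, not the actual DIRAC Transformation System API.

def matches(file_metadata, filter_spec):
    """A file triggers the transformation if all filter fields match."""
    return all(file_metadata.get(k) == v for k, v in filter_spec.items())

def run_transformation(new_files, filter_spec, group_size=10):
    """Select matching files and emit one processing task per group."""
    selected = [f for f in new_files if matches(f["meta"], filter_spec)]
    return [selected[i:i + group_size] for i in range(0, len(selected), group_size)]

new_files = [
    {"lfn": "/cta/mc/run001.simtel", "meta": {"type": "MC", "site": "Paranal"}},
    {"lfn": "/cta/mc/run002.simtel", "meta": {"type": "MC", "site": "LaPalma"}},
]
tasks = run_transformation(new_files, {"type": "MC", "site": "Paranal"}, group_size=1)
print(len(tasks), "task(s) created")  # -> 1 task, for the Paranal file
```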
DIRAC framework evaluation for the Fermi-LAT and CTA experiments
2014
DIRAC (Distributed Infrastructure with Remote Agent Control) is a general framework for the management of tasks over distributed heterogeneous computing environments. It was originally developed to support the production activities of the LHCb (Large Hadron Collider beauty) experiment and is today extensively used by several particle physics and biology communities. Current (Fermi Large Area Telescope, LAT) and planned (Cherenkov Telescope Array, CTA) new-generation astrophysical/cosmological experiments, with very large processing and storage needs, are investigating the usability of DIRAC in this context. Each of these use cases has its peculiarities: Fermi-LAT will interface DIRAC to its own workflow system to gain access to grid resources, while CTA is using DIRAC as the workflow management system for Monte Carlo production and analysis on the grid. We describe the prototyping effort we led toward deploying a DIRAC solution for some aspects of the Fermi-LAT and CTA needs.
Journal Article
Application of the DIRAC framework to CTA: first evaluation
2012
The Cherenkov Telescope Array (CTA), an array of several tens of Cherenkov telescopes, is the next-generation ground-based instrument in the field of very-high-energy gamma-ray astronomy. The CTA observatory is expected to produce a main data stream for permanent storage of the order of 1 to 5 GB/s for about 1000 hours of observation per year, thus producing a total data volume of the order of several PB per year. The CPU time needed to calibrate and process one hour of data taking will be of the order of a few thousand CPU hours with current technology. The high data rate of CTA, together with the large computing power required for Monte Carlo (MC) simulations, calls for dedicated computing resources. Massive MC simulations are needed to study the physics of cosmic-ray atmospheric showers as well as telescope response and performance for different detectors and layout configurations. Given these large storage and computing requirements, the Grid approach is well suited, and a vast number of MC simulations are already running on the European Grid Infrastructure (EGI). In order to optimize resource usage and to handle all production and future analysis activities in a coherent way, a high-level framework with advanced functionalities is desirable. For this purpose we have preliminarily evaluated the DIRAC framework for distributed computing and tested it for the CTA workload and data management systems. In this paper we present a possible implementation of a Distributed Computing Infrastructure (DCI) computing model for CTA, as well as the results of the DIRAC benchmark tests.
Journal Article
Extending the Fermi-LAT Data Processing Pipeline to the Grid
2012
The Data Handling Pipeline (“Pipeline”) has been developed for the Fermi Gamma-Ray Space Telescope (Fermi) Large Area Telescope (LAT), which launched in June 2008. Since then it has been used to completely automate the production of data-quality monitoring quantities, the reconstruction and routine analysis of all data received from the satellite, and the delivery of science products to the collaboration and the Fermi Science Support Center. Aside from the reconstruction of raw data from the satellite (Level 1), data reprocessing and various event-level analyses also place a substantial load on the pipeline and computing resources; unlike Level 1, these loads can run continuously for weeks or months at a time. The pipeline is also heavily used for production Monte Carlo tasks. In daily use it receives a new data download every 3 hours and launches about 2000 jobs to process each download, typically completing the processing before the next download arrives (a scale check follows this record); the need for manual intervention has been reduced to less than 0.01% of submitted jobs. The Pipeline software is written almost entirely in Java and comprises several modules, including web services that allow online monitoring and provide charts summarizing workflow and performance information. The server supports communication with several batch systems, such as LSF and BQS and, more recently, Sun Grid Engine and Condor; this is accomplished through dedicated job-control services that for Fermi run at SLAC and at the other computing site involved in this large-scale framework, the Lyon computing center of IN2P3. Although different in its task logic, a separate interface to the DIRAC system is being evaluated in order to communicate with EGI sites and exploit Grid resources, relying on dedicated Grid-optimized systems rather than developing our own. More recently the Pipeline and its associated data catalog have been generalized for use by other experiments, and they are currently used by the Enriched Xenon Observatory (EXO) and Cryogenic Dark Matter Search (CDMS) experiments, as well as for Monte Carlo simulations for the future Cherenkov Telescope Array (CTA).
Journal Article
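The operational figures in the abstract imply a daily scale that is easy to verify (a download every 3 hours, about 2000 jobs per download, manual intervention below 0.01% of submitted jobs):

```python
# Scale check using only the abstract's own numbers.
downloads_per_day = 24 / 3   # a new download every 3 hours
jobs_per_download = 2000
jobs_per_day = downloads_per_day * jobs_per_download  # ~16000

manual_fraction = 0.0001     # "less than 0.01%" of submitted jobs
print(f"~{jobs_per_day:.0f} jobs/day, fewer than "
      f"{jobs_per_day * manual_fraction:.1f} needing manual intervention")
```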
Major Changes to the LHCb Grid Computing Model in Year 2 of LHC Data
by Arrabito, L; Charpentier, P; Lanciotti, E
in Computation, Computational grids, Data processing
2012
The increase in the luminosity of the LHC in 2011 also brought an increase in the computing requirements for data processing. This paper describes the data-processing operations during the 2011 prompt reconstruction as well as the end-of-year reprocessing of the full data sample. It also gives an outlook on the next evolutionary steps in the LHCb computing model for 2012 data processing and beyond.
Journal Article
Erratum to: Measurement of ψ(2S) meson production in pp collisions at √s = 7 TeV
2020
This erratum corrects the measurements of prompt and secondary (from-b) ψ(2S) production.
Journal Article