Search Results

5,387 results for "Di Simone, A."
Modernizing the ATLAS simulation infrastructure
The ATLAS Simulation infrastructure has been used to produce upwards of 50 billion proton-proton collision events for analyses ranging from detailed Standard Model measurements to searches for exotic new phenomena. In the last several years, the infrastructure has been heavily revised to allow intuitive multithreading and significantly improved maintainability. Such a massive update of a legacy code base requires careful choices about which pieces of code to completely rewrite and which to wrap or revise. The initialization of the complex geometry was generalized to allow new tools and geometry description languages that are popular in some detector groups. The addition of multithreading required Geant4-MT and GaudiHive, two frameworks with fundamentally different approaches to multithreading, to work together. It also required enforcing thread safety throughout a large code base, which drove the redesign of several aspects of the simulation, including truth, the record of particle interactions with the detector during the simulation. These advances were possible thanks to close interaction with the Geant4 developers.
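The thread-safety requirement is easiest to see in miniature. The sketch below is a purely illustrative Python toy, not ATLAS code (which is C++ on Geant4-MT and GaudiHive); it shows the general pattern of guarding a shared truth record with a lock so that concurrent worker threads can append particle-interaction entries without races. All names are hypothetical.

```python
import threading

class TruthRecord:
    """Hypothetical stand-in for a shared truth record: the log of
    particle interactions built up while events are simulated."""

    def __init__(self):
        self._entries = []
        self._lock = threading.Lock()  # guards _entries across workers

    def add_interaction(self, event_id, particle, process):
        # In C++ without such a guard, concurrent writes from several
        # simulation threads would race; the lock illustrates the
        # pattern (CPython's GIL happens to make list.append atomic,
        # but the point here is the design, not the Python detail).
        with self._lock:
            self._entries.append((event_id, particle, process))

def simulate_event(event_id, truth):
    # Each worker thread records the interactions of "its" event.
    truth.add_interaction(event_id, "e-", "bremsstrahlung")

truth = TruthRecord()
workers = [threading.Thread(target=simulate_event, args=(i, truth))
           for i in range(8)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(len(truth._entries))  # 8 entries, none lost
```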
The distributed production system of the SuperB project: description and results
The SuperB experiment needs large samples of Monte Carlo simulated events in order to finalize the detector design and to estimate data analysis performance. The requirements are beyond the capabilities of a single computing farm, so a distributed production model capable of exploiting the existing worldwide HEP distributed computing infrastructure is needed. In this paper we describe the set of tools that have been developed to manage the production of the required simulated events. The production of events follows three main phases: distribution of input data files to the remote site Storage Elements (SE); job submission, via the SuperB GANGA interface, to all available remote sites; and transfer of output files to the CNAF repository. The job workflow includes procedures for consistency checking, monitoring, data handling and bookkeeping. A replication mechanism allows the job output to be stored on the local site SE as well. Results from the 2010 official productions are reported.
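The three phases map naturally onto a small driver loop. The following is a minimal self-contained sketch of that workflow, with every function an invented placeholder rather than a SuperB tool (the real system drives GANGA and grid storage middleware):

```python
"""Illustrative sketch of the three production phases described above.
Every name here is an invented placeholder, not a SuperB tool."""

def copy_to_storage_element(path, site):
    print(f"phase 1: {path} -> SE at {site}")

def ganga_submit(site, n_events):
    print(f"phase 2: submit {n_events}-event job to {site}")
    return {"site": site, "output": f"{site}/out.root", "ok": True}

def transfer_to_repository(path, repo="CNAF"):
    print(f"phase 3: {path} -> {repo} repository")

def run_production(input_files, sites, n_events=10000):
    # Phase 1: distribute input data files to the remote site SEs.
    for site in sites:
        for f in input_files:
            copy_to_storage_element(f, site)
    # Phase 2: submit jobs to all available remote sites.
    jobs = [ganga_submit(site, n_events) for site in sites]
    # Phase 3: consistency check, then transfer outputs to the central
    # repository; the paper also keeps a replica on the local site SE.
    for job in jobs:
        if job["ok"]:
            transfer_to_repository(job["output"])

run_production(["background_mix.root"], ["SiteA", "SiteB", "SiteC"])
```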
Fast simulation of electromagnetic showers in the ATLAS calorimeter: Frozen showers
One of the most time-consuming processes in the simulation of pp interactions in the ATLAS detector at the LHC is the simulation of electromagnetic showers in the calorimeter. In order to speed up event simulation, several parametrisation methods are available in ATLAS. In this paper we present a short description of the frozen-shower technique, together with some recent benchmarks and a comparison with full simulation.
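The core idea of frozen showers can be sketched compactly: once a secondary particle drops below an energy threshold, the simulation substitutes a pre-generated shower from a library, scaled to the particle's energy, instead of tracking every interaction. The Python below is an illustrative toy with invented numbers, not the ATLAS implementation:

```python
import bisect

# Toy "frozen shower" library: pre-simulated energy deposits per
# calorimeter layer, keyed by the energy of the source particle.
# All values are invented for the example.
LIBRARY = {
    1.0:  [0.4, 0.3, 0.2, 0.1],
    5.0:  [2.1, 1.5, 0.9, 0.5],
    10.0: [4.0, 3.1, 1.9, 1.0],
}
KEYS = sorted(LIBRARY)

def frozen_shower(energy_gev):
    """Look up the library shower nearest in energy and scale it,
    rather than simulating the shower from scratch."""
    i = bisect.bisect_left(KEYS, energy_gev)
    nearby = KEYS[max(i - 1, 0):i + 1]
    key = min(nearby, key=lambda k: abs(k - energy_gev))
    scale = energy_gev / key
    return [d * scale for d in LIBRARY[key]]

# Full simulation tracks the high-energy part of the cascade; below
# the threshold the frozen shower is substituted (threshold invented).
THRESHOLD = 10.0  # GeV
secondary_energy = 3.7
if secondary_energy < THRESHOLD:
    print(frozen_shower(secondary_energy))
```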
Testing and evaluating storage technology to build a distributed Tier1 for SuperB in Italy
The SuperB asymmetric-energy e+e− collider and detector, to be built at the newly founded Nicola Cabibbo Lab, will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab⁻¹ and a luminosity target of 10³⁶ cm⁻² s⁻¹. This luminosity translates into the requirement of storing more than 50 PB of additional data each year, making SuperB an interesting challenge for the data management infrastructure, both at the site level and at the Wide Area Network level. A new Tier1, distributed among 3 or 4 sites in the south of Italy, is planned as part of the SuperB computing infrastructure. Data storage is a key topic whose evolution affects how storage infrastructure is configured and set up, both in a local computing cluster and in a distributed paradigm. In this work we report on tests of software for data distribution and data replication, focusing on our experience with Hadoop and GlusterFS.
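Evaluations like this typically start from simple read/write probes against the mounted filesystem before moving to realistic workloads. The sketch below is an illustrative throughput probe, not the test suite the paper describes; it assumes a POSIX mount point (GlusterFS mounts natively via FUSE, and HDFS can be exposed through a FUSE layer) and uses a temporary directory as a stand-in:

```python
import os
import tempfile
import time

def throughput_probe(mount_point, size_mb=64):
    """Crude write/read throughput probe for a POSIX-mounted
    filesystem. Illustrative only; real storage tests use larger
    files, many clients and realistic access patterns."""
    payload = os.urandom(1024 * 1024)          # 1 MB of random data
    path = os.path.join(mount_point, "probe.bin")
    t0 = time.time()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(payload)
        f.flush()
        os.fsync(f.fileno())                   # force data to storage
    write_mbps = size_mb / (time.time() - t0)
    t0 = time.time()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass
    read_mbps = size_mb / (time.time() - t0)
    os.remove(path)
    return write_mbps, read_mbps

# Stand-in for a real mount point such as a GlusterFS volume:
w, r = throughput_probe(tempfile.gettempdir())
print(f"write {w:.0f} MB/s, read {r:.0f} MB/s")
```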
Computing for the next generation flavour factories
The next generation of Super Flavor Factories, like SuperB and SuperKEKB, presents significant computing challenges. Extrapolating the BaBar and Belle experience to the SuperB nominal luminosity of 10³⁶ cm⁻² s⁻¹, we estimate the data size collected after a few years of operation to be 200 PB and the CPU capacity required to process it to be of the order of 2000 kHEP-SPEC06. Already in the current phase of detector design, the number of simulated events needed to estimate the impact on very rare benchmark channels is huge, and has required the development of new simulation tools and the deployment of a worldwide distributed production system. With the collider in operation, very large data sets will have to be managed, and new technologies with a potentially large impact on the computing models, such as many-core CPUs, will need to be effectively exploited. In addition, SuperB, like the LHC experiments, will have to make use of distributed computing resources accessible via Grid infrastructures, while providing an efficient and reliable data access model to its final users. To explore the key issues, a dedicated R&D program has been launched and is now in progress. A description of the R&D goals and the status of ongoing activities is presented.
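The quoted scales follow from a short back-of-envelope calculation. Only the luminosity, the 200 PB figure and the >50 PB/year growth rate (from the SuperB storage paper above) come from the abstracts; the effective running time per year is a conventional assumption:

```python
# Back-of-envelope consistency check of the figures quoted above.
peak_lumi = 1e36          # cm^-2 s^-1, nominal SuperB luminosity
seconds_per_year = 1e7    # conventional effective accelerator year
pb_per_year = 50          # PB/year of new data, quoted earlier

# 1 ab^-1 corresponds to 1e42 cm^-2 of integrated luminosity.
inv_ab_per_year = peak_lumi * seconds_per_year / 1e42
print(f"~{inv_ab_per_year:.0f} ab^-1 of integrated luminosity per year")

# At >50 PB/year, "a few years" indeed reaches the 200 PB estimate.
print(f"~{4 * pb_per_year} PB after four years of running")
```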
The ATLAS detector digitization project for 2009 data taking
The ATLAS digitization project is steered by a top-level Python digitization package which ensures uniform and consistent configuration across the sub-detectors. The properties of the digitization algorithms were tuned to reproduce the detector response seen in lab tests, test-beam data and cosmic ray running. Dead channels and noise rates are read from database tables to reproduce the conditions seen in a particular run. The digits are then persistified as Raw Data Objects, with or without intermediate simulation of the exact data acquisition format, depending on the detector type. Emphasis is placed on the description of the digitization project configuration, its flexibility in handling events for processing, and the global detector configuration. Other available options, including detector noise simulation, the random number service, metadata and the details of pile-up background events to be overlaid, are also described. The LHC beam bunch spacing is also configurable, as are the number of bunch crossings to overlay and the default detector conditions (including noisy channels and dead electronics associated with each detector layout). Cavern background calculation, beam halo and beam gas treatment, and pile-up with real data are also part of this report.
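To make the shape of such a configuration concrete, here is a hypothetical illustration of the kind of options the abstract lists, gathered in one place; the names are invented for this sketch and are not the actual ATLAS job-options interface:

```python
from dataclasses import dataclass, field

@dataclass
class DigitizationConfig:
    """Invented illustration of a centralized digitization
    configuration; not the real ATLAS Python package."""
    bunch_spacing_ns: int = 25            # LHC beam bunch spacing
    n_bunch_crossings: int = 7            # crossings overlaid per event
    simulate_noise: bool = True           # detector noise simulation
    random_seed_offset: int = 0           # feeds the random number service
    conditions_tag: str = "COND-EXAMPLE"  # dead channels/noise from DB
    pileup_files: list = field(default_factory=list)  # events to overlay

cfg = DigitizationConfig(bunch_spacing_ns=75,
                         pileup_files=["minbias.pool.root"])
print(cfg)
```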
The status of the simulation project for the ATLAS experiment in view of the LHC startup
The ATLAS simulation suite is in a mature phase, ready to cope with the challenge of the 2009 data. The simulation framework, which is integrated into the ATLAS framework (Athena), offers a set of pre-configured applications for the full simulation of ATLAS, combined test-beam setups, cosmic ray setups and old standalone test beams. Each detector component has been carefully described in detail and monitored for performance. The few pieces of the apparatus (forward and very forward detectors), inert material and services (toroid supports, support rails, detector feet) that are still missing are about to be integrated into the current simulation suite. Detailed descriptions of the ideal and real geometries for each ATLAS subcomponent allow optimization studies and validation. Small-scale productions are monitored daily through a set of tests on different samples of physics events, and large-scale productions on the Grid verify the robustness of the implementation, as well as exposing errors only visible with large statistics. The conditions used in the simulation process are now stored in the output file as metadata and can be used to process the data properly. A fast shower simulation suite has also been developed in ATLAS, and performance comparisons are part of the overall evaluation.
SuperB evaluation of DIRAC Distributed Infrastructure
The SuperB asymmetric-energy e+e− collider and detector, to be built at the newly founded Nicola Cabibbo Lab, will provide a uniquely sensitive probe of New Physics in the flavour sector of the Standard Model. The SuperB distributed computing group performed a detailed evaluation of the DIRAC distributed infrastructure for use in the SuperB experiment, based on two use cases: end-user analysis and Monte Carlo production. The tests aimed to evaluate DIRAC's ability to manage both gLite and OSG sites, its File Catalog, and its job and data management features in realistic SuperB use cases.
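For orientation, job submission through DIRAC's Python API looks roughly like the fragment below. The executable name and sandbox contents are placeholders, and a configured DIRAC client installation is assumed; this is a minimal sketch of the kind of job such an evaluation exercises, not the SuperB test code:

```python
# Initialise the DIRAC client environment before using the APIs.
from DIRAC.Core.Base import Script
Script.parseCommandLine(ignoreErrors=True)

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

job = Job()
job.setName("superb-mc-test")                     # placeholder name
job.setExecutable("simulate.sh",                  # placeholder script
                  arguments="--events 1000")
job.setInputSandbox(["simulate.sh"])
job.setOutputSandbox(["std.out", "std.err"])

result = Dirac().submitJob(job)
print(result)  # S_OK/S_ERROR structure; contains the job ID on success
```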
FastSim: A Fast Simulation for the SuperB Detector
We have developed a parameterized (fast) simulation for detector optimization and physics reach studies of the proposed SuperB Flavor Factory in Italy. Detector components are modeled as thin sections of planes, cylinders, disks or cones. Particle-material interactions are modeled using simplified cross-sections and formulas. Active detectors are modeled using parameterized response functions. Geometry and response parameters are configured using XML files with a custom-designed schema. Reconstruction algorithms adapted from BaBar are used to build tracks and clusters. Multiple sources of background signals can be merged with primary signals. Pattern recognition errors are modeled statistically by randomly misassigning nearby tracking hits. Standard BaBar analysis tuples are used as the event output. Hadronic B meson pair events can be simulated at roughly 10 Hz.
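Two of the techniques mentioned, parameterized response and statistical modeling of pattern-recognition errors, reduce to a few lines each. The toy below uses invented resolution coefficients and swap probabilities; it illustrates the approach, not the SuperB FastSim tuning:

```python
import random

def smeared_pt(true_pt_gev, a=0.0013, b=0.0045):
    """Parameterized tracking response: smear the true transverse
    momentum with a resolution of the common form
    sigma(pt)/pt = a*pt + b (coefficients invented)."""
    sigma = (a * true_pt_gev + b) * true_pt_gev
    return random.gauss(true_pt_gev, sigma)

def misassign_hits(hits, swap_prob=0.02):
    """Statistical model of pattern-recognition errors: randomly swap
    neighbouring hits between nearby tracks (probability invented)."""
    hits = list(hits)
    for i in range(len(hits) - 1):
        if random.random() < swap_prob:
            hits[i], hits[i + 1] = hits[i + 1], hits[i]
    return hits

print(smeared_pt(1.5))
print(misassign_hits(["trk1_hit1", "trk2_hit1", "trk1_hit2", "trk2_hit2"]))
```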