115 result(s) for "Potamianos, K"
Pixel readout chip software emulators for the YARR DAQ system upgrade
The Yet Another Rapid Readout (YARR) system is a DAQ system designed for pixel readout chips including the current generation ATLAS FE-I4 chip and the next generation RD53A chip, which is part of the development of new Pixel detector technology to be implemented in High-Luminosity Large Hadron Collider experiments. YARR utilises a PCI-e FPGA card which acts as a simple gateway to pipe all data from pixel readout chips via the high speed PCI-e connection into the host system's memory. All data processing is done on a software level in the host CPU(s), utilising a data-driven, multi-threaded, parallel processing paradigm. YARR has recently been upgraded to interface with software emulators of pixel readout chips. These emulators offer many benefits: quick development of DAQ software; expansion of the developer base to users without access to readout hardware; preparation of DAQ software for upcoming readout chips, such as RD53A; implementation of Continuous Integration and unit tests to ensure code quality and maintainability. The design and capabilities of the FE-I4 and RD53A software emulators will be presented.
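The core idea of the upgrade, swapping the hardware transfer layer for a pure-software chip emulator so DAQ code can be developed and CI-tested without readout hardware, can be sketched as follows. This is an illustrative Python sketch, not the actual YARR code (which is C++); all class, method, and register names here are invented for the example.

```python
# Hypothetical sketch: the DAQ core talks to an abstract transfer layer,
# so a software emulator can stand in for the FPGA/chip during CI runs.
from abc import ABC, abstractmethod

class TxRxLayer(ABC):
    """Interface the DAQ core uses for both hardware and emulator."""
    @abstractmethod
    def write(self, cmd: int, value: int) -> None: ...
    @abstractmethod
    def read(self) -> int: ...

class ChipEmulator(TxRxLayer):
    """Software stand-in for an FE-I4-like chip: commands update an
    internal register file, reads return queued response words."""
    def __init__(self):
        self.registers = {}
        self._out = []

    def write(self, cmd: int, value: int) -> None:
        self.registers[cmd] = value          # model a register write
        self._out.append(value & 0xFFFF)     # echo back a data word

    def read(self) -> int:
        return self._out.pop(0) if self._out else 0

def run_scan(link: TxRxLayer, settings: dict) -> list:
    """DAQ-side code is identical whether `link` is hardware or emulator."""
    for reg, val in settings.items():
        link.write(reg, val)
    return [link.read() for _ in settings]

# In a CI job the emulator replaces the PCIe card:
emu = ChipEmulator()
words = run_scan(emu, {0x01: 25, 0x02: 300})
```

Because `run_scan` only sees the abstract interface, the same scan code runs unchanged against real hardware, which is what lets unit tests and Continuous Integration exercise the full DAQ path.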
Modernising ATLAS Software Build Infrastructure
In the last year ATLAS has radically updated its software development infrastructure, hugely reducing the complexity of building releases and greatly improving build speed, flexibility and code testing. The first step in this transition was the adoption of CMake as the software build system over the older CMT. This required the development of an automated translation from the old system to the new, followed by extensive testing and improvements. This resulted in a far more standard build process that was married to the method of building ATLAS software as a series of 12 separate projects from Subversion. We then proceeded with a migration of the code base from Subversion to Git. As the Subversion repository had been structured to manage each package more or less independently, there was no simple mapping that could be used to manage the migration into Git. Instead, a specialist set of scripts that captured the software changes across official software releases was developed. With some clean-up of the repository and the policy of only migrating packages in production releases, we managed to reduce the repository size from 62 GiB to 220 MiB. After moving to Git we took the opportunity to introduce continuous integration, so that now each code change from developers is built and tested before being approved. With both CMake and Git in place we also dramatically simplified the build management of ATLAS software. Many heavyweight homegrown tools were dropped and the build procedure was reduced to a single bootstrap of some external packages, followed by a full build of the rest of the stack. This has reduced the time for a build by a factor of 2. It is now easy to build ATLAS software, freeing developers to test compile-intrusive changes or new platform ports with ease. We have also developed a system to build lightweight ATLAS releases, for simulation, analysis or physics derivations, which can be built from the same branch.
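The migration policy described above, importing only package tags that actually appeared in a production release, is what shrinks the repository so dramatically. A minimal Python sketch of that selection step, with invented names and a toy data layout (not the real ATLAS migration scripts):

```python
# Hypothetical sketch: keep only (package, tag) pairs that were ever
# part of an official production release; everything else is dropped
# from the Subversion-to-Git import.
def tags_to_migrate(release_manifests):
    """release_manifests: {release_name: {package: svn_tag}}.
    Returns the set of (package, tag) pairs worth importing."""
    keep = set()
    for packages in release_manifests.values():
        keep.update(packages.items())
    return keep

# Toy manifests for two releases; note the shared TrigSteer tag
# is imported only once.
manifests = {
    "21.0.1": {"Reconstruction/RecoAlgs": "RecoAlgs-00-01-02",
               "Trigger/TrigSteer": "TrigSteer-03-00-01"},
    "21.0.2": {"Reconstruction/RecoAlgs": "RecoAlgs-00-01-03",
               "Trigger/TrigSteer": "TrigSteer-03-00-01"},
}
selected = tags_to_migrate(manifests)
```

Intermediate development tags that never made it into a release simply never enter the new history, which is how a 62 GiB repository can collapse to a few hundred MiB.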
First look at the physics case of TLEP
Abstract: The discovery by the ATLAS and CMS experiments of a new boson with mass around 125 GeV and with measured properties compatible with those of a Standard-Model Higgs boson, coupled with the absence of discoveries of phenomena beyond the Standard Model at the TeV scale, has triggered interest in ideas for future Higgs factories. A new circular e+e− collider hosted in an 80 to 100 km tunnel, TLEP, is among the most attractive solutions proposed so far. It has a clean experimental environment, produces high luminosity for top-quark, Higgs boson, W and Z studies, accommodates multiple detectors, and can reach energies up to the tt̄ threshold and beyond. It will enable measurements of the Higgs boson properties and of Electroweak Symmetry-Breaking (EWSB) parameters with unequalled precision, offering exploration of physics beyond the Standard Model in the multi-TeV range. Moreover, being the natural precursor of the VHE-LHC, a 100 TeV hadron machine in the same tunnel, it builds up a long-term vision for particle physics. Altogether, the combination of TLEP and the VHE-LHC offers, with great cost effectiveness, the best precision and the best search reach of all options presently on the market. This paper presents a first appraisal of the salient features of the TLEP physics potential, to serve as a baseline for a more extensive design study.
Compute farm software for ATLAS IBL calibration
In 2014 the Insertable B-Layer (IBL) will extend the existing Pixel Detector of the ATLAS experiment at CERN by over 12 million additional pixels. For calibration and monitoring purposes, occupancy and time-over-threshold data are being histogrammed in the read-out hardware. Further processing of the histograms happens on commodity hardware, which not only requires the fast transfer of histogram data from the read-out hardware to the computing farm via Ethernet, but also the integration of the software and hardware into the already existing data-acquisition and calibration framework (TDAQ and PixelDAQ) of the ATLAS experiment and the current Pixel Detector. We implement the software running on the compute cluster with an emphasis on modularity, allowing for flexible adjustment of the infrastructure and a good scalability with respect to the number of network interfaces, available CPU cores, and deployed machines. By using a modular design we are able to not only employ CPU-based fitting algorithms, but also have the possibility to take advantage of the performance offered by a GPU-based approach to fitting.
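The modular-fitter design described above, one fitting interface with interchangeable CPU (and, in principle, GPU) backends consuming occupancy histograms, can be illustrated with a toy Python sketch. The error-function S-curve is the standard model for pixel threshold scans, but the class names and the grid-search backend here are invented for the example and are not the actual calibration-farm code.

```python
# Hypothetical sketch of a pluggable fitter backend for threshold scans.
import math

def s_curve(q, threshold, noise):
    """Expected occupancy for injected charge q (error-function model)."""
    return 0.5 * (1.0 + math.erf((q - threshold) / (math.sqrt(2) * noise)))

class CpuFitter:
    """Simple CPU backend: grid search over threshold, fixed noise.
    A GPU backend would expose the same fit() signature."""
    def fit(self, charges, occupancies, noise=50.0):
        best_t, best_err = None, float("inf")
        for t in range(0, 5001, 10):     # coarse threshold grid (electrons)
            err = sum((s_curve(q, t, noise) - o) ** 2
                      for q, o in zip(charges, occupancies))
            if err < best_err:
                best_t, best_err = t, err
        return best_t

# Histograms streamed from the read-out would feed many such fits in
# parallel; here, one synthetic pixel with a 3000 e- threshold:
charges = list(range(2500, 3501, 100))
occupancies = [s_curve(q, 3000.0, 50.0) for q in charges]
fitted = CpuFitter().fit(charges, occupancies)
```

Keeping the fit behind a single interface is what allows the farm to scale with CPU cores and machines, and to swap in a GPU implementation without touching the histogram-transfer layer.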
A Linear Collider Vision for the Future of Particle Physics
In this paper we review the physics opportunities at linear e+e− colliders with a special focus on high centre-of-mass energies and beam polarisation, take a fresh look at the various accelerator technologies available or under development and, for the first time, discuss how a facility first equipped with a technology mature today could be upgraded with technologies of tomorrow to reach much higher energies and/or luminosities. In addition, we will discuss detectors and alternative collider modes, as well as opportunities for beyond-collider experiments and R&D facilities as part of a linear collider facility (LCF). The material of this paper will support all plans for e+e− linear colliders and additional opportunities they offer, independently of technology choice or proposed site, as well as R&D for advanced accelerator technologies. This joint perspective on the physics goals, early technologies and upgrade strategies has been developed by the LCVision team based on an initial discussion at LCWS2024 in Tokyo and a follow-up at the LCVision Community Event at CERN in January 2025. It heavily builds on decades of achievements of the global linear collider community, in particular in the context of CLIC and ILC.
The Linear Collider Facility (LCF) at CERN
In this paper we outline a proposal for a Linear Collider Facility as the next flagship project for CERN. It offers the opportunity for a timely, cost-effective and staged construction of a new collider that will be able to comprehensively map the Higgs boson's properties, including the Higgs field potential, thanks to a large span in centre-of-mass energies and polarised beams. A comprehensive programme to study the Higgs boson and its closest relatives with high precision requires data at centre-of-mass energies from the Z pole to at least 1 TeV. It should include measurements of the Higgs boson in both major production mechanisms, ee -> ZH and ee -> vvH, precision measurements of gauge boson interactions as well as of the W boson, Higgs boson and top-quark masses, measurement of the top-quark Yukawa coupling through ee -> ttH, measurement of the Higgs boson self-coupling through HH production, and precision measurements of the electroweak couplings of the top quark. In addition, ee collisions offer discovery potential for new particles complementary to HL-LHC.
Interim report for the International Muon Collider Collaboration (IMCC)
The International Muon Collider Collaboration (IMCC) [1] was established in 2020 following the recommendations of the European Strategy for Particle Physics (ESPP) and the implementation of the European Strategy for Particle Physics-Accelerator R&D Roadmap by the Laboratory Directors Group [2], hereinafter referred to as the European LDG roadmap. The Muon Collider Study (MuC) covers the accelerator complex, detectors and physics for a future muon collider. In 2023, European Commission support was obtained for a design study of a muon collider (MuCol) [3]. This project started on 1st March 2023, with work-packages aligned with the overall muon collider studies. In preparation for and during the 2021-22 U.S. Snowmass process, the muon collider project parameters, technical studies and physics performance studies were performed and presented in great detail. Recently, the P5 panel [4] in the U.S. recommended muon collider R&D, proposed to join the IMCC and envisages that the U.S. should prepare to host a muon collider, calling this their "muon shot". In the past, the U.S. Muon Accelerator Programme (MAP) [5] has been instrumental in studies of concepts and technologies for a muon collider.
Measurements of Single Event Upset in ATLAS IBL
Effects of Single Event Upsets (SEU) and Single Event Transients (SET) are studied in the FE-I4B chip of the innermost layer of the ATLAS pixel system. SEU/SET affect the FE-I4B Global Registers as well as the settings for the individual pixels, causing, among other things, occupancy losses, drops in the low voltage currents, noisy pixels, and silent pixels. Quantitative data analysis and simulations indicate that SET dominate over SEU on the load line of the memory. Operational issues and mitigation techniques are presented.
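One common mitigation technique for upsets in configuration memory is triple modular redundancy (TMR): each bit is stored three times and read back through a bitwise majority vote, so a single upset in one copy cannot corrupt the setting. The Python sketch below illustrates the principle only; it is not the FE-I4B circuit, and the class and method names are invented for the example.

```python
# Hypothetical sketch of triple modular redundancy for a config register.
class TriplicatedRegister:
    def __init__(self, value: int):
        self.copies = [value, value, value]   # three identical copies

    def upset(self, copy: int, bit: int) -> None:
        """Model an SEU flipping one bit in one stored copy."""
        self.copies[copy] ^= (1 << bit)

    def read(self) -> int:
        """Bitwise majority vote across the three copies."""
        a, b, c = self.copies
        return (a & b) | (a & c) | (b & c)

reg = TriplicatedRegister(0b1010_0110)
reg.upset(copy=1, bit=2)        # single upset: masked by the vote
value_after_one_seu = reg.read()
reg.upset(copy=0, bit=2)        # second hit on the same bit: vote fails
value_after_two_seu = reg.read()
```

The voted read survives any single upset, but two upsets on the same bit in different copies defeat it, which is why periodic refresh of the configuration (rewriting the registers from the DAQ side) is typically combined with redundancy.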
Muon Collider Forum Report
A multi-TeV muon collider offers a spectacular opportunity in the direct exploration of the energy frontier. Offering a combination of unprecedented energy collisions in a comparatively clean leptonic environment, a high energy muon collider has the unique potential to provide both precision measurements and the highest energy reach in one machine that cannot be paralleled by any currently available technology. The topic generated a lot of excitement in Snowmass meetings and continues to attract a large number of supporters, including many from the early career community. In light of this very strong interest within the US particle physics community, Snowmass Energy, Theory and Accelerator Frontiers created a cross-frontier Muon Collider Forum in November of 2020. The Forum has been meeting on a monthly basis and organized several topical workshops dedicated to physics, accelerator technology, and detector R&D. Findings of the Forum are summarized in this report.