Catalogue Search | MBRL
Explore the vast range of titles available.
18 result(s) for "Piperov, S"
Analysis Facilities for the HL-LHC White Paper
by Stark, G.; Ciangottini, D.; Krommydas, I.
in Analysis facilities; Analysis preservation; Data access
2025
This white paper presents the current status of the R&D for Analysis Facilities (AFs) and attempts to summarize the views on the future direction of these facilities. These views have been collected through the High Energy Physics (HEP) Software Foundation's (HSF) Analysis Facilities Forum, established in March 2022, the Analysis Ecosystems II workshop, which took place in May 2022, and the WLCG/HSF pre-CHEP workshop, which took place in May 2023. The paper attempts to cover all the aspects of an analysis facility.
Journal Article
Using the glideinWMS System as a Common Resource Provisioning Layer in CMS
2015
CMS will require access to more than 125k processor cores for the beginning of Run 2 in 2015 to carry out its ambitious physics program with more events of higher complexity. During Run 1 these resources were predominantly provided by a mix of grid sites and local batch resources. During the long shutdown, cloud infrastructures, diverse opportunistic resources, and HPC supercomputing centers were made available to CMS, which further complicated the operations of the submission infrastructure. In this presentation we will discuss the CMS effort to adopt and deploy the glideinWMS system as a common resource provisioning layer for grid, cloud, local batch, and opportunistic resources and sites. We will address the challenges associated with integrating the various types of resources, the efficiency gains and simplifications associated with using a common resource provisioning layer, and discuss the solutions found. We will finish with an outlook of future plans for how CMS is moving forward on resource provisioning for more heterogeneous architectures and services.
Journal Article
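The glideinWMS layer described in the record above builds on HTCondor: pilot jobs ("glideins") start on grid, cloud, batch, or opportunistic resources and join a single HTCondor pool, so user jobs are submitted in one uniform way regardless of where they eventually run. A minimal sketch of such a uniform submission, using the htcondor Python bindings; the payload script, arguments, and the DESIRED_Sites attribute are illustrative assumptions, not CMS's actual configuration:

import htcondor  # HTCondor Python bindings

# Describe the job once; the glidein-provisioned pool decides where it runs.
job = htcondor.Submit({
    "executable": "run_analysis.sh",   # hypothetical payload script
    "arguments": "--events 1000",
    "output": "job.out",
    "error": "job.err",
    "log": "job.log",
    "request_cpus": "1",
    "request_memory": "2GB",
    # A custom ClassAd attribute that a matchmaking policy could use to
    # steer the job toward particular sites; purely illustrative here.
    "+DESIRED_Sites": '"T2_US_Purdue,T2_CH_CERN"',
})

schedd = htcondor.Schedd()     # the local submit node
result = schedd.submit(job)    # queue one job in the common pool
print("submitted cluster", result.cluster())

The point of the common layer is visible in the sketch: nothing in the submit description says whether the job lands on a grid site, a cloud VM, or an opportunistic slot.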
Connecting Restricted, High-Availability, or Low-Latency Resources to a Seamless Global Pool for CMS
2017
The connection of diverse and sometimes non-Grid-enabled resource types to the CMS Global Pool, which is based on HTCondor and glideinWMS, has been a major goal of CMS. These resources range from a high-availability, low-latency facility at CERN for urgent calibration studies, called the CAF, to a local user facility at the Fermilab LPC, allocation-based computing resources at NERSC and SDSC, opportunistic resources provided through the Open Science Grid, commercial clouds, and others, as well as access to opportunistic cycles on the CMS High Level Trigger farm. In addition, we have provided the capability to give local users priority on resources beyond those pledged to the WLCG at CMS sites. Many of the solutions employed to bring these diverse resource types into the Global Pool have common elements, while some are very specific to a particular project. This paper details some of the strategies and solutions used to access these resources through the Global Pool in a seamless manner.
Journal Article
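Because all of the resource types in the record above join one HTCondor pool, an operator can see the pool's composition with a single collector query. A minimal sketch with the htcondor Python bindings; the collector address and the GLIDEIN_ResourceName attribute used for grouping are assumptions, not the actual CMS Global Pool setup:

import collections
import htcondor  # HTCondor Python bindings

# Query the pool's collector for all execute slots and tally them by a
# resource-name attribute advertised by the glideins (assumed name).
collector = htcondor.Collector("pool.example.org:9618")
slots = collector.query(
    htcondor.AdTypes.Startd,
    projection=["Name", "GLIDEIN_ResourceName", "State"],
)

by_resource = collections.Counter(
    ad.get("GLIDEIN_ResourceName", "unknown") for ad in slots
)
for resource, count in by_resource.most_common():
    print(f"{resource:30s} {count:6d} slots")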
CMS computing operations during run 1
2014
During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. Up to 100k processor cores were used simultaneously, and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this document we discuss the operational experience from this first run. We present the workflows and data flows that were executed, and we discuss the tools and services developed and the operations and shift models used to sustain the system. Many techniques followed the original computing plan, but some were developed in reaction to difficulties and opportunities. We also address the lessons learned from an operational perspective, and how they are shaping our thoughts for 2015.
Journal Article
CMS Data Transfer operations after the first years of LHC collisions
2012
The CMS experiment utilizes a distributed computing infrastructure, and its performance depends heavily on the fast and smooth distribution of data between different CMS sites. Data must be transferred from the Tier-0 (CERN) to the Tier-1 sites for processing, storing, and archiving, and timeliness and good quality are vital to avoid overflowing the CERN storage buffers. At the same time, processed data have to be distributed from Tier-1 sites to all Tier-2 sites for physics analysis, while Monte Carlo simulations are sent back to Tier-1 sites for further archival. At the core of all this transfer machinery is the PhEDEx (Physics Experiment Data Export) data transfer system. It is very important to ensure reliable operation of the system, and the operational tasks comprise monitoring and debugging all transfer issues. Based on transfer quality information, the Site Readiness tool is used to create plans for future resource utilization. We review the operational procedures created to enforce reliable data delivery to CMS distributed sites all over the world. Additionally, we need to keep data and metadata consistent at all sites, both on disk and on tape. In this presentation, we describe the principles and actions taken to keep data consistent on site storage systems and in the central CMS data replication database (TMDB/DBS), while ensuring fast and reliable delivery of data samples of hundreds of terabytes to the entire CMS physics community.
Journal Article
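The consistency task mentioned in the record above reduces, in essence, to comparing two file sets: what the central catalogue (TMDB/DBS) believes a site hosts versus what the site's storage actually contains. A minimal sketch of that idea in Python; both loader functions and the dump-file names are hypothetical stand-ins for real catalogue queries and storage namespace dumps:

# Compare catalogue contents with site storage; report the two failure modes.

def load_catalogue_files(site: str) -> set[str]:
    # Hypothetical: in reality this would query the central CMS catalogue.
    with open(f"{site}_catalogue_dump.txt") as fh:
        return {line.strip() for line in fh if line.strip()}

def load_storage_files(site: str) -> set[str]:
    # Hypothetical: in reality this would come from a storage namespace dump.
    with open(f"{site}_storage_dump.txt") as fh:
        return {line.strip() for line in fh if line.strip()}

def check_consistency(site: str) -> None:
    catalogue = load_catalogue_files(site)
    storage = load_storage_files(site)
    missing = catalogue - storage   # catalogued but absent on disk/tape
    orphans = storage - catalogue   # on storage but unknown to the catalogue
    print(f"{site}: {len(missing)} missing, {len(orphans)} orphan files")

check_consistency("T1_EXAMPLE_Site")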
No file left behind - monitoring transfer latencies in PhEDEx
by Sanchez-Hernandez, A; Ratnikova, N; Chwalek, T
in Completion time; Data transfer (computers); Infrastructure
2012
The CMS experiment has to move Petabytes of data among dozens of computing centres with low latency in order to make efficient use of its resources. Transfer operations are well established to achieve the desired level of throughput, but operators lack a system to identify early on transfers that will need manual intervention to reach completion. File transfer latencies are sensitive to the underlying problems in the transfer infrastructure, and their measurement can be used as prompt trigger for preventive actions. For this reason, PhEDEx, the CMS transfer management system, has recently implemented a monitoring system to measure the transfer latencies at the level of individual files. For the first time now, the system can predict the completion time for the transfer of a data set. The operators can detect abnormal patterns in transfer latencies early, and correct the issues while the transfer is still in progress. Statistics are aggregated for blocks of files, recording a historical log to monitor the long-term evolution of transfer latencies, which are used as cumulative metrics to evaluate the performance of the transfer infrastructure, and to plan the global data placement strategy. In this contribution, we present the typical patterns of transfer latencies that may be identified with the latency monitor, and we show how we are able to detect the sources of latency arising from the underlying infrastructure (such as stuck files) which need operator intervention.
Journal Article
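The prediction described in the record above can be illustrated with a toy version: given the times at which files of a block finished transferring, estimate the arrival rate and extrapolate when the whole block will complete. PhEDEx does this per link with far richer statistics; the function and numbers below are purely illustrative:

import time

def estimate_completion(done_times: list[float], total_files: int) -> float:
    """Predict the epoch time at which all files will have arrived."""
    done = len(done_times)
    elapsed = max(done_times) - min(done_times)
    rate = done / elapsed if elapsed > 0 else float("inf")  # files per second
    remaining = total_files - done
    return max(done_times) + remaining / rate

# Example: 40 of 100 files arrived at a steady pace, one every 90 seconds.
now = time.time()
arrivals = [now - 3600 + i * 90 for i in range(40)]
eta = estimate_completion(arrivals, total_files=100)
print(f"predicted completion in {(eta - now) / 3600:.1f} h")

A stuck file shows up in this picture as an ETA that keeps receding while the observed rate drops, which is exactly the pattern an operator would act on.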
Large-angle production of charged pions by 3-12.9 GeV/c protons on beryllium, aluminium and lead targets
by Zucchelli, P.; Tereschenko, V.; Panman, J.
in Aluminum; Astronomy; Astrophysics and Cosmology
2008
Measurements of the double-differential π± production cross-section in the range of momentum 100 MeV/c ≤ p < 800 MeV/c and angle 0.35 rad ≤ θ < 2.15 rad in proton–beryllium, proton–aluminium and proton–lead collisions are presented. The data were taken with the HARP detector in the T9 beam line of the CERN PS. The pions were produced by proton beams in a momentum range from 3 GeV/c to 12.9 GeV/c hitting a target with a thickness of 5% of a nuclear interaction length. The tracking and identification of the produced particles was performed using a small-radius cylindrical time projection chamber (TPC) placed inside a solenoidal magnet. Incident particles were identified by an elaborate system of beam detectors. Results are obtained for the double-differential cross-sections d²σ/dpdθ at six incident proton beam momenta (3 GeV/c, 5 GeV/c, 8 GeV/c, 8.9 GeV/c (Be only), 12 GeV/c and 12.9 GeV/c (Al only)) and compared to previously available data.
Journal Article
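For reference, the quantity d²σ/dpdθ quoted in the HARP records here is the pion yield per momentum-angle bin, normalized to the number of beam protons and target nuclei. A schematic thin-target form of that normalization (a sketch of the standard definition, not quoted from the paper):

% Schematic thin-target normalization of the measured quantity.
% Delta N_pi: pions counted in a (p, theta) bin; N_p: protons on target;
% A: target mass number; N_A: Avogadro's number; rho*t: target areal density.
\[
  \frac{d^2\sigma}{dp\,d\theta}
  \;\approx\;
  \frac{A}{N_A\,\rho\,t}\,
  \frac{\Delta N_{\pi}(p,\theta)}{N_p\,\Delta p\,\Delta\theta}
\]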
Large-angle production of charged pions by 3 GeV/c - 12.9 GeV/c protons on beryllium, aluminium and lead targets
2008
Measurements of the double-differential π± production cross-section in the range of momentum 100 MeV/c ≤ p < 800 MeV/c and angle 0.35 rad ≤ θ < 2.15 rad in proton–beryllium, proton–aluminium and proton–lead collisions are presented. The data were taken with the HARP detector in the T9 beam line of the CERN PS. The pions were produced by proton beams in a momentum range from 3 GeV/c to 12.9 GeV/c hitting a target with a thickness of 5% of a nuclear interaction length. The tracking and identification of the produced particles was performed using a small-radius cylindrical time projection chamber (TPC) placed inside a solenoidal magnet. Incident particles were identified by an elaborate system of beam detectors. Results are obtained for the double-differential cross-sections d²σ/dpdθ at six incident proton beam momenta (3 GeV/c, 5 GeV/c, 8 GeV/c, 8.9 GeV/c (Be only), 12 GeV/c and 12.9 GeV/c (Al only)) and compared to previously available data.
Journal Article
Analysis Facilities White Paper
2024
This white paper presents the current status of the R&D for Analysis Facilities (AFs) and attempts to summarize the views on the future direction of these facilities. These views have been collected through the High Energy Physics (HEP) Software Foundation's (HSF) Analysis Facilities Forum, established in March 2022, the Analysis Ecosystems II workshop, which took place in May 2022, and the WLCG/HSF pre-CHEP workshop, which took place in May 2023. The paper attempts to cover all the aspects of an analysis facility.
Measurement of the production of charged pions by protons on a tantalum target
by R. Schroeter; F. J. P. Soler; V. Tereschenko
in [PHYS.HEXP]Physics [physics]/High Energy Physics - Experiment [hep-ex]; ddc:530; Engineering (miscellaneous); Physics and Astronomy (miscellaneous)
2007
Journal Article