Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
19 result(s) for "Bendavid, Josh"
Angular coefficients from interpretable machine learning with symbolic regression
2026
Abstract
We explore the use of symbolic regression to derive compact analytical expressions for angular observables relevant to electroweak boson production at the Large Hadron Collider (LHC). Focusing on the angular coefficients that govern the decay distributions of W and Z bosons, we investigate whether symbolic models can approximate these quantities, which are typically computed via computationally costly numerical procedures, with high fidelity and interpretability. Using the PySR package, we first validate the approach in controlled settings, namely angular distributions in lepton-lepton collisions in QED and in leading-order Drell-Yan production at the LHC. We then apply symbolic regression to extract closed-form expressions for the angular coefficients A_i as functions of transverse momentum, rapidity, and invariant mass, using next-to-leading order simulations of pp → ℓ⁺ℓ⁻ events. Our results demonstrate that symbolic regression can produce accurate and generalisable expressions that match Monte Carlo predictions within uncertainties, while preserving interpretability and providing insight into the kinematic dependence of angular observables.
Journal Article
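The symbolic-regression workflow this abstract describes, as implemented in packages like PySR, amounts to searching a space of closed-form expressions for one that balances accuracy against complexity. A toy sketch of that accuracy-simplicity trade-off (not PySR itself); the target function is a hypothetical stand-in for an angular coefficient, not taken from the paper:

```python
import math

# Toy sketch of the symbolic-regression idea behind tools like PySR:
# score a pool of candidate closed-form expressions on data and keep the
# one minimising error plus a complexity penalty.

def target(pt):
    # hypothetical "truth": saturating dependence on transverse momentum
    return pt * pt / (pt * pt + 1.0)

# (expression string, complexity, callable)
candidates = [
    ("x", 1, lambda x: x),
    ("x^2", 2, lambda x: x * x),
    ("sin(x)", 2, lambda x: math.sin(x)),
    ("x^2/(x^2+1)", 5, lambda x: x * x / (x * x + 1.0)),
]

xs = [0.1 * i for i in range(1, 50)]

def score(expr, complexity, f, alpha=1e-3):
    # accuracy-simplicity trade-off: mean squared error + complexity penalty
    mse = sum((f(x) - target(x)) ** 2 for x in xs) / len(xs)
    return mse + alpha * complexity

best = min(candidates, key=lambda c: score(*c))
print(best[0])  # prints x^2/(x^2+1): the exact form wins despite higher complexity
```

Real SR engines evolve the candidate pool with genetic programming rather than enumerating it, but the selection criterion is the same trade-off shown here.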
High Performance Analysis, Today and Tomorrow
2023
The unprecedented volume of data and Monte Carlo simulations at the HL-LHC will pose increasing challenges for data analysis both in terms of computing resource requirements as well as "time to insight". Discussed are the evolution and current state of analysis data formats, software, infrastructure and workflows at the LHC, and the directions being taken towards fast, efficient, and effective physics analysis at the HL-LHC.
Journal Article
Compatibility and combination of world W-boson mass measurements
by Amoroso, S.; Ramos Pernas, M.; Vicini, A.
in Astronomy, Astrophysics and Cosmology, Collaboration
2024
The compatibility of W-boson mass measurements performed by the ATLAS, LHCb, CDF, and D0 experiments is studied using a coherent framework with theory uncertainty correlations. The measurements are combined using a number of recent sets of parton distribution functions (PDF), and are further combined with the average value of measurements from the Large Electron–Positron collider. The considered PDF sets generally have a low compatibility with a suite of global rapidity-sensitive Drell–Yan measurements. The most compatible set is CT18 due to its larger uncertainties. A combination of all m_W measurements yields a value of m_W = 80,394.6 ± 11.5 MeV with the CT18 set, but has a probability of compatibility of 0.5% and is therefore disfavoured. Combinations are performed removing each measurement individually, and a 91% probability of compatibility is obtained when the CDF measurement is removed. The corresponding value of the W boson mass is 80,369.2 ± 13.3 MeV, which differs by 3.6σ from the CDF value determined using the same PDF set.
Journal Article
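The combination summarised above is, at its core, an inverse-variance weighted average plus a compatibility test. A minimal sketch under the simplifying assumption of uncorrelated uncertainties (the paper's framework includes theory correlations, which this toy omits); the input numbers are hypothetical:

```python
import math

def combine(values, errors):
    # inverse-variance weighted average of uncorrelated measurements
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    return mean, 1.0 / math.sqrt(sum(weights))

def z_score(v1, e1, v2, e2):
    # significance of the difference between two independent results
    return abs(v1 - v2) / math.sqrt(e1 ** 2 + e2 ** 2)

# hypothetical m_W measurements in MeV, for illustration only
mean, err = combine([80370.0, 80360.0], [15.0, 20.0])
print(round(mean, 1), round(err, 1))  # prints 80366.4 12.0
```

The combined uncertainty is always smaller than any single input uncertainty, which is why folding in a precise but discrepant measurement drives the overall compatibility probability down.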
SubMIT: A Physics Analysis Facility at MIT
2026
The recently completed SubMIT platform is a small set of servers that provide interactive access to substantial data samples at high speeds, enabling sophisticated data analyses with very fast turnaround times. Additionally, it seamlessly integrates massive processing resources for large-scale tasks by connecting to a set of powerful batch processing systems. It serves as an ideal prototype for an Analysis Facility tailored to meet the demanding data and computational requirements anticipated during the High-Luminosity phase of the Large Hadron Collider. The key features that make this facility so powerful include highly optimized data access with a minimum of 100 Gbps networking per server, a large managed NVMe storage system, and a substantial spinning-disk Ceph file system. The platform integrates a diverse set of high multicore CPU machines for tasks benefiting from multithreading, as well as GPU resources, for example for neural network training. SubMIT also provides and supports a flexible environment for users to manage their own software needs, for example by using containers. This article describes the facility, its users, and a few complementary, generic and real-life analyses that are used to benchmark its various capabilities.
Journal Article
The HEP.TrkX Project: Deep Learning for Particle Tracking
by Gray, Lindsey; Spiropoulou, Maria; Calafiura, Paolo
in Algorithms, Artificial neural networks, Charged particles
2018
Charged particle reconstruction in dense environments, such as the detectors of the High Luminosity Large Hadron Collider (HL-LHC), is a challenging pattern recognition problem. Traditional tracking algorithms, such as the combinatorial Kalman Filter, have been used with great success in HEP experiments for years. However, these state-of-the-art techniques are inherently sequential and scale quadratically or worse with increased detector occupancy. The HEP.TrkX project is a pilot project with the aim to identify and develop cross-experiment solutions based on machine learning algorithms for track reconstruction. Machine learning algorithms bring a lot of potential to this problem thanks to their capability to model complex non-linear data dependencies, to learn effective representations of high-dimensional data through training, and to parallelize easily on high-throughput architectures such as FPGAs or GPUs. In this paper we present the evolution and performance of our recurrent (LSTM) and convolutional neural networks moving from basic 2D models to more complex models and the challenges of scaling up to realistic dimensionality/sparsity.
Journal Article
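The quadratic (or worse) scaling the abstract mentions comes from combinatorics: seeding tracks by pairing hits across detector layers grows with the product of per-layer occupancies. A toy count with hypothetical occupancies:

```python
def seed_count(hits_per_layer):
    # two-hit seed candidates: all pairs of hits on adjacent layers
    return sum(a * b for a, b in zip(hits_per_layer, hits_per_layer[1:]))

# hypothetical occupancies: 10x more hits per layer -> 100x more seed candidates
for occupancy in (10, 100, 1000):
    print(occupancy, seed_count([occupancy] * 4))
```

A sequential filter must examine each candidate in turn, which is exactly the bottleneck that motivates the parallel, learned approaches the project explores.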
Explorations of the viability of ARM and Xeon Phi for physics processing
by Jones, Christopher D; Manzali, Matteo; Cooperman, Gene
in Benchmarks, Computation, Computer programs
2014
We report on our investigations into the viability of the ARM processor and the Intel Xeon Phi co-processor for scientific computing. We describe our experience porting software to these processors and running benchmarks using real physics applications to explore the potential of these processors for production physics processing.
Journal Article
Angular Coefficients from Interpretable Machine Learning with Symbolic Regression
2025
We explore the use of symbolic regression to derive compact analytical expressions for angular observables relevant to electroweak boson production at the Large Hadron Collider (LHC). Focusing on the angular coefficients that govern the decay distributions of W and Z bosons, we investigate whether symbolic models can approximate these quantities, which are typically computed via computationally costly numerical procedures, with high fidelity and interpretability. Using the PySR package, we first validate the approach in controlled settings, namely angular distributions in lepton-lepton collisions in QED and in leading-order Drell-Yan production at the LHC. We then apply symbolic regression to extract closed-form expressions for the angular coefficients A_i as functions of transverse momentum, rapidity, and invariant mass, using next-to-leading order simulations of pp → ℓ⁺ℓ⁻ events. Our results demonstrate that symbolic regression can produce accurate and generalisable expressions that match Monte Carlo predictions within uncertainties, while preserving interpretability and providing insight into the kinematic dependence of angular observables.
Compatibility and combination of world W-boson mass measurements
by Miguel Ramos Pernas; Xu, Menglin; Andari, Nansi
in Compatibility, Distribution functions, Electron-positron accelerators
2023
The compatibility of W-boson mass measurements performed by the ATLAS, LHCb, CDF, and D0 experiments is studied using a coherent framework with theory uncertainty correlations. The measurements are combined using a number of recent sets of parton distribution functions (PDF), and are further combined with the average value of measurements from the Large Electron-Positron collider. The considered PDF sets generally have a low compatibility with a suite of global rapidity-sensitive Drell-Yan measurements. The most compatible set is CT18 due to its larger uncertainties. A combination of all m_W measurements yields a value of m_W = 80,394.6 ± 11.5 MeV with the CT18 set, but has a probability of compatibility of 0.5% and is therefore disfavoured. Combinations are performed removing each measurement individually, and a 91% probability of compatibility is obtained when the CDF measurement is removed. The corresponding value of the W boson mass is 80,369.2 ± 13.3 MeV, which differs by 3.6σ from the CDF value determined using the same PDF set.
Symbolic regression for precision LHC physics
by Conde, Daniel; Sanz, Veronica; Ubiali, Maria
in Benchmarks, Exact solutions, First principles
2024
We study the potential of symbolic regression (SR) to derive compact and precise analytic expressions that can improve the accuracy and simplicity of phenomenological analyses at the Large Hadron Collider (LHC). As a benchmark, we apply SR to equation recovery in quantum electrodynamics (QED), where established analytical results from quantum field theory provide a reliable framework for evaluation. This benchmark serves to validate the performance and reliability of SR before extending its application to structure functions in the Drell-Yan process mediated by virtual photons, which lack analytic representations from first principles. By combining the simplicity of analytic expressions with the predictive power of machine learning techniques, SR offers a useful tool for facilitating phenomenological analyses in high energy physics.
SubMIT: A Physics Analysis Facility at MIT
by Freer, Chad; Lavezzo, Luca; Moore, Marianne
in Batch processing, Large Hadron Collider, Luminosity
2025
The recently completed SubMIT platform is a small set of servers that provide interactive access to substantial data samples at high speeds, enabling sophisticated data analyses with very fast turnaround times. Additionally, it seamlessly integrates massive processing resources for large-scale tasks by connecting to a set of powerful batch processing systems. It serves as an ideal prototype for an Analysis Facility tailored to meet the demanding data and computational requirements anticipated during the High-Luminosity phase of the Large Hadron Collider. The key features that make this facility so powerful include highly optimized data access with a minimum of 100 Gbps networking per server, a large managed NVMe storage system, and a substantial spinning-disk Ceph file system. The platform integrates a diverse set of high multicore CPU machines for tasks benefiting from multithreading, as well as GPU resources, for example for neural network training. SubMIT also provides and supports a flexible environment for users to manage their own software needs, for example by using containers. This article describes the facility, its users, and a few complementary, generic and real-life analyses that are used to benchmark its various capabilities.