19 result(s) for "Bendavid, Josh"
Angular coefficients from interpretable machine learning with symbolic regression
We explore the use of symbolic regression to derive compact analytical expressions for angular observables relevant to electroweak boson production at the Large Hadron Collider (LHC). Focusing on the angular coefficients that govern the decay distributions of W and Z bosons, we investigate whether symbolic models can approximate these quantities, typically computed via computationally costly numerical procedures, with high fidelity and interpretability. Using the PySR package, we first validate the approach in controlled settings, namely angular distributions in lepton-lepton collisions in QED and in leading-order Drell-Yan production at the LHC. We then apply symbolic regression to extract closed-form expressions for the angular coefficients A_i as functions of transverse momentum, rapidity, and invariant mass, using next-to-leading-order simulations of pp → ℓ⁺ℓ⁻ events. Our results demonstrate that symbolic regression can produce accurate and generalisable expressions that match Monte Carlo predictions within uncertainties, while preserving interpretability and providing insight into the kinematic dependence of angular observables.
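As a rough illustration of the workflow the abstract describes, the sketch below fits a symbolic expression for a single angular coefficient as a function of transverse momentum, rapidity, and invariant mass with PySR. The training data and the target function here are placeholders (in the paper they would come from binned NLO Monte Carlo); the operator set and search settings are assumptions, not the authors' configuration.

```python
# Minimal sketch, not the authors' setup: symbolic regression for one angular
# coefficient A_i(pT, y, m) using the PySR package.
import numpy as np
from pysr import PySRRegressor

# Hypothetical training set; real inputs would be A_i values extracted from NLO simulation.
rng = np.random.default_rng(0)
X = rng.uniform([0.0, -2.5, 80.0], [100.0, 2.5, 100.0], size=(1000, 3))  # pT [GeV], y, m [GeV]
y = X[:, 0] ** 2 / (X[:, 0] ** 2 + X[:, 2] ** 2)  # stand-in target, not a physics result

model = PySRRegressor(
    niterations=40,                       # search length; tune for real data
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["square", "sqrt"],
    maxsize=25,                           # cap expression complexity for interpretability
)
model.fit(X, y, variable_names=["pT", "y", "m"])
print(model.sympy())  # best closed-form candidate found by the search
```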
High Performance Analysis, Today and Tomorrow
The unprecedented volume of data and Monte Carlo simulations at the HL-LHC will pose increasing challenges for data analysis, both in terms of computing resource requirements and "time to insight". We discuss the evolution and current state of analysis data formats, software, infrastructure, and workflows at the LHC, and the directions being taken towards fast, efficient, and effective physics analysis at the HL-LHC.
Compatibility and combination of world W-boson mass measurements
The compatibility of W-boson mass measurements performed by the ATLAS, LHCb, CDF, and D0 experiments is studied using a coherent framework with theory uncertainty correlations. The measurements are combined using a number of recent sets of parton distribution functions (PDF), and are further combined with the average value of measurements from the Large Electron–Positron collider. The considered PDF sets generally have a low compatibility with a suite of global rapidity-sensitive Drell–Yan measurements. The most compatible set is CT18 due to its larger uncertainties. A combination of all m_W measurements yields a value of m_W = 80,394.6 ± 11.5 MeV with the CT18 set, but has a probability of compatibility of 0.5% and is therefore disfavoured. Combinations are performed removing each measurement individually, and a 91% probability of compatibility is obtained when the CDF measurement is removed. The corresponding value of the W boson mass is 80,369.2 ± 13.3 MeV, which differs by 3.6σ from the CDF value determined using the same PDF set.
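For orientation, the following sketch shows the standard covariance-weighted (BLUE-style) combination and chi-square compatibility test that this kind of analysis builds on. The measurement values and covariance below are placeholders for illustration only, and the paper's actual combination constructs the covariance from correlated PDF and theory uncertainties rather than the purely uncorrelated diagonal used here.

```python
# Sketch of a covariance-weighted combination and its compatibility probability.
# All numbers are placeholders, not the inputs or results of the paper.
import numpy as np
from scipy.stats import chi2

m = np.array([80370.0, 80380.0, 80430.0, 80360.0])       # hypothetical m_W values [MeV]
cov = np.diag(np.array([15.0, 20.0, 10.0, 25.0]) ** 2)   # hypothetical, uncorrelated uncertainties

ones = np.ones_like(m)
cinv = np.linalg.inv(cov)
var = 1.0 / (ones @ cinv @ ones)           # variance of the combined value
w = var * (cinv @ ones)                    # combination weights (sum to 1)
m_comb = w @ m
resid = m - m_comb
chisq = resid @ cinv @ resid
p_compat = chi2.sf(chisq, df=len(m) - 1)   # probability of compatibility

print(f"combined m_W = {m_comb:.1f} +- {np.sqrt(var):.1f} MeV, p = {p_compat:.3f}")
```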
SubMIT: A Physics Analysis Facility at MIT
The recently completed SubMIT platform is a small set of servers that provide interactive access to substantial data samples at high speeds, enabling sophisticated data analyses with very fast turnaround times. Additionally, it seamlessly integrates massive processing resources for large-scale tasks by connecting to a set of powerful batch processing systems. It serves as an ideal prototype for an Analysis Facility tailored to meet the demanding data and computational requirements anticipated during the High-Luminosity phase of the Large Hadron Collider. The key features that make this facility so powerful include highly optimized data access with a minimum of 100 Gbps networking per server, a large managed NVMe storage system, and a substantial spinning-disk Ceph file system. The platform integrates a diverse set of high-core-count CPU machines for tasks that benefit from multithreading, as well as GPU resources, for example for neural network training. SubMIT also provides and supports a flexible environment for users to manage their own software needs, for example by using containers. This article describes the facility, its users, and a few complementary, generic and real-life analyses that are used to benchmark its various capabilities.
The HEP.TrkX Project: Deep Learning for Particle Tracking
Charged particle reconstruction in dense environments, such as the detectors of the High Luminosity Large Hadron Collider (HL-LHC), is a challenging pattern recognition problem. Traditional tracking algorithms, such as the combinatorial Kalman Filter, have been used with great success in HEP experiments for years. However, these state-of-the-art techniques are inherently sequential and scale quadratically or worse with increased detector occupancy. The HEP.TrkX project is a pilot project with the aim to identify and develop cross-experiment solutions based on machine learning algorithms for track reconstruction. Machine learning algorithms bring a lot of potential to this problem thanks to their capability to model complex non-linear data dependencies, to learn effective representations of high-dimensional data through training, and to parallelize easily on high-throughput architectures such as FPGAs or GPUs. In this paper we present the evolution and performance of our recurrent (LSTM) and convolutional neural networks, moving from basic 2D models to more complex models, and the challenges of scaling up to realistic dimensionality and sparsity.
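To make the recurrent-model idea concrete, here is a toy sketch of an LSTM that reads a sequence of hit coordinates on successive detector layers and regresses the hit position expected on the next layer. The architecture, sizes, and inputs are assumptions chosen for illustration, not the HEP.TrkX models themselves.

```python
# Toy sketch (illustrative assumptions, not the HEP.TrkX code): predict the next-layer
# hit position from a sequence of hits using an LSTM.
import torch
import torch.nn as nn

class NextHitLSTM(nn.Module):
    def __init__(self, hidden_size: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)   # regress (x, y) on the next layer

    def forward(self, hits: torch.Tensor) -> torch.Tensor:
        # hits: (batch, n_layers, 2) -> predicted position on layer n_layers + 1
        out, _ = self.lstm(hits)
        return self.head(out[:, -1, :])

model = NextHitLSTM()
seed_hits = torch.randn(8, 5, 2)   # random stand-in for 8 track seeds over 5 layers
next_hit = model(seed_hits)        # shape (8, 2)
print(next_hit.shape)
```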
Explorations of the viability of ARM and Xeon Phi for physics processing
We report on our investigations into the viability of the ARM processor and the Intel Xeon Phi co-processor for scientific computing. We describe our experience porting software to these processors and running benchmarks using real physics applications to explore the potential of these processors for production physics processing.
Symbolic regression for precision LHC physics
We study the potential of symbolic regression (SR) to derive compact and precise analytic expressions that can improve the accuracy and simplicity of phenomenological analyses at the Large Hadron Collider (LHC). As a benchmark, we apply SR to equation recovery in quantum electrodynamics (QED), where established analytical results from quantum field theory provide a reliable framework for evaluation. This benchmark serves to validate the performance and reliability of SR before extending its application to structure functions in the Drell-Yan process mediated by virtual photons, which lack analytic representations from first principles. By combining the simplicity of analytic expressions with the predictive power of machine learning techniques, SR offers a useful tool for facilitating phenomenological analyses in high energy physics.
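The "equation recovery" benchmark described above can be pictured with a short sanity check: sample the well-known leading-order QED angular shape, dσ/dcosθ ∝ 1 + cos²θ, and see whether symbolic regression rediscovers it. The operator choices, sampling, and settings below are assumptions for illustration, not the paper's configuration.

```python
# Hedged illustration of an equation-recovery benchmark: does SR rediscover 1 + cos^2(theta)?
import numpy as np
from pysr import PySRRegressor

cos_theta = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
target = 1.0 + cos_theta[:, 0] ** 2       # known leading-order QED shape used as ground truth

sr = PySRRegressor(
    niterations=20,
    binary_operators=["+", "*"],
    unary_operators=["square"],
    maxsize=10,
)
sr.fit(cos_theta, target, variable_names=["costheta"])
print(sr.sympy())   # should report an expression equivalent to 1 + costheta**2
```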