13 result(s) for "Jezghani, A"
Using Nab to determine correlations in unpolarized neutron decay
The Nab experiment will measure the ratio of the weak axial-vector and vector coupling constants \(\lambda = g_A/g_V\) with precision \(\delta\lambda/\lambda \sim 3\times10^{-4}\) and search for a Fierz term \(b_F\) at the level \(\Delta b_F < 10^{-3}\). The Nab detection system uses thick, large-area, segmented silicon detectors to very precisely determine the decay proton's time of flight and the decay electron's energy in coincidence, and to reconstruct the correlation between the antineutrino and electron momenta. An excellent understanding of the systematic effects affecting timing and energy reconstruction with this detection system is required. To explore these effects, a series of ex situ studies has been undertaken, including a search for a Fierz term at a less sensitive level of \(\Delta b_F < 10^{-2}\) in the beta decay of \(^{45}\)Ca using the UCNA spectrometer.
A Recursive Method for Real-Time Waveform Fitting with Background Noise Rejection
We present a technique for developing a high-throughput algorithm that fits a combination of template pulse shapes while simultaneously subtracting parameterized background noise. By convolving the pseudoinverse of the least-squares design matrix along a regularly sampled waveform trace, the time evolution of the fit parameters for each basis function can be determined in real time. We approximate these sliding linear-fit response functions using piecewise polynomials and develop an FPGA-friendly algorithm to be implemented in high-sample-rate data acquisition systems. The result is a robust universal filter that compares well with common filters optimized for energy calibration and resolution, as well as with filters optimized for timing performance, even when significant noise components are present.
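The core idea of the abstract — a sliding least-squares fit obtained by correlating the trace with rows of a design-matrix pseudoinverse — can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the collaboration's implementation; the triangular pulse shape, window length, and amplitudes below are invented for the example:

```python
import numpy as np

def sliding_lsq_filter(trace, templates):
    """Fit a linear combination of template shapes at every window
    position by correlating the trace with rows of the pseudoinverse
    of the least-squares design matrix."""
    A = np.column_stack(templates)   # design matrix, shape (K, M)
    P = np.linalg.pinv(A)            # pseudoinverse, shape (M, K)
    # amps[i, n] is the best-fit amplitude of template i for the
    # window starting at sample n (sliding dot product per row).
    return np.array([np.correlate(trace, row, mode='valid') for row in P])

# Toy usage: a triangular pulse of amplitude 5 on a constant baseline of 2.
K = 32
pulse = np.concatenate([np.linspace(0, 1, K // 2), np.linspace(1, 0, K // 2)])
baseline = np.ones(K)

trace = np.full(256, 2.0)            # constant background
trace[100:100 + K] += 5.0 * pulse    # pulse placed at sample 100

amps = sliding_lsq_filter(trace, [pulse, baseline])
n_best = int(np.argmax(amps[0]))     # window where the pulse template fits best
```

Because each pseudoinverse row is orthogonal to the other basis functions, the pulse-amplitude output is automatically insensitive to the constant background — the background-subtraction property the abstract describes.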
From Commissioning to Precision Data-Taking: Resolving Operational Challenges in the Nab Detector Systems
Our understanding of the weak mixing of quarks, described by the Cabibbo-Kobayashi-Maskawa (CKM) matrix, currently presents an anomaly. Thanks to major strides in both theory and experiment, improved precision in determinations of the first row of matrix elements has revealed disagreement with the expectation of unitarity. The Nab experiment at the Spallation Neutron Source is designed to precisely extract the first matrix element \(V_{ud}\) and shed light on experimental tensions within the neutron beta decay dataset. Nab's asymmetric spectrometer allows coincident reconstruction of the decay proton and electron energies, which will be used to determine the electron-neutrino correlation coefficient and thus (with the neutron lifetime) determine \(V_{ud}\). This unique approach has provided a more comprehensive view of neutron beta decay, including a first observation of the full momentum phase space of the decay above detector thresholds and limits on exotic neutron states. Recent upgrades to the Nab detector system have improved the robustness and stability of detector performance in terms of proton detection efficiency, noise performance, and detector segment availability, setting the stage for high-precision physics data-taking.
A Flexible Data Acquisition System Architecture for the Nab Experiment
The Nab experiment will measure the electron-neutrino correlation and Fierz interference term in free neutron beta decay to test the Standard Model and probe beyond-Standard-Model physics. Using National Instruments' PXIe-5171 Reconfigurable Oscilloscope module, we have developed a data acquisition system that is not only capable of meeting Nab's specifications, but flexible enough to be adapted in situ as the experimental environment dictates. The L1 and L2 trigger logic can be reconfigured at runtime through configuration files and LabVIEW controls to optimize the system for coincidence event detection. The system can identify L1 triggers at rates of at least \(1\) MHz while reading out a peak signal rate of approximately \(2\) GB/s. During commissioning, the system ran at a sustained readout rate of \(400\) MB/s of signal data originating from roughly \(6\) kHz of L2 triggers, well within the peak performance of the system.
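The quoted rates imply a per-event payload and a throughput margin that are easy to check. A quick back-of-envelope sketch (the rates are taken from the abstract; the per-event size is derived here, not stated in the original):

```python
# Figures quoted in the abstract.
sustained_rate_bytes = 400e6   # 400 MB/s sustained readout during commissioning
l2_trigger_rate_hz = 6e3       # ~6 kHz L2 trigger rate
peak_rate_bytes = 2e9          # ~2 GB/s quoted peak readout capability

# Derived quantities (assumptions, not stated in the abstract).
payload_per_event = sustained_rate_bytes / l2_trigger_rate_hz  # bytes per L2 event
headroom = peak_rate_bytes / sustained_rate_bytes              # margin to peak

print(f"~{payload_per_event / 1e3:.0f} kB per L2 event, "
      f"{headroom:.0f}x headroom to peak readout")
```

This suggests each L2 event carries on the order of tens of kilobytes of waveform data, with roughly a factor of five between the commissioning load and the system's quoted peak.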
The Nab Experiment: A Precision Measurement of Unpolarized Neutron Beta Decay
Neutron beta decay is one of the most fundamental processes in nuclear physics and provides a sensitive means to uncover the details of the weak interaction. Neutron beta decay can evaluate the ratio of axial-vector to vector coupling constants in the Standard Model, \(\lambda = g_A / g_V\), through multiple decay correlations. The Nab experiment will carry out measurements of the electron-neutrino correlation parameter \(a\) with a precision of \(\delta a / a = 10^{-3}\) and the Fierz interference term \(b\) to \(\delta b = 3\times10^{-3}\) in unpolarized free neutron beta decay. These results, along with a more precise measurement of the neutron lifetime, aim to deliver an independent determination of the ratio \(\lambda\) with a precision of \(\delta\lambda/\lambda = 0.03\%\) that will allow an evaluation of \(V_{ud}\) and sensitively test CKM unitarity, independent of nuclear models. Nab utilizes a novel, long asymmetric spectrometer that guides the decay electron and proton to two large-area silicon detectors in order to precisely determine the electron energy and estimate the proton momentum from the proton time of flight. The Nab spectrometer is being commissioned at the Fundamental Neutron Physics Beamline at the Spallation Neutron Source at Oak Ridge National Laboratory. We present an overview of the Nab experiment and recent updates on the spectrometer, analysis, and systematic effects.
Creation of quark–gluon plasma droplets with three distinct geometries
Experimental studies of the collisions of heavy nuclei at relativistic energies have established the properties of the quark–gluon plasma (QGP), a state of hot, dense nuclear matter in which quarks and gluons are not bound into hadrons [1-4]. In this state, matter behaves as a nearly inviscid fluid [5] that efficiently translates initial spatial anisotropies into correlated momentum anisotropies among the particles produced, creating a common velocity-field pattern known as collective flow. In recent years, comparable momentum anisotropies have been measured in small-system proton–proton (p+p) and proton–nucleus (p+A) collisions, despite expectations that the volume and lifetime of the medium produced would be too small to form a QGP. Here we report on the observation of elliptic and triangular flow patterns of charged particles produced in proton–gold (p+Au), deuteron–gold (d+Au) and helium–gold (\(^{3}\)He+Au) collisions at a nucleon–nucleon centre-of-mass energy \(\sqrt{s_{\mathrm{NN}}}\) = 200 GeV. The unique combination of three distinct initial geometries and two flow patterns provides unprecedented model discrimination. Hydrodynamical models, which include the formation of a short-lived QGP droplet, provide the best simultaneous description of these measurements.
Study of neutron beta decay with the Nab experiment
The current three-sigma tension in the unitarity test of the Cabibbo-Kobayashi-Maskawa (CKM) matrix is a notable problem for the Standard Model of elementary particle physics. A long-standing goal of the study of free neutron beta decay is to better determine the CKM element \(V_{ud}\) through measurements of the neutron lifetime and a decay correlation parameter. The Nab collaboration intends to measure \(a\), the neutrino-electron correlation, with accuracy sufficient for a competitive evaluation of \(V_{ud}\) based on neutron decay data alone. This paper gives a status report and an outlook.
Characterizing CPU-Induced Slowdowns in Multi-GPU LLM Inference
Large-scale machine learning workloads increasingly rely on multi-GPU systems, yet their performance is often limited by an overlooked component: the CPU. Through a detailed study of modern large language model (LLM) inference and serving workloads, we find that multi-GPU performance frequently degrades not because GPUs are saturated, but because CPUs fail to keep the GPUs busy. Under limited CPU allocations, systems exhibit symptoms such as delayed kernel launch, stalled communication, and increased tokenization latency, leading to severe GPU underutilization even when ample GPU resources are available. This work presents a systematic analysis of CPU-induced slowdowns in multi-GPU LLM inference. We show that these bottlenecks persist even in serving stacks that employ process-level separation and modern GPU-side optimizations such as CUDA Graphs. Since the marginal cost of additional CPU cores is small relative to GPU instance pricing, our evaluation indicates that increasing the number of CPU cores can substantially improve performance and stability at minimal additional cost. Under moderate serving load, we observe that CPU-starved configurations frequently time out, while providing adequate CPU resources restores responsiveness and reduces time-to-first-token (TTFT) latency by 1.36-5.40x across configurations, all without requiring additional GPUs. This work shows that CPU provisioning is a crucial factor in multi-GPU LLM inference configuration, helping prevent control-side bottlenecks.
Characterizing the Efficiency of Distributed Training: A Power, Performance, and Thermal Perspective
The rapid scaling of Large Language Models (LLMs) has pushed training workloads far beyond the limits of single-node analysis, demanding a deeper understanding of how these models behave across large-scale, multi-GPU systems. In this paper, we present a comprehensive characterization of LLM training across diverse real-world workloads and hardware platforms, including NVIDIA H100/H200 and AMD MI250 GPUs. We analyze dense and sparse models under various parallelism strategies -- tensor, pipeline, data, and expert -- and evaluate their effects on hardware utilization, power consumption, and thermal behavior. We further evaluate the effectiveness of optimizations such as activation recomputation and compute-communication overlap. Our findings show that performance is not determined solely by scaling hardware capacity. Scale-up systems with fewer, higher-memory GPUs can outperform scale-out systems in communication-bound regimes, but only under carefully tuned configurations; in other cases, scale-out deployments achieve superior throughput. We also show that certain parallelism combinations, such as tensor with pipeline, lead to bandwidth underutilization due to inefficient data chunking, while increasing microbatch sizes beyond a certain point induces bursty execution and peak power excursions that worsen thermal throttling. These insights reveal how training performance is shaped by complex interactions between hardware, system topology, and model execution. We conclude by offering recommendations for system and hardware design to improve the scalability and reliability of future LLM systems and workloads. The source code of this project is available at https://github.com/sitar-lab/CharLLM-PPT.
Precision pulse shape simulation for proton detection at the Nab experiment
The Nab experiment at Oak Ridge National Laboratory, USA, aims to measure the beta-antineutrino angular correlation following neutron \(\beta\) decay to an anticipated precision of approximately 0.1%. The proton momentum is reconstructed through proton time-of-flight measurements, and potential systematic biases in the timing reconstruction due to detector effects must be controlled at the nanosecond level. We present a thorough and detailed semiconductor and quasiparticle transport simulation effort to provide precise pulse shapes, and report on relevant systematic effects and potential measurement schemes.