36 results for "Balewski, Jan"
Quantum-parallel vectorized data encodings and computations on trapped-ion and transmon QPUs
Compact data representations in quantum systems are crucial for the development of quantum algorithms for data analysis. In this study, we present two innovative data encoding techniques, QCrank and QBArt, which exhibit significant quantum parallelism via uniformly controlled rotation gates. QCrank encodes a sequence of real-valued data as rotations on data qubits, resulting in increased storage capacity. QBArt, on the other hand, directly embeds a binary representation of the data in the computational basis, requiring fewer quantum measurements and enabling well-established arithmetic operations on binary data. We showcase applications of the proposed encoding methods across several data types. Notably, we demonstrate quantum algorithms for tasks such as DNA pattern matching, Hamming weight computation, complex-value conjugation, and the retrieval of a binary image with 384 pixels, all executed on the Quantinuum trapped-ion QPU. Furthermore, we employ several cloud-accessible QPUs, including those from IBMQ and IonQ, for supplementary benchmarking experiments.
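The abstract's core idea, storing real values as rotation angles and reading them back from measurement statistics, can be illustrated classically. This is a minimal sketch of the general angle-encoding principle only: the actual QCrank circuit uses uniformly controlled Ry rotations addressed by address qubits, which is not reproduced here, and all function names are illustrative.

```python
import math

def encode_angles(values, vmax=1.0):
    # Map real values in [-vmax, vmax] to Ry rotation angles in [0, pi].
    return [math.acos(v / vmax) for v in values]

def prob_one(theta):
    # Probability of measuring |1> after Ry(theta) on |0>: sin^2(theta/2).
    return math.sin(theta / 2) ** 2

def decode(p1, vmax=1.0):
    # Invert the encoding: recover the stored value from the |1> probability.
    theta = 2 * math.asin(math.sqrt(p1))
    return vmax * math.cos(theta)

values = [0.8, -0.3, 0.0]
angles = encode_angles(values)
recovered = [decode(prob_one(t)) for t in angles]
```

On hardware the probabilities would be estimated from a finite number of shots, so the recovered values carry sampling noise; here the inversion is exact.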
Scaling and Benchmarking an Evolutionary Algorithm for Constructing Biophysical Neuronal Models
Single neuron models are fundamental for computational modeling of the brain’s neuronal networks and for understanding how ion channel dynamics mediate neural function. A challenge in defining such models is determining biophysically realistic channel distributions. Here, we present an efficient, highly parallel evolutionary algorithm for developing such models, named NeuroGPU-EA. NeuroGPU-EA uses CPUs and GPUs concurrently to simulate and evaluate neuron membrane potentials with respect to multiple stimuli. We demonstrate a logarithmic cost for scaling the stimuli used in the fitting procedure. NeuroGPU-EA outperforms the typically used CPU-based evolutionary algorithm by a factor of 100 on a series of scaling benchmarks. We report observed performance bottlenecks and propose mitigation strategies. Finally, we discuss the potential of this method for efficient simulation and evaluation of electrophysiological waveforms.
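The fitting loop described above can be sketched as a minimal evolutionary algorithm. This is a hedged stand-in, assuming truncation selection and Gaussian mutation: the real NeuroGPU-EA evaluates GPU-accelerated membrane-potential simulations against multiple stimuli, which is replaced here by a toy fitness function.

```python
import random

def evolve(fitness, dim, pop_size=40, generations=60, sigma=0.1, seed=1):
    # Minimal evolutionary loop: lower fitness score = better individual.
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness)[: pop_size // 4]  # truncation selection
        pop = parents + [
            [g + rng.gauss(0, sigma) for g in rng.choice(parents)]  # Gaussian mutation
            for _ in range(pop_size - len(parents))
        ]
    return min(pop, key=fitness)

# Toy stand-in for the simulation error: distance to "true" channel densities.
target = [0.3, 0.7, 0.5]
best = evolve(lambda p: sum((a - b) ** 2 for a, b in zip(p, target)), dim=3)
```

In the real tool the expensive step is the fitness evaluation, which is why running the simulations concurrently on GPUs dominates the speedup.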
Simulation-aided optimization of detector design using portable representation of 3D objects
Use of the Standard Tessellation Language (STL) for automatic transport of CAD geometry into Geant is presented. A hybrid approach combining Geant-native and STL objects is preferred. The tradeoffs between the CPU cost of the simulation and the accuracy of the tessellation are discussed.
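The portability of STL comes from its trivially simple structure: a list of triangular facets, each with three vertices. A minimal sketch of reading triangles from an ASCII STL file follows; the Geant-side import as tessellated solids is not shown, and the sample geometry is illustrative.

```python
def parse_ascii_stl(text):
    # Return a list of triangles, each a list of three (x, y, z) vertices.
    triangles, current = [], []
    for line in text.splitlines():
        parts = line.split()
        if parts[:1] == ["vertex"]:
            current.append(tuple(float(p) for p in parts[1:4]))
            if len(current) == 3:  # STL facets are always triangles
                triangles.append(current)
                current = []
    return triangles

STL = """solid demo
facet normal 0 0 1
 outer loop
  vertex 0 0 0
  vertex 1 0 0
  vertex 0 1 0
 endloop
endfacet
endsolid demo"""
tris = parse_ascii_stl(STL)
```

The tessellation accuracy vs. CPU cost tradeoff mentioned above corresponds directly to how many such facets are used to approximate each curved CAD surface.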
STAR Data Reconstruction at NERSC/Cori, an adaptable Docker container approach for HPC
As HPC facilities grow their resources, adaptation of classic HEP/NP workflows becomes a necessity. Linux containers may well offer a way to lower the bar to exploiting such resources and, at the same time, help collaborations reach vast elastic resources on such facilities and address their massive current and future data processing challenges. In this proceeding, we showcase the STAR data reconstruction workflow at the Cori HPC system at NERSC. STAR software is packaged in a Docker image and runs at Cori in Shifter containers. We highlight two of the typical end-to-end optimization challenges for such pipelines: 1) the data transfer rate, carried over ESnet after optimizing the end points, and 2) scalable deployment of a conditions database in an HPC environment. Our tests demonstrate equally efficient data processing workflows on Cori/HPC, comparable to standard Linux clusters.
Offloading peak processing to virtual farm by STAR experiment at RHIC
The Virtual Machine framework was used to assemble the STAR computing environment, validated once, deployed on over 100 8-core VMs at NERSC and Argonne National Lab, and used as a homogeneous Virtual Farm processing events acquired in real time by the STAR detector located at Brookhaven National Lab. To provide time-dependent calibration, a database snapshot scheme was devised. Two high-capacity filesystems, located on opposite coasts of the US and interconnected via the Globus Online protocol, were used in this setup, resulting in a highly scalable Cloud-based extension of STAR computing resources. The system was in continuous operation for over 3 months.
Q-GEAR: Improving quantum simulation framework
Fast execution of complex quantum circuit simulations is crucial for the verification of theoretical algorithms, paving the way for their successful execution on quantum hardware. The mainstream CPU-based platforms for circuit simulation are well established but slow, yet adoption of GPU platforms remains limited because different hardware architectures require specialized quantum simulation frameworks, each with distinct implementations and optimization strategies. Therefore, we introduce Q-Gear, a software framework that transforms Qiskit quantum circuits into Cuda-Q kernels. By leveraging Cuda-Q's seamless execution on GPUs, Q-Gear accelerates CPU- and GPU-based simulations by two orders of magnitude and by a factor of ten, respectively, with minimal coding effort. Furthermore, Q-Gear leverages the Cuda-Q configuration to interconnect GPU memory, allowing the execution of much larger circuits, beyond the memory limit set by a single GPU or CPU node. Additionally, we created and deployed a Podman container and a Shifter image at Perlmutter (NERSC/LBNL), both derived from the NVIDIA public image. These public NERSC containers were optimized for the Slurm job scheduler, allowing close to 100% GPU utilization. We present various benchmarks of Q-Gear to demonstrate the efficiency of our computation paradigm.
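The circuit-to-kernel transformation at the heart of such a framework can be sketched without either library: walk a Qiskit-style gate list and emit equivalent CUDA-Q-style kernel source. This is a hedged, library-free illustration of the general translation pattern, not Q-Gear's actual code; the gate-name mapping and kernel template are assumptions.

```python
# Assumed mapping from Qiskit-style gate names to CUDA-Q-style calls.
GATE_MAP = {"h": "h", "cx": "x.ctrl", "rz": "rz"}

def to_cudaq_kernel(num_qubits, ops):
    # ops: iterable of (name, qubit_indices, params) tuples,
    # the shape of data one could extract from a circuit's instruction list.
    lines = [f"q = cudaq.qvector({num_qubits})"]
    for name, qubits, params in ops:
        args = [f"{p}" for p in params] + [f"q[{i}]" for i in qubits]
        lines.append(f"{GATE_MAP[name]}({', '.join(args)})")
    return "\n".join(lines)

# A Bell-pair preparation plus a phase rotation, as a flat gate list.
circuit = [("h", [0], []), ("cx", [0, 1], []), ("rz", [1], [0.5])]
kernel_src = to_cudaq_kernel(2, circuit)
```

The payoff of such a translation is that the same logical circuit can then be dispatched to whichever backend (CPU statevector, single GPU, or multi-GPU memory pooling) the target configuration selects.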
Sequence and Image Transformations with Monarq: Quantum Implementations for NISQ Devices
We introduce Monarq, a unified quantum data processing framework that combines QCrank encoding with the EHands protocol for polynomial transformations, and demonstrate its implementation on noisy intermediate-scale quantum (NISQ) hardware. This framework provides fundamental quantum building blocks for signal and image processing tasks, including convolution, discrete-time Fourier transform (DFT), squared gradient computation, and edge detection, serving as a reference for a broad class of data processing applications on near-term quantum devices.
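One of the building blocks named above, the discrete-time Fourier transform, has a compact classical reference definition that any quantum implementation must reproduce. This pure-Python sketch is only that classical reference (O(N^2) by construction); the quantum circuit realization is not shown, and the test signal is illustrative.

```python
import cmath

def dft(x):
    # Classical discrete Fourier transform: X[k] = sum_n x[n] e^{-2*pi*i*k*n/N}.
    N = len(x)
    return [
        sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
        for k in range(N)
    ]

signal = [1.0, 0.0, -1.0, 0.0]  # one period of cos(2*pi*n/4): frequency k = 1
spectrum = dft(signal)          # energy at k = 1 and its mirror k = 3
```

Convolution and the squared-gradient edge-detection kernels mentioned in the abstract admit the same treatment: a small classical reference computation against which the NISQ circuit outputs are validated.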
Quantum Data Representation via Circuit Partitioning and Reintegration
Quantum data encoding (QDE) enables faster computations than classical algorithms through superposition and entanglement. Circuit cutting and knitting are effective techniques for ameliorating errors on current noisy quantum processing units (QPUs) via a divide-and-conquer approach that splits quantum circuits into subcircuits and recombines them using classical postprocessing. Unfortunately, existing QDE frameworks fail to consider quantum hardware limitations, such as the topology of the chip. Designing a computation model that supports the algorithm level of quantum computation and optimizes non-all-to-all connected quantum circuit simulations remains underdeveloped. In this study, we introduce shardQ, a method that leverages the SparseCut algorithm with matrix product state (MPS) compilation and a global knitting technique to mitigate quantum error rates. This method elucidates the optimal trade-off between computational time and error rate for quantum encoding, with a theoretical proof, evidenced by an ablation analysis using an IBM Heron-type QPU showing a 15% error reduction. This study also presents results on quantum image encoding readiness. The proposed model advances current quantum computation towards the fault-tolerant regime, as QDE is the input of grand unified quantum algorithms.
Quantum Approximate Walk Algorithm
The encoding of classical-to-quantum data mapping through trigonometric functions within arithmetic-based quantum computation algorithms leads to the exploitation of multivariate distributions. The studied variational quantum gate learning mechanism, which relies on agnostic gradient optimization, offers no algorithmic guarantees for the correlation of results beyond the measured bitstring outputs; consequently, existing methodologies are inapplicable to this problem. In this study, we present a classically data-traceable quantum oracle whose circuit depth increases linearly with the number of qubits. This configuration facilitates the learning of approximate result patterns through a shallow quantum circuit (SQC) layout. Moreover, our approach demonstrates that classical preprocessing of mid-quantum measurement data enhances the interpretability of quantum approximate optimization algorithm (QAOA) outputs without requiring full quantum state tomography. By establishing an inferable mapping between classical inputs and quantum circuit outcomes, we obtained experimental results on state-of-the-art IBM Pittsburgh hardware that yielded polynomial-time verification of the solution quality. This hybrid framework bridges the gap between near-term quantum capabilities and practical optimization requirements, offering a pathway toward reliable quantum-classical algorithms for industrial applications.
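The polynomial-time verification step mentioned above has a simple classical shape for combinatorial problems like MaxCut: each measured bitstring is scored against the cost function directly, with no tomography. This is a hedged sketch of that scoring step only; the graph and the mock bitstrings are illustrative, and no quantum execution is shown.

```python
def cut_value(bits, edges):
    # Number of edges crossing the partition encoded by the bitstring.
    return sum(1 for u, v in edges if bits[u] != bits[v])

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # 4-node example graph
samples = ["0101", "0011", "1111"]                # mock measured bitstrings
best = max(samples, key=lambda b: cut_value(b, edges))
```

Because scoring a bitstring is linear in the number of edges, the quality of sampled QAOA solutions can be certified cheaply even when preparing them is hard.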
Vectorized Attention with Learnable Encoding for Quantum Transformer
Vectorized quantum block encoding provides a way to embed classical data into Hilbert space, offering a pathway for quantum models, such as Quantum Transformers (QT), that replace classical self-attention with quantum circuit simulations to operate more efficiently. Current QTs rely on deep parameterized quantum circuits (PQCs), rendering them vulnerable to QPU noise and thus hindering their practical performance. In this paper, we propose the Vectorized Quantum Transformer (VQT), a model that supports ideal masked attention matrix computation through quantum approximation simulation and efficient training via a vectorized nonlinear quantum encoder, yielding shot-efficient and gradient-free quantum circuit simulation (QCS) and reduced classical sampling overhead. In addition, we demonstrate an accuracy comparison for IBM and IonQ in quantum circuit simulation and competitive results in benchmarking natural language processing tasks on IBM's state-of-the-art, high-fidelity Kingston QPU. Our noisy intermediate-scale quantum (NISQ)-friendly VQT approach unlocks a novel architecture for end-to-end machine learning in quantum computing.
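The masked attention matrix that such a model approximates has a compact classical definition: scaled dot-product scores, a causal mask, a softmax, and a weighted sum of values. This pure-Python sketch is only that classical reference, assuming a causal mask for illustration; the VQT's quantum approximation of it is not reproduced here.

```python
import math

def masked_attention(Q, K, V):
    # Classical scaled dot-product attention with a causal mask:
    # position i may only attend to positions j <= i.
    d = len(Q[0])
    out = []
    for i, q in enumerate(Q):
        scores = [
            sum(a * b for a, b in zip(q, K[j])) / math.sqrt(d)
            if j <= i else float("-inf")  # causal mask
            for j in range(len(K))
        ]
        m = max(scores)
        w = [math.exp(s - m) for s in scores]  # numerically stable softmax
        Z = sum(w)
        probs = [x / Z for x in w]
        out.append([sum(p * V[j][k] for j, p in enumerate(probs))
                    for k in range(len(V[0]))])
    return out

Q = K = V = [[1.0, 0.0], [0.0, 1.0]]
att = masked_attention(Q, K, V)
```

The masked entries become exact zeros after the softmax (exp of negative infinity), so the first output row depends only on the first value vector.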