132 result(s) for "Gruber, Bernhard"
Next-generation MRI scanner designed for ultra-high-resolution human brain imaging at 7 Tesla
To increase granularity in human neuroimaging science, we designed and built a next-generation 7 Tesla magnetic resonance imaging scanner to reach ultra-high resolution by implementing several advances in hardware. To improve spatial encoding and increase the image signal-to-noise ratio, we developed a head-only asymmetric gradient coil (200 mT m⁻¹, 900 T m⁻¹ s⁻¹) with an additional third layer of windings. We integrated a 128-channel receiver system with 64- and 96-channel receiver coil arrays to boost signal in the cerebral cortex while reducing g-factor noise to enable higher accelerations. A 16-channel transmit system reduced power deposition and improved image uniformity. The scanner routinely performs functional imaging studies at 0.35–0.45 mm isotropic spatial resolution to reveal cortical layer functional activity, achieves high angular resolution in diffusion imaging and reduces acquisition time for both functional and structural imaging. A combination of hardware developments has increased the achievable spatial resolution in 7 Tesla human neuroimaging to about 0.4 mm.
Updates on the Low-Level Abstraction of Memory Access
Choosing the best memory layout for each hardware architecture is increasingly important as more and more programs become memory bound. For portable codes that run across heterogeneous hardware architectures, the choice of the memory layout for data structures is ideally decoupled from the rest of a program. The low-level abstraction of memory access (LLAMA) is a C++ library that provides a zero-runtime-overhead abstraction layer, underneath which memory mappings can be freely exchanged to customize data layouts, memory access and access instrumentation, focusing on multidimensional arrays of nested, structured data. After its scientific debut, several improvements and extensions have been added to LLAMA. This includes compile-time array extents for zero-memory-overhead views, support for computations during memory access, new mappings for bit-packing, switching types, byte-splitting, memory access instrumentation, and explicit SIMD support. This contribution provides an overview of recent developments in the LLAMA library.
Challenges and opportunities integrating LLAMA into AdePT
Particle transport simulations are a cornerstone of high-energy physics (HEP), constituting a substantial part of the computing workload performed in HEP. To boost the simulation throughput and energy efficiency, GPUs as accelerators have been explored in recent years, further driven by the increasing use of GPUs on HPCs. The Accelerated demonstrator of electromagnetic Particle Transport (AdePT) is an advanced prototype for offloading the simulation of electromagnetic showers in Geant4 to GPUs, and still undergoes continuous development and optimization. Improving memory layout and data access is vital to use modern, massively parallel GPU hardware efficiently, contributing to the challenge of migrating traditional CPU-based data structures to GPUs in AdePT. The low-level abstraction of memory access (LLAMA) is a C++ library that provides a zero-runtime-overhead data structure abstraction layer, focusing on multidimensional arrays of nested, structured data. It provides a framework for defining and switching custom memory mappings at compile time to define data layouts and instrument data access, making LLAMA an ideal tool to tackle the memory-related optimization challenges in AdePT. Our contribution shares insights gained with LLAMA when instrumenting data access inside AdePT, complementing traditional GPU profiler outputs. We demonstrate traces of read/write counts to data structure elements as well as memory heatmaps. The acquired knowledge allowed for subsequent data layout optimizations.
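The access instrumentation described in this abstract (per-element read/write counts) can be sketched in plain C++ with a proxy reference that counts accesses. This is a minimal concept sketch, not the LLAMA API; the type and member names here (`CountingArray`, `Ref`) are illustrative assumptions.

```cpp
#include <cstddef>
#include <vector>

// Concept sketch: a 1D array whose element accesses are instrumented.
// Reads and writes are counted per element via a proxy reference,
// similar in spirit to the read/write traces described in the abstract.
struct CountingArray {
    std::vector<float> data;
    std::vector<std::size_t> reads, writes;
    explicit CountingArray(std::size_t n) : data(n), reads(n), writes(n) {}

    struct Ref {
        CountingArray& a;
        std::size_t i;
        // Implicit conversion to float counts as a read.
        operator float() const { ++a.reads[i]; return a.data[i]; }
        // Assignment counts as a write.
        Ref& operator=(float v) { ++a.writes[i]; a.data[i] = v; return *this; }
    };

    // operator() returns the counting proxy instead of a raw reference.
    Ref operator()(std::size_t i) { return Ref{*this, i}; }
};
```

Aggregating such counters over a data structure yields exactly the kind of heatmap the abstract mentions, without any change to the code performing the accesses.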
A dynamic shim approach for correcting eddy current effects in diffusion-prepared MRI acquisition using a multi-coil AC/DC shim-array
Purpose: We developed a dynamic B0 shimming approach using a 46-channel AC/DC shim array to correct phase errors caused by eddy currents from diffusion-encoding gradients in diffusion-prepared MRI, enabling high b-value imaging without the SNR loss incurred by magnitude stabilizers. Methods: A 46-channel AC/DC shim array and corresponding amplifier system were built. Spin echo prescans with and without diffusion preparation were then used to rapidly measure eddy-current-induced phase differences. These phase maps were used as targets in an optimization framework to compute compensatory shim currents for multi-shot 3D diffusion-prepared acquisitions. Results: The proposed method allows flexible use of the AC/DC shim array to correct undesirable eddy current effects in diffusion-prepared MRI. Phantom and in vivo experiments demonstrate whole-brain, cardiac-gated, multi-shot 3D diffusion-prepared imaging without the use of magnitude stabilizers. The approach preserves full SNR while achieving reliable diffusion encoding at b-values up to 2000 s/mm². Conclusions: This work demonstrates a new strategy for applying an AC/DC shim array to compensate for eddy-current-induced phase errors in diffusion-prepared MRI. By eliminating the need for a magnitude stabilizer, it enables efficient high-quality diffusion imaging with full signal sensitivity retained.
LLAMA: The Low-Level Abstraction For Memory Access
The performance gap between CPU and memory widens continuously. Choosing the best memory layout for each hardware architecture is increasingly important as more and more programs become memory bound. For portable codes that run across heterogeneous hardware architectures, the choice of the memory layout for data structures is ideally decoupled from the rest of a program. This can be accomplished via a zero-runtime-overhead abstraction layer, underneath which memory layouts can be freely exchanged. We present the Low-Level Abstraction of Memory Access (LLAMA), a C++ library that provides such a data structure abstraction layer with example implementations for multidimensional arrays of nested, structured data. LLAMA provides fully C++ compliant methods for defining and switching custom memory layouts for user-defined data types. The library is extensible with third-party allocators. Providing two close-to-life examples, we show that the LLAMA-generated AoS (Array of Structs) and SoA (Struct of Arrays) layouts produce identical code with the same performance characteristics as manually written data structures. Integrations into the SPEC CPU® lbm benchmark and the particle-in-cell simulation PIConGPU demonstrate LLAMA's abilities in real-world applications. LLAMA's layout-aware copy routines can significantly speed up transfer and reshuffling of data between layouts compared with naive element-wise copying. LLAMA provides a novel tool for the development of high-performance C++ applications in a heterogeneous environment.
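The AoS/SoA distinction central to this abstract can be illustrated with a short plain-C++ sketch. This is a concept illustration only, not the LLAMA API; all type and member names (`ParticlesAoS`, `ParticlesSoA`, `sumX`) are hypothetical. The point is that code written against an accessor interface is layout-agnostic, which is what a layout abstraction layer exploits.

```cpp
#include <cstddef>
#include <vector>

// Array of Structs (AoS): each element is one contiguous struct.
struct ParticlesAoS {
    struct Particle { float x; float y; };
    std::vector<Particle> data;
    explicit ParticlesAoS(std::size_t n) : data(n) {}
    float& x(std::size_t i) { return data[i].x; }
    float& y(std::size_t i) { return data[i].y; }
};

// Struct of Arrays (SoA): each field is one contiguous array,
// typically better for vectorized, field-wise traversals.
struct ParticlesSoA {
    std::vector<float> xs, ys;
    explicit ParticlesSoA(std::size_t n) : xs(n), ys(n) {}
    float& x(std::size_t i) { return xs[i]; }
    float& y(std::size_t i) { return ys[i]; }
};

// A kernel written against the accessor interface compiles against
// either layout unchanged; swapping the layout needs no kernel edits.
template <typename Particles>
float sumX(Particles& p, std::size_t n) {
    float s = 0.0f;
    for (std::size_t i = 0; i < n; ++i) s += p.x(i);
    return s;
}
```

In LLAMA the layout choice is likewise a compile-time mapping parameter rather than something baked into the kernels.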
Software Training in HEP
Long term sustainability of the high energy physics (HEP) research software ecosystem is essential for the field. With upgrades and new facilities coming online throughout the 2020s this will only become increasingly relevant throughout this decade. Meeting this sustainability challenge requires a workforce with a combination of HEP domain knowledge and advanced software skills. The required software skills fall into three broad groups. The first is fundamental and generic software engineering (e.g. Unix, version control, C++, continuous integration). The second is knowledge of domain specific HEP packages and practices (e.g., the ROOT data format and analysis framework). The third is more advanced knowledge involving more specialized techniques. These include parallel programming, machine learning and data science tools, and techniques to preserve software projects at all scales. This paper discusses the collective software training program in HEP and its activities led by the HEP Software Foundation (HSF) and the Institute for Research and Innovation in Software in HEP (IRIS-HEP). The program equips participants with an array of software skills that serve as ingredients from which solutions to the computing challenges of HEP can be formed. Beyond serving the community by ensuring that members are able to pursue research goals, this program serves individuals by providing intellectual capital and transferable skills that are becoming increasingly important to careers in the realm of software and computing, whether inside or outside HEP.
HL-LHC Analysis With ROOT
ROOT is high energy physics' software for storing and mining data in a statistically sound way, and for publishing results with scientific graphics. It has been evolving for 25 years and now provides the storage format for more than one exabyte of data; virtually all high energy physics experiments use ROOT. With another significant increase in the amount of data to be handled scheduled to arrive in 2027, ROOT is preparing for a massive upgrade of its core ingredients. As part of a review of crucial software for high energy physics, the ROOT team has documented its R&D plans for the coming years.
ROOT for the HL-LHC: data format
This document discusses the state, roadmap, and risks of the foundational components of ROOT with respect to the experiments at the HL-LHC (Run 4 and beyond). As foundational components, the document considers in particular the ROOT input/output (I/O) subsystem. The current HEP I/O is based on the TFile container file format and the TTree binary event data format. The work going into the new RNTuple event data format aims at superseding TTree, to make RNTuple the production ROOT event data I/O that meets the requirements of Run 4 and beyond.
Triassic radiolarians from Greece, Sicily and Turkey
One limestone sample from Greece, and several picked assemblages from Sicily and Turkey, yielded well-preserved radiolarians assigned to 54 species of which 20 are new. The new spumellarian family, Capnuchosphaeridae (including the genera Capnuchosphaera, Capnodoce and Icrioma), and the new genera, Xiphotheca (a cyrtoid), and Poulpus (incertae sedis) are described. In total, these late Triassic assemblages show greater affinities with other Mesozoic radiolarians than with Paleozoic ones.