46 results for "Altmann, Yoann"
Quantum-inspired computational imaging
Traditional imaging techniques involve peering down a lens and collecting as much light from the target scene as possible. That requirement can set limits on what can be seen. Altmann et al. review some of the most recent developments in the field of computational imaging, including full three-dimensional imaging of scenes that are hidden from direct view (e.g., around a corner or behind an obstacle). High-resolution imaging can be achieved with a single-pixel detector at wavelengths for which no cameras currently exist. Such advances will lead to the development of cameras that can see through fog or inside the human body. Science, this issue p. eaat2298 Computational imaging combines measurement and computational methods with the aim of forming images even when the measurement conditions are weak, few in number, or highly indirect. The recent surge in quantum-inspired imaging sensors, together with a new wave of algorithms allowing on-chip, scalable and robust data processing, has driven an increase in activity with notable results in the domain of low-light flux imaging and sensing. We provide an overview of the major challenges encountered in low-illumination (e.g., ultrafast) imaging and how these problems have recently been addressed for imaging applications in extreme conditions. These methods provide examples of the future imaging solutions to be developed, for which the best results are expected to arise from an efficient codesign of the sensors and data analysis tools.
Real-time 3D reconstruction from single-photon lidar data using plug-and-play point cloud denoisers
Single-photon lidar has emerged as a prime candidate technology for depth imaging through challenging environments. Until now, a major limitation has been the significant amount of time required for the analysis of the recorded data. Here we show a new computational framework for real-time three-dimensional (3D) scene reconstruction from single-photon data. By combining statistical models with highly scalable computational tools from the computer graphics community, we demonstrate 3D reconstruction of complex outdoor scenes with processing times of the order of 20 ms, where the lidar data was acquired in broad daylight from distances up to 320 metres. The proposed method can handle an unknown number of surfaces in each pixel, allowing for target detection and imaging through cluttered scenes. This enables robust, real-time target reconstruction of complex moving scenes, paving the way for single-photon lidar at video rates for practical 3D imaging applications. The use of single-photon data has been limited by time-consuming reconstruction algorithms. Here, the authors combine statistical models and computational tools known from computer graphics and show real-time reconstruction of moving scenes.
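The plug-and-play strategy described above replaces the prior step of an iterative reconstruction with an off-the-shelf denoiser. Below is a minimal sketch of the idea using generic PnP-ADMM on a 1-D signal, not the paper's point cloud pipeline; the function names, the soft-threshold denoiser, and all parameter values are illustrative assumptions:

```python
import numpy as np

def pnp_admm(A, y, denoise, rho=1.0, n_iter=100):
    """Plug-and-play ADMM: least-squares data fit, with an arbitrary
    denoiser standing in for the proximal map of the prior."""
    n = A.shape[1]
    x, v, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # Precompute the regularised normal-equation matrix for the x-update
    M = A.T @ A + rho * np.eye(n)
    for _ in range(n_iter):
        x = np.linalg.solve(M, A.T @ y + rho * (v - u))  # data-fidelity step
        v = denoise(x + u)                               # prior step = plug-in denoiser
        u = u + x - v                                    # dual (running residual) update
    return x

# Illustrative usage: identity forward operator, mild soft-threshold denoiser
A = np.eye(10)
y = np.linspace(1.0, 2.0, 10)
denoise = lambda z: np.sign(z) * np.maximum(np.abs(z) - 0.01, 0.0)
x_hat = pnp_admm(A, y, denoise)
```

The point of the construction is that `denoise` can be any black box (in the paper, a scalable point cloud denoiser from computer graphics), which decouples the sensor model from the prior.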
Seeing around corners with edge-resolved transient imaging
Non-line-of-sight (NLOS) imaging is a rapidly growing field seeking to form images of objects outside the field of view, with potential applications in autonomous navigation, reconnaissance, and even medical imaging. The critical challenge of NLOS imaging is that diffuse reflections scatter light in all directions, resulting in weak signals and a loss of directional information. To address this problem, we propose a method for seeing around corners that derives angular resolution from vertical edges and longitudinal resolution from the temporal response to a pulsed light source. We introduce an acquisition strategy, scene response model, and reconstruction algorithm that enable the formation of 2.5-dimensional representations (a plan view plus heights) and a 180° field of view for large-scale scenes. Our experiments demonstrate accurate reconstructions of hidden rooms up to 3 meters in each dimension despite a small scan aperture (1.5-centimeter radius) and only 45 measurement locations. Non-line-of-sight imaging is typically limited by loss of directional information due to diffuse reflections scattering light in all directions. Here, the authors see around corners by using vertical edges and temporal response to pulsed light to obtain angular and longitudinal resolution, respectively.
Robust real-time imaging through flexible multimode fibers
Conventional endoscopes comprise a bundle of optical fibers, with one fiber for each pixel in the image. In principle, this can be reduced to a single multimode optical fiber (MMF), the width of a human hair, with one fiber spatial mode per image pixel. However, images transmitted through an MMF emerge as unrecognizable speckle patterns due to dispersion and coupling between the spatial modes of the fiber. Furthermore, the speckle patterns change as the fiber undergoes bending, making the use of MMFs in flexible imaging applications even more complicated. In this paper, we propose a real-time imaging system using flexible MMFs that is robust to bending. Our approach requires no access to, or feedback signal from, the distal end of the fiber during imaging. We leverage a variational autoencoder to reconstruct and classify images from the speckles and show that these images can still be recovered when the bend configuration of the fiber is changed to one that was not part of the training set. We use an MMF 300 mm long with a 62.5 μm core to image 10 × 10 cm objects placed approximately 20 cm from the fiber, and the system can handle a change in fiber bend of 50° and a range of movement of 8 cm.
Quantitative imaging and automated fuel pin identification for passive gamma emission tomography
Compliance of member States with the Treaty on the Non-Proliferation of Nuclear Weapons is monitored through nuclear safeguards. The Passive Gamma Emission Tomography (PGET) system is a novel instrument developed within the framework of the International Atomic Energy Agency (IAEA) project JNT 1510, which included the European Commission, Finland, Hungary and Sweden. The PGET is used for the verification of spent nuclear fuel stored in water pools. Advanced image reconstruction techniques are crucial for obtaining high-quality cross-sectional images of the spent-fuel bundle, allowing IAEA inspectors to monitor nuclear material and promptly identify its diversion. In this work, we have developed a software suite to accurately reconstruct the spent-fuel cross-sectional image, automatically identify the fuel rods present, and estimate their activity. The measurement of spent fuel poses unique image reconstruction challenges due to its high activity and self-attenuation. While the former is mitigated by physical collimation of the detectors, we account for the latter with a linear forward model of the detector responses to the fuel rods inside the PGET. The image reconstruction is performed by solving a regularized linear inverse problem using the fast iterative shrinkage-thresholding algorithm (FISTA). For comparison, we have also implemented the traditional filtered back projection (FBP) method based on the inverse Radon transform, and applied both methods to reconstruct images of simulated mockup fuel assemblies. The inverse-problem approach yielded higher image resolution and fewer reconstruction artifacts, reducing the mean-square error by 50% and improving the structural similarity by 200%. We then used a convolutional neural network (CNN) to automatically identify the bundle type and extract the pin locations from the images, with the estimated activity levels finally compared against the ground truth. The proposed computational methods accurately estimated the activity levels of the present pins, with an associated uncertainty of approximately 5%.
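The abstract names the fast iterative shrinkage-thresholding algorithm (FISTA) as the solver for its regularized linear inverse problem. Here is a minimal FISTA sketch for an l1-regularized least-squares problem; the l1 choice of regularizer, the toy forward matrix, and all variable names are assumptions for illustration, not the paper's actual model:

```python
import numpy as np

def fista(A, y, lam, n_iter=1000):
    """FISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        g = A.T @ (A @ z - y)                # gradient of the smooth data term
        u = z - g / L                        # gradient step
        # Soft-thresholding = proximal map of the l1 penalty
        x_new = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

# Illustrative usage: recover a sparse "pin activity" vector from linear measurements
rng = np.random.default_rng(0)
A = rng.normal(size=(80, 40))
x_true = np.zeros(40)
x_true[[3, 17, 30]] = [2.0, -1.5, 1.0]
x_hat = fista(A, A @ x_true, lam=0.1)
```

The momentum step is what distinguishes FISTA from plain ISTA, improving the convergence rate from O(1/k) to O(1/k²) in objective value.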
Physics-based forward model for near-real-time quantitative imaging of spent nuclear fuel assemblies
The Passive Gamma Emission Tomography (PGET) instrument, authorized by the International Atomic Energy Agency (IAEA) for verification of spent nuclear fuel, aims to reconstruct 2D cross-sectional images of spent fuel assemblies (SFAs), identify missing or present fuel pins, and quantify fuel pin activities. Although the first two objectives are reliably achieved, accurate determination of fuel pin activities remains a challenge due to intense self-shielding and scattering effects. We have developed a linear inverse approach that addresses these effects and demonstrated superior image quality and identification accuracy in simulation studies. This approach frames the image reconstruction process as an inverse problem, relying on a physics-based forward model of the PGET system. We improved our forward model by incorporating collimator septal penetration and detector scattering effects. The enhanced forward model enables near-real-time sinogram simulation and system matrix calculation, which is >100,000 times faster than 3D Monte Carlo simulations. The model was validated through simulations of VVER-1000 and VVER-440 SFAs, and a relative difference of 3.7% in counts was achieved between MCNP and our forward model. Based on this enhanced model, we successfully reconstructed images from the simulated data, identified 100% of the fuel pins, and achieved an average uncertainty of 2.3% in activity quantification. We applied the reconstruction method to measured data of VVER-440 SFAs, successfully imaging all the pins, including the innermost ones, and identifying the water channel within the SFA. The high accuracy and low computational cost of our forward model demonstrate its potential for real-world inspection scenarios and enable future algorithm development.
Expectation-propagation for weak radionuclide identification at radiation portal monitors
We propose a sparsity-promoting Bayesian algorithm capable of identifying radionuclide signatures from weak sources in the presence of a high radiation background. The proposed method is relevant to radiation identification for security applications. In such scenarios, the background typically consists of terrestrial, cosmic, and cosmogenic radiation that may cause false positive responses. We evaluate the new Bayesian approach using gamma-ray data and are able to identify weapons-grade plutonium, masked by naturally-occurring radioactive material (NORM), in a measurement time of a few seconds. We demonstrate this identification capability using organic scintillators (stilbene crystals and EJ-309 liquid scintillators), which do not provide direct, high-resolution, source spectroscopic information. Compared to the EJ-309 detector, the stilbene-based detector exhibits a lower identification error, on average, owing to its better energy resolution. Organic scintillators are used within radiation portal monitors to detect gamma rays emitted from conveyances crossing ports of entry. The described method is therefore applicable to radiation portal monitors deployed in the field and could improve their threat discrimination capability by minimizing “nuisance” alarms produced either by NORM-bearing materials found in shipped cargoes, such as ceramics and fertilizers, or radionuclides in recently treated nuclear medicine patients.
Bayesian Activity Estimation and Uncertainty Quantification of Spent Nuclear Fuel Using Passive Gamma Emission Tomography
In this paper, we address the problem of activity estimation in passive gamma emission tomography (PGET) of spent nuclear fuel. Two different noise models are considered and compared, namely, the isotropic Gaussian and the Poisson noise models. The problem is formulated within a Bayesian framework as a linear inverse problem and prior distributions are assigned to the unknown model parameters. In particular, a Bernoulli-truncated Gaussian prior model is considered to promote sparse pin configurations. A Markov chain Monte Carlo (MCMC) method, based on a split and augmented Gibbs sampler, is then used to sample the posterior distribution of the unknown parameters. The proposed algorithm is first validated by simulations conducted using synthetic data, generated using the nominal models. We then consider more realistic data simulated using a bespoke simulator, whose forward model is non-linear and not available analytically. In that case, the linear models used are mis-specified and we analyse their robustness for activity estimation. The results demonstrate superior performance of the proposed approach in estimating the pin activities in different assembly patterns, in addition to being able to quantify their uncertainty measures, in comparison with existing methods.
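The Bayesian formulation above can be illustrated with a much-simplified sketch: here a plain Gaussian prior stands in for the paper's Bernoulli-truncated Gaussian model, which makes the posterior available in closed form and lets us read off activity estimates together with uncertainty, much as the MCMC sampler does for the full model. All names and parameter values are illustrative:

```python
import numpy as np

def posterior_activity(A, y, sigma2=1.0, tau2=10.0, n_samples=2000, seed=0):
    """Posterior mean and per-component uncertainty for x in y = A x + noise.
    Simplification: an i.i.d. Gaussian prior N(0, tau2) replaces the
    Bernoulli-truncated Gaussian prior of the paper, so the posterior is
    Gaussian and can be sampled directly instead of via Gibbs sampling."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    prec = A.T @ A / sigma2 + np.eye(n) / tau2   # posterior precision matrix
    cov = np.linalg.inv(prec)
    mean = cov @ (A.T @ y) / sigma2
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    return samples.mean(axis=0), samples.std(axis=0)  # estimate + uncertainty

# Illustrative usage: a tiny, well-conditioned "detector response" matrix
A = 2.0 * np.eye(3)
x_true = np.array([1.0, 2.0, 3.0])
est, unc = posterior_activity(A, A @ x_true, sigma2=0.01, tau2=100.0)
```

In the paper's setting the sparsity-promoting prior and the non-conjugate structure are exactly what force the use of MCMC; the closed-form Gaussian case above only conveys the estimate-plus-uncertainty output format.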
Enhancing the recovery of a temporal sequence of images using joint deconvolution
In this work, we address the reconstruction of spatial patterns that are encoded in light fields associated with a series of light pulses emitted by a laser source and imaged using photon-counting cameras, with an intrinsic response significantly longer than the pulse delay. Adopting a Bayesian approach, we propose and demonstrate experimentally a novel joint temporal deconvolution algorithm taking advantage of the fact that single pulses are observed simultaneously by different pixels. Using an intensified CCD camera with a 1000-ps gate, stepped in 10-ps increments, we show the ability to resolve images that are separated by a 10-ps delay, four times better than standard deconvolution techniques.
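For context, a classical per-pixel baseline that joint Bayesian deconvolution improves upon is frequency-domain (Wiener) deconvolution of the temporal response. A minimal 1-D sketch, with the blur kernel and SNR value chosen purely for illustration:

```python
import numpy as np

def wiener_deconvolve(y, h, snr=100.0):
    """Wiener deconvolution of a 1-D signal y circularly blurred by kernel h.
    `snr` is the assumed signal-to-noise power ratio of the measurement."""
    n = len(y)
    H = np.fft.fft(h, n)
    Y = np.fft.fft(y)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # regularised inverse filter
    return np.real(np.fft.ifft(G * Y))

# Illustrative usage: two temporal "pulses" blurred by a short smoothing kernel
n = 64
x = np.zeros(n)
x[10], x[20] = 1.0, 1.0
h = np.zeros(n)
h[:3] = [0.6, 0.3, 0.1]
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))  # circular blur
x_rec = wiener_deconvolve(y, h, snr=1e6)
```

The joint algorithm in the paper goes beyond this single-trace view by exploiting that every pixel sees the same pulse, sharing temporal information across the array.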
Observation of laser pulse propagation in optical fibers with a SPAD camera
Recording processes and events that occur on sub-nanosecond timescales poses a difficult challenge. Conventional ultrafast imaging techniques often rely on long data collection times, which can be due to limited device sensitivity and/or the requirement of scanning the detection system to form an image. In this work, we use a single-photon avalanche detector array camera with pico-second timing accuracy to detect photons scattered by the cladding in optical fibers. We use this method to film supercontinuum generation and track a GHz pulse train in optical fibers. We also show how the limited spatial resolution of the array can be improved with computational imaging. The single-photon sensitivity of the camera and the absence of scanning the detection system results in short total acquisition times, as low as a few seconds depending on light levels. Our results allow us to calculate the group index of different wavelength bands within the supercontinuum generation process. This technology can be applied to a range of applications, e.g., the characterization of ultrafast processes, time-resolved fluorescence imaging, three-dimensional depth imaging, and tracking hidden objects around a corner.