Catalogue Search | MBRL
Explore the vast range of titles available.
88 result(s) for "wavefront sensing"
Optimization of Virtual Shack-Hartmann Wavefront Sensing
2021
Virtual Shack–Hartmann wavefront sensing (vSHWS) can flexibly adjust parameters to meet different requirements without changing the system, and it is a promising means for aberration measurement. However, how to optimize its parameters to achieve the best performance is rarely discussed. In this work, the data processing procedure and methods of vSHWS were demonstrated by using a set of normal human ocular aberrations as an example. The shapes (round and square) of a virtual lenslet, the zero-padding of the sub-aperture electric field, sub-aperture number, as well as the sequences (before and after diffraction calculation), algorithms, and interval of data interpolation, were analyzed to find the optimal configuration. The effect of the above optimizations on its anti-noise performance was also studied. The Zernike coefficient errors and the root mean square of the wavefront error between the reconstructed and preset wavefronts were used for performance evaluation. The performance of the optimized vSHWS could be significantly improved compared to that of a non-optimized one, which was also verified with 20 sets of clinical human ocular aberrations. This work makes the vSHWS’s implementation clearer, and the optimization methods and the obtained results are of great significance for its applications.
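The evaluation metric this abstract uses, the root mean square of the wavefront error between the reconstructed and preset wavefronts, can be sketched in a few lines of NumPy. This is only an illustration of the metric, not the paper's pipeline; the function name and the unit-RMS defocus test pattern are our own:

```python
import numpy as np

def rms_wavefront_error(reconstructed, preset, pupil_mask):
    """RMS of the residual wavefront over the pupil, in the same units
    (e.g. waves) as the input phase maps."""
    residual = (reconstructed - preset)[pupil_mask]
    residual -= residual.mean()          # piston is not an aberration
    return np.sqrt(np.mean(residual**2))

# toy example: a pure-defocus residual on a circular pupil
n = 128
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
mask = x**2 + y**2 <= 1.0
defocus = np.sqrt(3) * (2*(x**2 + y**2) - 1)   # Noll Zernike Z4, unit RMS over the pupil
print(rms_wavefront_error(defocus, np.zeros_like(defocus), mask))  # ≈ 1.0
```

A perfect reconstruction gives an RMS error of exactly zero, and a residual equal to one Noll-normalized mode gives an RMS equal to that mode's coefficient.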
Journal Article
Wavefront Sensor for Laser Beams Based on Reweighted Amplitude Flow Algorithm
2026
We present a reference-free computational wavefront sensor based on binary amplitude modulation and phase retrieval. The method employs a Digital Micromirror Device as a programmable amplitude modulator and reconstructs the complex optical field from multiple far-field intensity measurements using the Reweighted Amplitude Flow algorithm with Optimal Spectral Initialization. Unlike classical pupil-plane wavefront sensors, the proposed architecture contains no wavelength-specific optical elements, enabling straightforward adaptation across a broad spectral range. The achievable spatial resolution of the reconstructed wavefront scales directly with the modulator resolution. We experimentally demonstrate wavefront reconstruction at 650 nm and at 2116 nm, a wavelength regime where commercial wavefront sensors are scarce. At 650 nm, the reconstructed wavefront is validated against a commercial lateral shearing interferometer, and the sensor is further integrated into a closed-loop adaptive optics system using a deformable mirror. The proposed approach is particularly suited for applications requiring high spatial resolution and large dynamic range in slowly varying or quasi-static laser fields, where computational reconstruction speed is not a primary concern.
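The measurement model behind such a sensor can be sketched in a few lines: a binary DMD mask multiplies the unknown field, and the detector records the far-field intensity of the masked field. This shows only the forward model, not the Reweighted Amplitude Flow reconstruction; the synthetic field, random masks, and all names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# unknown complex field: flat amplitude with a smooth synthetic phase
yy, xx = np.mgrid[-1:1:n*1j, -1:1:n*1j]
field = np.exp(1j * 2.0 * (xx**2 + yy**2))

# a stack of random binary amplitude masks (stand-ins for DMD patterns)
masks = rng.integers(0, 2, size=(8, n, n))

# forward model: far-field intensity of each masked field (Fraunhofer ~ FFT)
intensities = np.abs(np.fft.fftshift(
    np.fft.fft2(masks * field, axes=(-2, -1)), axes=(-2, -1)))**2
print(intensities.shape)  # (8, 64, 64)
```

The phase-retrieval step would then search for the complex field whose masked far-field intensities best match these measurements; energy conservation (Parseval's relation) ties each intensity frame back to the masked field.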
Journal Article
FPGA Implementation of Shack–Hartmann Wavefront Sensing Using Stream-Based Center of Gravity Method for Centroid Estimation
by Fanpeng Kong, Manuel Cegarra Polo, Andrew Lambert
in Accuracy; adaptive optics (AO); Shack–Hartmann wavefront sensor (SHWFS); wavefront sensing; field-programmable gate array (FPGA); Algorithms
2023
We present a fast and reconfigurable architecture for Shack–Hartmann wavefront sensing implemented on FPGA devices, using a stream-based center of gravity to measure the spot displacements. By calculating the center of gravity around each incoming pixel with an optimal window matched to the spot size, the trade-off among noise error, bias error, and dynamic range imposed by the window size in conventional center-of-gravity methods is avoided. In addition, the accuracy of centroid estimation is not compromised when the spot moves toward or even crosses the sub-aperture boundary, leading to an increased dynamic range. The calculation of the centroid begins while the pixel values are being read from the image sensor, and further computation such as slope estimation and partial wavefront reconstruction follows immediately as the sub-aperture centroids become ready. The result is a real-time wavefront sensing system with very low latency and high measurement accuracy that is feasible on low-cost FPGA devices. This architecture provides a promising solution that can cope with multiple target objects and work under moderate scintillation.
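The windowed center-of-gravity computation underlying this architecture can be sketched in NumPy. This is a software illustration only; the paper's contribution is a streaming FPGA pipeline, and the function and variable names here are ours:

```python
import numpy as np

def cog_centroid(frame, center, half_win):
    """Center-of-gravity centroid in a window around `center` (row, col).
    Matching the window to the spot size limits the noise entering the
    first-moment sums while keeping the full spot inside the window."""
    r, c = center
    win = frame[r-half_win:r+half_win+1, c-half_win:c+half_win+1]
    rows, cols = np.mgrid[-half_win:half_win+1, -half_win:half_win+1]
    total = win.sum()
    return (float(r + (rows*win).sum()/total),
            float(c + (cols*win).sum()/total))

# synthetic Gaussian spot displaced from the nominal window centre
yy, xx = np.mgrid[0:32, 0:32]
spot = np.exp(-((yy-17.0)**2 + (xx-14.5)**2) / (2*2.0**2))
print(cog_centroid(spot, (16, 16), 6))  # ≈ (17.0, 14.5)
```

In the stream-based scheme the same first-moment sums are accumulated as pixels arrive, so the centroid is available almost as soon as the last pixel of the window has been read out.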
Journal Article
Recent advances in optical imaging through deep tissue: imaging probes and techniques
by Cheon, Seo Young; Park, Sangjun; Lee, Yeeun
in Biological products; Bioluminescence; Biomaterials
2022
Optical imaging has been essential for scientific observation to date; however, its biomedical applications have been restricted by its poor penetration through tissues. In living tissue, signal attenuation and limited imaging depth caused by wave distortion occur because of scattering and absorption of light by various molecules, including hemoglobin, pigments, and water. To overcome this, methodologies have been proposed in various fields, which can be mainly categorized into two strategies: developing new imaging probes and developing new optical techniques. For example, imaging probes with long wavelengths, such as those in the NIR-II region, are advantageous for tissue penetration. Bioluminescence and chemiluminescence can generate light without excitation, minimizing background signals. Afterglow imaging also has a high signal-to-background ratio because the excitation light is off during imaging. Methodologies of adaptive optics (AO) and studies of complex media have been established and have produced various techniques, such as direct wavefront sensing to rapidly measure and correct the wave distortion, and indirect wavefront sensing involving modal and zonal methods to correct complex aberrations. Matrix-based approaches have been used to correct high-order optical modes by numerical post-processing without any hardware feedback. These newly developed imaging probes and optical techniques enable successful optical imaging through deep tissue. In this review, we discuss recent advances in multi-scale optical imaging within deep tissue, providing researchers with a multi-disciplinary understanding and broad perspectives across diverse fields, including biophotonics, for the purposes of translational medicine and convergence science.
Graphical Abstract
Methodologies for multi-scale optical imaging within deep tissues are discussed across diverse fields, including biophotonics, for the purposes of translational medicine and convergence science. Recent imaging probes have pursued deep-tissue imaging via NIR-II imaging, bioluminescence, chemiluminescence, and afterglow imaging. Optical techniques, including direct/indirect and coherence-gated wavefront sensing, can also increase imaging depth.
Journal Article
Deep learning wavefront sensing and aberration correction in atmospheric turbulence
2021
Deep learning neural networks are used for wavefront sensing and aberration correction in atmospheric turbulence without any wavefront sensor, i.e., the wavefront aberration phase is reconstructed from the distorted image of the object. We compared the characteristics of the direct and indirect reconstruction approaches: (i) directly reconstructing the aberration phase; (ii) reconstructing the Zernike coefficients and then calculating the aberration phase from them. We verified the generalization ability and performance of the network for a single object and for multiple objects. Moreover, we verified the correction effect in a turbulence pool and the feasibility in a real atmospheric turbulence environment.
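The second, indirect route mentioned in this abstract, calculating the aberration phase from predicted Zernike coefficients, can be sketched as follows. This is a minimal NumPy illustration with a few Noll-normalized low-order modes; the function name, coefficient ordering, and grid size are our own assumptions:

```python
import numpy as np

def zernike_phase(coeffs, n=128):
    """Aberration phase from low-order Noll Zernike coefficients
    (piston excluded): [tip, tilt, defocus, astig45, astig0]."""
    y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
    r2 = x**2 + y**2
    mask = r2 <= 1.0
    modes = [2*x, 2*y,                        # tip, tilt (Z2, Z3)
             np.sqrt(3)*(2*r2 - 1),           # defocus (Z4)
             np.sqrt(6)*2*x*y,                # oblique astigmatism (Z5)
             np.sqrt(6)*(x**2 - y**2)]        # vertical astigmatism (Z6)
    phase = sum(c*m for c, m in zip(coeffs, modes))
    return np.where(mask, phase, 0.0), mask

phase, mask = zernike_phase([0.0, 0.0, 0.5, 0.2, 0.0])
print(np.sqrt(np.mean(phase[mask]**2)))  # ≈ 0.54 = sqrt(0.5² + 0.2²)
```

Because the Noll modes are orthonormal over the pupil, the RMS of the composite phase equals the root-sum-square of the coefficients, which is why networks that predict Zernike coefficients can be scored directly in coefficient space.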
Journal Article
Large-Dynamic-Range Ocular Aberration Measurement Based on Deep Learning with a Shack–Hartmann Wavefront Sensor
2024
The Shack–Hartmann wavefront sensor (SHWFS) is widely utilized for ocular aberration measurement. However, large ocular aberrations caused by individual differences can easily push a spot out of the range of its corresponding sub-aperture in the SHWFS, rendering the traditional centroiding method ineffective. This study applied a novel convolutional neural network (CNN) model to wavefront sensing for large-dynamic-range ocular aberration measurement. The simulation results demonstrate that, compared to the modal method, the dynamic range of our method for the main low-order aberrations of the ocular system is increased by factors ranging from 1.86 to 43.88. Meanwhile, the proposed method also achieves the best measurement accuracy, with a statistical root mean square (RMS) of the residual wavefronts of 0.0082 ± 0.0185 λ (mean ± standard deviation). The proposed method generally has higher accuracy while having a similar or even better dynamic range compared to traditional large-dynamic-range schemes. On the other hand, compared with recently developed deep learning methods, the proposed method has a much larger dynamic range and better measurement accuracy.
Journal Article
Research on Wavefront Sensing Applications Based on Photonic Lanterns
2025
The Photonic Lantern (PL) is a novel fiber-optic device emerging in wavefront sensing, which converts a multimode fiber light field into a set of single-mode fields. By decomposing the complex multimode field into simple fundamental modes, the PL maps wavefront aberrations to light intensities. The Photonic Lantern Wavefront Sensor (PLWFS) functions as an ideal focal-plane sensor: the focal plane and the imaging plane coincide completely. This configuration mitigates the Non-Common Path Aberrations (NCPAs) that traditional sensors struggle to resolve. This paper reviews the research history of the PLWFS. It first introduces fabrication methods for the PL and then focuses on the theoretical and experimental development of the PLWFS. PLWFS research began with sensing simple tip/tilt aberrations, moved on to establishing linear response models for small aberrations, and subsequently introduced methods such as neural network algorithms and broadband polychromatic light sources to achieve large-aberration sensing and correction. This paper highlights significant research achievements from each stage, summarizes current limitations, and finally discusses the future potential of the PLWFS as an excellent focal-plane wavefront sensor.
Journal Article
Enhanced Neural Architecture for Real-Time Deep Learning Wavefront Sensing
2025
To achieve real-time deep learning wavefront sensing (DLWFS) of dynamic random wavefront distortions induced by atmospheric turbulence, this study proposes an enhanced wavefront sensing neural network (WFSNet) based on convolutional neural networks (CNNs). We introduce a novel multi-objective neural architecture search (MNAS) method designed to attain Pareto optimality in terms of error and floating-point operations (FLOPs) for the WFSNet. Starting from EfficientNet-B0 prototypes, we propose a WFSNet with an enhanced neural architecture that reduces computational cost by 80% while improving wavefront sensing accuracy by 22%; indoor experiments substantiate this effectiveness. This study offers a novel approach to real-time DLWFS and a potential solution for high-speed, cost-effective wavefront sensing in the adaptive optics systems of satellite-to-ground laser communication (SGLC) terminals.
Journal Article
Experimental wavefront sensing techniques based on deep learning models using a Hartmann-Shack sensor for visual optics applications
by Ramírez-Quintero, Juan Sebastián; Mira-Agudelo, Alejandro; Torres-Sepúlveda, Walter
in 639/624; 639/624/1075/1083; 639/624/1107/510
2025
Wavefront sensing is essential in visual optics for evaluating the optical quality of systems such as the human visual system and for understanding its impact on visual performance. Although traditional methods like the Hartmann-Shack wavefront sensor (HSS) are widely employed, they face limitations in precision, dynamic range, and processing speed. Emerging deep learning technologies offer promising solutions to overcome these limitations. This paper presents a novel approach using a modified ResNet convolutional neural network (CNN) to enhance HSS performance. Experimental datasets, including noise-free and speckle-noise-added images, were generated using a custom-made monocular visual simulator. The proposed CNN model exhibited superior accuracy in processing HSS images, reducing wavefront aberration reconstruction time by a factor of 3 to 4 and increasing the dynamic range by 315.6% compared to traditional methods. Our results indicate that this approach substantially enhances wavefront sensing capabilities, offering a practical solution for applications in visual optics.
Journal Article
Real-Time Wavefront Sensing at High Resolution with an Electrically Tunable Lens
by Oliva-García, Ricardo; Cairós, Carlos; Rodríguez-Ramos, José Manuel
in Cameras; Comparative analysis; Complementary metal oxide semiconductors
2023
We have designed, assembled, and evaluated a compact instrument capable of capturing the wavefront phase in real time across various scenarios. Our approach simplifies the optical setup and configuration, which reduces the conventional capture and computation time compared to other methods that use two defocused images. We evaluated the feasibility of using an electrically tunable lens in our camera by addressing its issues and optimizing its performance. Additionally, we conducted a comparison study between our approach and a Shack–Hartmann sensor. The camera was tested on multiple targets, such as deformable mirrors, lenses with aberrations, and a moving liquid lens. Working at the highest resolution of the CMOS sensor, with a small effective pixel size, yields the maximum lateral-resolution detail and increased sensitivity to high-spatial-frequency signals.
Journal Article