Catalogue Search | MBRL
179 result(s) for "Refraction correction"
Correcting Image Refraction: Towards Accurate Aerial Image-Based Bathymetry Mapping in Shallow Waters
by Karantzalos, Konstantinos; Georgopoulos, Andreas; Skarlatos, Dimitrios
in Acoustic mapping, aerial imagery, Bathymetry
2020
Unlike acoustic or LiDAR (Light Detection and Ranging) sensors, aerial image-based bathymetric mapping can provide both water depth and visual information, but water refraction poses significant challenges for accurate depth estimation. To tackle this challenge, we propose an image correction methodology that first exploits recent machine learning procedures to recover depth from image-based dense point clouds and then corrects refraction in the original imagery. This way, the structure from motion (SfM) and multi-view stereo (MVS) processing pipelines are executed on a refraction-free set of aerial datasets, resulting in highly accurate bathymetric maps. Experiments and validation were based on datasets acquired during optimal sea-state conditions at four test sites characterized by excellent sea-bottom visibility and textured seabed. Results demonstrate the high potential of our approach in terms of both bathymetric accuracy and texture and orthoimage quality.
Journal Article
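The refraction geometry underlying this kind of correction can be sketched with Snell's law: an image-derived (apparent) depth is shallower than the true depth because light bends at the air/water interface. The sketch below is not the authors' machine-learning pipeline; the function name and the assumed seawater refractive index of 1.34 are illustrative.

```python
import math

N_WATER = 1.34  # assumed refractive index of seawater

def true_depth(apparent_depth: float, incidence_deg: float = 0.0) -> float:
    """Correct an image-derived (apparent) depth for refraction at the
    air/water interface. At nadir the correction reduces to multiplying
    the apparent depth by the water's refractive index."""
    theta_air = math.radians(incidence_deg)
    if incidence_deg == 0.0:
        return apparent_depth * N_WATER
    # Snell's law: sin(theta_air) = n_water * sin(theta_water)
    theta_water = math.asin(math.sin(theta_air) / N_WATER)
    # the tangent ratio maps the apparent ray geometry to the refracted one
    return apparent_depth * math.tan(theta_air) / math.tan(theta_water)
```

For small off-nadir angles the tangent ratio stays close to the nadir factor of ~1.34, which is why simple multiplicative corrections work well for near-vertical imagery.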
Generalized atmospheric refraction correction for optical remote sensing satellite based on rational function model
by Wang, Yanli; Jin, Shuying; Dai, Rongfan
in Atmospheric refraction correction, global atmospheric refraction indices, optical remote sensing satellite
2025
Due to the uneven density of the Earth’s atmosphere, the light propagation path bends, breaking the collinearity condition among the ground object, camera projection center, and image point and introducing atmospheric refraction error into optical satellite imagery. Atmospheric refraction error seriously affects geometric positioning accuracy and restricts the application of remote sensing imagery. This study proposes a novel, generalized atmospheric refraction correction method based on the rational function model (RFM) that compensates for refraction errors in various optical satellites without requiring complex satellite ephemeris data or proprietary camera parameters. Using globally measured atmospheric parameters and the imaging characteristics of optical satellites, global atmospheric refraction indices for 400 height layers were computed and stored. A compensation model of atmospheric refraction error based on a limited number of key points was proposed to improve processing efficiency. Based on the projection relationship in optical satellite imaging, the satellite position and object direction of the key points were determined through backward calculation of the RFM, eliminating the need for complex and undisclosed satellite auxiliary data. Atmospheric refraction error was then compensated by iterative geometric positioning and refraction correction using an atmospheric model with 400 height layers. Experimental results show that the proposed method can be applied to sub-meter-resolution, large-swath optical images for atmospheric refraction error correction. The average processing time is less than 2 s, and the improvement in geometric positioning accuracy of optical images ranges from 0.057 m to 2.985 m. The method is as accurate as refraction correction using a rigorous model.
Journal Article
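The layered-atmosphere idea can be illustrated with a ray trace through planar strata: Snell's law keeps n·sin(θ) invariant across layer boundaries, so the bent path lands closer to nadir than a straight line would. This is a deliberate simplification of the 400-layer model and RFM back-projection described above, and all layer values below are made up.

```python
import math

def refraction_displacement(zenith_deg: float, layers) -> float:
    """Trace a ray through planar atmospheric layers using Snell's law
    (n_i * sin(theta_i) invariant) and return the horizontal offset of the
    bent path relative to a straight line at the top-of-atmosphere zenith
    angle. `layers` is a list of (thickness_m, refractive_index) pairs
    ordered from top to bottom."""
    sin_top = math.sin(math.radians(zenith_deg))  # n = 1 above the atmosphere
    tan_top = math.tan(math.radians(zenith_deg))
    straight = bent = 0.0
    for thickness_m, n in layers:
        sin_i = sin_top / n  # Snell's law across planar boundaries
        bent += thickness_m * sin_i / math.sqrt(1.0 - sin_i ** 2)  # h * tan(theta_i)
        straight += thickness_m * tan_top
    return straight - bent

# illustrative 10-layer slab with a uniform near-surface refractivity
offset_m = refraction_displacement(30.0, [(1000.0, 1.0003)] * 10)
```

The offset grows with zenith angle, which matches the paper's observation that large viewing angles suffer the largest refraction errors.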
A Geoid Slope Validation Survey (2017) in the rugged terrain of Colorado, USA
by Hirt, Christian; Guillaume, Sebastien; van Westrum, Derek
in Accuracy, Corrections, Earth and Environmental Science
2021
In the summer of 2017, the National Geodetic Survey (NGS) conducted its third and final Geoid Slope Validation Survey in the rugged terrain of southern Colorado, USA. As in previous surveys, the intent was to acquire the most accurate and precise field observations possible for determining geoid slopes. In turn, these data can be used to quantify the accuracy of various geoid models as NGS looks ahead to creating a highly accurate gravimetric geoid model for use as a national vertical datum. Long-period GPS sessions, spirit leveling, absolute gravity, and deflection of the vertical (DoV) observations were acquired along a 360 km line, ranging from 1900 to 3300 m in elevation, with a station spacing of approximately 1.6 km. Our absolute gravity and DoV datasets are unique in that they were collected at 222 field stations in highly mountainous terrain at an unprecedented observational accuracy of 10 µGal and 0.04″, respectively. Further, by employing tailored refraction corrections to the spirit leveling data, we improved the agreement between heights derived from the DoV and spirit leveling from ±1.9 to ±1.3 cm RMS, or by more than 30%, across the line. At all length scales, from 1.6 to 360 km, the agreement is better than 2 cm. Finally, as a description of the validation process, we compare the observations with recent NGS experimental geoid models. We find that typical agreement is about 3–5 cm, with no single model being best at all length scales. The data from this project are freely available to the community and should serve as test beds not only for geoid modeling comparisons, but also for the refinement of numerous field techniques.
Journal Article
Automated underwater Plectropomus leopardus phenotype measurement through cylinder
2025
Accurate and non-invasive measurement of fish phenotypic characteristics in underwater environments is crucial for advancing aquaculture. Traditional manual methods require significant labor to anesthetize and capture fish, which not only raises ethical concerns but also risks causing injury to the animals. Alternative hardware-based approaches, such as acoustic technology and active structured light techniques, are often costly and may suffer from limited measurement accuracy. In contrast, image-based methods utilizing low-cost binocular cameras present a more affordable solution, although they face challenges such as light refraction between water and the waterproof enclosure, which can cause discrepancies between image coordinates and actual positions. To address these challenges, we have developed a fish keypoint detection dataset and trained both a fish object detection model and a keypoint detection model, using the RTMDet and RTMPose architectures, to identify keypoints on Plectropomus leopardus. Since the binocular camera must be housed in a waterproof enclosure, we correct for the double refraction caused by the water and the enclosure by applying refraction corrections to the detected keypoint coordinates. This ensures that the keypoint coordinates obtained underwater are consistent with those in air, thereby improving the accuracy of subsequent stereo matching. Once the corrected keypoint coordinates are obtained, we apply the least squares method, in conjunction with binocular stereo imaging principles, to perform stereo matching and derive the actual 3D coordinates of the keypoints. We calculate the fish body length by measuring the 3D coordinates of the snout and tail. Our model achieved 98.6% accuracy in keypoint detection (AP@0.5:0.95). Underwater tests showed an average measurement error of approximately 3.2 mm (MRPE = 3.50%) for fish in a tank, with real-time processing at 28 FPS on an NVIDIA GTX 1060 GPU. These results confirm that our method effectively detects keypoints on fish bodies and measures their length without physical contact or removal from the tank. By eliminating invasive procedures, our approach not only improves measurement efficiency but also aligns with ethical standards in aquaculture. Compared to existing techniques, our method offers enhanced accuracy (reducing MRPE by 53.8% compared to baseline methods) and practicality, making it a valuable tool for the aquaculture industry.
Journal Article
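The triangulation step after keypoint correction can be illustrated with the standard rectified pinhole disparity model. This is a generic sketch, not the paper's least-squares formulation; all parameter names are illustrative.

```python
import math

def triangulate(xl: float, yl: float, xr: float,
                focal_px: float, baseline_m: float):
    """Recover a 3D point from matched (refraction-corrected) keypoint
    coordinates in a rectified stereo pair. Assumes xl > xr (positive
    disparity) and pixel coordinates relative to the principal point."""
    disparity = xl - xr                      # pixels
    z = focal_px * baseline_m / disparity    # depth along the optical axis
    return (xl * z / focal_px, yl * z / focal_px, z)

def body_length(snout_xyz, tail_xyz) -> float:
    """Fish body length as the Euclidean distance between snout and tail."""
    return math.dist(snout_xyz, tail_xyz)
```

With both snout and tail triangulated this way, the length follows directly from the two 3D points, which is the measurement the abstract describes.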
Accurate Refraction Correction—Assisted Bathymetric Inversion Using ICESat-2 and Multispectral Data
2021
Shallow-water depth information is essential for ship navigation and fishery farming. However, the accurate acquisition of shallow-water depth has been a challenge for marine mapping. Combining Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) bathymetry data with multispectral data, satellite-derived bathymetry is a promising solution for obtaining bathymetric information quickly and accurately. This study proposes a photon refraction correction method that considers sea-surface undulations to address errors in the underwater photons obtained by ICESat-2. First, the instantaneous sea surface and beam emission angle are integrated to determine the sea-surface incidence angle. Next, the distance of photon propagation in water is determined using sea-surface undulation and Snell’s law. Finally, position correction is performed through geometric relationships. The corrected photons were combined with the multispectral data for bathymetric inversion, and a bathymetric chart of the Yongle Atoll area was obtained. Comparing the results of different refraction correction methods with the measured data shows that the proposed refraction correction method can effectively correct bathymetry errors: the root mean square error is 1.48 m and the R2 is 0.86.
Journal Article
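The geometric core of photon refraction correction can be sketched as follows: given an apparent depth and the sea-surface incidence angle, Snell's law gives the refracted angle, and the slower speed of light in water shortens the true slant path. This is a generic Snell's-law correction in the spirit of the method above, not the paper's exact sea-surface-undulation model; the refractive indices are assumed values.

```python
import math

N_AIR, N_WATER = 1.00029, 1.34116  # assumed refractive indices

def correct_photon(apparent_depth_m: float, incidence_deg: float):
    """Refraction-correct a seafloor lidar photon. Returns
    (horizontal_offset_m, true_depth_m)."""
    t1 = math.radians(incidence_deg)                # angle at the sea surface
    t2 = math.asin(N_AIR * math.sin(t1) / N_WATER)  # refracted angle (Snell)
    s = apparent_depth_m / math.cos(t1)             # apparent slant path in water
    r = s * N_AIR / N_WATER                         # true path: light is slower in water
    horiz = s * math.sin(t1) - r * math.sin(t2)     # horizontal displacement
    return horiz, r * math.cos(t2)                  # corrected depth
```

At nadir the horizontal offset vanishes and the depth correction reduces to scaling by n_air/n_water (about 0.746), which is why uncorrected ICESat-2 depths overestimate true depth.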
A Spatiotemporal Atmospheric Refraction Correction Method for Improving the Geolocation Accuracy of High-Resolution Remote Sensing Images
2022
Atmospheric refraction is one of the most significant factors affecting the geolocation accuracy of high-resolution remote sensing images. However, most current atmospheric refraction correction methods based on empirical data neglect the spatiotemporal variation of atmospheric pressure, temperature, and humidity, inevitably resulting in poor geometric positioning accuracy. To address these problems, this study proposes a spatiotemporal atmospheric refraction correction method (SARCM) based on globally measured data that avoids the uncertainty of traditional empirical models. Initially, the atmosphere was stratified into 42 layers according to pressure, and each layer was divided into 1,042,560 grid cells at intervals of 0.25° longitude and 0.25° latitude. Then, the atmospheric refractive index of each grid cell in the imaging region was calculated using the high-precision Ciddor formula, and the result was interpolated with splines. Subsequently, according to the rigorous geometric positioning model, the line-of-sight of each pixel and the viewing zenith angle outside the atmosphere in WGS84 were derived to provide input for atmospheric refraction correction. Finally, the coordinates of the ground control points were corrected with the calculated atmospheric refractive index and Snell’s law. The experimental results showed that the proposed SARCM effectively improves the positioning accuracy of images with large viewing zenith angles; in particular, the improvement for a viewing zenith angle of 34.2426° in the x-direction was 99.5%. Moreover, the atmospheric refraction correction of the SARCM outperformed current state-of-the-art methods.
Journal Article
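Evaluating a gridded refractive-index layer between grid nodes requires interpolation. The sketch below uses bilinear interpolation as a simplified stand-in for the spline interpolation described above; the grid layout, origin, and 0.25° step mirror the gridding in the abstract, but everything else is illustrative.

```python
def bilinear(grid, lat: float, lon: float,
             lat0: float, lon0: float, step: float = 0.25) -> float:
    """Bilinearly interpolate a gridded refractive-index layer at (lat, lon).
    `grid[i][j]` holds the index at (lat0 + i*step, lon0 + j*step)."""
    fi = (lat - lat0) / step
    fj = (lon - lon0) / step
    i, j = int(fi), int(fj)          # lower-left grid node
    di, dj = fi - i, fj - j          # fractional offsets in [0, 1)
    return ((1 - di) * (1 - dj) * grid[i][j]
            + (1 - di) * dj * grid[i][j + 1]
            + di * (1 - dj) * grid[i + 1][j]
            + di * dj * grid[i + 1][j + 1])
```

Stacking one such evaluation per atmospheric layer yields the refractive-index profile along a line of sight, which is the input the Snell's-law correction needs.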
Correction of Refraction Effects on Unmanned Aerial Vehicle Structure-from-Motion Bathymetric Survey for Coral Reef Roughness Characterisation
2025
Coral reefs play a crucial role in tropical coastal ecosystems, even though these environments are difficult to monitor due to their diversity, morphological complexity, and, in some cases, shallowness. This study used two approaches for acquiring very-high-resolution bathymetric data: underwater structure-from-motion (SfM) photogrammetry collected from a low-cost platform and unmanned/uncrewed aerial vehicle (UAV)-based SfM photogrammetry. While underwater photogrammetry avoids the distortions caused by refraction at the air/water interface, it remains limited in spatial coverage (about 0.04 ha in 1 h of survey). In contrast, UAV photogrammetry covers extensive areas (more than 20 ha/h) but requires refraction correction to accurately compute bathymetry and roughness values. An analytical approach based on Snell’s law and an empirical approach based on linear regression (calibrated using a batch of points whose depths are representative of the depth range of the surveyed areas) are tested to correct the apparent depth in the raw UAV digital elevation model (DEM). Comparison to underwater photogrammetry shows that correcting refraction reduces the root mean square error (RMSE) by more than 50% (up to 62%) on bathymetric models, with RMSE lower than 0.13 m for the analytical approach and down to 0.09 m for the regression method. The linear-regression-based refraction correction proved most effective in restoring accurate seabed roughness, with a mean roughness error lower than 17% (vs. 30% for the analytical refraction correction and 48% for the apparent bathymetry).
Journal Article
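The empirical approach above fits a linear map from apparent to true depth using calibration points. A minimal ordinary-least-squares sketch (function name and data are illustrative, not the paper's implementation):

```python
def fit_depth_correction(apparent, measured):
    """Fit h_true = a * h_apparent + b by ordinary least squares.
    `apparent` are raw DEM depths; `measured` are reference depths at the
    same calibration points, which should span the surveyed depth range."""
    n = len(apparent)
    mx = sum(apparent) / n
    my = sum(measured) / n
    sxx = sum((x - mx) ** 2 for x in apparent)
    sxy = sum((x - mx) * (y - my) for x, y in zip(apparent, measured))
    a = sxy / sxx            # slope: effective refraction scale factor
    b = my - a * mx          # intercept: e.g. water-surface datum offset
    return a, b
```

For pure near-nadir refraction with no datum offset, the fitted slope should land near the refractive index of water (~1.34) and the intercept near zero.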
Refraction Correction Based on ATL03 Photon Parameter Tracking for Improving ICESat-2 Bathymetry Accuracy
2024
The refraction phenomenon causes ICESat-2 nearshore bathymetry errors by displacing seafloor photons’ coordinates. A refraction correction method based on ATL03 photon parameter tracking was proposed to improve ICESat-2 bathymetry accuracy. The method begins by searching for sea–air intersections using photon parameters. Instead of relying on mathematical operations, it uses logical relations to establish a relationship between the seafloor and the surface, which improves efficiency. Then, a refraction correction model is designed based on Snell’s law for different sea-surface fluctuations. The model is straightforward and accessible to researchers new to refraction correction. The results show the effectiveness of the proposed method: the RMSE is reduced by 1.8842 to 5.2319 m compared with the raw data. Our method is also more robust than other methods across different water depth ranges.
Journal Article
Photogrammetry and Traditional Bathymetry for High-Resolution Underwater Mapping in Shallow Waters
by Spadaro, Alessandra; Chiabrando, Filiberto; Lingua, Andrea
in Bathymeters, Bathymetry, Data integration
2025
This study addresses the critical need for accurate mapping of submerged terrain, which is essential for hydraulic modeling, environmental monitoring, and water resource management. Traditional bathymetric techniques, such as topographic surveys and acoustic soundings, face spatial continuity and usability challenges in shallow or vegetated waters. Recent advances, including Uncrewed Surface Vessels (USVs) equipped with GNSS and acoustic sensors, along with UAV-based photogrammetry for 3D modeling in clear waters, have expanded capabilities. However, optical methods suffer from depth underestimation due to light refraction, requiring geometric corrections. To address these limitations, the paper proposes a multi-sensor fusion workflow that integrates high-precision topographic data from total stations and GNSS, depth measurements from a USV equipped with a single-beam echo sounder, and UAV-derived optical bathymetry corrected for refraction using Structure from Motion (SfM) techniques. The goal is to combine each method's strengths to overcome their weaknesses and produce an accurate, high-resolution bathymetric model. Validation against ground truth data demonstrated significant improvements in data quality, aligning with standards for shallow-water mapping. Notably, the use of corrected UAV photogrammetry extended effective depth measurements to 4–5 meters, exceeding typical optical limits. The combined methodology ensures robust spatial coverage, precise georeferencing, and transparent independent measurements, making it particularly well-suited for complex lacustrine (lake) environments. The results highlight the operational benefits of using complementary technologies and suggest potential for further enhancement through Machine Learning and Deep Learning techniques to refine data integration and analysis.
Journal Article
Quantifying Below-Water Fluvial Geomorphic Change: The Implications of Refraction Correction, Water Surface Elevations, and Spatially Variable Error
by Dietrich, James T.; Woodget, Amy S.; Wilson, Robin T.
in Accuracy, Aircraft, artificial intelligence
2019
Much of the geomorphic work of rivers occurs underwater. As a result, high resolution quantification of geomorphic change in these submerged areas is important. Currently, to quantify this change, multiple methods are required to get high resolution data for both the exposed and submerged areas. Remote sensing methods are often limited to the exposed areas due to the challenges imposed by the water, and those remote sensing methods for below the water surface require the collection of extensive calibration data in-channel, which is time-consuming, labour-intensive, and sometimes prohibitive in difficult-to-access areas. Within this paper, we pioneer a novel approach for quantifying above- and below-water geomorphic change using Structure-from-Motion photogrammetry and investigate the implications of water surface elevations, refraction correction measures, and the spatial variability of topographic errors. We use two epochs of imagery from a site on the River Teme, Herefordshire, UK, collected using a remotely piloted aircraft system (RPAS) and processed using Structure-from-Motion (SfM) photogrammetry. For the first time, we show that: (1) quantification of submerged geomorphic change to levels of accuracy commensurate with exposed areas is possible without the need for calibration data or a different method from exposed areas; (2) there is minimal difference in results produced by different refraction correction procedures using predominantly nadir imagery (small angle vs. multi-view), allowing users a choice of software packages/processing complexity; (3) improvements to our estimations of water surface elevations are critical for accurate topographic estimation in submerged areas and can reduce mean elevation error by up to 73%; and (4) we can use machine learning, in the form of multiple linear regressions and a Gaussian Naïve Bayes classifier, based on the relationship between error and 11 independent variables, to generate a high resolution, spatially continuous model of geomorphic change in submerged areas, constrained by spatially variable error estimates. Our multiple regression model is capable of explaining up to 54% of the magnitude and direction of topographic error, with accuracies of less than 0.04 m. With on-going testing and improvements, this machine learning approach has potential for routine application in spatially variable error estimation within the RPAS–SfM workflow.
Journal Article
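The multiple-linear-regression error model described in point (4) can be sketched with synthetic data: regress per-cell topographic error on 11 candidate predictors and use the fitted model to produce a continuous error surface. All data below are randomly generated for illustration; this is not the paper's dataset or variable set.

```python
import numpy as np

# Synthetic stand-in: 200 grid cells, 11 predictor variables each.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 11))                          # predictors per cell
beta_true = rng.normal(size=11)
err = X @ beta_true + rng.normal(scale=0.02, size=200)  # elevation error (m)

# Ordinary least squares with an intercept column.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, err, rcond=None)

# Fitted (spatially continuous) error estimate and explained variance.
fitted = A @ coef
r2 = 1.0 - np.sum((err - fitted) ** 2) / np.sum((err - err.mean()) ** 2)
```

On real data the explained variance would be far lower (the paper reports up to 54%), since topographic error is only partially predictable from the chosen variables.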