Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
10,820 result(s) for "Scanning devices"
The Chinese Hα Solar Explorer mission: An overview
2022
The Chinese Hα Solar Explorer (CHASE), dubbed "Xihe" (Goddess of the Sun), was launched on October 14, 2021 as the first solar space mission of the China National Space Administration (CNSA). The CHASE mission is designed to test a newly developed satellite platform and to acquire spectroscopic observations in the Hα waveband. The Hα Imaging Spectrograph (HIS) is the scientific payload of the CHASE satellite. It offers two observational modes: a raster scanning mode and a continuum imaging mode. The raster scanning mode obtains full-Sun or region-of-interest spectral images from 6559.7 to 6565.9 Å and from 6567.8 to 6570.6 Å with 0.024 Å per-pixel spectral resolution and 1 min temporal resolution. The continuum imaging mode obtains photospheric images in the continuum around 6689 Å with a full width at half maximum of 13.4 Å. The CHASE mission will advance our understanding of the dynamics of solar activity in the photosphere and chromosphere. In this paper, we present an overview of the CHASE mission, including the scientific objectives, the HIS instrument, the data calibration flow, and the first results of on-orbit observations. Keywords: space-based telescope, solar physics, chromosphere, photosphere. PACS number(s): 95.55.Fw, 96.60.-j, 96.60.Na, 96.60.Mz
Journal Article
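As background to the raster scanning mode described in this record, here is a minimal sketch (not the CHASE/HIS pipeline) of how a raster-scan spectral cube around the Hα line could be reduced to a Doppler-velocity map; the cube layout, wavelength grid, and function names are assumptions chosen purely for illustration.

```python
# Illustrative only: turn an (ny, nx, n_lambda) spectral cube into a Doppler map.
import numpy as np

C_KM_S = 299_792.458          # speed of light [km/s]
HALPHA_REST = 6562.8          # H-alpha rest wavelength [Angstrom]

def doppler_map(cube: np.ndarray, wavelengths: np.ndarray) -> np.ndarray:
    """Estimate the line-core wavelength per pixel with an intensity-weighted
    centroid of the absorption profile, then convert the shift to km/s."""
    continuum = cube.max(axis=-1, keepdims=True)
    depth = np.clip(continuum - cube, 0.0, None)   # work with line depth
    weights = depth.sum(axis=-1)
    centroid = (depth * wavelengths).sum(axis=-1) / np.where(weights > 0, weights, 1.0)
    return C_KM_S * (centroid - HALPHA_REST) / HALPHA_REST

if __name__ == "__main__":
    # Synthetic profile sampled at the 0.024 A step quoted for the raster mode.
    wl = np.arange(6559.7, 6565.9, 0.024)
    profile = 1.0 - 0.6 * np.exp(-((wl - 6562.9) ** 2) / (2 * 0.3 ** 2))  # redshifted line
    cube = np.tile(profile, (4, 4, 1))
    print(doppler_map(cube, wl)[0, 0])   # ~ a few km/s redshift
```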
A gold nanoparticle-based lateral flow immunoassay for atrazine point-of-care detection using a handhold scanning device as reader
by Liu, Wentao; Sun, Tieqiang; Fan, Longxing
in Agricultural production; Analytical Chemistry; Antigens
2022
A method is described to achieve accurate quantitative detection of atrazine (ATZ) in maize by using lateral flow strips based on gold nanoparticles (GNPs) and a handheld scanning reader. GNPs of 15 nm diameter were applied as the label, and a lateral flow immunoassay strip was prepared. The linear range was 5.01–95.86 ng mL⁻¹, with a detection limit of 4.92 ng mL⁻¹ in phosphate buffer, 4 times better than the readout by the naked eye. ATZ-spiked corn samples were also analysed. The accuracy of the spiked-sample results was confirmed by ELISA and liquid chromatography-tandem mass spectrometry (HPLC), which proved the reliability of the proposed method. A handheld device with an optical scanning system was designed for on-site quantitative detection. Combined with the pretreatment, the assay could be completed in less than 20 min.
Journal Article
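To illustrate the quantitative readout step this record describes, here is a minimal sketch, assuming a NumPy environment, of how a strip reader's normalized test-line signal could be mapped to atrazine concentration with a linear calibration over the reported 5.01–95.86 ng mL⁻¹ range; the standards, readings, and function names are made up for illustration and are not taken from the paper.

```python
# Illustrative calibration for a competitive lateral-flow assay readout.
import numpy as np

# Hypothetical calibration standards (ng/mL) and normalized T/C readings.
conc = np.array([5.0, 10.0, 25.0, 50.0, 75.0, 95.0])
signal = np.array([0.92, 0.84, 0.65, 0.41, 0.24, 0.12])  # signal falls as ATZ rises

# Fit signal = a * conc + b, then invert for unknown samples.
a, b = np.polyfit(conc, signal, deg=1)

def concentration(reading: float) -> float:
    est = (reading - b) / a
    if not (5.01 <= est <= 95.86):
        raise ValueError("outside the validated linear range")
    return est

print(f"estimated ATZ: {concentration(0.55):.1f} ng/mL")
```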
Poly(Acrylic Acid)/TiO₂ Nanocomposite Hydrogels for Paper Artwork Cleaning and Protection
by Rodesi, Jasmine; Botti, Sabina; D’Amato, Rosaria
in Acrylic acid; Protection and preservation; Raman spectroscopy
2025
Paper-based artworks are prone to natural aging driven by chemical and biological processes. Numerous treatments have been developed to mitigate deterioration and prevent irreversible damage. In this study, we investigated the use of poly(acrylic acid)/TiO₂ composite hydrogels, combining their cleaning and protective functions in a minimally invasive treatment. Hydrogels allow for controlled water flow, and photocatalytic TiO₂ nanoparticles enhance the hydrogel's efficacy by enabling the removal of oxidation products and inactivating biological contaminants. Furthermore, this innovative material can act as a protective coating against UV-induced aging, preserving both the color and the stability of the paper. Raman spectroscopy and confocal laser scanning microscopy imaging were employed to evaluate the treatments, allowing us to differentiate between hydrolytic and oxidative aging processes. Our findings demonstrate that papers coated with poly(acrylic acid)/TiO₂ composite hydrogels exhibit significantly reduced oxidative markers, enhanced color stability, and improved overall resistance to degradation compared to uncoated samples.
Journal Article
A comprehensive review of computer-aided whole-slide image analysis: from datasets to feature extraction, segmentation, classification and detection approaches
2022
With the development of Computer-aided Diagnosis (CAD) and image scanning techniques, Whole-slide Image (WSI) scanners are widely used in the field of pathological diagnosis. Therefore, WSI analysis has become the key to modern digital histopathology. Since 2004, WSI has been used widely in CAD. Since machine vision methods are usually based on semi-automatic or fully automatic computer algorithms, they are highly efficient and labor-saving. The combination of WSI and CAD technologies for segmentation, classification, and detection helps histopathologists obtain more stable and quantitative results with minimal labor costs and improved diagnostic objectivity. This paper reviews methods of WSI analysis based on machine learning. Firstly, the development status of WSI and CAD methods is introduced. Secondly, we discuss publicly available WSI datasets and evaluation metrics for segmentation, classification, and detection tasks. Then, the latest developments in machine learning techniques for WSI segmentation, classification, and detection are reviewed. Finally, the existing methods are assessed, and their application prospects in this field are discussed.
Journal Article
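Since this review discusses evaluation metrics for WSI segmentation, here is a minimal sketch of two of the most commonly reported ones, the Dice coefficient and intersection-over-union; the binary masks and names are illustrative assumptions, not tied to any dataset in the review.

```python
# Illustrative segmentation metrics on binary masks.
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

if __name__ == "__main__":
    gt = np.zeros((64, 64), dtype=bool); gt[16:48, 16:48] = True
    pr = np.zeros((64, 64), dtype=bool); pr[20:52, 20:52] = True
    print(f"Dice={dice(pr, gt):.3f}  IoU={iou(pr, gt):.3f}")
```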
Artificial intelligence in diagnostic pathology
by Parwani, Anil V.; Shafi, Saba
in Algorithms; Artificial intelligence; Artificial intelligence in Cancer imaging and diagnosis
2023
Digital pathology (DP) is being increasingly employed in cancer diagnostics, providing additional tools for faster, higher-quality, and more accurate diagnosis. The practice of diagnostic pathology has gone through a staggering transformation wherein new tools such as digital imaging, advanced artificial intelligence (AI) algorithms, and computer-aided diagnostic techniques are being used to assist, augment, and empower computational histopathology and AI-enabled diagnostics. This is paving the way for advances in precision medicine in cancer. Automated whole slide imaging (WSI) scanners now render diagnostic-quality, high-resolution images of entire glass slides, and combining these images with innovative digital pathology tools makes it possible to integrate imaging into all aspects of pathology reporting, including anatomical, clinical, and molecular pathology. The recent FDA approvals of WSI scanners for primary diagnosis, as well as the approval of a prostate AI algorithm, have paved the way for incorporating this exciting technology into primary diagnosis. AI tools can provide a unique platform for innovations and advances in anatomical and clinical pathology workflows. In this review, we describe the milestones and landmark trials in the use of AI in clinical pathology, with emphasis on future directions.
Journal Article
SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds
2022
With the recent availability and affordability of commercial depth sensors and 3D scanners, an increasing number of 3D (i.e., RGBD, point cloud) datasets have been publicized to facilitate research in 3D computer vision. However, existing datasets either cover relatively small areas or have limited semantic annotations. Fine-grained understanding of urban-scale 3D scenes is still in its infancy. In this paper, we introduce SensatUrban, an urban-scale UAV photogrammetry point cloud dataset consisting of nearly three billion points collected from three UK cities and covering 7.6 km². Each point in the dataset has been labelled with fine-grained semantic annotations, resulting in a dataset three times the size of the largest existing photogrammetric point cloud dataset. In addition to the more commonly encountered categories such as road and vegetation, urban-level categories including rail, bridge, and river are also included in our dataset. Based on this dataset, we further build a benchmark to evaluate the performance of state-of-the-art segmentation algorithms. In particular, we provide a comprehensive analysis and identify several key challenges limiting urban-scale point cloud understanding. The dataset is available at http://point-cloud-analysis.cs.ox.ac.uk/.
Journal Article
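As an illustration of how a point cloud segmentation benchmark such as this is typically scored, the sketch below computes per-class IoU and mean IoU for point-wise labels; the class count, label arrays, and error rate are assumptions, not values from SensatUrban.

```python
# Illustrative mean-IoU scoring for point-wise semantic labels.
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:                      # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.integers(0, 13, size=100_000)           # e.g. 13 urban classes
    pred = np.where(rng.random(100_000) < 0.8, gt,   # 80% of points correct
                    rng.integers(0, 13, size=100_000))
    print(f"mIoU = {mean_iou(pred, gt, 13):.3f}")
```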
Evaluation of the detection accuracy of set-up for various treatment sites using surface-guided radiotherapy system, VOXEL AN: a phantom study
by Saito, Masahide; Komiyama, Takafumi; Ueda, Koji
in CT imaging; Radiotherapy; Scanning devices
2022
The purpose of this study is to evaluate the detection accuracy of a 3-dimensional (3D) body scanner, VOXELAN, in surface-guided radiotherapy (SGRT) for different parts of the human body using a whole-body human phantom. A Resusci Anne was used as the whole-body phantom. The detection accuracy of VOXELAN in a radiotherapy treatment room with a linear accelerator (LINAC) was evaluated for two reference images: a reconstruction of the planning computed tomography (CT) image (CT reference) and a scan by VOXELAN before the treatment (scan reference). The accuracy in the translational and rotational directions was verified for four treatment sites (open face shell, breast, abdomen, and arm), using the magnitude of the 6D robotic couch movement as the true value. Our results showed that the detection accuracy improved as the displacement from the reference position decreased for all sites. Using the scan reference, the average accuracy of the translational and rotational axes was within 1.44 mm and 0.41°, respectively, for all sites except the arms. Similarly, using the CT reference, the average accuracy was within 2.45 mm and 1.35°, respectively. Additionally, it was difficult for both reference images to detect misalignment of the arms. In conclusion, we found that VOXELAN achieved high detection accuracy for the head with an open face shell, the chest, and the abdomen, indicating that the system is useful in a clinical setting. However, attention must be paid to positional matching for areas with few surface features, and to potential errors when the reference image is created from CT.
Journal Article
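For context on the surface-guided setup verification this record evaluates, here is a minimal sketch of the generic rigid-registration step (a Kabsch/SVD fit) that maps a reference surface point cloud onto the current scan; it is not the VOXELAN algorithm, and the point clouds, shift, and rotation below are synthetic assumptions.

```python
# Illustrative rigid fit between a reference surface and the current scan.
import numpy as np

def rigid_fit(ref: np.ndarray, cur: np.ndarray):
    """Kabsch/SVD fit. ref, cur: (N, 3) corresponding points. Returns (R, t)."""
    ref_c, cur_c = ref.mean(axis=0), cur.mean(axis=0)
    H = (ref - ref_c).T @ (cur - cur_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cur_c - R @ ref_c
    return R, t

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = rng.normal(size=(500, 3))                          # synthetic surface points (mm)
    yaw = np.deg2rad(1.0)                                    # 1 degree rotation
    Rz = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                   [np.sin(yaw),  np.cos(yaw), 0.0],
                   [0.0, 0.0, 1.0]])
    cur = ref @ Rz.T + np.array([1.5, -0.5, 2.0])            # applied couch shift (mm)
    R, t = rigid_fit(ref, cur)
    print("recovered shift (mm):", np.round(t, 3))
    print("recovered yaw (deg):", np.round(np.degrees(np.arctan2(R[1, 0], R[0, 0])), 3))
```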
A review on medical imaging synthesis using deep learning and its clinical applications
by Yang, Xiaofeng; Fu, Yabo; Wang, Tonghe
in Artificial intelligence; Deep Learning; Diagnostic Imaging
2021
This paper reviewed the deep learning‐based studies for medical imaging synthesis and its clinical application. Specifically, we summarized the recent developments of deep learning‐based methods in inter‐ and intra‐modality image synthesis by listing and highlighting the proposed methods, study designs, and reported performances with related clinical applications on representative studies. The challenges among the reviewed studies were then summarized with discussion.
Journal Article
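As a small companion to the reported performances this review summarizes, the sketch below computes two image-level metrics (mean absolute error and PSNR) that are commonly used to compare a synthesized image with its ground truth; the arrays, value range, and noise level are illustrative assumptions.

```python
# Illustrative image-synthesis evaluation metrics.
import numpy as np

def mae(real: np.ndarray, synth: np.ndarray) -> float:
    return float(np.mean(np.abs(real - synth)))

def psnr(real: np.ndarray, synth: np.ndarray, data_range: float) -> float:
    mse = np.mean((real - synth) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse)) if mse > 0 else float("inf")

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    real = rng.uniform(-1000, 1000, size=(128, 128))     # HU-like value range
    synth = real + rng.normal(0, 20, size=real.shape)    # synthetic image with noise
    print(f"MAE={mae(real, synth):.1f} HU, PSNR={psnr(real, synth, 2000):.1f} dB")
```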
Evaluating the Accuracy of the Azure Kinect and Kinect v2
2022
The Azure Kinect represents the latest generation of Microsoft Kinect depth cameras. Of interest in this article is the depth and spatial accuracy of the Azure Kinect and how it compares to its predecessor, the Kinect v2. In one experiment, the two sensors are used to capture a planar whiteboard at 15 locations in a grid pattern with laser scanner data serving as ground truth. A set of histograms reveals the temporal-based random depth error inherent in each Kinect. Additionally, a two-dimensional cone of accuracy illustrates the systematic spatial error. At distances greater than 2.5 m, we find the Azure Kinect to have improved accuracy in both spatial and temporal domains as compared to the Kinect v2, while for distances less than 2.5 m, the spatial and temporal accuracies were found to be comparable. In another experiment, we compare the distribution of random depth error between each Kinect sensor by capturing a flat wall across the field of view in horizontal and vertical directions. We find the Azure Kinect to have improved temporal accuracy over the Kinect v2 in the range of 2.5 to 3.5 m for measurements close to the optical axis. The results indicate that the Azure Kinect is a suitable substitute for Kinect v2 in 3D scanning applications.
Journal Article
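To make the temporal depth-error analysis in this record concrete, here is a minimal sketch that collects repeated depth frames of a static target and histograms each pixel's deviation from its temporal mean; the frame sizes, distances, and noise levels are synthetic assumptions, not measured Kinect data.

```python
# Illustrative temporal (random) depth-error analysis for a static target.
import numpy as np

def temporal_depth_error(frames: np.ndarray) -> np.ndarray:
    """frames: (n_frames, H, W) depth in mm. Returns per-sample deviations, flattened."""
    mean_depth = frames.mean(axis=0, keepdims=True)
    return (frames - mean_depth).ravel()

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # 100 frames of a flat target at ~2.5 m with ~2 mm random depth noise.
    frames = 2500.0 + rng.normal(0.0, 2.0, size=(100, 64, 64))
    dev = temporal_depth_error(frames)
    counts, edges = np.histogram(dev, bins=50)
    print(f"std of depth noise: {dev.std():.2f} mm; histogram bins: {len(counts)}")
```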