Catalogue Search | MBRL
1,311 result(s) for "grading classification"
Mass-spectrometry-based proteomic correlates of grade and stage reveal pathways and kinases associated with aggressive human cancers
2021
Proteomic signatures associated with clinical measures of more aggressive cancers could yield molecular clues as to disease drivers. Here, utilizing the Clinical Proteomic Tumor Analysis Consortium (CPTAC) mass-spectrometry-based proteomics datasets, we defined differentially expressed proteins and mRNAs associated with higher grade or higher stage, for each of seven cancer types (breast, colon, lung adenocarcinoma, clear cell renal, ovarian, uterine, and pediatric glioma), representing 794 patients. Widespread differential patterns of total proteins and phosphoproteins involved some common patterns shared between different cancer types. More proteins were associated with higher grade than higher stage. Most proteomic signatures predicted patient survival in independent transcriptomic datasets. The proteomic grade signatures, in particular, involved DNA copy number alterations. Pathways of interest were enriched within the grade-associated proteins across multiple cancer types, including pathways of altered metabolism, Warburg-like effects, and translation factors. Proteomic grade correlations identified protein kinases having functional impact in vitro in uterine endometrial cancer cells, including MAP3K2, MASTL, and TTK. The protein-level grade and stage associations for all proteins profiled—along with corresponding information on phosphorylation, pathways, mRNA expression, and copy alterations—represent a resource for identifying new potential targets. Proteomic analyses are often concordant with corresponding transcriptomic analyses, but with notable exceptions.
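The abstract above ranks proteins by their differential expression between higher- and lower-grade tumors. A minimal sketch of that kind of grade-association ranking, using Welch's t-statistic on toy abundance values (the protein names come from the abstract, but the expression profiles here are hypothetical, not CPTAC data):

```python
# Illustrative sketch (not the CPTAC pipeline): rank proteins by a
# grade-associated differential-expression statistic. Abundance values
# are hypothetical.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

# toy abundance profiles: protein -> (high-grade samples, low-grade samples)
profiles = {
    "MASTL": ([8.1, 7.9, 8.4, 8.0], [6.9, 7.0, 7.2, 7.1]),
    "TTK":   ([7.5, 7.8, 7.6, 7.9], [7.4, 7.6, 7.5, 7.7]),
}

ranked = sorted(profiles, key=lambda p: abs(welch_t(*profiles[p])), reverse=True)
print(ranked)  # proteins ordered by strength of grade association
```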
Journal Article
Classification of peanut pod rot based on improved YOLOv5s
2024
Peanut pod rot is one of the major plant diseases affecting peanut production and quality across China; it causes large productivity losses and is challenging to control. Breeding is one significant strategy for improving the disease resistance of peanuts, and crucial preventive and management measures include grading peanut pod rot and screening genes that confer high resistance to it. This study proposed a machine vision-based grading approach for individual cases of peanut pod rot, which avoids time-consuming, labor-intensive, and inaccurate manual categorization and provides dependable technical support for breeding studies on pod rot resistance. A Shuffle Attention module was added to the YOLOv5s (You Only Look Once version 5 small) feature extraction backbone network to overcome occlusion, overlap, and adhesion in complex backgrounds. Additionally, to reduce missed and false identifications of peanut pods, the loss function CIoU (Complete Intersection over Union) was replaced with EIoU (Enhanced Intersection over Union). The recognition results can be further improved by introducing a grade classification module, which reads information from the identified RGB images and outputs data such as the numbers of non-rotted and rotten peanut pods, the rotten pod rate, and the pod rot grade. The Precision of the improved YOLOv5s reached 93.8%, which was 7.8%, 8.4%, and 7.3% higher than YOLOv5s, YOLOv8n, and YOLOv8s, respectively; the mAP (mean Average Precision) was 92.4%, an increase of 6.7%, 7.7%, and 6.5%, respectively. The improved YOLOv5s showed an average improvement of 6.26% over YOLOv5s in recognition accuracy: 95.7% for non-rotted peanut pods and 90.8% for rotten peanut pods. This article presented a machine vision-based grade classification method for peanut pod rot, offering technological guidance for selecting high-quality peanut cultivars with high resistance to pod rot.
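The grade classification module described above tallies detector outputs and derives the rotten pod rate and a rot grade. A minimal sketch of that step; the class labels and grade thresholds here are hypothetical, not the paper's actual grading scale:

```python
# Sketch of a grade-classification step: tally detector outputs, then
# derive the rotten pod rate and a rot grade. Labels and thresholds are
# hypothetical, not the paper's scale.
def grade_pod_rot(detections):
    """detections: list of per-pod class labels from the detector,
    e.g. "rotten" or "non_rotted"."""
    rotten = sum(1 for d in detections if d == "rotten")
    total = len(detections)
    rate = rotten / total if total else 0.0
    # hypothetical grading scale on the rotten pod rate
    if rate < 0.10:
        grade = 1
    elif rate < 0.30:
        grade = 2
    else:
        grade = 3
    return {"non_rotted": total - rotten, "rotten": rotten,
            "rot_rate": rate, "grade": grade}

print(grade_pod_rot(["rotten", "non_rotted", "non_rotted", "non_rotted"]))
```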
Journal Article
Combining weakly and strongly supervised learning improves strong supervision in Gleason pattern classification
2021
Background
One challenge in training deep convolutional neural network (CNN) models with whole slide images (WSIs) is providing the required large number of costly, manually annotated image regions. Strategies to alleviate the scarcity of annotated data include using transfer learning, data augmentation, and training the models with less expensive image-level annotations (weakly supervised learning). However, it is not clear how to combine the use of transfer learning in a CNN model when different data sources are available for training, or how to leverage the combination of large amounts of weakly annotated images with a set of local region annotations. This paper aims to evaluate CNN training strategies based on transfer learning to leverage the combination of weak and strong annotations in heterogeneous data sources. The trade-off between classification performance and annotation effort is explored by evaluating a CNN that learns from strong labels (region annotations) and is later fine-tuned on a dataset with less expensive weak (image-level) labels.
Results
As expected, the model performance on strongly annotated data steadily increases with the percentage of strong annotations used, reaching a performance comparable to pathologists (κ = 0.691 ± 0.02). Nevertheless, the performance sharply decreases when the model is applied to the WSI classification scenario (κ = 0.307 ± 0.133), and it remains lower regardless of the number of annotations used. The model performance increases when fine-tuning the model for the task of Gleason scoring with the weak WSI labels (κ = 0.528 ± 0.05).
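The κ values reported above are Cohen's kappa agreement scores. A minimal sketch of the unweighted statistic (the paper may use a weighted variant for ordinal Gleason labels; the rater labels below are toy data):

```python
# Unweighted Cohen's kappa: agreement between two label sequences,
# corrected for chance agreement.
from collections import Counter

def cohens_kappa(y_true, y_pred):
    n = len(y_true)
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    ct, cp = Counter(y_true), Counter(y_pred)
    expected = sum(ct[c] * cp[c] for c in ct) / (n * n)
    return (observed - expected) / (1 - expected)

# toy Gleason-pattern labels from two raters
print(round(cohens_kappa([3, 3, 4, 4, 5, 5], [3, 3, 4, 5, 5, 5]), 3))  # → 0.75
```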
Conclusion
Combining weak and strong supervision improves strong supervision in the classification of Gleason patterns using tissue microarrays (TMA) and WSI regions. Our results point to effective strategies for training CNN models that combine scarce annotated data with heterogeneous data sources. Performance in the controlled TMA scenario increases with the number of annotations used to train the model. Nevertheless, performance is hindered when the trained TMA model is applied directly to the more challenging WSI classification problem. This demonstrates that a good pre-trained model for prostate cancer TMA image classification may lead to the best downstream model if fine-tuned on the WSI target dataset. We have made the source code repository for reproducing the experiments in the paper available at:
https://github.com/ilmaro8/Digital_Pathology_Transfer_Learning
Journal Article
The usefulness of ultrasonographic hydronephrosis grading systems and renal scintigraphy for predicting long-term outcomes in children with ureteropelvic junction obstruction
by Pańczyk-Tomaszewska, Małgorzata; Turczyn, Agnieszka; Krzemień, Grażyna
in Children's health; Scintigraphy; Ultrasonic imaging
2024
Introduction and objective: To assess the usefulness of the Society of Fetal Urology (SFU) grading system, the urinary tract dilatation (UTD) classification, anteroposterior renal pelvis diameter (APRPD) measurement, and differential renal function (DRF) in 99mTc-EC scintigraphy (SC) for predicting long-term outcomes in children with ureteropelvic junction obstruction (UPJO). Materials and methods: Abdominal ultrasonography and SC at the time of UPJO diagnosis and at follow-up examination (initial/final US and SC) were evaluated. Initial and final blood pressure, serum creatinine (Cr), cystatin C, urine albumin-to-Cr ratio (ACR), and estimated glomerular filtration rate (GFR) were determined. Results: Fifty-three children with UPJO were studied. The median age at diagnosis was 0.81 years (0.10–6.01), and at follow-up examination, it was 5.17 years (1.75–11.60). Surgical treatment was required for 21 (40%) children, of whom 24% had an initial APRPD <20 mm, and 52% had an initial DRF ≥40%. Severe renal scars in the final SC were demonstrated in 17 (32%) children, of whom 47% had an initial APRPD <20 mm, and 41% had an initial DRF ≥40%. Hypertension was present in 3 (6%) patients, and laboratory symptoms of renal injury were observed in 6 (11%) patients. Receiver operating characteristic (ROC) analysis demonstrated low usefulness of the initial SFU and UTD classifications and DRF for predicting surgical treatment (area under the curve, AUC: 0.696, 0.728, and 0.674, respectively) and severe renal scars (AUC: 0.772, 0.723, and 0.662, respectively). An APRPD ≥19 mm demonstrated only moderate usefulness (AUC 0.822) for predicting surgery but was not useful for predicting severe renal scars. Conclusions: The ultrasonographic grading systems and DRF in renal scintigraphy at the time of UPJO diagnosis may not be sufficient for assessing adverse long-term outcomes in children.
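The AUC figures above come from ROC analysis. A minimal rank-based AUC sketch (the probability that a randomly chosen positive case scores higher than a randomly chosen negative one, with ties counted as half); the predictor values below are toy data, not the study's measurements:

```python
# Rank-based AUC: fraction of positive/negative pairs ranked correctly,
# ties counted as 0.5.
def auc(scores_pos, scores_neg):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# e.g. hypothetical initial APRPD (mm) in operated vs. non-operated children
print(auc([22, 25, 19, 30], [12, 18, 15, 21]))  # → 0.9375
```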
Journal Article
Machine Learning for Cataract Classification/Grading on Ophthalmic Imaging Modalities: A Survey
2022
Cataracts are the leading cause of visual impairment and blindness globally. Over the years, researchers have achieved significant progress in developing state-of-the-art machine learning techniques for automatic cataract classification and grading, aiming to prevent cataracts early and improve clinicians’ diagnosis efficiency. This survey provides a comprehensive survey of recent advances in machine learning techniques for cataract classification/grading based on ophthalmic images. We summarize existing literature from two research directions: conventional machine learning methods and deep learning methods. This survey also provides insights into existing works of both merits and limitations. In addition, we discuss several challenges of automatic cataract classification/grading based on machine learning techniques and present possible solutions to these challenges for future research.
Journal Article
Research on Delineation and Assessment Methods for Cultivated Land Concentration and Contiguity in Southeastern China
by Zhao, Rong; He, Lihua; Liu, Xiaoding
in Agricultural land; Agricultural production; Agriculture
2025
Cultivated land concentration and contiguity, as a core element of agricultural modernization development, holds strategic significance for enhancing agricultural production efficiency and ensuring national food security. This study employs vector patches as research units and classifies spatial connections between patches into direct and indirect connections. We quantify six types of spatial relationships between patches using binary encoding, enabling precise delineation of concentrated contiguous cultivated land. A Patch Connectivity Index is proposed. Combined with the Patch Area Index and Patch Shape Index, an evaluation system for cultivated land concentration and contiguity is established. Using Suixi County as a case study, we investigate the spatiotemporal evolution of its cultivated land concentration and contiguity from 2019 to 2023. Overall, patch connectivity exhibits a “single-element dominant, multi-element complementary” structural pattern, while the evaluation grading of cultivated land concentration and contiguity follows a normal distribution. Between 2019 and 2023, the average patch area decreased while the average number of connections between patches increased, indicating significant improvement in cultivated land concentration and contiguity levels. By adjusting spatial relationships between patches, the effective integration and utilization of cultivated land resources can provide theoretical foundations and practical references for agricultural modernization development.
Journal Article
An integrated deep convolutional neural networks framework for the automatic segmentation and grading of glioma tumors using multimodal MRI scans
by Odong, Otung John Peter; Abdelwahab, Moataz; Abo-Zahhad, Mohammed
in Accuracy; Artificial intelligence; Artificial neural networks
2025
Gliomas are the most prevalent and aggressive primary brain tumors, characterized by rapid progression and infiltration. Timely and precise diagnosis is crucial for effective oncology treatment. Magnetic Resonance Imaging (MRI) facilitates noninvasive assessment of brain lesions. Manual brain tumor evaluation from MRI scans is labor-intensive, relies heavily on clinician experience, and is prone to errors. Consequently, automated diagnosis of brain tumors is essential for optimal clinical management and glioma surgical interventions. This study introduces an Integrated Deep Convolutional Neural Network (IDCNN)-based framework for segmenting and grading glioma tumors from multimodal MRI scans. The framework integrates two state-of-the-art CNN architectures. Based on the 3D U-Net architecture, the first CNN performs tumor segmentation from multimodal MRI volumes. The segmentation network utilizes the encoder for feature extraction and dimensionality reduction, while the decoder reconstructs the output to the original input size. A thorough performance evaluation of pre-trained CNN architectures (ResNet50, EfficientNetB0, DenseNet121, EfficientNetB2, and ResNet152V2) was conducted on a three-class dataset of glioma, meningioma, and pituitary tumors to determine the best model for tumor grading. EfficientNetB2 surpassed the other models across all evaluation metrics, achieving 99.19% test accuracy, 99.17% precision, 98.94% sensitivity, 99.57% specificity, and a 99.06% F1-score. The optimal EfficientNetB2 model was subsequently utilized to grade tumors identified by the segmentation network. The proposed framework exhibited exceptional performance in both segmentation and grading tasks, attaining Dice similarity coefficient (DSC) scores of 86.13, 86.75, and 92.41 for the enhancing tumor, tumor core, and whole tumor, respectively, alongside 98.49% classification accuracy for high- and low-grade glioma.
Experimental findings validate the superior capabilities of the proposed framework compared to the existing methods. These results highlight the potential of the proposed model to aid radiologists in achieving accurate and reliable diagnoses, improving patient outcomes, and supporting clinical decision-making.
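The DSC values reported above are Dice similarity coefficients between predicted and reference segmentation masks. A minimal sketch on binary masks (1-D toy masks here for brevity; real masks are 3-D volumes):

```python
# Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) on binary masks.
def dice(pred, truth):
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * intersection / total if total else 1.0

print(round(dice([1, 1, 0, 1, 0], [1, 1, 1, 0, 0]), 3))  # → 0.667
```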
Journal Article
Interest of the preventive and curative use of defibrotide on the occurrence and severity of sinusoidal obstruction syndrome after hematopoietic stem cell transplant in children
2022
Defibrotide (DF) is indicated for the treatment of severe sinusoidal obstruction syndrome (SOS) following hematopoietic stem cell transplantation (HSCT), but its prophylactic use against SOS is not yet recommended. This study describes the impact of the preventive and curative use of DF on reducing the incidence and severity of SOS in children. Patients aged 0–19 years who received allogeneic HSCT after a myeloablative conditioning regimen with busulfan or total body irradiation in our comprehensive cancer center, between 2013 and 2017, were included. The Baltimore or modified Seattle criteria were used for SOS diagnosis. SOS was graded using the 2017 European Society for Blood and Marrow Transplantation classification defining severity criteria of SOS in children. SOS occurrence tended to decrease with prophylactic DF, but no significant difference was observed in terms of severity. When not treated with preventive DF, 50% (19/38) of the patients with SOS were graded severe to very severe, but only 37% (7/19) had organ dysfunction. Curative DF was administered at a median of 2 days post-HSCT, for a median of 6.5 days. The absence of fatal SOS supports the use of early curative DF with acceptable toxicity and raises the question of the optimal duration of DF treatment.
Journal Article
Medical image analysis using improved SAM-Med2D: segmentation and classification perspectives
2024
The recently emerged SAM-Med2D represents a state-of-the-art advance in medical image segmentation. By fine-tuning the large vision model Segment Anything Model (SAM) on extensive medical datasets, it has achieved impressive results in cross-modal medical image segmentation. However, its reliance on interactive prompts may restrict its applicability under specific conditions. To address this limitation, we introduce SAM-AutoMed, which achieves automatic segmentation of medical images by replacing the original prompt encoder with an improved MobileNet v3 backbone. Its performance on multiple datasets surpasses both SAM and SAM-Med2D. Current enhancements of SAM lack applications in the field of medical image classification. Therefore, we introduce SAM-MedCls, which combines the encoder of SAM-Med2D with our designed attention modules to construct an end-to-end medical image classification model. It performs well on datasets of various modalities, even achieving state-of-the-art results, indicating its potential to become a universal model for medical image classification.
Journal Article