8,172 result(s) for "cancer grading"
AI-based prostate analysis system trained without human supervision to predict patient outcome from tissue samples
In order to plan the best treatment for prostate cancer patients, the aggressiveness of the tumor is graded by visual assessment of tissue biopsies according to the Gleason scale. Recently, a number of AI models have been developed that can be trained to perform this grading as well as human pathologists. But the accuracy of AI grading is limited by the accuracy of the subjective "ground truth" Gleason grades used for training. We have trained an AI to predict patient outcome directly from image analysis of a large biobank of tissue samples with known outcomes, without any input of human knowledge about cancer grading. On an independent test set, the model has shown a similar, and in some cases better, ability to predict patient outcome than expert pathologists performing conventional grading.
Breast-NET: a lightweight DCNN model for breast cancer detection and grading using histological samples
Breast cancer is a prevalent and highly lethal cancer affecting women globally. While non-invasive techniques like ultrasound and mammogram are used for diagnosis, histological examination after biopsy is considered the gold standard. However, manual examination of tissues for abnormality is labor-intensive, expensive, and requires prior domain knowledge. Early detection, awareness, and access to specialized medical infrastructure in resource-constrained and remote areas are significant challenges but crucial for saving lives. In recent years, deep learning-based approaches have shown promising results in breast cancer detection, facilitated by advancements in GPU memory, computation power, and the availability of digital data. Motivated by these observations, we propose the Breast-NET deep convolutional neural network model for breast cancer detection and grading using histological images. Our model’s performance is evaluated on the BreakHis dataset, and we demonstrate its generalization ability on the Invasive Ductal Carcinoma (IDC) grading and IDC datasets. Extensive experimental and statistical performance analysis, along with an ablation study, validates the efficiency of our proposed model. Furthermore, we demonstrate the effectiveness of transfer learning with seven pre-trained convolutional neural networks for breast cancer detection and grading. Experimental results show that our framework outperforms state-of-the-art approaches in terms of accuracy, space, and computational complexity for the BreakHis, IDC grading, and IDC datasets.
Nuclei-Guided Network for Breast Cancer Grading in HE-Stained Pathological Images
Breast cancer grading methods based on hematoxylin-eosin (HE) stained pathological images can be summarized into two categories. The first category is to directly extract the pathological image features for breast cancer grading. However, unlike the coarse-grained problem of breast cancer classification, breast cancer grading is a fine-grained classification problem, so general methods cannot achieve satisfactory results. The second category is to apply the three evaluation criteria of the Nottingham Grading System (NGS) separately, and then integrate the results of the three criteria to obtain the final grading result. However, NGS is only a semiquantitative evaluation method, and there may be far more image features related to breast cancer grading. In this paper, we proposed a Nuclei-Guided Network (NGNet) for breast invasive ductal carcinoma (IDC) grading in pathological images. The proposed nuclei-guided attention module plays the role of nucleus attention, so as to learn more nuclei-related feature representations for breast IDC grading. In addition, the proposed nuclei-guided fusion module in the fusion process of different branches can further enable the network to focus on learning nuclei-related features. Overall, under the guidance of nuclei-related features, the entire NGNet can learn more fine-grained features for breast IDC grading. The experimental results show that the performance of the proposed method is better than that of state-of-the-art methods. In addition, we released a well-labeled dataset with 3644 pathological images for breast IDC grading. This dataset is currently the largest publicly available breast IDC grading dataset and can serve as a benchmark to facilitate a broader study of breast IDC grading.
A novel framework for esophageal cancer grading: combining CT imaging, radiomics, reproducibility, and deep learning insights
Objective This study aims to create a reliable framework for grading esophageal cancer. The framework combines feature extraction, deep learning with attention mechanisms, and radiomics to ensure accuracy, interpretability, and practical use in tumor analysis. Materials and methods This retrospective study used data from 2,560 esophageal cancer patients across multiple clinical centers, collected from 2018 to 2023. The dataset included CT scan images and clinical information, representing a variety of cancer grades and types. Standardized CT imaging protocols were followed, and experienced radiologists manually segmented the tumor regions. Only high-quality data were used in the study. A total of 215 radiomic features were extracted using the SERA platform. The study used two deep learning models—DenseNet121 and EfficientNet-B0—enhanced with attention mechanisms to improve accuracy. A combined classification approach used both radiomic and deep learning features, and machine learning models like Random Forest, XGBoost, and CatBoost were applied. These models were validated with strict training and testing procedures to ensure effective cancer grading. Results This study analyzed the reliability and performance of radiomic and deep learning features for grading esophageal cancer. Radiomic features were classified into four reliability levels based on their ICC (Intraclass Correlation) values. Most of the features had excellent (ICC > 0.90) or good (0.75 < ICC ≤ 0.90) reliability. Deep learning features extracted from DenseNet121 and EfficientNet-B0 were also categorized, and some of them showed poor reliability. The machine learning models, including XGBoost and CatBoost, were tested for their ability to grade cancer. XGBoost with Recursive Feature Elimination (RFE) gave the best results for radiomic features, with an AUC (Area Under the Curve) of 91.36%. 
For deep learning features, XGBoost with Principal Component Analysis (PCA) gave the best results using DenseNet121, while CatBoost with RFE performed best with EfficientNet-B0, achieving an AUC of 94.20%. Combining radiomic and deep features led to significant improvements, with XGBoost achieving the highest AUC of 96.70%, accuracy of 96.71%, and sensitivity of 95.44%. The combination of both DenseNet121 and EfficientNet-B0 models in ensemble models achieved the best overall performance, with an AUC of 95.14% and accuracy of 94.88%. Conclusions This study improves esophageal cancer grading by combining radiomics and deep learning. It enhances diagnostic accuracy, reproducibility, and interpretability, while also helping in personalized treatment planning through better tumor characterization. Clinical trial number Not applicable.
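The study above bins radiomic features into four reliability levels by their ICC values. A minimal sketch of that binning; only the "excellent" and "good" cut-offs are stated in the abstract, so the lower boundaries follow the widely used Koo & Li convention as an assumption, and the feature names are hypothetical:

```python
def icc_reliability(icc):
    """Bin an ICC value into one of four reliability levels.

    Only the 'excellent' (ICC > 0.90) and 'good' (0.75 < ICC <= 0.90)
    cut-offs come from the abstract; the 'moderate'/'poor' boundary at
    0.50 is an assumption following the common Koo & Li convention.
    """
    if icc > 0.90:
        return "excellent"
    if icc > 0.75:
        return "good"
    if icc >= 0.50:
        return "moderate"
    return "poor"

# Hypothetical feature names and ICC values, for illustration only.
features = {"shape_sphericity": 0.97, "glcm_contrast": 0.82, "firstorder_skewness": 0.41}
levels = {name: icc_reliability(v) for name, v in features.items()}
```

Features falling into the lower bins would typically be dropped before feeding the remainder to a classifier such as XGBoost with RFE.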
Validation of the 18-gene classifier as a prognostic biomarker of distant metastasis in breast cancer
We validated an 18-gene classifier (GC) initially developed to predict local/regional recurrence after mastectomy in estimating distant metastasis risk. The 18-gene scoring algorithm defines scores as: <21, low risk; ≥21, high risk. Six hundred eighty-three patients with primary operable breast cancer and fresh frozen tumor tissues available were included. The primary outcome was the 5-year probability of freedom from distant metastasis (DMFP). Two external datasets were used to test the predictive accuracy of 18-GC. The 5-year rates of DMFP for patients classified as low-risk (n = 146, 21.7%) and high-risk (n = 537, 78.6%) were 96.2% (95% CI, 91.1%-98.8%) and 80.9% (74.6%-81.9%), respectively (median follow-up interval, 71.8 months). The 5-year rates of DMFP of the low-risk group in stage I (n = 62, 35.6%), stage II (n = 66, 20.1%), and stage III (n = 18, 10.3%) were 100%, 94.2% (78.5%-98.5%), and 90.9% (50.8%-98.7%), respectively. Multivariate analysis revealed that 18-GC is an independent prognostic factor of distant metastasis (adjusted hazard ratio, 5.1; 95% CI, 1.8-14.1; p = 0.0017) for scores of ≥21. External validation showed that the 5-year rate of DMFP in the low- and high-risk patients was 94.1% (82.9%-100%) and 80.3% (70.7%-89.9%, p = 0.06) in a Singapore dataset, and 89.5% (81.9%-94.1%) and 73.6% (67.2%-79.0%, p = 0.0039) in the GEO-GSE20685 dataset, respectively. In conclusion, 18-GC is a viable prognostic biomarker for breast cancer to estimate distant metastasis risk.
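The 18-gene classifier's scoring rule above is simple enough to state directly in code. A minimal sketch of the stratification as described (score < 21 low risk, ≥ 21 high risk); the function name and the toy cohort are illustrative, not from the study:

```python
def risk_group(score):
    """18-gene classifier risk stratification as stated in the abstract:
    scores below 21 are low risk, scores of 21 or more are high risk.
    The function name itself is illustrative."""
    return "high" if score >= 21 else "low"

# A toy cohort of scores (not the study's data).
scores = [12, 19, 21, 34, 27]
high_risk = sum(1 for s in scores if risk_group(s) == "high")
```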
Radiologist-like artificial intelligence for grade group prediction of radical prostatectomy for reducing upgrading and downgrading from biopsy
To reduce upgrading and downgrading between needle biopsy (NB) and radical prostatectomy (RP) by predicting patient-level Gleason grade groups (GGs) of RP to avoid over- and under-treatment. In this study, we retrospectively enrolled 575 patients from two medical institutions. All patients received prebiopsy magnetic resonance (MR) examinations, and pathological evaluations of NB and RP were available. A total of 12,708 slices of original male pelvic MR images (T2-weighted sequences with fat suppression, T2WI-FS) containing 5405 slices of prostate tissue, and 2,753 tumor annotations (only T2WI-FS were annotated using RP pathological sections as ground truth) were analyzed for the prediction of patient-level RP GGs. We present a prostate cancer (PCa) framework, PCa-GGNet, that mimics radiologist behavior based on deep reinforcement learning (DRL). We developed and validated it using a multi-center format. Accuracy (ACC) of our model outweighed NB results (0.815 [95% confidence interval (CI): 0.773-0.857] vs. 0.437 [95% CI: 0.335-0.539]). The PCa-GGNet scored higher (kappa value: 0.761) than NB (kappa value: 0.289). Our model significantly reduced the upgrading rate by 27.9% (p < 0.001) and downgrading rate by 6.4% (p = 0.029). DRL using MRI can be applied to the prediction of patient-level RP GGs to reduce upgrading and downgrading from biopsy, potentially improving the clinical benefits of prostate cancer oncologic controls.
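Results like the patient-level accuracy with a 95% confidence interval reported above are commonly computed with a normal approximation. A minimal sketch, assuming the Wald interval (the abstract does not state which interval method was used, and the counts below are illustrative):

```python
import math

def accuracy_ci(correct, total, z=1.96):
    """Wald (normal-approximation) 95% confidence interval for accuracy.

    A sketch of how such intervals are commonly computed; the paper does
    not say which method produced its reported CIs, and the counts used
    below are illustrative, not the study's."""
    acc = correct / total
    half = z * math.sqrt(acc * (1 - acc) / total)
    return acc, max(0.0, acc - half), min(1.0, acc + half)

est = accuracy_ci(87, 100)  # e.g. 87 correct patient-level predictions
```

For small samples or accuracies near 0 or 1, the Wilson interval is usually preferred over the Wald approximation.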
A novel deep learning-based technique for detecting prostate cancer in MRI images
In the Western world, prostate cancer is a major cause of death in males. Magnetic Resonance Imaging (MRI) is widely used for the detection of prostate cancer, which makes it an active area of research. The proposed method uses a deep learning framework for the detection of prostate cancer based on the concept of Gleason grading of histological images. A 3D convolutional neural network is used to locate the affected region and to predict it with the help of the epithelial and Gleason grading networks. The proposed model achieves state-of-the-art performance in detecting epithelium and the Gleason score simultaneously. Performance was measured by considering all MRI slices, MRI volumes within the test fold, and segmentation of prostate cancer, with an endorectal coil used to collect the prostate MRI images for the 3D CNN network. Experimentally, the proposed deep learning approach achieved an overall specificity of 85%, an accuracy of 87%, and a sensitivity of 89% at the patient level on the targeted MRI images of the SPIE-AAPM-NCI Prostate dataset challenge.
Automatic Mitosis and Nuclear Atypia Detection for Breast Cancer Grading in Histopathological Images using Hybrid Machine Learning Technique
Invasive breast cancer is a complex global health issue and the leading cause of women's mortality. Multiclassification in breast cancer, especially with high-resolution images, presents unique challenges. Clinical diagnosis relies on the cancer's pathological stage, requiring precise segmentation and adjustments. Complex structural changes during slide preparation and inconsistent image magnifications further complicate classification. To address these challenges, we propose a hybrid machine learning framework for accurate breast cancer detection and grading using large-scale pathological images. Our approach includes an improved Non-restricted Boltzmann Deep Belief Neural Network for nuclei segmentation, followed by feature extraction and novel feature selection using the Giraffe Kicking Optimization algorithm to mitigate overfitting. We implement an Optimal Kernel layer-based Support Vector Machine classifier to identify mitotic cells and nuclear atypia, using the Nottingham Grading System. Validation on the MITOSIS-ATYPIA-14 database demonstrates the framework's effectiveness, with performance metrics including accuracy, precision, recall, specificity, and F-measure. This approach addresses the complexities of breast cancer classification and grading in a streamlined manner, enhancing diagnostic accuracy and prognosis prediction.
Pancreatic cancer grading in pathological images using deep learning convolutional neural networks [version 2; peer review: 2 approved]
Background: Pancreatic cancer is one of the deadliest forms of cancer. The cancer grades define how aggressively the cancer will spread and give indication for doctors to make proper prognosis and treatment. The current method of pancreatic cancer grading, by means of manual examination of the cancerous tissue following a biopsy, is time consuming and often results in misdiagnosis and thus incorrect treatment. This paper presents an automated grading system for pancreatic cancer from pathology images developed by comparing deep learning models on two different pathological stains. Methods: A transfer-learning technique was adopted by testing the method on 14 different ImageNet pre-trained models. The models were fine-tuned to be trained with our dataset. Results: From the experiment, DenseNet models appeared to be the best at classifying the validation set with up to 95.61% accuracy in grading pancreatic cancer despite the small sample set. Conclusions: To the best of our knowledge, this is the first work in grading pancreatic cancer based on pathology images. Previous works have either focused only on detection (benign or malignant), or on radiology images (computerized tomography [CT], magnetic resonance imaging [MRI] etc.). The proposed system can be very useful to pathologists in facilitating an automated or semi-automated cancer grading system, which can address the problems found in manual grading.
Accuracy of pre-operative hysteroscopic guided biopsy for predicting final pathology in uterine malignancies
Purpose To evaluate concordance (C) between pre-operative hysteroscopic-directed sampling and final pathology in uterine cancers. Methods A retrospective cross-sectional evaluation of prospectively collected data of women who underwent hysterectomy for uterine malignancies after a previous hysteroscopic-guided biopsy was performed. Diagnostic concordance between pre-operative (hysteroscopic biopsy) and postoperative (uterine specimen) histology was evaluated. In endometrioid endometrial cancer cases, Kappa (k) statistics were applied to evaluate agreement for grading (G) between the preoperative and final pathology. Results A total of 101 hysterectomies for uterine malignancies were evaluated. There were 23 non-endometrioid cancers: 7 serous (C: 5/7, 71.4%); 10 carcinosarcomas (C: 7/10, 70%; in the remaining 3 cases only the epithelial component was diagnosed); 3 clear cell (C: 3/3, 100%); 3 sarcomas (C: 3/3, 100%). In 78 cases an endometrioid endometrial cancer was found. In 63 cases there was a histological C (63/78, 80.8%) between hysteroscopic-guided biopsy and final pathology, while in 15 cases (19.2%) only hyperplasia (with/without atypia) was found preoperatively. Overall accuracy to detect endometrial cancer was 80.2%. In 50 out of 63 endometrial cancers (79.4%) grading was concordant. The overall level of agreement between preoperative and postoperative grading was "substantial" according to Kappa (k) statistics (k 0.64; 95% CI: 0.449–0.83; p < 0.001), as well as for G1 (0.679; 95% CI: 0.432–0.926; p < 0.001) and G3 (0.774; 95% CI: 0.534–1; p < 0.001), while for G2 (0.531; 95% CI: 0.286–0.777; p < 0.001) it was moderate. Conclusions In our series we found an 80% C between pre-operative hysteroscopic-guided biopsy and final pathology in uterine malignancies. Moreover, hysteroscopic biopsy accurately predicted endometrial cancer in 80% of cases and "substantially" predicted histological grading.
Hysteroscopic-guided uterine sampling could be a useful tool to tailor treatment in patients with uterine malignancies.
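Several of the results above (the hysteroscopic-biopsy and PCa-GGNet studies) report inter-rater agreement as Cohen's kappa. A minimal, self-contained computation from a confusion matrix; the example grading matrix is illustrative, not the study's data:

```python
def cohens_kappa(confusion):
    """Cohen's kappa for agreement between two raters (e.g. pre-operative
    vs. final grading). `confusion[i][j]` counts cases graded i by the
    first rater and j by the second."""
    n = sum(sum(row) for row in confusion)
    # Observed agreement: fraction of cases on the diagonal.
    p_o = sum(confusion[i][i] for i in range(len(confusion))) / n
    # Expected agreement under independence of the two raters.
    p_e = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 3x3 G1/G2/G3 grading confusion, for illustration only.
grades = [[20, 4, 1],
          [3, 15, 5],
          [0, 4, 11]]
k = cohens_kappa(grades)  # falls in the 'moderate'/'substantial' range
```

By the usual Landis & Koch interpretation, kappa values of 0.41-0.60 are "moderate" and 0.61-0.80 "substantial", matching the terminology used in the abstract above.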