507 results for "Navab, Nassir"
Learning 3D Semantic Scene Graphs with Instance Embeddings
A 3D scene is more than the geometry and classes of the objects it comprises. An essential aspect beyond object-level perception is the scene context, described as a dense semantic network of interconnected nodes. Scene graphs have become a common representation to encode the semantic richness of images, where nodes in the graph are object entities connected by edges, the so-called relationships. Such graphs have been shown to be useful in achieving state-of-the-art performance in image captioning, visual question answering, and image generation or editing. While scene graph prediction methods have so far focused on images, we instead propose a novel neural network architecture for 3D data, where the aim is to learn to regress semantic graphs from a given 3D scene. With this work, we go beyond object-level perception by exploring relations between object entities. Our method learns instance embeddings alongside a scene segmentation and is able to predict semantics for object nodes and edges. We leverage 3DSSG, a large-scale dataset based on 3RScan that features scene graphs of changing 3D scenes. Finally, we show the effectiveness of graphs as an intermediate representation on a retrieval task.
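The general idea of predicting a semantic graph from per-instance embeddings can be illustrated with a small sketch: classify each embedding into an object class and classify each ordered pair of embeddings into a relationship class. This is only an illustrative head under assumed layer sizes and class counts, not the architecture from the paper.

```python
# Illustrative scene-graph prediction head (not the paper's architecture):
# per-instance embeddings -> node class logits and pairwise relationship logits.
import torch
import torch.nn as nn

class SceneGraphHead(nn.Module):
    def __init__(self, embed_dim=128, num_obj_classes=160, num_rel_classes=27):
        super().__init__()
        self.node_mlp = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.ReLU(),
            nn.Linear(256, num_obj_classes))
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * embed_dim, 256), nn.ReLU(),
            nn.Linear(256, num_rel_classes))

    def forward(self, embeddings):
        # embeddings: (N, embed_dim), one vector per object instance
        node_logits = self.node_mlp(embeddings)
        n = embeddings.shape[0]
        src = embeddings.unsqueeze(1).expand(n, n, -1)   # subject features
        dst = embeddings.unsqueeze(0).expand(n, n, -1)   # object features
        pair = torch.cat([src, dst], dim=-1)             # (N, N, 2*embed_dim)
        edge_logits = self.edge_mlp(pair)                # (N, N, num_rel_classes)
        return node_logits, edge_logits

head = SceneGraphHead()
node_logits, edge_logits = head(torch.randn(5, 128))     # 5 toy instances
```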
Applicability of augmented reality in orthopedic surgery – A systematic review
Background: Computer-assisted solutions are continuously changing surgical practice. One of the most disruptive technologies among computer-integrated surgical techniques is Augmented Reality (AR). While Augmented Reality is increasingly used in several medical specialties, its potential benefit in orthopedic surgery is not yet clear. The purpose of this article is to provide a systematic review of the current state of knowledge and the applicability of AR in orthopedic surgery. Methods: A systematic review of the current literature was performed to establish the state of knowledge and applicability of AR in orthopedic surgery. Three databases were searched systematically: PubMed, Cochrane Library, and Web of Science. The review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and has been published and registered in the international prospective register of systematic reviews (PROSPERO). Results: 31 studies and reports were included and classified into the following categories: Instrument/Implant Placement, Osteotomies, Tumor Surgery, Trauma, and Surgical Training and Education. Quality assessment could be performed for 18 studies. Among the clinical studies, there were six case series with an average score of 90% and one case report, which scored 81% according to the Joanna Briggs Institute Critical Appraisal Checklist (JBI CAC). The 11 cadaveric studies scored 81% according to the QUACS scale (Quality Appraisal for Cadaveric Studies). Conclusion: This manuscript provides 1) a summary of the current state of knowledge and research on Augmented Reality in orthopedic surgery presented in the literature, and 2) a discussion by the authors presenting the key remarks required for seamless integration of Augmented Reality into future surgical practice. Trial registration: PROSPERO registration number CRD42019128569.
SoftPool++: An Encoder–Decoder Network for Point Cloud Completion
We propose a novel convolutional operator for the task of point cloud completion. One striking characteristic of our approach is that, in contrast to related work, it does not require any max-pooling or voxelization operation. Instead, the proposed operator used to learn the point cloud embedding in the encoder extracts permutation-invariant features from the point cloud via a soft-pooling of feature activations, which preserve fine-grained geometric details. These features are then passed on to a decoder architecture. Due to the compression in the encoder, a typical limitation of this type of architecture is that it tends to lose parts of the input shape structure. We propose to overcome this limitation by using skip connections specifically devised for point clouds, which establish links between corresponding layers of the encoder and decoder. As part of these connections, we introduce a transformation matrix that projects the features from the encoder to the decoder and vice versa. Quantitative and qualitative results on the task of object completion from partial scans on the ShapeNet dataset show that our approach achieves state-of-the-art performance in shape completion at both low and high resolutions.
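A minimal sketch of what permutation-invariant soft-pooling means in practice is a softmax-weighted aggregation of per-point feature activations; this is only the underlying idea, not the exact SoftPool++ operator.

```python
# Softmax-weighted pooling over points: the result does not depend on point order.
import torch

def soft_pool(features):
    """features: (num_points, channels) per-point activations."""
    weights = torch.softmax(features, dim=0)   # per-channel weights over points
    return (weights * features).sum(dim=0)     # (channels,) pooled descriptor

x = torch.randn(2048, 256)                     # toy point-cloud features
pooled = soft_pool(x)
perm = torch.randperm(x.shape[0])
assert torch.allclose(pooled, soft_pool(x[perm]), atol=1e-5)  # order-invariant
```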
A BaSiC tool for background and shading correction of optical microscopy images
Quantitative analysis of bioimaging data is often skewed by both shading in space and background variation in time. We introduce BaSiC, an image correction method based on low-rank and sparse decomposition that solves both issues. In comparison to existing shading correction tools, BaSiC achieves high accuracy with significantly fewer input images, works for diverse imaging conditions, and is robust against artefacts. Moreover, it can correct temporal drift in time-lapse microscopy data and thus improve continuous single-cell quantification. BaSiC requires no manual parameter setting and is available as a Fiji/ImageJ plugin. Accurate quantification of bioimaging data is often confounded by uneven illumination (shading) in space and background variation in time. Here the authors present BaSiC, a Fiji plugin that solves both issues while requiring fewer input images and being more robust to artefacts than existing shading correction tools.
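Shading correction of this kind is commonly modeled as measured = true × flat-field + dark-field. The sketch below only shows how estimated fields would be applied to an image, under the assumption that the flat-field and dark-field have already been recovered; the low-rank and sparse estimation itself is what the plugin provides and is not reproduced here.

```python
# Applying a flat-field / dark-field correction, given already-estimated fields.
import numpy as np

def shading_correct(image, flatfield, darkfield):
    """image, flatfield, darkfield: 2D arrays of identical shape."""
    return (image - darkfield) / np.clip(flatfield, 1e-6, None)

rng = np.random.default_rng(0)
img = rng.uniform(100, 200, size=(512, 512))   # toy raw frame
flat = np.ones((512, 512))                     # placeholder flat-field estimate
dark = np.zeros((512, 512))                    # placeholder dark-field estimate
corrected = shading_correct(img, flat, dark)
```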
Real-time acoustic sensing and artificial intelligence for error prevention in orthopedic surgery
In this work, we developed and validated a computer method capable of robustly detecting drill breakthrough events, and we show the potential of deep learning-based acoustic sensing for surgical error prevention. Bone drilling is an essential part of orthopedic surgery and carries a high risk of injuring vital structures when over-drilling into adjacent soft tissue. We acquired a dataset consisting of structure-borne audio recordings of drill breakthrough sequences with custom piezo contact microphones in an experimental setup using six human cadaveric hip specimens. We then developed a deep learning-based method for the fast and accurate automated detection of drill breakthrough events and evaluated the proposed network with respect to breakthrough detection sensitivity and latency. The best-performing variant yields a sensitivity of 93.64 ± 2.42% for drill breakthrough detection with a total execution time of 139.29 ms. The validation and performance evaluation of our solution demonstrate promising results for surgical error prevention through automated acoustic-based drill breakthrough detection in a realistic experiment, while being multiple times faster than a surgeon's reaction time. Furthermore, our proposed method represents an important step towards the translation of acoustic-based breakthrough detection to surgical use.
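As a rough illustration of such a sensing pipeline (not the authors' network or thresholds), audio can be split into short windows, each window scored by a trained classifier, and the compute latency of a detection measured. The `model` callable, window length, and threshold below are placeholders.

```python
# Sliding-window scoring of structure-borne audio with a placeholder classifier.
import time
import numpy as np

def detect_breakthrough(audio, sample_rate, model, win_ms=25, hop_ms=10, thr=0.5):
    win = int(sample_rate * win_ms / 1000)
    hop = int(sample_rate * hop_ms / 1000)
    start = time.perf_counter()
    for i in range(0, len(audio) - win + 1, hop):
        prob = model(audio[i:i + win])            # probability of breakthrough
        if prob >= thr:
            latency_ms = (time.perf_counter() - start) * 1000
            return i / sample_rate, latency_ms    # event time (s), compute latency (ms)
    return None, None

# Toy usage with a stand-in "model" (an energy threshold, purely illustrative).
audio = np.random.randn(44100)                    # 1 s of noise at 44.1 kHz
event, latency = detect_breakthrough(
    audio, 44100, model=lambda frame: float(np.abs(frame).mean() > 2.0))
```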
Modern machine-learning can support diagnostic differentiation of central and peripheral acute vestibular disorders
Background: Diagnostic classification of central vs. peripheral etiologies in acute vestibular disorders remains a challenge in the emergency setting. Novel machine-learning methods may help to support diagnostic decisions. In the current study, we tested the performance of standard and machine-learning approaches in the classification of consecutive patients with acute central or peripheral vestibular disorders. Methods: 40 patients with vestibular stroke (19 with and 21 without acute vestibular syndrome (AVS), defined by the presence of spontaneous nystagmus) and 68 patients with peripheral AVS due to vestibular neuritis were recruited in the emergency department, in the context of the prospective EMVERT trial (EMergency VERTigo). All patients received a standardized neuro-otological examination, including video-oculography and posturography, in the acute symptomatic stage and an MRI within 7 days after symptom onset. Diagnostic performance of state-of-the-art scores, such as HINTS (Head Impulse, gaze-evoked Nystagmus, Test of Skew) and ABCD2 (Age, Blood pressure, Clinical features, Duration, Diabetes), for the differentiation of vestibular stroke vs. peripheral AVS was compared to various machine-learning approaches: (i) linear logistic regression (LR), (ii) non-linear random forest (RF), (iii) artificial neural network, and (iv) geometric deep learning (Single/MultiGMC). A prospective classification was simulated by ten-fold cross-validation. We analyzed whether machine-estimated feature importances correlate with clinical experience. Results: Machine-learning methods (e.g., MultiGMC) outperform univariate scores, such as HINTS or ABCD2, in differentiating all vestibular strokes vs. peripheral AVS (MultiGMC area under the curve (AUC): 0.96 vs. HINTS/ABCD2 AUC: 0.71/0.58). HINTS performed similarly to MultiGMC for vestibular stroke with AVS (AUC: 0.86), but more poorly for vestibular stroke without AVS (AUC: 0.54). Machine-learning models learn to put different weights on particular features, each of which is relevant from a clinical viewpoint. Established non-linear machine-learning methods like RF and linear methods like LR are less powerful classification models (AUC: 0.89 vs. 0.62). Conclusions: Established clinical scores (such as HINTS) provide a valuable baseline assessment for stroke detection in acute vestibular syndromes. In addition, machine-learning methods may have the potential to increase sensitivity and selectivity in the establishment of a correct diagnosis.
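The kind of ten-fold cross-validated AUC comparison described here can be sketched with scikit-learn baselines. The data below are random placeholders rather than features from the EMVERT cohort, and the geometric deep-learning models are not reproduced.

```python
# Ten-fold cross-validated AUC for two baseline classifiers on placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(108, 20))        # 108 patients, 20 hypothetical exam features
y = rng.integers(0, 2, size=108)      # 1 = vestibular stroke, 0 = peripheral AVS

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("RF", RandomForestClassifier(n_estimators=200, random_state=0))]:
    auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.2f} ± {auc.std():.2f}")
```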
Classification of Polar Maps from Cardiac Perfusion Imaging with Graph-Convolutional Neural Networks
Myocardial perfusion imaging is a non-invasive imaging technique commonly used for the diagnosis of Coronary Artery Disease and is based on the injection of radiopharmaceutical tracers into the bloodstream. The patient’s heart is imaged while at rest and under stress to determine its capacity to react to the imposed challenge. Assessment of the imaging data is commonly performed by visual inspection of polar maps showing the tracer uptake in a compact, two-dimensional representation of the left ventricle. This article presents a method for automatic classification of polar maps based on graph-convolutional neural networks. Furthermore, it evaluates how well localization techniques developed for standard convolutional neural networks can be used for the localization of pathological segments with respect to clinically relevant areas. The method is evaluated using 946 labeled datasets and compared quantitatively to three other neural-network-based methods. The proposed model achieves an agreement with the human observer on 89.3% of rest test polar maps and on 91.1% of stress test polar maps. Localization performed on a fine 17-segment division of the polar maps achieves an agreement of 83.1% with the human observer, while localization on a coarse 3-segment division based on the vessel beds of the left ventricle has an agreement of 78.8% with the human observer. Our method could thus assist the decision-making process of physicians when analyzing polar map data obtained from myocardial perfusion images.
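A graph-convolutional layer over the 17-segment polar map can be sketched as H' = ReLU(Â H W) with a symmetrically normalized adjacency matrix. The adjacency used below is a random placeholder rather than the anatomical segment graph, and this is not the paper's network.

```python
# One graph-convolution step over 17 polar-map segments with toy features.
import numpy as np

def gcn_layer(H, A, W):
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

n_seg, in_feat, out_feat = 17, 2, 8                   # e.g. rest/stress uptake per segment
rng = np.random.default_rng(0)
A = (rng.random((n_seg, n_seg)) > 0.7).astype(float)  # placeholder connectivity
A = np.maximum(A, A.T)                                # make it symmetric
H = rng.normal(size=(n_seg, in_feat))
W = rng.normal(size=(in_feat, out_feat))
H_next = gcn_layer(H, A, W)                           # (17, 8) updated segment features
```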
Automatic Segmentation of Kidneys using Deep Learning for Total Kidney Volume Quantification in Autosomal Dominant Polycystic Kidney Disease
Autosomal Dominant Polycystic Kidney Disease (ADPKD) is the most common inherited disorder of the kidneys. It is characterized by enlargement of the kidneys caused by progressive development of renal cysts, and thus assessment of total kidney volume (TKV) is crucial for studying disease progression in ADPKD. However, automatic segmentation of polycystic kidneys is a challenging task due to severe alteration in morphology caused by non-uniform cyst formation and the presence of adjacent liver cysts. In this study, an automated segmentation method based on deep learning is proposed for TKV computation on a computed tomography (CT) dataset of ADPKD patients exhibiting mild to moderate or severe renal insufficiency. The proposed method was trained (n = 165) and tested (n = 79) on a wide range of TKV (321.2–14,670.7 mL), achieving an overall mean Dice Similarity Coefficient of 0.86 ± 0.07 (mean ± SD) between automated and manual segmentations from clinical experts and a mean correlation coefficient (ρ) of 0.98 (p < 0.001) for segmented kidney volume measurements in the entire test set. Our method facilitates fast and reproducible measurements of kidney volumes in agreement with manual segmentations from clinical experts.
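The Dice Similarity Coefficient reported above compares two binary masks of the same shape; a minimal sketch of its computation on toy volumes follows.

```python
# Dice Similarity Coefficient between an automated and a manual binary mask.
import numpy as np

def dice(auto_mask, manual_mask, eps=1e-8):
    auto_mask = auto_mask.astype(bool)
    manual_mask = manual_mask.astype(bool)
    intersection = np.logical_and(auto_mask, manual_mask).sum()
    return 2.0 * intersection / (auto_mask.sum() + manual_mask.sum() + eps)

a = np.zeros((64, 64, 64), dtype=bool); a[10:40, 10:40, 10:40] = True  # toy "automated"
b = np.zeros((64, 64, 64), dtype=bool); b[12:42, 10:40, 10:40] = True  # toy "manual"
print(f"Dice = {dice(a, b):.3f}")
```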
QuickNAT: A fully convolutional network for quick and accurate segmentation of neuroanatomy
Whole brain segmentation from structural magnetic resonance imaging (MRI) is a prerequisite for most morphological analyses, but it is computationally intense and can therefore delay the availability of image markers after scan acquisition. We introduce QuickNAT, a fully convolutional, densely connected neural network that segments an MRI brain scan in 20 s. To enable training of the complex network, which has millions of learnable parameters, with limited annotated data, we propose to first pre-train on auxiliary labels created from existing segmentation software. Subsequently, the pre-trained model is fine-tuned on manual labels to rectify errors in the auxiliary labels. With this learning strategy, we are able to use large neuroimaging repositories without manual annotations for training. In an extensive set of evaluations on eight datasets that cover a wide age range, pathology, and different scanners, we demonstrate that QuickNAT achieves superior segmentation accuracy and reliability compared to state-of-the-art methods, while being orders of magnitude faster. The speed-up facilitates processing of large data repositories and supports translation of imaging biomarkers by making them available within seconds for fast clinical decision making. Highlights:
  • Introduces QuickNAT, a deep learning-based whole brain segmentation tool that processes a 3D T1-weighted MRI brain scan in 20 s.
  • The high segmentation accuracy of QuickNAT was evaluated on five benchmark datasets covering a wide age range, subjects with different pathologies (AD, MCI, and CN), and different scanners (1.5T and 3.0T).
  • QuickNAT performs well in test-retest and multi-center experiments and can therefore be used effectively for longitudinal studies.
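The two-stage learning strategy described above (pre-training on auxiliary, software-generated labels, then fine-tuning on manual labels) can be sketched as two passes of the same training loop over different data. The model, loaders, epoch counts, and learning rates below are placeholders, not QuickNAT's actual configuration.

```python
# Pre-train on auxiliary labels, then fine-tune the same weights on manual labels.
import torch
import torch.nn as nn

def run_stage(model, loader, lr, epochs):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for scans, labels in loader:          # scans: (B, 1, H, W), labels: (B, H, W)
            optimizer.zero_grad()
            loss = criterion(model(scans), labels)
            loss.backward()
            optimizer.step()

model = nn.Conv2d(1, 28, kernel_size=1)       # stand-in for a segmentation network
aux_loader = [(torch.randn(2, 1, 64, 64), torch.randint(0, 28, (2, 64, 64)))]
manual_loader = [(torch.randn(2, 1, 64, 64), torch.randint(0, 28, (2, 64, 64)))]
run_stage(model, aux_loader, lr=1e-3, epochs=1)     # stage 1: auxiliary labels
run_stage(model, manual_loader, lr=1e-4, epochs=1)  # stage 2: fine-tune on manual labels
```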
A lightweight neural network with multiscale feature enhancement for liver CT segmentation
Segmentation of abdominal Computed Tomography (CT) scans is essential for analyzing, diagnosing, and treating visceral organ diseases (e.g., hepatocellular carcinoma). This paper proposes a novel neural network (Res-PAC-UNet) that employs a fixed-width residual UNet backbone and Pyramid Atrous Convolutions, providing a low-disk-utilization method for precise liver CT segmentation. The proposed network is trained on the Medical Segmentation Decathlon dataset using a modified surface loss function. We additionally evaluate its quantitative and qualitative performance: the Res16-PAC-UNet achieves a Dice coefficient of 0.950 ± 0.019 with fewer than half a million parameters, while the Res32-PAC-UNet obtains a Dice coefficient of 0.958 ± 0.015 with an acceptable parameter count of approximately 1.2 million.
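A pyramid atrous convolution block, in its generic form, runs parallel dilated 3×3 convolutions and concatenates their outputs; the sketch below uses illustrative channel counts and dilation rates rather than the Res-PAC-UNet settings.

```python
# Generic pyramid atrous (dilated) convolution block with concatenated branches.
import torch
import torch.nn as nn

class PyramidAtrousConv(nn.Module):
    def __init__(self, in_ch, branch_ch, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates])

    def forward(self, x):
        # every branch keeps the spatial size; outputs are stacked along channels
        return torch.cat([branch(x) for branch in self.branches], dim=1)

block = PyramidAtrousConv(in_ch=16, branch_ch=8)
out = block(torch.randn(1, 16, 128, 128))   # -> (1, 32, 128, 128)
```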