Catalogue Search | MBRL
Explore the vast range of titles available.
32 result(s) for "Ungi, Tamas"
The use of 3D digital anatomy model improves the communication with patients presenting with prostate disease: The first experience in Senegal
by Halle, Michael; Carreras, Nayra Pumar; Fichtinger, Gabor
in Care and treatment; Communication; Communication in medicine
2022
Objectives
We hypothesized that the use of an interactive 3D digital anatomy model can improve the quality of communication with patients about prostate disease.
Methods
A 3D digital anatomy model of the prostate was created from an MRI scan, according to McNeal's zonal anatomy classification. During urological consultation, the physician presented the digital model on a computer and used it to explain the disease and available management options. The experience of patients and physicians was recorded in questionnaires.
Results
The main findings were as follows: 308 patients and 47 physicians participated in the study. In the patient group, 96.8% reported an improved level of understanding of prostate disease and 90.6% reported an improved ability to ask questions during consultation. Among the physicians, 91.5% reported improved communication skills and 100% reported an improved ability to obtain patient consent for subsequent treatment. At the same time, 76.6% of physicians noted that using the computer model lengthened the consultation.
Conclusion
This exploratory study found that the use of a 3D digital anatomy model in urology consultations was received overwhelmingly favorably by both patients and physicians, and it was perceived to improve the quality of communication between patient and physician. A randomized study is needed to confirm the preliminary findings and further quantify the improvements in the quality of patient-physician communication.
Journal Article
Sensor-Based Automated Detection of Electrosurgical Cautery States
by Jamzad, Amoon; Asselin, Mark; Fichtinger, Gabor
in Ablation; automated electrosurgical cautery; Automation
2022
In computer-assisted surgery, it is typically necessary to detect when the tool comes into contact with the patient. In activated electrosurgery, this is known as the energy event. By continuously tracking the electrosurgical tools' location with a navigation system, energy events can help determine the locations of sensor-classified tissues. Our objective was to detect the energy event and determine the settings of the electrosurgical cautery robustly and automatically from sensor data. This study aims to demonstrate the feasibility of using the cautery state to detect surgical incisions without disrupting the surgical workflow. We detected current changes in the wires of the cautery device and grounding pad using non-invasive current sensors and an oscilloscope. Open-source software was implemented to apply machine learning to the sensor data to detect energy events and cautery settings. Our methods classified each cautery state with an average accuracy of 95.56% across the different tissue types and energy level parameters altered by surgeons during an operation. Our results demonstrate the feasibility of automatically identifying energy events during surgical incisions, which could be an important safety feature in robotic and computer-integrated surgery. This study provides a key step towards locating tissue classifications during breast cancer operations and reducing the rate of positive margins.
Journal Article
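The sensor-based state detection described in the abstract above can be illustrated with a minimal sketch. The per-window features (mean and peak current), the state labels, and the nearest-centroid classifier below are illustrative assumptions for demonstration, not the paper's actual feature set or model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-window features from the current sensors: mean and
# peak current. The state labels below are illustrative, not the
# paper's actual class set.
def windows(level, n):
    mean = rng.normal(level, 0.05, size=(n, 1))
    peak = mean + rng.exponential(0.1, size=(n, 1))
    return np.hstack([mean, peak])

states = {"idle": 0.0, "cut": 1.0, "coag": 2.0}
train = {s: windows(level, 200) for s, level in states.items()}

# Nearest-centroid classifier: one centroid per cautery state.
centroids = {s: x.mean(axis=0) for s, x in train.items()}

def classify(window):
    return min(centroids, key=lambda s: np.linalg.norm(window - centroids[s]))

print(classify(np.array([0.02, 0.08])))  # a low-current window classifies as "idle"
```

In practice a richer feature set and classifier would be used, but the pipeline shape (window the sensor stream, extract features, classify the cautery state) is the same.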
Bridging 3D Slicer and ROS2 for Image-Guided Robotic Interventions
2022
Developing image-guided robotic systems requires access to flexible, open-source software. For image guidance, the open-source medical imaging platform 3D Slicer is one of the most widely adopted tools for research and prototyping. Similarly, for robotics, the open-source middleware suite Robot Operating System (ROS) is the standard development framework. In the past, several “ad hoc” attempts have been made to bridge the two tools; however, they all rely on middleware and custom interfaces, and none has succeeded in bridging access to the full suite of tools provided by ROS or 3D Slicer. Therefore, in this paper, we present the SlicerROS2 module, which was designed for the direct use of ROS2 packages and libraries within 3D Slicer. The module was developed to enable real-time visualization of robots, accommodate different robot configurations, and facilitate data transfer in both directions (between ROS and Slicer). We demonstrate the system on multiple robots with different configurations, evaluate the system's performance, and discuss an image-guided robotic intervention that can be prototyped with this module. This module can serve as a starting point for clinical system development, reducing the need for custom interfaces and time-intensive platform setup.
Journal Article
1.5 T augmented reality navigated interventional MRI: paravertebral sympathetic plexus injections
2017
The high contrast resolution and absent ionizing radiation of interventional magnetic resonance imaging (MRI) can be advantageous for paravertebral sympathetic nerve plexus injections. We assessed the feasibility and technical performance of MRI-guided paravertebral sympathetic injections using augmented reality navigation and a 1.5 T MRI scanner.
A total of 23 bilateral injections of the thoracic (8/23, 35%), lumbar (8/23, 35%), and hypogastric (7/23, 30%) paravertebral sympathetic plexus were prospectively planned in twelve human cadavers using a 1.5 Tesla (T) MRI scanner and augmented reality navigation system. MRI-conditional needles were used. Gadolinium-DTPA-enhanced saline was injected. Outcome variables included the number of control magnetic resonance images, target error of the needle tip, punctures of critical nontarget structures, distribution of the injected fluid, and procedure length.
Augmented-reality navigated MRI guidance at 1.5 T provided detailed anatomical visualization for successful targeting of the paravertebral space, needle placement, and perineural paravertebral injections in 46 of 46 targets (100%). A mean of 2 images (range, 1-5 images) was required to control needle placement. Changes of the needle trajectory occurred in 9 of 46 targets (20%) and changes of needle advancement occurred in 6 of 46 targets (13%); neither was statistically related to spinal region (P = 0.728 and P = 0.86, respectively) or cadaver size (P = 0.893 and P = 0.859, respectively). The mean error of the needle tip was 3.9±1.7 mm. There were no punctures of critical nontarget structures. The mean procedure length was 33±12 min.
1.5 T augmented reality-navigated interventional MRI can provide accurate imaging guidance for perineural injections of the thoracic, lumbar, and hypogastric sympathetic plexus.
Journal Article
Design of an Ultrasound-Navigated Prostate Cancer Biopsy System for Nationwide Implementation in Senegal
2021
This paper presents the design of NaviPBx, an ultrasound-navigated prostate cancer biopsy system. NaviPBx is designed to support an affordable and sustainable national healthcare program in Senegal. It uses spatiotemporal navigation and multiparametric transrectal ultrasound to guide biopsies. NaviPBx integrates concepts and methods that have previously been independently validated in clinical feasibility studies and deploys them together in a practical prostate cancer biopsy system. NaviPBx is based entirely on free open-source software and will be shared as a free open-source program with no restriction on its use. NaviPBx is set to be deployed and sustained nationwide through the Senegalese Military Health Service. This paper reports the results of the design process of NaviPBx. Our approach concentrates on “frugal technology”, intended to be affordable for low- and middle-income (LMIC) countries. Our project promises wide-scale application of prostate biopsy and will foster time-efficient development and programmatic implementation of ultrasound-guided diagnostic and therapeutic interventions in Senegal and beyond.
Journal Article
Training in soft-tissue resection using real-time visual computer navigation feedback from the Surgery Tutor: a randomized controlled trial
2021
Background
In competency-based medical education (CBME), surgery trainees are often required to learn procedural skills in a simulated setting before proceeding to the clinical environment. The Surgery Tutor computer navigation platform allows real-time, proctorless assessment of open soft-tissue resection skills; however, its use as an aid in the acquisition of procedural skills has yet to be explored.
Methods
In this prospective randomized controlled trial, 20 final-year medical students were randomized to receive either training with real-time computer navigation feedback (intervention group, n = 10) or simulation training without navigation feedback (control group, n = 10) during resection of simulated non-palpable soft-tissue tumours. Real-time computer navigation feedback allowed participants to visualize the position of their scalpel relative to the tumour. Computer navigation feedback was removed for the postintervention assessment. The primary outcome was the positive margin rate. Secondary outcomes were procedure time, mass of tissue excised, number of scalpel motions and distance travelled by the scalpel.
Results
Training with real-time computer navigation resulted in a significantly lower positive margin rate than training without navigation feedback (0% v. 40%, p = 0.025). The other performance metrics did not differ significantly between the groups. Participants in the intervention group showed significant improvement in positive margin rate from baseline to final assessment (80% v. 0%, p < 0.01), whereas participants in the control group did not.
Conclusion
Real-time visual computer navigation feedback from the Surgery Tutor resulted in superior acquisition of procedural skills compared with training without navigation feedback.
Journal Article
Shape completion in the dark: completing vertebrae morphology from 3D ultrasound
by Gafencu, Miruna-Alexandra; Velikova, Yordanka; Saleh, Mahdi
in Ablation; Acoustics; Anatomic Landmarks
2024
Purpose
Ultrasound (US) imaging, while advantageous for its radiation-free nature, is challenging to interpret due to only partially visible organs and a lack of complete 3D information. While performing US-based diagnosis or investigation, medical professionals therefore create a mental map of the 3D anatomy. In this work, we aim to replicate this process and enhance the visual representation of anatomical structures.
Methods
We introduce a point cloud-based probabilistic deep learning (DL) method to complete occluded anatomical structures through 3D shape completion and choose US-based spine examinations as our application. To enable training, we generate synthetic 3D representations of partially occluded spinal views by mimicking US physics and accounting for inherent artifacts.
Results
The proposed model performs consistently on synthetic and patient data, with mean and median differences of 2.02 and 0.03 in Chamfer Distance (CD), respectively. Our ablation study demonstrates the importance of US physics-based data generation, reflected in the large mean and median difference of 11.8 CD and 9.55 CD, respectively. Additionally, we demonstrate that anatomical landmarks, such as the spinous process (with reconstruction CD of 4.73) and the facet joints (mean distance to ground truth (GT) of 4.96 mm), are preserved in the 3D completion.
Conclusion
Our work establishes the feasibility of 3D shape completion for lumbar vertebrae, ensuring the preservation of level-wise characteristics and successful generalization from synthetic to real data. The incorporation of US physics contributes to more accurate patient data completions. Notably, our method preserves essential anatomical landmarks and reconstructs crucial injection sites at their correct locations.
Journal Article
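The Chamfer Distance (CD) used as the evaluation metric in the abstract above can be sketched in a few lines. CD definitions vary in the literature (sum vs. mean, squared vs. unsquared distances), so the symmetric mean-of-nearest-neighbour variant below is an assumption, not necessarily the exact formulation used in the paper:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between two point clouds.

    a, b: (N, 3) and (M, 3) arrays of 3D points. Returns the mean
    nearest-neighbour distance from a to b plus that from b to a.
    """
    # Pairwise Euclidean distances, shape (N, M).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Identical clouds have zero Chamfer Distance.
cloud = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(chamfer_distance(cloud, cloud))  # 0.0
```

For large clouds, the O(N·M) distance matrix would be replaced by a k-d tree nearest-neighbour query, but the metric itself is unchanged.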
Real-time integration between Microsoft HoloLens 2 and 3D Slicer with demonstration in pedicle screw placement planning
by Morton, David; Pose-Díez-de-la-Lastra, Alicia; Fichtinger, Gabor
in Augmented reality; Communication; Computed tomography
2023
Purpose
To date, there has been a lack of software infrastructure connecting 3D Slicer to any augmented reality (AR) device. This work describes a novel connection approach using Microsoft HoloLens 2 and OpenIGTLink, with a demonstration in pedicle screw placement planning.
Methods
We developed an AR application in Unity that is wirelessly rendered onto Microsoft HoloLens 2 using Holographic Remoting. Simultaneously, Unity connects to 3D Slicer using the OpenIGTLink communication protocol. Geometrical transform and image messages are transferred between both platforms in real time. Through the AR glasses, a user visualizes a patient’s computed tomography overlaid onto virtual 3D models showing anatomical structures. We technically evaluated the system by measuring message transference latency between the platforms. Its functionality was assessed in pedicle screw placement planning. Six volunteers planned pedicle screws' position and orientation with the AR system and on a 2D desktop planner. We compared the placement accuracy of each screw with both methods. Finally, we administered a questionnaire to all participants to assess their experience with the AR system.
Results
The latency in message exchange is sufficiently low to enable real-time communication between the platforms. The AR method was non-inferior to the 2D desktop planner, with a mean error of 2.1 ± 1.4 mm. Moreover, 98% of the screw placements performed with the AR system were successful according to the Gertzbein–Robbins scale. The average questionnaire score was 4.5/5.
Conclusions
Real-time communication between Microsoft HoloLens 2 and 3D Slicer is feasible and supports accurate planning for pedicle screw placement.
Journal Article
From quantitative metrics to clinical success: assessing the utility of deep learning for tumor segmentation in breast surgery
by Jamzad, Amoon; Fichtinger, Gabor; Kaufmann, Martin
in Breast; Breast Neoplasms - diagnostic imaging; Breast Neoplasms - pathology
2024
Purpose
Preventing positive margins is essential for ensuring favorable patient outcomes following breast-conserving surgery (BCS). Deep learning has the potential to enable this by automatically contouring the tumor and guiding resection in real time. However, evaluation of such models with respect to pathology outcomes is necessary for their successful translation into clinical practice.
Methods
Sixteen deep learning models based on established architectures in the literature are trained on 7318 ultrasound images from 33 patients. Models are ranked by an expert based on their contours generated from images in our test set. Generated contours from each model are also analyzed using recorded cautery trajectories of five navigated BCS cases to predict margin status. Predicted margins are compared with pathology reports.
Results
The best-performing model using both quantitative evaluation and our visual ranking framework achieved a mean Dice score of 0.959. Quantitative metrics are positively associated with expert visual rankings. However, the predictive value of generated contours was limited with a sensitivity of 0.750 and a specificity of 0.433 when tested against pathology reports.
Conclusion
We present a clinical evaluation of deep learning models trained for intraoperative tumor segmentation in breast-conserving surgery. We demonstrate that automatic contouring is limited in predicting pathology margins despite achieving high performance on quantitative metrics.
Journal Article
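The quantitative metrics reported in the abstract above (Dice score, sensitivity, specificity) have standard definitions that can be sketched directly; the binary masks below are toy data for illustration, not the study's:

```python
import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    total = pred.sum() + gt.sum()
    return float(2.0 * np.logical_and(pred, gt).sum() / total) if total else 1.0

def sensitivity_specificity(pred, gt):
    """Sensitivity (recall of positives) and specificity (recall of negatives)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    return float(tp / (tp + fn)), float(tn / (tn + fp))

# Toy 4-pixel masks: one true positive, one false positive,
# one false negative, one true negative.
pred = np.array([1, 1, 0, 0])
gt = np.array([1, 0, 1, 0])
print(dice_score(pred, gt))               # 0.5
print(sensitivity_specificity(pred, gt))  # (0.5, 0.5)
```

As the abstract notes, a high Dice score on image overlap does not by itself guarantee high sensitivity or specificity against pathology-confirmed margins, since the latter are computed over a different (case-level) ground truth.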