Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
10 result(s) for "Sakinis, Tomas"
Patient-specific functional liver segments based on centerline classification of the hepatic and portal veins
by Aghayan, Davit; Meng, Ruoyan; d’Albenzio, Gabriella
in Classification; Couinaud classification; Deep Learning
2025
Couinaud's liver segment classification has been widely adopted for liver surgery planning, yet its rigid anatomical boundaries often fail to align precisely with individual patient anatomy. This study proposes a novel patient-specific liver segmentation method based on detailed classification of hepatic and portal veins to improve anatomical adherence and clinical relevance.
Our proposed method involves two key stages: (1) surgeons annotate vascular endpoints on 3D models of hepatic and portal veins, from which vessel centerlines are computed; and (2) liver segments are calculated by assigning voxel labels based on proximity to these vascular centerlines. The accuracy and clinical applicability of our Hepatic and Portal Vein-based Classification (HPVC) were compared with conventional Plane-Based Classification (PBC), Portal Vein-Based Classification (PVC), and an automated deep learning method (nnU-Net) using volumetric measurements, Dice similarity scores, and expert evaluations.
HPVC demonstrated superior anatomical conformity compared to traditional methods, especially in complex segments like 5 and 8, providing segmentations more reflective of actual vascular territories. Volumetric analysis revealed significant discrepancies among the methods, particularly with nnU-Net generally producing larger segment volumes. HPVC consistently achieved higher surgeon-rated scores in patient-specific anatomical adherence, perfusion region assessment, and accuracy in surgical planning compared to PBC, PVC, and nnU-Net.
The presented HPVC method offers substantial improvements in liver segmentation precision, especially relevant for surgical planning in anatomically complex cases. Its integration into clinical workflows through the open-source platform 3D Slicer significantly enhances its accessibility and usability.
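The core of the HPVC idea described above is that each liver voxel inherits the segment label of the nearest vascular centerline point. A minimal sketch of that proximity-based labeling step (not the authors' implementation; the function name and data layout are illustrative):

```python
def label_by_nearest_centerline(voxels, centerlines):
    """Assign each voxel the segment label of its nearest centerline point.

    voxels:      list of (x, y, z) voxel centres inside the liver mask.
    centerlines: dict mapping a segment label to a list of (x, y, z)
                 points on that segment's feeding-vessel centerline.
    Returns one segment label per voxel.
    """
    def dist2(a, b):
        # Squared Euclidean distance; ordering is the same as for distance.
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    labels = []
    for v in voxels:
        # Brute-force nearest neighbour over all centerline points;
        # a k-d tree would replace this loop at scale.
        best = min(((dist2(v, p), seg)
                    for seg, pts in centerlines.items() for p in pts),
                   key=lambda t: t[0])
        labels.append(best[1])
    return labels
```

In practice the voxel coordinates would come from the liver mask of the CT/MRI volume and the centerline points from the surgeon-annotated vessel models.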
Journal Article
Automated segmentation of magnetic resonance bone marrow signal: a feasibility study
by Zadig Pia K K; Bjørnerud Atle; Vibke, Lilleby
in Adolescents; Artificial intelligence; Artificial neural networks
2022
Background: Manual assessment of bone marrow signal is time-consuming and requires meticulous standardisation to secure adequate precision of findings.
Objective: We examined the feasibility of using deep learning for automated segmentation of bone marrow signal in children and adolescents.
Materials and methods: We selected knee images from 95 whole-body MRI examinations of healthy individuals and of children with chronic non-bacterial osteomyelitis, ages 6–18 years, in a longitudinal prospective multi-centre study cohort. Bone marrow signal on T2-weighted Dixon water-only images was divided into three color-coded intensity levels: 1 = slightly increased; 2 = mildly increased; 3 = moderately to highly increased, up to fluid-like signal. We trained a convolutional neural network on 85 examinations to perform bone marrow segmentation. Four readers manually segmented a test set of 10 examinations and calculated ground truth using simultaneous truth and performance level estimation (STAPLE). We evaluated model and rater performance using the Dice similarity coefficient and consensus review.
Results: Consensus scoring of model performance showed acceptable results for all but one examination. Model performance and reader agreement had the highest scores for level-1 signal (median Dice 0.68) and the lowest scores for level-3 signal (median Dice 0.40), particularly in examinations where this signal was sparse.
Conclusion: It is feasible to develop a deep-learning-based model for automated segmentation of bone marrow signal in children and adolescents. Our model performed poorest for the highest signal intensity in examinations where this signal was sparse. Further improvement requires training on larger and more balanced datasets and validation against ground truth, which should be established by radiologists from several institutions in consensus.
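Several of the studies in this list report overlap as a Dice similarity coefficient. The metric itself is simple to state in code; a minimal sketch for binary masks:

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks.

    a, b: equal-length sequences of 0/1 voxel labels.
    Dice = 2|A ∩ B| / (|A| + |B|); 1.0 is perfect overlap, 0.0 none.
    """
    inter = sum(x and y for x, y in zip(a, b))  # voxels labelled 1 in both
    size = sum(a) + sum(b)                      # total foreground voxels
    return 2 * inter / size if size else 1.0    # both empty: define as 1.0
```

In imaging pipelines the masks are flattened 3D arrays; the formula is unchanged.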
Journal Article
Revisiting Härtel’s technique for percutaneous transoval glycerol injection
2025
Purpose
Percutaneous transoval glycerol injection (GI) has been widely used since 1981 in the treatment of patients with trigeminal neuralgia. However, outcomes have been more variable than with other percutaneous treatments. Although most authors state that they use Härtel’s technique, the variations are numerous—which may explain procedural problems and most of the poor results. The aim of the present imaging-based study, therefore, was to revisit Härtel’s technique and identify optimal landmarks for guiding the needle from the cheek to Meckel’s cave.
Methods
Eleven patients referred for trigeminal neuralgia were studied. We used CT- and MRI-based simulations to determine the optimal entry points in the cheek and trajectories through foramen ovale (FO) to reach Meckel’s cave – and compared our findings with the results from Härtel’s original study.
Results
The optimal entry point was located at 2 mm below the horizontal plane through the angle of the mouth and just in front of the anterior edge of the mandibular ramus. From this entry point—situated around 10 mm below Härtel’s preferred entry point—Meckel’s cave was easily accessible through the medial part of FO in 17 of 22 sides.
Conclusion
The findings from this study suggest that the technical results of transoval glycerol injection can be improved by (1) selecting the optimal entry point, (2) guiding the needle under fluoroscopy through the medial part of the foramen ovale, and (3) minimizing movement of the soft tissues in the cheek.
Journal Article
MRI segmentation of tooth tissue in age prediction of sub-adults — a new method for combining data from the 1st, 2nd, and 3rd molars
2024
Purpose
We aimed to establish a model combining MRI volume measurements from the 1st, 2nd and 3rd molars for age prediction in sub-adults and compare the age prediction performance of different combinations of all three molars, internally in the study cohort.
Material and method
We examined 99 volunteers using a 1.5 T MR scanner with a customized high-resolution single T2 sequence. Segmentation was performed using SliceOmatic (Tomovision©). Age prediction was based on the tooth tissue ratio (high signal soft tissue + low signal soft tissue)/total. The model included three correlation parameters to account for statistical dependence between the molars. Age prediction performance of different combinations of teeth for the three molars was assessed using interquartile range (IQR).
Results
We included data from the 1st molars from 87 participants (F/M 59/28), 2nd molars from 93 (F/M 60/33) and 3rd molars from 67 (F/M 45/22). The age range was 14–24 years with a median age of 18 years. The model with the best age prediction performance (smallest IQR) was 46–47–18 (lower right 1st and 2nd and upper right 3rd molar) in males. The estimated correlation between the different molars was 0.620 (46 vs. 47), 0.430 (46 vs. 18), and 0.598 (47 vs. 18). IQR was the smallest in tooth combinations including a 3rd molar.
Conclusion
We have established a model for combining tissue volume measurements from the 1st, 2nd and 3rd molars for age prediction in sub-adults. The prediction performance was mostly driven by the 3rd molars. All combinations involving the 3rd molar performed well.
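The two quantities this abstract leans on, the tooth tissue ratio used as the age-related outcome and the IQR used to compare prediction spread, can be sketched directly; function names are illustrative, not from the paper:

```python
def tissue_ratio(high_soft, low_soft, total):
    """Tooth tissue ratio used as the age-related outcome:
    (high-signal soft tissue + low-signal soft tissue) / total volume."""
    return (high_soft + low_soft) / total

def iqr(values):
    """Interquartile range (Q3 - Q1) with linear interpolation between
    order statistics; a smaller IQR of prediction errors means a
    tighter age prediction."""
    s = sorted(values)
    def quantile(q):
        pos = q * (len(s) - 1)      # fractional rank of the quantile
        lo = int(pos)
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + (s[hi] - s[lo]) * (pos - lo)
    return quantile(0.75) - quantile(0.25)
```

The interpolation convention matches the common "linear" definition; other quantile conventions shift the result slightly on small samples.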
Journal Article
Prediction of Age Older than 18 Years in Sub-adults by MRI Segmentation of 1st and 2nd Molars
by Kvaal, Sigrid Ingeborg; Sakinis, Tomas; Bleka, Øyvind
in Adults; Bayesian analysis; Image segmentation
2023
Purpose: To investigate prediction of age older than 18 years in sub-adults using tooth tissue volumes from MRI segmentation of the entire 1st and 2nd molars, and to establish a model for combining information from two different molars.
Materials and methods: We acquired T2-weighted MRIs of 99 volunteers with a 1.5-T scanner. Segmentation was performed using SliceOmatic (Tomovision©). Linear regression was used to analyse the association between mathematical transformation outcomes of tissue volumes, age, and sex. Performance of different outcomes and tooth combinations was assessed based on the p-value of the age variable, common or separate for each sex, depending on the selected model. The predictive probability of being older than 18 years was obtained by a Bayesian approach using information from the 1st and 2nd molars both separately and combined.
Results: 1st molars from 87 participants and 2nd molars from 93 participants were included. The age range was 14–24 years with a median age of 18 years. The transformation outcome (high signal soft tissue + low signal soft tissue)/total had the strongest statistical association with age for the lower right 1st molar (p = 7.1 × 10−4 for males) and 2nd molar (p = 9.44 × 10−7 for males and p = 7.4 × 10−10 for females). Combining the lower right 1st and 2nd molar in males did not increase the prediction performance compared to using the best tooth alone.
Conclusion: MRI segmentation of the lower right 1st and 2nd molar might prove useful in the prediction of age older than 18 years in sub-adults. We provided a statistical framework to combine the information from two molars.
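The Bayesian step described above turns an observed tissue-volume outcome into a probability of being older than 18. A minimal sketch of that idea, assuming (for illustration only) a linear model with Gaussian noise and a uniform age prior over the study's 14–24-year range; the parameters a, b, sigma stand in for a fitted regression and are not the paper's values:

```python
import math

def prob_older_than_18(y, a, b, sigma, age_lo=14.0, age_hi=24.0, n=1000):
    """Posterior probability that age > 18 given an observed outcome y,
    under y ~ N(a + b*age, sigma^2) and a uniform age prior on
    [age_lo, age_hi], computed on a discretized age grid.
    """
    def lik(age):
        # Gaussian likelihood up to a constant (constants cancel in the ratio).
        z = (y - (a + b * age)) / sigma
        return math.exp(-0.5 * z * z)

    ages = [age_lo + (age_hi - age_lo) * i / (n - 1) for i in range(n)]
    weights = [lik(t) for t in ages]
    over = sum(w for t, w in zip(ages, weights) if t > 18)
    return over / sum(weights)
```

With a uniform prior this is just the normalized likelihood mass above 18; a finer grid or analytic truncated-normal CDF would refine it.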
Journal Article
Age prediction in sub-adults based on MRI segmentation of 3rd molar tissue volumes
2023
Purpose: Our aim was to investigate tissue volumes measured by MRI segmentation of the entire 3rd molar for prediction of a sub-adult being older than 18 years.
Material and method: We used a 1.5-T MR scanner with a customized high-resolution single T2 sequence acquisition with 0.37 mm iso-voxels. Two dental cotton rolls drawn with water stabilized the bite and delineated teeth from oral air. Segmentation of the different tooth tissue volumes was performed using SliceOmatic (Tomovision©). Linear regression was used to analyze the association between mathematical transformation outcomes of the tissue volumes, age, and sex. Performance of different transformation outcomes and tooth combinations was assessed based on the p value of the age variable, combined or separated for each sex depending on the selected model. The predictive probability of being older than 18 years was obtained by a Bayesian approach.
Results: We included 67 volunteers (F/M: 45/22), range 14–24 years, median age 18 years. The transformation outcome (pulp + predentine)/total volume for upper 3rd molars had the strongest association with age (p = 3.4 × 10−9).
Conclusion: MRI segmentation of tooth tissue volumes might prove useful in the prediction of age older than 18 years in sub-adults.
Journal Article
Body composition assessment by artificial intelligence from routine computed tomography scans in colorectal cancer: Introducing BodySegAI
by Beichmann, Benedicte; Henriksen, Hege Berg; Sakinis, Tomas
in Abdomen; Artificial intelligence; Automation
2022
Background: Body composition is of clinical importance in colorectal cancer patients, but is rarely assessed because of time-consuming manual segmentation. We developed and tested BodySegAI, a deep learning-based software for automated body composition quantification from routinely acquired computed tomography (CT) scans.
Methods: A two-dimensional U-Net convolutional network was trained on 2989 abdominal CT slices from L2 to S1 to segment skeletal muscle (SM), visceral adipose tissue (VAT), subcutaneous adipose tissue (SAT), and intermuscular and intramuscular adipose tissue (IMAT). Human ground truth was established by combining segmentations from three human readers. BodySegAI was tested using 154 slices against the human ground truth and compared with a software named AutoMATiCA.
Results: Median Dice scores for BodySegAI against human ground truth were 0.969, 0.814, 0.986, and 0.990 for SM, IMAT, VAT, and SAT, respectively. The mean differences per slice were −0.09 cm3 for SM, −0.17 cm3 for IMAT, −0.12 cm3 for VAT, and 0.67 cm3 for SAT. Median absolute errors for SM, IMAT, VAT, and SAT were 1.35%, 10.54%, 0.91%, and 1.07%, respectively. When analysing different anatomical levels separately, L3 and S1 demonstrated the overall highest and lowest Dice scores, respectively. On average, BodySegAI segmented 148 times faster than human readers (4.9 vs. 726.5 seconds, P < 0.001). BodySegAI also presented higher Dice scores for SM, IMAT, SAT, and VAT than AutoMATiCA (slices = 154).
Conclusions: BodySegAI rapidly generates excellent segmentation of SM, VAT, and SAT and good segmentation of IMAT in L2 to S1 among colorectal cancer patients and may replace semi-manual segmentation.
Journal Article
RIL-Contour: a Medical Imaging Dataset Annotation Tool for and with Deep Learning
by Boonrod, Arunnit; Takahashi, Naoki; Philbrick, Kenneth A
in Algorithms; Annotations; Artificial intelligence
2019
Deep-learning algorithms typically fall within the domain of supervised artificial intelligence and are designed to “learn” from annotated data. Deep-learning models require large, diverse training datasets for optimal model convergence. The effort to curate these datasets is widely regarded as a barrier to the development of deep-learning systems. We developed RIL-Contour to accelerate medical image annotation for and with deep-learning. A major goal driving the development of the software was to create an environment which enables clinically oriented users to utilize deep-learning models to rapidly annotate medical imaging. RIL-Contour supports using fully automated deep-learning methods, semi-automated methods, and manual methods to annotate medical imaging with voxel and/or text annotations. To reduce annotation error, RIL-Contour promotes the standardization of image annotations across a dataset. RIL-Contour accelerates medical imaging annotation through the process of annotation by iterative deep learning (AID). The underlying concept of AID is to iteratively annotate, train, and utilize deep-learning models during the process of dataset annotation and model development. To enable this, RIL-Contour supports workflows in which multiple image analysts annotate medical images, radiologists approve the annotations, and data scientists utilize these annotations to train deep-learning models. To automate the feedback loop between data scientists and image analysts, RIL-Contour provides mechanisms that enable data scientists to push newly trained deep-learning models to other users of the software. RIL-Contour and the AID methodology accelerate dataset annotation and model development by facilitating rapid collaboration between analysts, radiologists, and engineers.
Journal Article
Interactive segmentation of medical images through fully convolutional neural networks
by Philbrick, Kenneth; Milletari, Fausto; Sakinis, Tomas
in Artificial neural networks; Automation; Computed tomography
2019
Image segmentation plays an essential role in medicine for both diagnostic and interventional tasks. Segmentation approaches are either manual, semi-automated or fully automated. Manual segmentation offers full control over the quality of the results, but is tedious, time-consuming and prone to operator bias. Fully automated methods require no human effort, but often deliver sub-optimal results without providing users with the means to make corrections. Semi-automated approaches keep users in control of the results by providing means for interaction, but the main challenge is to offer a good trade-off between precision and required interaction. In this paper we present a deep learning (DL) based semi-automated segmentation approach that aims to be a “smart” interactive tool for region of interest delineation in medical images. We demonstrate its use for segmenting multiple organs on computed tomography (CT) of the abdomen. Our approach solves some of the most pressing clinical challenges: (i) it requires only one to a few user clicks to deliver excellent 2D segmentations in a fast and reliable fashion; (ii) it can generalize to previously unseen structures and “corner cases”; (iii) it delivers results that can be corrected quickly in a smart and intuitive way up to an arbitrary degree of precision chosen by the user; and (iv) it ensures high accuracy. We present our approach and compare it to other techniques and previous work to show the advantages brought by our method.
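Interactive models of this kind need the user's clicks fed to the network somehow; one common encoding (assumed here for illustration, not necessarily the paper's exact scheme) rasterizes each click into a Gaussian blob and stacks the result with the image as an extra input channel:

```python
import math

def click_channel(shape, clicks, sigma=5.0):
    """Rasterize user clicks into a 2D guidance map.

    shape:  (height, width) of the image slice.
    clicks: list of (row, col) click positions.
    sigma:  spread of the Gaussian blob around each click (illustrative).
    Returns a 2D list in [0, 1] to stack with the image as an input channel.
    """
    h, w = shape
    chan = [[0.0] * w for _ in range(h)]
    for cy, cx in clicks:
        for y in range(h):
            for x in range(w):
                d2 = (y - cy) ** 2 + (x - cx) ** 2
                # Keep the strongest response where blobs overlap.
                chan[y][x] = max(chan[y][x],
                                 math.exp(-d2 / (2 * sigma ** 2)))
    return chan
```

Separate channels for foreground and background clicks let the network distinguish "include this" from "exclude this" guidance.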