Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
19 result(s) for "Futsaether, Cecilia"
Head and neck cancer treatment outcome prediction: a comparison between machine learning with conventional radiomics features and deep learning radiomics
by Malinen, Eirik; Dale, Einar; Liland, Kristian Hovde
in Artificial intelligence; Cancer therapies; Clinical outcomes
2023
Radiomics can provide in-depth characterization of cancers for treatment outcome prediction. Conventional radiomics rely on extraction of image features within a pre-defined image region of interest (ROI) which are typically fed to a classification algorithm for prediction of a clinical endpoint. Deep learning radiomics allows for a simpler workflow where images can be used directly as input to a convolutional neural network (CNN) with or without a pre-defined ROI.
The purpose of this study was to evaluate (i) conventional radiomics and (ii) deep learning radiomics for predicting overall survival (OS) and disease-free survival (DFS) for patients with head and neck squamous cell carcinoma (HNSCC) using pre-treatment
18F-fluorodeoxyglucose positron emission tomography (FDG PET) and computed tomography (CT) images.
FDG PET/CT images and clinical data of patients with HNSCC treated with radio(chemo)therapy at Oslo University Hospital (OUS; n = 139) and Maastricht University Medical Center (MAASTRO; n = 99) were collected retrospectively. OUS data was used for model training and initial evaluation. MAASTRO data was used for external testing to assess cross-institutional generalizability. Models trained on clinical and/or conventional radiomics features, with or without feature selection, were compared to CNNs trained on PET/CT images without or with the gross tumor volume (GTV) included. Model performance was measured using accuracy, area under the receiver operating characteristic curve (AUC), the Matthews correlation coefficient (MCC), and the F1 score calculated for both classes separately.
CNNs trained directly on images achieved the highest performance on external data for both endpoints. Adding both clinical and radiomics features to these image-based models increased performance further. Conventional radiomics including clinical data could achieve competitive performance. However, feature selection on clinical and radiomics data led to overfitting and poor cross-institutional generalizability. CNNs without tumor and node contours achieved close to on-par performance with CNNs including contours.
High performance and cross-institutional generalizability can be achieved by combining clinical data, radiomics features and medical images together with deep learning models. However, deep learning models trained on images without contours can achieve competitive performance and could see potential use as an initial screening tool for high-risk patients.
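The classification metrics named in this abstract (accuracy, MCC, and the F1 score for each class) can all be derived from a binary confusion matrix. A minimal illustrative sketch, not the study's actual code; the function name is hypothetical:

```python
from math import sqrt

def binary_metrics(y_true, y_pred):
    """Accuracy, Matthews correlation coefficient, and per-class F1
    from binary (0/1) labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / (tp + tn + fp + fn)
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    # F1 computed separately for the positive and the negative class,
    # as done for "both classes separately" in the abstract.
    f1_pos = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    f1_neg = 2 * tn / (2 * tn + fn + fp) if (2 * tn + fn + fp) else 0.0
    return acc, mcc, f1_pos, f1_neg
```

Reporting F1 for both classes separately guards against a model that scores well simply by always predicting the majority endpoint.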
Journal Article
Attention-based Vision Transformer Enables Early Detection of Radiotherapy-Induced Toxicity in Magnetic Resonance Images of a Preclinical Model
by Malinen, Eirik; Zlygosteva, Olga; Juvkam, Inga Solgård
in Animals; Attention; Disease Models, Animal
2025
Introduction
Early identification of patients at risk for toxicity induced by radiotherapy (RT) is essential for developing personalized treatments and mitigation plans. Preclinical models with relevant endpoints are critical for systematic evaluation of normal tissue responses. This study aims to determine whether attention-based vision transformers can classify MR images of irradiated and control mice, potentially aiding early identification of individuals at risk of developing toxicity.
Method
C57BL/6J mice (n = 14) were subjected to 66 Gy of fractionated RT targeting the oral cavity, swallowing muscles, and salivary glands. A control group (n = 15) received no irradiation but was otherwise treated identically. T2-weighted MR images were obtained 3–5 days post-irradiation. Late toxicity in terms of saliva production in individual mice was assessed at day 105 after treatment. A pre-trained vision transformer model (ViT Base 16) was employed to classify the images into control and irradiated groups.
Results
The ViT Base 16 model classified the MR images with an accuracy of 69%, with identical overall performance for control and irradiated animals. The ViT model's predictions showed a significant correlation with late toxicity (r = 0.65, p < 0.01). One of the attention maps from the ViT model highlighted the irradiated regions of the animals.
Conclusions
Attention-based vision transformers using MRI have the potential to predict individuals at risk of developing early toxicity. This approach may enhance personalized treatment and follow-up strategies in head and neck cancer radiotherapy.
Journal Article
Visible foliar injury and infrared imaging show that daylength affects short-term recovery after ozone stress in Trifolium subterraneum
by Kvaal, Knut; Vollsnes, Ane V.; Oxaal, Unni
in Air pollution; Biological and medical sciences; Daylength
2009
Tropospheric ozone is a major air pollutant affecting plants worldwide. Plants in northern regions can display more ozone injury than plants at lower latitudes despite lower ozone levels. Larger ozone influx and shorter nights have been suggested as possible causes. However, the effects of the dim light present during northern summer nights have not been investigated. Young Trifolium subterraneum plants kept in environmentally controlled growth rooms under long-day (10 h bright light, 14 h dim light) or short-day (10 h bright light, 14 h darkness) conditions were exposed to 6 h of 70 ppb ozone during daytime for three consecutive days. Leaves were visually inspected and imaged in vivo using thermal imaging before and after the daily exposure. In long-day-treated plants, visible foliar injury within 1 week after exposure was more severe. Multivariate statistical analyses showed that the leaves of ozone-exposed long-day-treated plants were also warmer with more homogeneous temperature distributions than exposed short-day-treated and control plants, suggesting reduced transpiration. Temperature disruptions were not restricted to areas displaying visible damage and occurred even in leaves with only slight visible injury. Ozone did not affect the leaf temperature of short-day-treated plants. As all factors influencing ozone influx were the same for long- and short-day-treated plants, only the dim nocturnal light could account for the different ozone sensitivities. Thus, the twilight summer nights at high latitudes may have a negative effect on repair and defence processes activated after ozone exposure, thereby enhancing sensitivity.
Journal Article
Automatic gross tumor segmentation of canine head and neck cancer using deep learning and cross-species transfer learning
by Dale, Einar; Malinen, Eirik; Søvik, Åste
in Animal training; artificial intelligence; Artificial neural networks
2023
Radiotherapy (RT) is increasingly being used on dogs with spontaneous head and neck cancer (HNC), which account for a large percentage of veterinary patients treated with RT. Accurate definition of the gross tumor volume (GTV) is a vital part of RT planning, ensuring adequate dose coverage of the tumor while limiting the radiation dose to surrounding tissues. Currently the GTV is contoured manually in medical images, which is a time-consuming and challenging task.
The purpose of this study was to evaluate the applicability of deep learning-based automatic segmentation of the GTV in canine patients with HNC.
Contrast-enhanced computed tomography (CT) images and corresponding manual GTV contours of 36 canine HNC patients and 197 human HNC patients were included. A 3D U-Net convolutional neural network (CNN) was trained to automatically segment the GTV in canine patients using two main approaches: (i) training models from scratch based solely on canine CT images, and (ii) using cross-species transfer learning where models were pretrained on CT images of human patients and then fine-tuned on CT images of canine patients. For the canine patients, automatic segmentations were assessed using the Dice similarity coefficient (Dice), the positive predictive value, the true positive rate, and surface distance metrics, calculated from a four-fold cross-validation strategy where each fold was used as a validation set and test set once in independent model runs.
CNN models trained from scratch on canine data or by using transfer learning obtained mean test set Dice scores of 0.55 and 0.52, respectively, indicating acceptable auto-segmentations, similar to the mean Dice performances reported for CT-based automatic segmentation in human HNC studies. Automatic segmentation of nasal cavity tumors appeared particularly promising, resulting in mean test set Dice scores of 0.69 for both approaches.
In conclusion, deep learning-based automatic segmentation of the GTV using CNN models based on canine data only or a cross-species transfer learning approach shows promise for future application in RT of canine HNC patients.
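The Dice similarity coefficient used above measures mask overlap as twice the intersection divided by the combined size of the two masks. A minimal sketch assuming masks are flat 0/1 sequences (an illustration, not the study's implementation):

```python
def dice_coefficient(a, b):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks given as flat
    0/1 sequences of equal length. Two empty masks score 1.0 by convention."""
    intersection = sum(1 for x, y in zip(a, b) if x and y)
    total = sum(a) + sum(b)
    return 2 * intersection / total if total else 1.0
```

A Dice of 1.0 means perfect overlap and 0.0 means none, so the mean test scores of 0.55 and 0.69 quoted above indicate moderate to good agreement with the manual contours.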
Journal Article
Thermography Studies of the Spatial and Temporal Variability in Stomatal Conductance of Avena Leaves during Stable and Oscillatory Transpiration
by Prytz, Gunnar; Johnsson, Anders; Futsaether, Cecilia M.
in Agronomy. Soil science and plant productions; Avena sativa; Avena sativa (oat)
2003
• The spatial and temporal distribution of stomatal conductance of young Avena sativa cv. Seger leaves was studied when the whole-leaf transpiration was stable or displayed complicated oscillatory behaviour to determine whether different regions of the leaf behaved synchronously.
• A camera detecting infrared radiation in the 7.5-13 μm range was used to capture leaf temperature images which represented an indirect measure of the transpiration. Simultaneous gas exchange measurements of whole-leaf transpiration were also recorded.
• During nonoscillatory behaviour of whole-leaf transpiration, nonhomogeneous, patch-like temperature distributions across the leaf surface could sometimes be observed. However, during complex oscillatory behaviour or dampening of the oscillations, the entire leaf surface displayed the same temporal leaf temperature pattern as the whole-leaf transpiration. Small phase differences characterized by distal regions lagging 0.5-3 min behind the central leaf region were observed.
• The synchronous behaviour observed during oscillatory transpiration indicates strong coupling between stomata. During stable whole-leaf behaviour, the coupling was weaker and temperature distributions similar to results categorized as patchy stomatal conductance could be observed.
Journal Article
Corrigendum: Head and neck cancer treatment outcome prediction: a comparison between machine learning with conventional radiomics features and deep learning radiomics
by Malinen, Eirik; Dale, Einar; Liland, Kristian Hovde
in artificial intelligence; Cancer therapies; Deep learning
2024
[This corrects the article DOI: 10.3389/fmed.2023.1217037.].
Journal Article
Deep learning-based auto-delineation of gross tumour volumes and involved nodes in PET/CT images of head and neck cancer patients
by Moe, Yngve Mardal; Dale, Einar; Futsaether, Cecilia Marie
in Artificial neural networks; Cancer; Computed tomography
2021
Purpose
Identification and delineation of the gross tumour and malignant nodal volume (GTV) in medical images are vital in radiotherapy. We assessed the applicability of convolutional neural networks (CNNs) for fully automatic delineation of the GTV from FDG-PET/CT images of patients with head and neck cancer (HNC). CNN models were compared to manual GTV delineations made by experienced specialists. New structure-based performance metrics were introduced to enable in-depth assessment of auto-delineation of multiple malignant structures in individual patients.
Methods
U-Net CNN models were trained and evaluated on images and manual GTV delineations from 197 HNC patients. The dataset was split into training, validation and test cohorts (n = 142, n = 15 and n = 40, respectively). The Dice score, surface distance metrics and the new structure-based metrics were used for model evaluation. Additionally, auto-delineations were manually assessed by an oncologist for 15 randomly selected patients in the test cohort.
Results
The mean Dice scores of the auto-delineations were 55%, 69% and 71% for the CT-based, PET-based and PET/CT-based CNN models, respectively. The PET signal was essential for delineating all structures. Models based on PET/CT images identified 86% of the true GTV structures, whereas models built solely on CT images identified only 55% of the true structures. The oncologist reported very high-quality auto-delineations for 14 out of the 15 randomly selected patients.
Conclusions
CNNs provided high-quality auto-delineations for HNC using multimodality PET/CT. The introduced structure-wise evaluation metrics provided valuable information on CNN model strengths and weaknesses for multi-structure auto-delineation.
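A structure-wise metric of the kind described here reports the fraction of ground-truth structures that the model finds at all, rather than averaging voxel overlap. A sketch of one plausible formulation, where the matching rule, threshold, and function name are illustrative assumptions rather than the paper's exact definition:

```python
def structure_detection_rate(true_structures, pred_structures, threshold=0.5):
    """Fraction of ground-truth structures matched by at least one predicted
    structure with Dice overlap >= threshold. Each structure is a set of
    voxel indices. Matching rule and threshold are illustrative choices."""
    def dice(a, b):
        return 2 * len(a & b) / (len(a) + len(b)) if (a or b) else 1.0
    if not true_structures:
        return 1.0
    detected = sum(
        1 for t in true_structures
        if any(dice(t, p) >= threshold for p in pred_structures)
    )
    return detected / len(true_structures)
```

Such a per-structure count explains how one model can reach a similar mean Dice to another yet miss far more individual lesions, which is exactly the distinction the abstract's 86% vs. 55% figures draw.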
Journal Article
Reversible phytochrome regulation influenced the severity of ozone-induced visible foliar injuries in Trifolium subterraneum L
by Vollsnes, Ane V.; Kruse, Ole Mathis Opstad; Eriksen, Aud Berglen
in Agriculture; Biomedical and Life Sciences; Brief Communication
2012
In Trifolium subterraneum, oxidative stress caused by ozone has been shown to result in more severe visible foliar injuries when plants were kept in dim broadband white light during the night (i.e. a long photoperiod) compared to darkness during the night (a short photoperiod). As phytochrome signalling is involved in photoperiod sensing, the effect of night-time red and far-red illumination on the ozone-induced response was studied. T. subterraneum plants were treated with ozone-enriched air (70 ppb) for either 1 h for a single day or 6 h for three consecutive days. After the first ozone exposure, plants were separated into six night-time light regimes during the two subsequent nights (10 h day, 14 h night): (1) darkness (D), (2) far-red light (FR), (3) a short night-break of red followed by far-red light during an otherwise dark night (R FR), (4) a short night-break of red, far-red and finally red light during an otherwise dark night (R FR R), (5) dim white light (L) and (6) red light (R). The treatments L and R resulted in significantly more severe ozone-induced visible foliar injuries relative to the D and FR treatments, indicating a phytochrome-mediated response. The night-breaks resulted in a photoreversible and significantly different ozone response depending on the light quality of the last light interval (R FR or R FR R), supporting a photoreversible (between Pr and Pfr) phytochrome signalling response. Thus, in T. subterraneum, the outcome of oxidative stress due to ozone appears to depend on the photoperiod mediated by the night-time conformation of phytochrome.
Journal Article
A Comparative Literature Review of Machine Learning and Image Processing Techniques Used for Scaling and Grading of Wood Logs
by Futsæther, Cecilia Marie; Sandvik, Yohann Jacob; Liland, Kristian Hovde
in Algorithms; Artificial intelligence; Cameras
2024
This literature review assesses the efficacy of image-processing techniques and machine-learning models in computer vision for wood log grading and scaling. Four searches were conducted in four scientific databases, yielding a total of 1288 results, which were narrowed down to 33 relevant studies. The studies were categorized according to their goals, including log end grading, log side grading, individual log scaling, log pile scaling, and log segmentation. The studies were compared based on the input used, choice of model, model performance, and level of autonomy. This review found a preference for images over point cloud representations for logs and an increase in camera use over laser scanners. It identified three primary model types: classical image-processing algorithms, deep learning models, and other machine learning models. However, comparing performance across studies proved challenging due to varying goals and metrics. Deep learning models showed better performance in the log pile scaling and log segmentation goal categories. Cameras appear to have become more popular over time than laser scanners, possibly because stereovision cameras have replaced laser scanners for sampling point cloud datasets. Classical image-processing algorithms were consistently used, deep learning models gained prominence in 2018, and other machine learning models were used in studies published between 2010 and 2018.
Journal Article
Effects of the Nordic Photoperiod on Ozone Sensitivity and Repair in Different Clover Species Studied Using Infrared Imaging
2009
Plants in Nordic regions can be more ozone sensitive at a given ozone concentration than plants at lower latitudes. A recent study shows that the Nordic summer photoperiod, particularly the dim nighttime light, can increase visible foliar injury and alter leaf transpiration in subterranean clover. Effects of photoperiod on the ozone sensitivity of white and red clover cultivars adapted to Nordic conditions were investigated. Although ozone induced visible foliar injury and leaf transpirational changes in white clover, the effects were independent of photoperiod. In red clover, ozone combined with a long photoperiod with dim nights (8 nights) induced more severe visible injuries than with a short photoperiod. Furthermore, transpirational changes in red clover depended on photoperiod. Thus, a long photoperiod can increase ozone sensitivity differently in clover cultivars with different degrees of adaptation to northern conditions, suggesting that ozone indices used in risk analysis should take this effect into account.
Journal Article