5 result(s) for "Ragni, Flavio"
Decoding category and familiarity information during visual imagery
Highlights:
• We investigated the encoding of category and familiarity of imagined stimuli.
• MVPA revealed a widespread representation of imagined stimulus category.
• Familiarity information was represented in a subset of these regions.
• Familiarity might be an additional feature shared between perception and imagery.

Visual imagery relies on a widespread network of brain regions, partly engaged during the perception of external stimuli. Beyond the recruitment of category-selective areas (FFA, PPA), perception of familiar faces and places has been reported to engage brain areas associated with semantic information, comprising the precuneus, temporo-parietal junction (TPJ), medial prefrontal cortex (mPFC) and posterior cingulate cortex (PCC). Here we used multivariate pattern analyses (MVPA) to examine to what degree areas of the visual imagery network, category-selective areas and semantic areas contain information about the category and familiarity of imagined stimuli. Participants were instructed via auditory cues to imagine personally familiar and unfamiliar stimuli (i.e. faces and places). Using region-of-interest (ROI)-based MVPA, we were able to distinguish between imagined faces and places within nodes of the visual imagery network (V1, SPL, aIPS), within category-selective inferotemporal regions (FFA, PPA) and across all brain regions of the extended semantic network (i.e. precuneus, mPFC, IFG and TPJ). Moreover, we were able to decode the familiarity of imagined stimuli in the SPL and aIPS, and in some regions of the extended semantic network (in particular, the right precuneus and right TPJ), but not in V1. Our results suggest that posterior visual areas, including V1, host categorical representations of imagined stimuli, and that stimulus familiarity might be an additional aspect shared between perception and visual imagery.
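The ROI-based MVPA decoding described in this abstract can be sketched with scikit-learn on synthetic data; the voxel patterns, trial counts and classifier choice below are illustrative assumptions, not the study's actual pipeline.

```python
# Illustrative sketch of ROI-based MVPA decoding (synthetic data, not
# the study's pipeline): classify imagined stimulus category (face vs.
# place) from voxel activity patterns within a single ROI.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 100                 # hypothetical trial/voxel counts

labels = np.repeat([0, 1], n_trials // 2)    # 0 = face, 1 = place
# "Place" trials carry a small voxel-wise signal offset on top of noise.
signal = np.outer(labels, rng.normal(0.5, 0.1, n_voxels))
patterns = signal + rng.normal(0.0, 1.0, (n_trials, n_voxels))

# Cross-validated decoding accuracy within the ROI; chance level is 0.5.
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"ROI decoding accuracy: {acc:.2f}")
```

In a real analysis the rows of `patterns` would be trial-wise beta estimates from an fMRI GLM restricted to the ROI's voxels, one such analysis per region.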
Neuropsychological and clinical variables associated with cognitive trajectories in patients with Alzheimer's disease
The NeuroArtP3 (NET-2018-12366666) is a multicenter study funded by the Italian Ministry of Health. The aim of the project is to identify the prognostic trajectories of Alzheimer's disease (AD) through the application of artificial intelligence (AI). Only a few AI studies have investigated the clinical variables associated with cognitive worsening in AD. We used Mini Mental State Examination (MMSE) scores as the outcome to identify the factors associated with cognitive decline at follow-up. A sample of n = 126 patients diagnosed with AD (MMSE > 19) was followed over 3 years at 4 time points: T0 at baseline and T1, T2 and T3 at the yearly follow-ups. Variables of interest included demographics (age, gender, education, occupation); measures of functional ability (Activities of Daily Living, ADLs, and Instrumental Activities of Daily Living, IADLs); clinical variables (presence or absence of comorbidity with other pathologies, severity of dementia on the Clinical Dementia Rating Scale, behavioral symptoms); and the equivalent scores (ES) of cognitive tests. Logistic regression, random forest and gradient boosting were applied to the baseline data to estimate the MMSE outcome (a decline of more than 3 points) measured at T3. Patients were divided into multiple splits using different model derivation (training) and validation (test) proportions, and the optimization of the models was carried out through cross-validation on the derivation subset only. The models' predictive capabilities (balanced accuracy, AUC, AUPRC, F1 score and MCC) were computed on the validation set only. To ensure the robustness of the results, the optimization was repeated 10 times. A SHAP-type analysis was carried out to identify the predictive power of individual variables. The model predicted the MMSE outcome at T3 with a mean AUC of 0.643.
Model interpretability analysis revealed that the progression of the global cognitive state in AD patients is associated with: low spatial memory (Corsi block-tapping), verbal episodic long-term memory (Babcock story recall) and working memory (Stroop Color) performance; the presence of hypertension; the absence of hypercholesterolemia; and impaired functional skills on the IADL scale at baseline. This is the first AI study to predict the cognitive trajectories of AD patients using routinely collected clinical data while also providing explainability of the factors contributing to these trajectories. Moreover, our study used the results of single cognitive tests as measures of specific cognitive functions, allowing a finer-grained analysis of risk factors than other studies, which have mainly used aggregate scores obtained from short neuropsychological batteries. The outcomes of this work can aid the prognostic interpretation of the clinical and cognitive variables associated with the initial phase of the disease, a step towards personalized therapies.
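The derivation/validation setup this abstract describes can be sketched as follows, using synthetic stand-in data (not study data): a gradient-boosting classifier is fit on a derivation split to predict the binary decline label, and the named metrics (balanced accuracy, MCC, AUC) are computed on the held-out validation split only.

```python
# Hedged sketch of the outcome-prediction setup: predict a binary MMSE
# decline label (> 3 points lost by T3) from baseline variables.
# Features here are synthetic stand-ins for demographics, ADL/IADL,
# comorbidities and cognitive equivalent scores.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (balanced_accuracy_score, matthews_corrcoef,
                             roc_auc_score)

# 126 "patients", 12 baseline predictors, a handful of them informative.
X, y = make_classification(n_samples=126, n_features=12, n_informative=5,
                           random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

model = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)
pred = model.predict(X_val)
proba = model.predict_proba(X_val)[:, 1]

# Metrics are computed on the validation split only, as in the study.
bacc = balanced_accuracy_score(y_val, pred)
mcc = matthews_corrcoef(y_val, pred)
auc = roc_auc_score(y_val, proba)
print(f"balanced accuracy={bacc:.3f}  MCC={mcc:.3f}  AUC={auc:.3f}")
```

A SHAP-type analysis on `model` (e.g. with a tree explainer from the `shap` package) would then attribute the prediction for each patient to individual baseline variables.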
Session-by-Session Prediction of Anti-Vascular Endothelial Growth Factor Injection Needs in Neovascular Age-Related Macular Degeneration Using Optical-Coherence-Tomography-Derived Features and Machine Learning
Background/Objectives: Neovascular age-related macular degeneration (nAMD) is a retinal disorder leading to irreversible central vision loss. The pro-re-nata (PRN) treatment for nAMD involves frequent intravitreal injections of anti-VEGF medications, placing a burden on patients and healthcare systems. Predicting injection needs at each monitoring session could optimize treatment outcomes and reduce unnecessary interventions. Methods: To achieve these aims, machine learning (ML) models were evaluated using different combinations of clinical variables, including retinal thickness and volume, best-corrected visual acuity, and features derived from macular optical coherence tomography (OCT). A "Leave Some Subjects Out" (LSSO) nested cross-validation approach ensured robust evaluation. Moreover, SHapley Additive exPlanations (SHAP) analysis was employed to quantify the contribution of each feature to the model predictions. Results: Models incorporating both structural and functional features achieved high classification accuracy in predicting injection necessity (AUC = 0.747 ± 0.046, MCC = 0.541 ± 0.073). Moreover, the explainability analysis identified both subretinal and intraretinal fluid, alongside central retinal thickness, as key predictors. Conclusions: These findings suggest that session-by-session prediction of injection needs in nAMD patients is feasible, even without processing the entire OCT image. The proposed ML framework has the potential to be integrated into routine clinical workflows, thereby optimizing nAMD therapeutic management.
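The "Leave Some Subjects Out" nested cross-validation named above can be sketched with scikit-learn's grouped splitters, under the assumption that each patient contributes several monitoring sessions: all of a subject's sessions stay on one side of every split, so the model is never evaluated on a patient it trained on. Data, feature counts and the inner hyperparameter grid are illustrative.

```python
# Sketch of subject-grouped nested cross-validation (synthetic data).
# Inner loop: hyperparameter tuning; outer loop: generalization to
# entirely unseen subjects. Not the authors' exact protocol.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, GridSearchCV

rng = np.random.default_rng(1)
n_subjects, sessions_each = 20, 6
groups = np.repeat(np.arange(n_subjects), sessions_each)   # subject IDs
X = rng.normal(size=(len(groups), 8))                      # OCT-derived features
# Toy label "injection needed this session", driven by one feature.
y = (X[:, 0] + 0.5 * rng.normal(size=len(groups)) > 0).astype(int)

accs = []
for tr, te in GroupKFold(n_splits=5).split(X, y, groups):
    # Inner grouped CV tunes regularization on the training subjects only.
    search = GridSearchCV(LogisticRegression(max_iter=1000),
                          {"C": [0.1, 1.0, 10.0]},
                          cv=GroupKFold(n_splits=3))
    search.fit(X[tr], y[tr], groups=groups[tr])
    accs.append(search.score(X[te], y[te]))   # score on held-out subjects

mean_acc = float(np.mean(accs))
print(f"LSSO accuracy: {mean_acc:.2f} +/- {np.std(accs):.2f}")
```

Without the `groups` argument, sessions from one patient could leak across the train/test boundary and inflate the apparent accuracy, which is exactly what the LSSO scheme guards against.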
Visual imagery during real-time fMRI neurofeedback from occipital and superior parietal cortex
Visual imagery has been suggested to recruit occipital cortex via feedback projections from fronto-parietal regions; these feedback projections might therefore be exploited to boost recruitment of occipital cortex by means of real-time neurofeedback. To test this prediction, we instructed a group of healthy participants to perform peripheral visual imagery while they received real-time auditory feedback based on the BOLD signal from either early visual cortex or the medial superior parietal lobe. We examined the amplitude and temporal aspects of the BOLD response in the two regions. Moreover, we compared the impact of self-rated mental focus and vividness of visual imagery on the BOLD responses in these two areas. We found that both early visual cortex and the medial superior parietal cortex are susceptible to auditory neurofeedback within a single feedback session per region. However, the signal in parietal cortex was sustained for a longer time than the signal in occipital cortex. Moreover, the BOLD signal in the medial superior parietal lobe was more affected by focus and vividness of the visual imagery than that in early visual cortex. Our results thus demonstrate (a) that participants can learn to self-regulate the BOLD signal in early visual and parietal cortex within a single session, (b) that different nodes in the visual imagery network respond differently to neurofeedback, and (c) that responses in parietal, but not occipital, cortex are susceptible to self-rated vividness of mental imagery. Together, these results suggest that medial superior parietal cortex might be a suitable candidate region for providing real-time feedback to patients suffering from visual field defects.

Highlights:
• Neurofeedback helps control both early visual cortex and parietal cortex via imagery.
• Large influence of subjective vividness and focus on brain activity.
• Parietal cortex more affected by focus and vividness.
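One common way (shown here as a hypothetical mapping, not the authors' actual feedback rule) to turn an ROI's real-time BOLD signal into auditory feedback is to compute percent signal change against a rest baseline on each TR and rescale it to a tone frequency:

```python
# Hypothetical sketch of a real-time neurofeedback mapping: percent
# signal change (PSC) of an ROI relative to a rest baseline is clipped
# and rescaled to a tone pitch in Hz. All signal values are simulated.
import numpy as np

def feedback_pitch(roi_signal, baseline, lo=220.0, hi=880.0, max_psc=3.0):
    """Map PSC (clipped to [0, max_psc] percent) onto a tone frequency."""
    psc = 100.0 * (roi_signal - baseline) / baseline
    frac = np.clip(psc, 0.0, max_psc) / max_psc
    return lo + frac * (hi - lo)

baseline = 1000.0                       # mean ROI signal during rest block
rng = np.random.default_rng(2)
# Simulated imagery block: signal ramps up ~2% above baseline, plus noise.
tr_signals = baseline * (1 + 0.02 * np.linspace(0, 1, 10)) + rng.normal(0, 1, 10)
pitches = [feedback_pitch(s, baseline) for s in tr_signals]
print([round(p) for p in pitches])      # pitch rises with ROI activity
```

In a scanner-side implementation the `roi_signal` would come from the online reconstruction of each volume, averaged over the occipital or parietal ROI defined in a localizer run.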
Coherent Cross-modal Generation of Synthetic Biomedical Data to Advance Multimodal Precision Medicine
Integration of multimodal, multi-omics data is critical for advancing precision medicine, yet its application is frequently limited by incomplete datasets where one or more modalities are missing. To address this challenge, we developed a generative framework capable of synthesizing any missing modality from an arbitrary subset of available modalities. We introduce Coherent Denoising, a novel ensemble-based generative diffusion method that aggregates predictions from multiple specialized, single-condition models and enforces consensus during the sampling process. We compare this approach against a multi-condition generative model that uses a flexible masking strategy to handle arbitrary subsets of inputs. The results show that our architectures successfully generate high-fidelity data that preserve the complex biological signals required for downstream tasks. We demonstrate that the generated synthetic data can be used to maintain the performance of predictive models on incomplete patient profiles and can support counterfactual analyses that guide the prioritization of diagnostic tests. We validated the framework’s efficacy on a large-scale multimodal, multi-omics cohort from The Cancer Genome Atlas (TCGA) of over 10,000 samples spanning 20 tumor types, using data modalities such as copy-number alterations (CNA), transcriptomics (RNA-Seq), proteomics (RPPA), and histopathology whole-slide images (WSI). This work establishes a robust and flexible generative framework to address sparsity in multimodal datasets, providing a key step toward improving precision oncology.
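The consensus idea behind Coherent Denoising can be illustrated with a deliberately simplified toy: several single-condition "denoisers" each propose an estimate of the missing modality, and every sampling step moves the current sample toward their averaged prediction. The linear denoisers and the update rule below are placeholder assumptions standing in for trained diffusion models and a proper DDPM update; this is not the paper's implementation.

```python
# Toy consensus-sampling sketch (placeholder for Coherent Denoising):
# average the per-model denoised estimates at each step, then move the
# sample partway toward that consensus.
import numpy as np

rng = np.random.default_rng(3)
target = rng.normal(size=16)            # ground-truth "missing modality"

def make_denoiser(bias_scale):
    """A fake single-condition model: noisy pull toward the target.
    Stands in for, e.g., a CNA-, RNA-Seq- or WSI-conditioned network."""
    offset = rng.normal(0.0, bias_scale, size=16)   # model-specific bias
    return lambda x_t: target + offset + 0.1 * x_t

denoisers = [make_denoiser(0.3) for _ in range(3)]

x = rng.normal(size=16)                 # start from pure noise
for _ in range(20):
    # Consensus across the ensemble, then a damped move toward it.
    consensus = np.mean([d(x) for d in denoisers], axis=0)
    x = 0.5 * x + 0.5 * consensus

err = np.linalg.norm(x - target) / np.linalg.norm(target)
print(f"relative error after consensus sampling: {err:.2f}")
```

Averaging across conditionally specialized models is what enforces coherence here: no single model's bias dominates the sample, which mirrors (in miniature) the ensemble consensus the abstract describes.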