Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
13 result(s) for "Tafti, Pouya"
Combining multimodal imaging and treatment features improves machine learning‐based prognostic assessment in patients with glioblastoma multiforme
2019
Background: For Glioblastoma (GBM), various prognostic nomograms have been proposed. This study aims to evaluate machine learning models to predict patients' overall survival (OS) and progression-free survival (PFS) on the basis of clinical, pathological, semantic MRI-based, and FET-PET/CT-derived information. Finally, the value of adding treatment features was evaluated. Methods: One hundred and eighty-nine patients were retrospectively analyzed. We assessed clinical, pathological, and treatment information. The VASARI set of semantic imaging features was determined on MRIs. Metabolic information was retained from preoperative FET-PET/CT images. We generated multiple random survival forest prediction models on a patient training set and performed internal validation. Single feature class models were created, including "clinical," "pathological," "MRI-based," and "FET-PET/CT-based" models, as well as combinations. Treatment features were combined with all other features. Results: Of all single feature class models, the MRI-based model had the highest prediction performance on the validation set for OS (C-index: 0.61 [95% confidence interval: 0.51-0.72]) and PFS (C-index: 0.61 [0.50-0.72]). The combination of all features did increase performance above all single feature class models, up to C-indices of 0.70 (0.59-0.84) and 0.68 (0.57-0.78) for OS and PFS, respectively. Adding treatment information further increased prognostic performance, up to C-indices of 0.73 (0.62-0.84) and 0.71 (0.60-0.81) on the validation set for OS and PFS, respectively, allowing significant stratification of patient groups for OS. Conclusions: MRI-based features were the most relevant feature class for prognostic assessment. Combining clinical, pathological, and imaging information increased predictive power for OS and PFS. A further increase was achieved by adding treatment features. In comparison with clinical, pathological, and FET-PET-based features, semantic MRI-based (VASARI) features showed the best performance predicting OS and PFS in GBM patients. Combining all features yielded improved predictive performance above single feature class models. Adding treatment information to the combined model achieved the best predictive performance in an internal validation cohort, with a concordance index of up to 0.74 and 0.72 for OS and PFS, respectively.
Journal Article
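The abstract above describes random survival forest models scored with the concordance index (C-index). The following is a minimal sketch of that kind of workflow, assuming the scikit-survival library; the feature matrix and survival outcomes are synthetic placeholders, not the authors' clinical, MRI, or FET-PET features.

```python
# Minimal sketch of a random-survival-forest prognosis workflow, as described in
# the abstract above. Assumes scikit-survival; features and outcomes are
# hypothetical placeholders, not the study's clinical/MRI/FET-PET feature set.
import numpy as np
from sklearn.model_selection import train_test_split
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import concordance_index_censored

rng = np.random.default_rng(0)
n = 189  # cohort size mentioned in the abstract

# Hypothetical feature matrix: clinical, pathological, imaging, treatment columns.
X = rng.normal(size=(n, 10))

# Structured survival outcome: event indicator plus observed time (e.g., months).
y = np.empty(n, dtype=[("event", bool), ("time", float)])
y["event"] = rng.random(n) < 0.7
y["time"] = rng.exponential(scale=15.0, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rsf = RandomSurvivalForest(n_estimators=500, min_samples_leaf=10, random_state=0)
rsf.fit(X_train, y_train)

# Concordance index (C-index) on the held-out split, the metric reported above.
risk = rsf.predict(X_test)
cindex = concordance_index_censored(y_test["event"], y_test["time"], risk)[0]
print(f"C-index: {cindex:.2f}")
```

With random placeholder data the C-index hovers around 0.5; the point of the sketch is only the shape of the pipeline (structured survival labels, forest fit, C-index validation).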
Treatment-related features improve machine learning prediction of prognosis in soft tissue sarcoma patients
by Knie, Christoph; Komboz, Basil; Rost, Burkhard
in Artificial intelligence, Machine learning, Mathematical models
2018
Background and purpose: Current prognostic models for soft tissue sarcoma (STS) patients are based solely on staging information. Treatment-related data have not been included to date. Including such information, however, could help to improve these models. Materials and methods: A single-center retrospective cohort of 136 STS patients treated with radiotherapy (RT) was analyzed for patients' characteristics, staging information, and treatment-related data. Therapeutic imaging studies and pathology reports of neoadjuvantly treated patients were analyzed for signs of response. Random forest machine learning-based models were used to predict patients' death and disease progression at 2 years. Pre-treatment and treatment models were compared. Results: The prognostic models achieved high performance. Using treatment features improved the overall performance for all three classification types: prediction of death, and of local and systemic progression (area under the receiver operating characteristic curve (AUC) of 0.87, 0.88, and 0.84, respectively). Overall, RT-related features, such as the planning target volume and total dose, had preeminent importance for prognostic performance. Therapy response features were selected for prediction of disease progression. Conclusions: A machine learning-based prognostic model combining known prognostic factors with treatment- and response-related information showed high accuracy for individualized risk assessment. This model could be used for adjustments of follow-up procedures.
Journal Article
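The entry above describes random forest classifiers predicting death and disease progression at two years, evaluated by AUC. Below is a minimal sketch of such a pipeline, assuming scikit-learn; the features and labels are synthetic placeholders rather than the study's staging, radiotherapy, and response variables.

```python
# Minimal sketch of the kind of random-forest classifier and AUC evaluation the
# abstract above describes. Uses scikit-learn; the feature set (staging, planning
# target volume, total dose, response signs) is hypothetical placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 136  # cohort size mentioned in the abstract

X = rng.normal(size=(n, 8))            # staging plus treatment-related features
y = (rng.random(n) < 0.4).astype(int)  # e.g. death within 2 years (placeholder)

clf = RandomForestClassifier(n_estimators=500, random_state=0)

# Cross-validated probabilities, then the AUC metric reported in the abstract.
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print(f"AUC: {roc_auc_score(y, proba):.2f}")
```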
Fractional Brownian Vector Fields
2010
This work puts forward an extended definition of vector fractional Brownian motion (fBm) using a distribution theoretic formulation in the spirit of Gelfand and Vilenkin's stochastic analysis. We introduce random vector fields that share the statistical invariances of standard vector fBm (self-similarity and rotation invariance) but which, in contrast, have dependent vector components in the general case. These random vector fields result from the transformation of white noise by a special operator whose invariance properties the random field inherits. The said operator combines an inverse fractional Laplacian with a Helmholtz-like decomposition and weighted recombination. Classical fBm's can be obtained by balancing the weights of the Helmholtz components. The introduced random fields exhibit several important properties that are discussed in this paper. In addition, the proposed scheme yields a natural extension of the definition to Hurst exponents greater than one.
Journal Article
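The abstract above constructs vector fBm by shaping white noise with an operator combining an inverse fractional Laplacian, a Helmholtz-like decomposition, and weighted recombination. The sketch below is a purely numerical, FFT-based illustration of that idea in two dimensions; the exponent convention (H + d/2) and the weighting scheme are assumptions and do not reproduce the paper's exact distribution-theoretic operator.

```python
# Illustrative FFT-based synthesis of a 2-D random vector field in the spirit of
# the construction described above: white noise shaped by an inverse fractional
# Laplacian combined with weighted Helmholtz (curl-free / divergence-free)
# projections. A numerical sketch only; the exponent (H + d/2) and the weighting
# are assumptions, not the paper's exact operator.
import numpy as np

def fbm_vector_field(n=256, H=0.7, w_irr=1.0, w_sol=1.0, seed=0):
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n) * 2 * np.pi
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0  # avoid division by zero at the DC component

    # White Gaussian noise for the two vector components, in the Fourier domain.
    W = np.fft.fft2(rng.normal(size=(2, n, n)))

    # Helmholtz projections: curl-free (irrotational) and divergence-free parts.
    ex, ey = kx / np.sqrt(k2), ky / np.sqrt(k2)
    W_irr_x = ex * (ex * W[0] + ey * W[1])
    W_irr_y = ey * (ex * W[0] + ey * W[1])
    W_sol_x, W_sol_y = W[0] - W_irr_x, W[1] - W_irr_y

    # Inverse fractional Laplacian (spectral power law) and weighted recombination.
    amp = k2 ** (-(H + 1.0) / 2.0)  # d = 2, assumed exponent (H + d/2)
    amp[0, 0] = 0.0
    fx = np.fft.ifft2(amp * (w_irr * W_irr_x + w_sol * W_sol_x)).real
    fy = np.fft.ifft2(amp * (w_irr * W_irr_y + w_sol * W_sol_y)).real
    return fx, fy

fx, fy = fbm_vector_field()
print(fx.shape, fy.shape)
```

Setting w_irr and w_sol equal corresponds to the balanced case the abstract associates with classical fBm components.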
Gemini: A Family of Highly Capable Multimodal Models
2025
This report introduces a new family of multimodal models, Gemini, that exhibit remarkable capabilities across image, audio, video, and text understanding. The Gemini family consists of Ultra, Pro, and Nano sizes, suitable for applications ranging from complex reasoning tasks to on-device memory-constrained use-cases. Evaluation on a broad range of benchmarks shows that our most-capable Gemini Ultra model advances the state of the art in 30 of 32 of these benchmarks - notably being the first model to achieve human-expert performance on the well-studied exam benchmark MMLU, and improving the state of the art in every one of the 20 multimodal benchmarks we examined. We believe that the new capabilities of the Gemini family in cross-modal reasoning and language understanding will enable a wide variety of use cases. We discuss our approach toward post-training and deploying Gemini models responsibly to users through services including Gemini, Gemini Advanced, Google AI Studio, and Cloud Vertex AI.
Gemma 3 Technical Report
2025
We introduce Gemma 3, a multimodal addition to the Gemma family of lightweight open models, ranging in scale from 1 to 27 billion parameters. This version introduces vision understanding abilities, a wider coverage of languages and longer context - at least 128K tokens. We also change the architecture of the model to reduce the KV-cache memory that tends to explode with long context. This is achieved by increasing the ratio of local to global attention layers, and keeping the span on local attention short. The Gemma 3 models are trained with distillation and achieve superior performance to Gemma 2 for both pre-trained and instruction finetuned versions. In particular, our novel post-training recipe significantly improves the math, chat, instruction-following and multilingual abilities, making Gemma3-4B-IT competitive with Gemma2-27B-IT and Gemma3-27B-IT comparable to Gemini-1.5-Pro across benchmarks. We release all our models to the community.
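The Gemma 3 abstract attributes the KV-cache savings to a higher ratio of local to global attention layers and a short local span. The back-of-the-envelope estimate below illustrates that effect; every number in it (layer count, head configuration, ratios, window sizes) is an illustrative assumption, not Gemma 3's published configuration.

```python
# Back-of-the-envelope KV-cache estimate illustrating the trade-off described in
# the abstract above: more local (sliding-window) layers and a short local span
# shrink the cache that otherwise grows linearly with context length. All numbers
# below are illustrative assumptions, not Gemma 3's published configuration.
def kv_cache_bytes(context_len, n_layers, local_ratio, local_window,
                   n_kv_heads=8, head_dim=128, bytes_per_value=2):
    n_local = round(n_layers * local_ratio / (local_ratio + 1))
    n_global = n_layers - n_local
    per_token = 2 * n_kv_heads * head_dim * bytes_per_value  # keys + values
    local = n_local * min(context_len, local_window) * per_token
    global_ = n_global * context_len * per_token
    return local + global_

ctx = 128_000  # long-context setting mentioned in the abstract
for ratio, window in [(1, 4096), (5, 1024)]:  # hypothetical local:global ratios
    gib = kv_cache_bytes(ctx, n_layers=32, local_ratio=ratio, local_window=window) / 2**30
    print(f"local:global = {ratio}:1, window = {window}: ~{gib:.1f} GiB")
```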
Gemma 2: Improving Open Language Models at a Practical Size
2024
In this work, we introduce Gemma 2, a new addition to the Gemma family of lightweight, state-of-the-art open models, ranging in scale from 2 billion to 27 billion parameters. In this new version, we apply several known technical modifications to the Transformer architecture, such as interleaving local-global attentions (Beltagy et al., 2020a) and group-query attention (Ainslie et al., 2023). We also train the 2B and 9B models with knowledge distillation (Hinton et al., 2015) instead of next token prediction. The resulting models deliver the best performance for their size, and even offer competitive alternatives to models that are 2-3 times bigger. We release all our models to the community.
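Gemma 2's abstract notes that the 2B and 9B models were trained with knowledge distillation (Hinton et al., 2015) rather than plain next-token prediction. The snippet below sketches that objective in its textbook form, using PyTorch as an assumed framework; it is not the authors' training code.

```python
# Minimal sketch of the knowledge-distillation objective (Hinton et al., 2015)
# referenced in the abstract above: the student matches the teacher's softened
# token distribution instead of only the one-hot next token. PyTorch is an
# assumed choice here; this is not the authors' training code.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=1.0):
    # Flatten (batch, seq, vocab) -> (batch*seq, vocab): each position is one sample.
    v = student_logits.size(-1)
    log_p_student = F.log_softmax(student_logits.reshape(-1, v) / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits.reshape(-1, v) / temperature, dim=-1)
    # KL(teacher || student), averaged over positions; T^2 keeps the gradient scale.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2

# Toy shapes: (batch, sequence, vocab).
student = torch.randn(2, 16, 256)
teacher = torch.randn(2, 16, 256)
print(distillation_loss(student, teacher).item())
```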
RecurrentGemma: Moving Past Transformers for Efficient Open Language Models
2024
We introduce RecurrentGemma, a family of open language models which uses Google's novel Griffin architecture. Griffin combines linear recurrences with local attention to achieve excellent performance on language. It has a fixed-sized state, which reduces memory use and enables efficient inference on long sequences. We provide two sizes of models, containing 2B and 9B parameters, and provide pre-trained and instruction tuned variants for both. Our models achieve comparable performance to similarly-sized Gemma baselines despite being trained on fewer tokens.
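The RecurrentGemma abstract credits Griffin's linear recurrences with keeping a fixed-size state, and therefore constant memory per decoding step. The toy recurrence below illustrates that property only; it is not the actual Griffin/RG-LRU block.

```python
# Toy element-wise gated linear recurrence illustrating the fixed-size state the
# abstract above mentions: memory per step stays O(state_dim) regardless of
# sequence length, unlike a growing KV-cache. An illustrative recurrence only,
# not the actual Griffin/RG-LRU block.
import numpy as np

def linear_recurrence(x, decay):
    """x: (seq_len, dim) inputs; decay: (dim,) gates in (0, 1)."""
    h = np.zeros(x.shape[1])
    outputs = []
    for x_t in x:                       # one pass, constant-size state h
        h = decay * h + (1.0 - decay) * x_t
        outputs.append(h.copy())
    return np.stack(outputs)

rng = np.random.default_rng(0)
y = linear_recurrence(rng.normal(size=(1000, 64)), decay=np.full(64, 0.9))
print(y.shape)  # (1000, 64) outputs from a 64-dimensional recurrent state
```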
Gemini: A Family of Highly Capable Multimodal Models
2024
This report introduces a new family of multimodal models, Gemini, that exhibit remarkable capabilities across image, audio, video, and text understanding. The Gemini family consists of Ultra, Pro, and Nano sizes, suitable for applications ranging from complex reasoning tasks to on-device memory-constrained use-cases. Evaluation on a broad range of benchmarks shows that our most-capable Gemini Ultra model advances the state of the art in 30 of 32 of these benchmarks - notably being the first model to achieve human-expert performance on the well-studied exam benchmark MMLU, and improving the state of the art in every one of the 20 multimodal benchmarks we examined. We believe that the new capabilities of the Gemini family in cross-modal reasoning and language understanding will enable a wide variety of use cases. We discuss our approach toward post-training and deploying Gemini models responsibly to users through services including Gemini, Gemini Advanced, Google AI Studio, and Cloud Vertex AI.
Gemma: Open Models Based on Gemini Research and Technology
2024
This work introduces Gemma, a family of lightweight, state-of-the art open models built from the research and technology used to create Gemini models. Gemma models demonstrate strong performance across academic benchmarks for language understanding, reasoning, and safety. We release two sizes of models (2 billion and 7 billion parameters), and provide both pretrained and fine-tuned checkpoints. Gemma outperforms similarly sized open models on 11 out of 18 text-based tasks, and we present comprehensive evaluations of safety and responsibility aspects of the models, alongside a detailed description of model development. We believe the responsible release of LLMs is critical for improving the safety of frontier models, and for enabling the next wave of LLM innovations.
A unified formulation of Gaussian vs. sparse stochastic processes - Part I: Continuous-domain theory
by Sun, Qiyu; Tafti, Pouya D; Unser, Michael
in Decoupling, Differential equations, Finite difference method
2012
We introduce a general distributional framework that results in a unifying description and characterization of a rich variety of continuous-time stochastic processes. The cornerstone of our approach is an innovation model that is driven by some generalized white noise process, which may be Gaussian or not (e.g., Laplace, impulsive Poisson or alpha-stable). This allows for a conceptual decoupling between the correlation properties of the process, which are imposed by the whitening operator L, and its sparsity pattern, which is determined by the type of noise excitation. The latter is fully specified by a Lévy measure. We show that the range of admissible innovation behavior varies between the purely Gaussian and super-sparse extremes. We prove that the corresponding generalized stochastic processes are well-defined mathematically provided that the (adjoint) inverse of the whitening operator satisfies some Lp bound for p ≥ 1. We present a novel operator-based method that yields an explicit characterization of all Lévy-driven processes that are solutions of constant-coefficient stochastic differential equations. When the underlying system is stable, we recover the family of stationary CARMA processes, including the Gaussian ones. The approach remains valid when the system is unstable and leads to the identification of potentially useful generalizations of the Lévy processes, which are sparse and non-stationary. Finally, we show how we can apply finite difference operators to obtain a stationary characterization of these processes that is maximally decoupled and stable, irrespective of the location of the poles in the complex plane.
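The abstract above frames each process as the solution of a whitening equation L s = w driven by Gaussian or sparse (e.g., impulsive Poisson) white noise. The toy simulation below contrasts those two regimes for a first-order operator L = d/dt + alpha via a crude Euler discretization; it is an illustration only, not the paper's continuous-domain construction, and all parameter values are arbitrary.

```python
# Toy discretization contrasting the Gaussian and sparse (impulsive Poisson)
# regimes of the innovation model described above: the same first-order whitening
# operator L = d/dt + alpha is inverted against two different white-noise
# excitations. Illustrative Euler simulation only; parameter values are arbitrary.
import numpy as np

def simulate(innovations, alpha=0.5, dt=0.01):
    # Euler scheme for ds/dt + alpha * s = w  (a CARMA(1,0)-type process).
    s = np.zeros(len(innovations))
    for k in range(1, len(innovations)):
        s[k] = s[k - 1] + dt * (-alpha * s[k - 1]) + innovations[k]
    return s

rng = np.random.default_rng(0)
n, dt = 10_000, 0.01

# Gaussian white-noise increments vs. sparse compound-Poisson impulses.
w_gauss = rng.normal(scale=np.sqrt(dt), size=n)
jumps = rng.random(n) < 0.002                      # rare impulse locations
w_poisson = np.where(jumps, rng.normal(scale=1.0, size=n), 0.0)

s_gauss, s_sparse = simulate(w_gauss), simulate(w_poisson)
print(s_gauss.std(), s_sparse.std())
```

The Gaussian excitation produces a stationary Ornstein-Uhlenbeck-like path, while the impulsive excitation yields a piecewise-decaying, visually sparse trajectory, mirroring the Gaussian vs. sparse contrast the abstract draws.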