Catalogue Search | MBRL
38 result(s) for "Roosta, Fred"
Generalising uncertainty improves accuracy and safety of deep learning analytics applied to oncology
2023
Uncertainty estimation is crucial for understanding the reliability of deep learning (DL) predictions, and critical for deploying DL in the clinic. Differences between training and production datasets can lead to incorrect predictions with underestimated uncertainty. To investigate this pitfall, we benchmarked one pointwise and three approximate Bayesian DL models for predicting cancer of unknown primary, using three RNA-seq datasets with 10,968 samples across 57 cancer types. Our results highlight that simple and scalable Bayesian DL significantly improves the generalisation of uncertainty estimation. Moreover, we designed a prototypical metric—the area between development and production curve (ADP), which evaluates the accuracy loss when deploying models from development to production. Using ADP, we demonstrate that Bayesian DL improves accuracy under data distributional shifts when utilising ‘uncertainty thresholding’. In summary, Bayesian DL is a promising approach for generalising uncertainty, improving performance, transparency, and safety of DL models for deployment in the real world.
Journal Article
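The 'uncertainty thresholding' idea in the abstract above can be sketched with synthetic data: abstain on predictions whose uncertainty exceeds a threshold and measure accuracy only on the rest. Everything below (the `accuracy_at_threshold` helper and the toy labels) is an illustrative assumption, not code or data from the paper.

```python
import numpy as np

def accuracy_at_threshold(preds, labels, uncertainty, tau):
    """Accuracy over samples whose predictive uncertainty is below tau
    (hypothetical illustration of 'uncertainty thresholding')."""
    keep = uncertainty < tau
    if not keep.any():
        return float("nan")
    return float((preds[keep] == labels[keep]).mean())

# Toy data: wrong predictions tend to carry higher uncertainty.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=1000)
preds = labels.copy()
wrong = rng.random(1000) < 0.2                # roughly 20% errors
preds[wrong] = (labels[wrong] + 1) % 3
uncertainty = np.where(wrong,
                       rng.uniform(0.5, 1.0, 1000),   # errors: high uncertainty
                       rng.uniform(0.0, 0.6, 1000))   # correct: mostly low

acc_all = accuracy_at_threshold(preds, labels, uncertainty, 1.1)  # keep everything
acc_thr = accuracy_at_threshold(preds, labels, uncertainty, 0.5)  # abstain on uncertain
```

On this toy data, tightening the threshold raises accuracy on the retained samples, which is the behaviour the ADP-style analysis quantifies.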
Newton-type methods for non-convex optimization under inexact Hessian information
2020
We consider variants of trust-region and adaptive cubic regularization methods for non-convex optimization, in which the Hessian matrix is approximated. Under certain conditions on the inexact Hessian, and using approximate solutions of the corresponding sub-problems, we provide iteration complexity bounds to achieve ε-approximate second-order optimality, which have been shown to be tight. Our Hessian approximation condition offers a range of advantages compared with prior works and allows for direct construction of the approximate Hessian with a priori guarantees through various techniques, including randomized sampling methods. In this light, we consider the canonical problem of finite-sum minimization, provide appropriate uniform and non-uniform sub-sampling strategies to construct such Hessian approximations, and obtain optimal iteration complexity for the corresponding sub-sampled trust-region and adaptive cubic regularization methods.
Journal Article
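The uniform sub-sampling strategy mentioned in the abstract can be sketched for a finite-sum objective: the exact Hessian is the average of the component Hessians, and a sub-sampled approximation averages a random subset. The quadratic components below are a hypothetical stand-in, not the paper's experimental setup.

```python
import numpy as np

# Hypothetical finite-sum: f(x) = (1/n) * sum_i 0.5 * x^T A_i x,
# so the exact Hessian is the average of the symmetric matrices A_i.
rng = np.random.default_rng(1)
n, d = 500, 10
A = rng.standard_normal((n, d, d))
A = (A + A.transpose(0, 2, 1)) / 2        # symmetrize each component

def subsampled_hessian(A, s, rng):
    """Uniform sub-sampling: average s component Hessians drawn at random."""
    idx = rng.integers(0, len(A), size=s)
    return A[idx].mean(axis=0)

H_exact = A.mean(axis=0)
H_small = subsampled_hessian(A, 50, rng)    # cheap, noisier approximation
H_large = subsampled_hessian(A, 400, rng)   # costlier, typically closer

err_small = np.linalg.norm(H_small - H_exact)
err_large = np.linalg.norm(H_large - H_exact)
```

Larger sample sizes shrink the approximation error in expectation, which is what lets the method trade per-iteration cost against Hessian accuracy.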
Robust and interpretable prediction of gene markers and cell types from spatial transcriptomics data
2026
Spatial transcriptomics (ST) links tissue morphology with gene expression values, opening new avenues for digital pathology. Deep learning models are used to predict gene expression or classify cell types directly from images, offering significant clinical potential but still requiring improvements in interpretability and robustness. We present STimage as a comprehensive suite of models to predict spatial gene expression and classify cell types directly from standard H&E images. STimage enhances robustness by estimating gene expression distributions and quantifying both data-driven (aleatoric) and model-based (epistemic) uncertainty using an ensemble approach with foundation models. Interpretability is achieved through attribution analysis at single-cell resolution integrated with histopathological annotations, functional genes, and latent representations. We validated STimage across diverse datasets, demonstrating its performance across various platforms. STimage-predicted gene expression can stratify patient survival and predict drug response. By enabling molecular and cellular prediction from routine histology, STimage offers a powerful tool to advance digital pathology.
Journal Article
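The aleatoric/epistemic split via an ensemble, as described in the abstract, is commonly computed as the mean of the members' predicted variances (data noise) plus the variance of their predicted means (model disagreement). The numbers below are invented for illustration; STimage's exact formulation may differ.

```python
import numpy as np

# Hypothetical ensemble: each row is one member's predictions for two samples.
means = np.array([[1.0, 2.0],     # member 1: predicted mean expression
                  [1.2, 1.8],     # member 2
                  [0.8, 2.2]])    # member 3
vars_ = np.array([[0.5, 0.4],     # member 1: predicted variance (data noise)
                  [0.6, 0.5],
                  [0.4, 0.3]])

aleatoric = vars_.mean(axis=0)    # average data-driven noise per sample
epistemic = means.var(axis=0)     # disagreement between members per sample
total = aleatoric + epistemic
```

Samples where the members disagree strongly get high epistemic uncertainty even when each member individually reports low noise.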
Evolution and application of digital technologies to predict crop type and crop phenology in agriculture
by Dang, Yash P; Potgieter, Andries B; Chapman, Scott
in Agricultural industry; Agriculture; Cloud computing
2021
The downside risk of crop production affects the entire supply chain of the agricultural industry nationally and globally. This also has a profound impact on food security, and thus livelihoods, in many parts of the world. The advent of high temporal, spatial and spectral resolution remote sensing platforms, specifically during the last 5 years, and the advancement in software pipelines and cloud computing have resulted in the collating, analysing and application of ‘BIG DATA’ systems, especially in agriculture. Furthermore, the application of traditional and novel computational and machine learning approaches is assisting in resolving complex interactions, to reveal components of ecophysiological systems that were previously deemed either ‘too difficult’ to solve or ‘unseen’. In this review, digital technologies encompass mathematical, computational, proximal and remote sensing technologies. Here, we review the current state of digital technologies and their application in broad-acre cropping systems globally and in Australia. More specifically, we discuss the advances in (i) remote sensing platforms, (ii) machine learning approaches to discriminate between crops and (iii) the prediction of crop phenological stages from both sensing and crop simulation systems for major Australian winter crops. An integrated solution is proposed to allow accurate development, validation and scalability of predictive tools for crop phenology mapping at within-field scales, across extensive cropping areas.
Journal Article
Complexity Guarantees for Nonconvex Newton-MR Under Inexact Hessian Information
2024
We consider an extension of the Newton-MR algorithm for nonconvex unconstrained optimization to the settings where Hessian information is approximated. Under a particular noise model on the Hessian matrix, we investigate the iteration and operation complexities of this variant to achieve appropriate sub-optimality criteria in several nonconvex settings. We do this by first considering functions that satisfy the (generalized) Polyak-Łojasiewicz condition, a special sub-class of nonconvex functions. We show that, under certain conditions, our algorithm achieves global linear convergence rate. We then consider more general nonconvex settings where the rate to obtain first order sub-optimality is shown to be sub-linear. In all these settings, we show that our algorithm converges regardless of the degree of approximation of the Hessian as well as the accuracy of the solution to the sub-problem. Finally, we compare the performance of our algorithm with several alternatives on a few machine learning problems.
Convergence of Newton-MR under Inexact Hessian Information
2020
Recently, there has been a surge of interest in designing variants of the classical Newton-CG in which the Hessian of a (strongly) convex function is replaced by suitable approximations. This is mainly motivated by large-scale finite-sum minimization problems that arise in many machine learning applications. Going beyond convexity, inexact Hessian information has also been recently considered in the context of algorithms such as trust-region or (adaptive) cubic regularization for general non-convex problems. Here, we do that for Newton-MR, which extends the application range of the classical Newton-CG beyond convexity to invex problems. Unlike the convergence analysis of Newton-CG, which relies on spectrum preserving Hessian approximations in the sense of Löwner partial order, our work here draws from matrix perturbation theory to estimate the distance between the subspaces underlying the exact and approximate Hessian matrices. Numerical experiments demonstrate a great degree of resilience to such Hessian approximations, amounting to a highly efficient algorithm in large-scale problems.
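One common way to quantify the distance between the subspaces underlying an exact and an approximate Hessian, in the spirit of the matrix-perturbation analysis mentioned above, is the spectral norm of the difference of the corresponding spectral projectors. This is a generic illustration with random matrices, not necessarily the paper's exact measure.

```python
import numpy as np

# Exact Hessian H and a small symmetric perturbation E (both hypothetical).
rng = np.random.default_rng(2)
H = rng.standard_normal((6, 6)); H = (H + H.T) / 2
E = 1e-3 * rng.standard_normal((6, 6)); E = (E + E.T) / 2

def leading_projector(M, k):
    """Orthogonal projector onto the span of M's top-k eigenvectors."""
    w, V = np.linalg.eigh(M)
    Vk = V[:, np.argsort(w)[-k:]]
    return Vk @ Vk.T

# Subspace distance between exact and perturbed leading eigenspaces.
dist = np.linalg.norm(leading_projector(H, 3) - leading_projector(H + E, 3), 2)
```

For equal-rank orthogonal projectors this distance always lies in [0, 1], and it is small whenever the perturbation is small relative to the eigengap.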
Inexact Newton-type Methods for Optimisation with Nonnegativity Constraints
by Roosta, Fred; Smee, Oscar
in Feasibility; Ill-conditioned problems (mathematics); Machine learning
2024
We consider solving large scale nonconvex optimisation problems with nonnegativity constraints. Such problems arise frequently in machine learning, such as nonnegative least-squares, nonnegative matrix factorisation, as well as problems with sparsity-inducing regularisation. In such settings, first-order methods, despite their simplicity, can be prohibitively slow on ill-conditioned problems or become trapped near saddle regions, while most second-order alternatives involve non-trivially challenging subproblems. The two-metric projection framework, initially proposed by Bertsekas (1982), alleviates these issues and achieves the best of both worlds by combining projected gradient steps at the boundary of the feasible region with Newton steps in the interior in such a way that feasibility can be maintained by simple projection onto the nonnegative orthant. We develop extensions of the two-metric projection framework, which by inexactly solving the subproblems as well as employing non-positive curvature directions, are suitable for large scale and nonconvex settings. We obtain state-of-the-art convergence rates for various classes of non-convex problems and demonstrate competitive practical performance on a variety of problems.
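The two-metric projection idea described above can be sketched on a convex quadratic with nonnegativity constraints: take gradient steps on variables pinned at the boundary, a Newton step on the free variables, then project onto the nonnegative orthant. This is a simplified illustration (exact Newton solve, no inexactness or negative-curvature handling), not the paper's algorithm.

```python
import numpy as np

def two_metric_step(x, Q, b, alpha=0.1, eps=1e-8):
    """One sketch of a two-metric projection step for
    min 0.5 x^T Q x - b^T x  s.t.  x >= 0  (Q symmetric positive definite)."""
    g = Q @ x - b
    active = (x <= eps) & (g > 0)       # variables held at the boundary
    free = ~active
    step = np.zeros_like(x)
    step[active] = -alpha * g[active]   # simple gradient step on the boundary set
    if free.any():                      # Newton step on the interior (free) set
        step[free] = -np.linalg.solve(Q[np.ix_(free, free)], g[free])
    return np.maximum(x + step, 0.0)    # projection onto the nonnegative orthant

# Small example where the constraint is active at the solution x* = [0.25, 0].
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, -2.0])
x = np.ones(2)
for _ in range(50):
    x = two_metric_step(x, Q, b)
```

Feasibility is maintained by the cheap componentwise projection, which is the feature that makes the framework attractive at scale.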
A Newton-MR algorithm with complexity guarantees for nonconvex smooth unconstrained optimization
2022
In this paper, we consider variants of the Newton-MR algorithm for solving unconstrained, smooth, but non-convex optimization problems. Unlike the overwhelming majority of Newton-type methods, which rely on the conjugate gradient algorithm as the primary workhorse for their respective sub-problems, Newton-MR employs the minimum residual (MINRES) method. Recently, it has been established that MINRES has an inherent ability to detect non-positive curvature directions as soon as they arise and that certain useful monotonicity properties are satisfied before such detection. We leverage these recent results and show that our algorithms come with desirable properties, including competitive first- and second-order worst-case complexities. Numerical examples demonstrate the performance of our proposed algorithms.
MINRES: From Negative Curvature Detection to Monotonicity Properties
2022
The conjugate gradient method (CG) has long been the workhorse for inner-iterations of second-order algorithms for large-scale nonconvex optimization. Prominent examples include line-search based algorithms, e.g., Newton-CG, and those based on a trust-region framework, e.g., CG-Steihaug. This is mainly thanks to CG's several favorable properties, including certain monotonicity properties and its inherent ability to detect negative curvature directions, which can arise in nonconvex optimization. This is despite the fact that the iterative method-of-choice when it comes to real symmetric but potentially indefinite matrices is arguably the celebrated minimal residual (MINRES) method. However, limited understanding of similar properties implied by MINRES in such settings has restricted its applicability within nonconvex optimization algorithms. We establish several such nontrivial properties of MINRES, including certain useful monotonicity as well as an inherent ability to detect negative curvature directions. These properties allow MINRES to be considered as a potentially superior alternative to CG for all Newton-type nonconvex optimization algorithms that employ CG as their subproblem solver.
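The classic negative-curvature test inside CG that this abstract contrasts with MINRES can be sketched directly: halt as soon as a search direction p satisfies p^T A p <= 0 and return it. The MINRES analogue is what the paper establishes; the code below shows only the well-known CG side, on hypothetical matrices.

```python
import numpy as np

def cg_with_curvature_check(A, b, tol=1e-10, maxit=100):
    """Conjugate gradient on A x = b that halts when a direction of
    non-positive curvature (p^T A p <= 0) arises, as in Newton-CG.
    Returns (iterate, negative_curvature_direction_or_None)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for _ in range(maxit):
        curv = p @ (A @ p)
        if curv <= 0:
            return x, p                         # non-positive curvature detected
        alpha = (r @ r) / curv
        x = x + alpha * p
        r_new = r - alpha * (A @ p)
        if np.linalg.norm(r_new) < tol:
            return x, None                      # converged
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x, None

# Positive definite: CG converges and no curvature direction is flagged.
A_pd = np.array([[3.0, 1.0], [1.0, 2.0]])
x, d = cg_with_curvature_check(A_pd, np.array([1.0, 1.0]))

# Indefinite: a direction of non-positive curvature is returned instead.
A_ind = np.array([[1.0, 0.0], [0.0, -2.0]])
x2, d2 = cg_with_curvature_check(A_ind, np.array([0.0, 1.0]))
```

In Newton-type methods the flagged direction is then exploited, e.g. as a descent direction or to trigger a regularization step.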