Search Results

60 results for "Stepney, Susan"
Unsupervised self-organising map classification of Raman spectra from prostate cell lines uncovers substratified prostate cancer disease states
Prostate cancer is a disease which poses an interesting clinical question: should it be treated? Only a small subset of prostate cancers are aggressive and require removal and treatment to prevent metastatic spread. However, conventional diagnostics struggle to risk-stratify such patients; hence, new methods to biomolecularly sub-classify the disease are needed. Here we use an unsupervised self-organising map approach to analyse live-cell Raman spectroscopy data obtained from prostate cell lines; our aim is to demonstrate how this method can sub-stratify the cancer disease state at the single-cell level, using high-dimensional datasets with minimal preprocessing. The results demonstrate a new sub-clustering of the prostate cancer cell line into two groups, protein-rich and lipid-rich in their sub-cellular components, which we believe to be mechanistically linked. This finding shows the potential for unsupervised machine learning to discover distinct disease-state features for more accurate characterisation of highly heterogeneous prostate cancer. Applications may lead to more targeted diagnoses, prognoses and clinical treatment decisions via molecularly informed stratification that would benefit patients. A method that can discover mechanistically linked disease-state features could also assist in the development of more effective broad-spectrum treatments that simultaneously target linked disease-state processes.
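A minimal sketch of the kind of unsupervised self-organising map (SOM) clustering described above, applied to spectra-like data. The synthetic "spectra", the two hypothetical bands, and all parameter values are illustrative assumptions, not the authors' data or settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for Raman spectra: two latent cell sub-populations
# embedded in a high-dimensional feature space (e.g. wavenumber bins).
n_cells, n_bins = 200, 512
base = rng.normal(0, 0.05, (n_cells, n_bins))
base[:100, 100:150] += 1.0   # hypothetical "protein-rich" band
base[100:, 300:350] += 1.0   # hypothetical "lipid-rich" band
X = base[rng.permutation(n_cells)]

# Self-organising map: a small 2D grid of weight vectors.
gw, gh = 8, 8
W = rng.normal(0, 0.1, (gw * gh, n_bins))
coords = np.array([(i, j) for i in range(gw) for j in range(gh)], float)

def train_som(X, W, epochs=20, lr0=0.5, sigma0=3.0):
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 0.5
        for x in X:
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))     # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)  # grid distances to BMU
            h = np.exp(-d2 / (2 * sigma ** 2))              # neighbourhood kernel
            W += lr * h[:, None] * (x - W)                  # pull units toward x
    return W

W = train_som(X, W)
labels = np.array([np.argmin(((W - x) ** 2).sum(axis=1)) for x in X])
print("occupied map units:", len(set(labels)))
```

After training, cells that map to nearby units form clusters; inspecting which spectral bins dominate each cluster's weight vectors is one way such protein-rich versus lipid-rich groupings could surface.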
A substrate-independent framework to characterize reservoir computers
The reservoir computing (RC) framework states that any nonlinear, input-driven dynamical system (the reservoir) exhibiting properties such as fading memory and input separability can be trained to perform computational tasks. This broad inclusion of systems has led to many new physical substrates for RC. Properties essential for reservoirs to compute are tuned through reconfiguration of the substrate, such as a change in virtual topology or physical morphology. As a result, each substrate possesses a unique ‘quality’, obtained through reconfiguration, to realize different reservoirs for different tasks. Here we describe an experimental framework to characterize the quality of potentially any substrate for RC. Our framework reveals that a definition of quality is not only useful for comparing substrates, but can also help map the non-trivial relationship between properties and task performance. In the wider context, the framework offers a greater understanding of what makes a dynamical system compute, helping to improve the design of future substrates for RC.
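A hedged sketch of one task-independent reservoir measure of the kind such characterization frameworks use, "kernel rank": drive a simulated reservoir (here an echo state network as a stand-in for a physical substrate) with distinct input streams and take the rank of the resulting state matrix. All sizes and parameter values are illustrative, not the paper's actual protocol.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100                                  # reservoir size (assumed)
Win = rng.uniform(-0.5, 0.5, (N, 1))     # input weights
Wres = rng.normal(0, 1, (N, N))
Wres *= 0.9 / np.max(np.abs(np.linalg.eigvals(Wres)))  # spectral radius 0.9

def run(u):
    """Drive the reservoir with input stream u and return its final state."""
    x = np.zeros(N)
    for ut in u:
        x = np.tanh(Win[:, 0] * ut + Wres @ x)
    return x

streams = rng.uniform(-1, 1, (N, 200))   # N distinct random input streams
states = np.stack([run(u) for u in streams])
print("kernel rank:", np.linalg.matrix_rank(states, tol=1e-6))
```

A higher rank suggests the reservoir maps distinct inputs to more linearly separable states, one ingredient of the 'quality' the abstract refers to.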
When does a physical system compute?
Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not, leading to confusion over novel computational devices and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a 'computational entity', and its critical role in defining when computing is taking place in physical systems.
Programming Unconventional Computers: Dynamics, Development, Self-Reference
Classical computing has well-established formalisms for specifying, refining, composing, proving, and otherwise reasoning about computations. These formalisms have matured over the past 70 years or so. Unconventional Computing includes the use of novel kinds of substrates, from black holes and quantum effects through to chemicals, biomolecules, even slime moulds, to perform computations that do not conform to the classical model. Although many of these unconventional substrates can be coerced into performing classical computation, this is not how they “naturally” compute. Our ability to exploit unconventional computing is partly hampered by a lack of corresponding programming formalisms: we need models for building, composing, and reasoning about programs that execute in these substrates. What might, say, a slime mould programming language look like? Here I outline some of the issues and properties of these unconventional substrates that need to be addressed to find “natural” approaches to programming them. Important concepts include embodied real values, processes and dynamical systems, generative systems and their meta-dynamics, and embodied self-reference.
Reservoir computing quality: connectivity and topology
We explore the effect of connectivity and topology on the dynamical behaviour of reservoir computers. At present, considerable effort is taken to design and hand-craft physical reservoir computers. Both structure and physical complexity are often pivotal to task performance; however, assessing their overall importance is challenging. Using a recently developed framework, we evaluate and compare the dynamical freedom (referred to as quality) of neural network structures, as an analogy for physical systems. The results quantify how structure affects the behavioural range of networks. They demonstrate that the high quality reached by more complex structures is often also achievable in simpler structures of greater network size; alternatively, quality is often improved in smaller networks by adding greater connection complexity. This work demonstrates the benefits of using dynamical behaviour to assess the quality of computing substrates, rather than evaluating them through benchmark tasks, which often provide a narrow and biased insight into the computing quality of physical systems.
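One way to compare topologies by dynamical behaviour rather than by a benchmark task is linear memory capacity, a standard task-independent reservoir measure. The sketch below contrasts a ring topology with dense random connectivity; the topologies chosen, network size, and all hyperparameters are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_esn(N, topology, rho=0.9):
    """Build input weights and a recurrent matrix with the given topology."""
    if topology == "ring":
        W = np.zeros((N, N))
        for i in range(N):
            W[i, (i + 1) % N] = rng.normal()   # single directed cycle
    else:
        W = rng.normal(0, 1, (N, N))           # dense random connectivity
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
    return rng.uniform(-0.5, 0.5, N), W

def memory_capacity(Win, W, K=30, T=2000, washout=200):
    """Sum of squared correlations between delayed inputs and linear readouts."""
    N = len(Win)
    u = rng.uniform(-1, 1, T)
    X, x = np.zeros((T, N)), np.zeros(N)
    for t in range(T):
        x = np.tanh(Win * u[t] + W @ x)
        X[t] = x
    mc = 0.0
    for k in range(1, K + 1):
        Xk, yk = X[washout:], u[washout - k:T - k]   # reconstruct u(t-k)
        w, *_ = np.linalg.lstsq(Xk, yk, rcond=None)
        mc += np.corrcoef(Xk @ w, yk)[0, 1] ** 2
    return mc

for topo in ("ring", "random"):
    Win, W = make_esn(100, topo)
    print(topo, "memory capacity ~", round(memory_capacity(Win, W), 2))
```

Running such a measure across structures and sizes is one concrete way the trade-off described above (simpler but larger versus smaller but more connected) could be quantified.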
Physical reservoir computing: a tutorial
This tutorial covers physical reservoir computing from a computer science perspective. It first defines what it means for a physical system to compute, rather than merely evolve under the laws of physics. It describes the underlying computational model, the Echo State Network (ESN), and also some variants designed to make physical implementation easier. It explains why the ESN model is particularly suitable for direct physical implementation. It then discusses the issues around choosing a suitable material substrate, and interfacing the inputs and outputs. It describes how to characterise a physical reservoir in terms of benchmark tasks, and task-independent measures. It covers optimising configuration parameters, exploring the space of potential configurations, and simulating the physical reservoir. It ends with a look at the future of physical reservoir computing as devices get more powerful, and are integrated into larger systems.
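A minimal echo state network of the kind the tutorial describes: a fixed random reservoir with a leaky tanh update and a trained linear (ridge regression) readout. The prediction task, sizes, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 200, 3000
u = np.sin(np.arange(T) * 0.2)          # input signal (assumed task)
y = np.roll(u, -5)                      # target: predict 5 steps ahead

Win = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0, 1, (N, N))
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))  # heuristic for echo state property

alpha = 0.3                             # leaking rate
X, x = np.zeros((T, N)), np.zeros(N)
for t in range(T):
    x = (1 - alpha) * x + alpha * np.tanh(Win * u[t] + W @ x)
    X[t] = x

# Ridge-regression readout, trained after a washout period.
wash, split, lam = 100, 2000, 1e-6
A, b = X[wash:split], y[wash:split]
Wout = np.linalg.solve(A.T @ A + lam * np.eye(N), A.T @ b)
pred = X[split:-5] @ Wout               # drop the last 5 wrapped targets
print("test NRMSE:", np.sqrt(np.mean((pred - y[split:-5]) ** 2)) / np.std(y))
```

The key point for physical implementation, as the tutorial notes, is that only the readout Wout is trained; the reservoir itself (here the random matrix W) stays fixed, so it can be any suitable material substrate.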
Interplay of Connectivity and Unwanted Physical Interactions Within the Architecture of the D-Wave 2000Q Chimera Processor
We consider dynamics relevant to annealing in qubit networks modelled on the architecture of the D-Wave 2000Q quantum processor (known as the Chimera topology). Our results report on the effects of the qubits’ connectivity and variable coupling strengths (based on physical interactions) on the dynamics of the network. The networks we examine are up to 32 qubits in size and include coupling lengths varying by almost an order of magnitude. We show that while information transfer within the network can be strongly affected by the different interactions, the system maintains similar clusters of qubits with comparable fidelities even in the presence of some of the physical interactions. This suggests an intrinsic robustness of the Chimera topology to these perturbations, even though it includes such a variety of coupling lengths. Moreover, a similar clustering geometry was observed for other qubit properties in a previous analysis of actual data from the D-Wave 2000Q. This comparable behaviour suggests that the real quantum annealing chip is subject to little or no unwanted effects due to interactions that scale with the coupling lengths. This could be due to the absence of the most damaging type of physical interactions and/or to D-Wave calibration methods tuning the control lines such that the couplings perform as if there were no effect due to their physical length. Our results are also relevant to the use of chaining for the creation of logical qubits. They show that even with very strong interactions within a chain, significant unwanted perturbations may occur due to the inhomogeneous fidelities of the overall dynamics, and that inhomogeneous dynamics should be expected for any given algorithm.
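For orientation, a sketch of the Chimera coupling graph itself: a grid of K(4,4) unit cells with inter-cell couplers between corresponding qubits, so internal couplers are physically short while inter-cell couplers are long. The `length` labels are illustrative stand-ins for the length-dependent coupling strengths discussed above; this is not the authors' simulation code.

```python
import networkx as nx

def chimera(m, n):
    """Build an m x n grid of Chimera unit cells (8 qubits per cell)."""
    G = nx.Graph()
    q = lambda i, j, side, k: (i, j, side, k)  # side 0: 'vertical', 1: 'horizontal'
    for i in range(m):
        for j in range(n):
            for a in range(4):                  # internal K(4,4) couplers (short)
                for b in range(4):
                    G.add_edge(q(i, j, 0, a), q(i, j, 1, b), length="short")
            if i + 1 < m:                       # vertical inter-cell couplers (long)
                for a in range(4):
                    G.add_edge(q(i, j, 0, a), q(i + 1, j, 0, a), length="long")
            if j + 1 < n:                       # horizontal inter-cell couplers (long)
                for a in range(4):
                    G.add_edge(q(i, j, 1, a), q(i, j + 1, 1, a), length="long")
    return G

G = chimera(2, 2)   # 32 qubits, matching the largest networks studied above
print(G.number_of_nodes(), "qubits,", G.number_of_edges(), "couplers")
```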
Noise-aware training of neuromorphic dynamic device networks
In materio computing offers the potential for widespread embodied intelligence by leveraging the intrinsic dynamics of complex systems for efficient sensing, processing, and interaction. While individual devices offer basic data processing capabilities, networks of interconnected devices can perform more complex and varied tasks. However, designing such networks for dynamic tasks is challenging in the absence of physical models and accurate characterization of device noise. We introduce the Noise-Aware Dynamic Optimization (NADO) framework for training networks of dynamical devices, using Neural Stochastic Differential Equations (Neural-SDEs) as differentiable digital twins to capture both the dynamics and stochasticity of devices with intrinsic memory. Our approach combines backpropagation through time with cascade learning, enabling effective exploitation of the temporal properties of physical devices. We validate this method on networks of spintronic devices across both temporal classification and regression tasks. By decoupling device model training from network connectivity optimization, our framework reduces data requirements and enables robust, gradient-based programming of dynamical devices without requiring analytical descriptions of their behaviour. Dynamic systems show promise for physical neural networks, but gradient-based optimization requires mathematical models. Here, the authors present a data-driven framework for optimizing networks of arbitrary dynamic systems that is robust to noise and enables tasks such as neuroprosthetic control.
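A hedged sketch of the digital-twin idea: a neural stochastic differential equation with small networks for drift and diffusion, integrated by Euler-Maruyama so the simulated trajectory is differentiable end to end. This is a generic Neural-SDE, not the authors' NADO implementation; the device data, loss, and sizes are placeholders.

```python
import torch
import torch.nn as nn

class NeuralSDE(nn.Module):
    def __init__(self, state_dim=4, hidden=32):
        super().__init__()
        self.drift = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, state_dim))
        self.diffusion = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                       nn.Linear(hidden, state_dim), nn.Softplus())

    def forward(self, x0, steps=100, dt=0.01):
        """Euler-Maruyama integration; the whole trajectory is differentiable."""
        xs, x = [x0], x0
        for _ in range(steps):
            dw = torch.randn_like(x) * dt ** 0.5          # Brownian increment
            x = x + self.drift(x) * dt + self.diffusion(x) * dw
            xs.append(x)
        return torch.stack(xs)

# Fit drift/diffusion so simulated trajectories match (placeholder) device data.
model = NeuralSDE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
device_traj = torch.randn(101, 8, 4)      # stand-in for measured trajectories
for _ in range(10):
    sim = model(device_traj[0])
    loss = ((sim.mean(1) - device_traj.mean(1)) ** 2).mean()  # crude moment match
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Once such a twin fits the device, gradients can flow through it to optimize the weights connecting devices in a network, which is the decoupling the abstract describes.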
Heterotic computing: past, present and future
We introduce and define 'heterotic computing' as a combination of two or more computational systems such that they provide an advantage over either substrate used separately. This first requires a definition of physical computation. We take the framework in Horsman et al. (Horsman et al. 2014 Proc. R. Soc. A 470, 20140182. (doi:10.1098/rspa.2014.0182)), now known as abstract-representation theory, and then outline how to compose such computational systems. We use examples to illustrate the ubiquity of heterotic computing, and to discuss the issues raised when one or more of the substrates is not a conventional silicon-based computer. We briefly outline the requirements for a proper theoretical treatment of heterotic computational systems, and the advantages such a theory would provide.
A conceptual and computational framework for modelling and understanding the non-equilibrium gene regulatory networks of mouse embryonic stem cells
The capacity of pluripotent embryonic stem cells to differentiate into any cell type in the body makes them invaluable in the field of regenerative medicine. However, because of the complexity of both the core pluripotency network and the process of cell fate computation it is not yet possible to control the fate of stem cells. We present a theoretical model of stem cell fate computation that is based on Halley and Winkler's Branching Process Theory (BPT) and on Greaves et al.'s agent-based computer simulation derived from that theoretical model. BPT abstracts the complex production and action of a Transcription Factor (TF) into a single critical branching process that may dissipate, maintain, or become supercritical. Here we take the single TF model and extend it to multiple interacting TFs, and build an agent-based simulation of multiple TFs to investigate the dynamics of such coupled systems. We have developed the simulation and the theoretical model together, in an iterative manner, with the aim of obtaining a deeper understanding of stem cell fate computation, in order to influence experimental efforts, which may in turn influence the outcome of cellular differentiation. The model used is an example of self-organization and could be more widely applicable to the modelling of other complex systems. The simulation based on this model, though currently limited in scope in terms of the biology it represents, supports the utility of the Halley and Winkler branching process model in describing the behaviour of stem cell gene regulatory networks. Our simulation demonstrates three key features: (i) the existence of a critical value of the branching process parameter, dependent on the details of the cistrome in question; (ii) the ability of an active cistrome to \"ignite\" an otherwise fully dissipated cistrome, and drive it to criticality; (iii) how coupling cistromes together can reduce their critical branching parameter values needed to drive them to criticality.