Catalogue Search | MBRL
245 result(s) for "Kording, Konrad"
Could a Neuroscientist Understand a Microprocessor?
by Kording, Konrad Paul; Jonas, Eric
in 60 APPLIED LIFE SCIENCES; Algorithms; Biology and Life Sciences
2017
There is a popular belief in neuroscience that we are primarily data limited, and that producing large, multimodal, and complex datasets will, with the help of advanced data analysis algorithms, lead to fundamental insights into the way the brain processes information. These datasets do not yet exist, and if they did we would have no way of evaluating whether or not the algorithmically-generated insights were sufficient or even correct. To address this, here we take a classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the microprocessor. This suggests current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data. Additionally, we argue that scientists should use complex non-linear dynamical systems with known ground truth, such as the microprocessor, as a validation platform for time-series and structure discovery methods.
Journal Article
Causal mapping of human brain function
by Siddiqi, Shan H; Parvizi, Josef; Fox, Michael D
in Brain injury; Brain mapping; Brain research
2022
Mapping human brain function is a long-standing goal of neuroscience that promises to inform the development of new treatments for brain disorders. Early maps of human brain function were based on locations of brain damage or brain stimulation that caused a functional change. Over time, this approach was largely replaced by technologies such as functional neuroimaging, which identify brain regions in which activity is correlated with behaviours or symptoms. Despite their advantages, these technologies reveal correlations, not causation. This creates challenges for interpreting the data generated from these tools and using them to develop treatments for brain disorders. A return to causal mapping of human brain function based on brain lesions and brain stimulation is underway. New approaches can combine these causal sources of information with modern neuroimaging and electrophysiology techniques to gain new insights into the functions of specific brain areas. In this Review, we provide a definition of causality for translational research, propose a continuum along which to assess the relative strength of causal information from human brain mapping studies and discuss recent advances in causal brain mapping and their relevance for developing treatments.

In this Review, Siddiqi et al. examine causal approaches to mapping human brain function. They provide a definition of causality for translational research, propose a framework for assessing causality strength in brain mapping studies and cover advances in techniques and their use in developing treatments for brain disorders.
Journal Article
Toward an Integration of Deep Learning and Neuroscience
by Kording, Konrad P.; Wayne, Greg; Marblestone, Adam H.
in Artificial intelligence; Back propagation; Circuits
2016
Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.
Journal Article
Decision Theory: What "Should" the Nervous System Do?
2007
The purpose of our nervous system is to allow us to successfully interact with our environment. This normative idea is formalized by decision theory, which defines which choices would be most beneficial. We live in an uncertain world, and each decision may have many possible outcomes; choosing the best decision is thus complicated. Bayesian decision theory formalizes these problems in the presence of uncertainty and often provides compact models that predict observed behavior. With its elegant formalization of the problems faced by the nervous system, it promises to become a major inspiration for studies in neuroscience.
Journal Article
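The core idea in the abstract above can be made concrete with a minimal sketch: given a posterior belief over world states and a loss for each action in each state, a Bayesian decision-maker picks the action with the lowest expected loss. The posterior and loss values below are illustrative assumptions, not figures from the article.

```python
# Minimal Bayesian decision: choose the action minimizing expected loss.
# All numbers are illustrative assumptions, not data from the article.

posterior = [0.7, 0.3]   # P(state | observation) over two world states
loss = [
    [0.0, 10.0],         # action 0: free in state 0, costly in state 1
    [2.0, 2.0],          # action 1: moderate cost in either state
]

expected_loss = [sum(l * p for l, p in zip(row, posterior)) for row in loss]
best_action = min(range(len(loss)), key=lambda a: expected_loss[a])

print(best_action)       # 1: hedging beats gambling under this posterior
```

Even though action 0 costs nothing in the likelier state, its expected loss (about 3.0) exceeds action 1's (about 2.0), so the Bayes-optimal choice is the hedged action.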
Different scaling of linear models and deep learning in UKBiobank brain images versus machine-learning datasets
2020
Recently, deep learning has unlocked unprecedented success in various domains, especially using images, text, and speech. However, deep learning is only beneficial if the data have nonlinear relationships and if they are exploitable at available sample sizes. We systematically profiled the performance of deep, kernel, and linear models as a function of sample size on UKBiobank brain images against established machine learning references. On MNIST and Zalando Fashion, prediction accuracy consistently improves when escalating from linear models to shallow-nonlinear models, and further improves with deep-nonlinear models. In contrast, using structural or functional brain scans, simple linear models perform on par with more complex, highly parameterized models in age/sex prediction across increasing sample sizes. In sum, linear models keep improving as the sample size approaches ~10,000 subjects. Yet, nonlinearities for predicting common phenotypes from typical brain scans remain largely inaccessible to the examined kernel and deep learning methods.
Schulz et al. systematically benchmark performance scaling with increasingly sophisticated prediction algorithms and with increasing sample size in reference machine-learning and biomedical datasets. Complicated nonlinear intervariable relationships remain largely inaccessible for predicting key phenotypes from typical brain scans.
Journal Article
Over my fake body: body ownership illusions for studying the multisensory basis of own-body perception
by Kording, Konrad P.; Kilteni, Konstantina; Maselli, Antonella
in Bayesian analysis; body ownership; body semantics
2015
Which is my body and how do I distinguish it from the bodies of others, or from objects in the surrounding environment? The perception of our own body and more particularly our sense of body ownership is taken for granted. Nevertheless, experimental findings from body ownership illusions (BOIs), show that under specific multisensory conditions, we can experience artificial body parts or fake bodies as our own body parts or body, respectively. The aim of the present paper is to discuss how and why BOIs are induced. We review several experimental findings concerning the spatial, temporal, and semantic principles of crossmodal stimuli that have been applied to induce BOIs. On the basis of these principles, we discuss theoretical approaches concerning the underlying mechanism of BOIs. We propose a conceptualization based on Bayesian causal inference for addressing how our nervous system could infer whether an object belongs to our own body, using multisensory, sensorimotor, and semantic information, and we discuss how this can account for several experimental findings. Finally, we point to neural network models as an implementational framework within which the computational problem behind BOIs could be addressed in the future.
Journal Article
How advances in neural recording affect data analysis
2011
Over the last five decades, progress in neural recording techniques has allowed the number of simultaneously recorded neurons to double approximately every 7 years, mimicking Moore's law. Such exponential growth motivates us to ask how data analysis techniques are affected by progressively larger numbers of recorded neurons. Traditionally, neurons are analyzed independently on the basis of their tuning to stimuli or movement. Although tuning curve approaches are unaffected by growing numbers of simultaneously recorded neurons, newly developed techniques that analyze interactions between neurons become more accurate and more complex as the number of recorded neurons increases. Emerging data analysis techniques should consider both the computational costs and the potential for more accurate models associated with this exponential growth of the number of recorded neurons.
Journal Article
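The scaling claim in the abstract above (simultaneously recorded neuron counts doubling roughly every 7 years) is easy to turn into a back-of-envelope projection. The base year and base count below are placeholders chosen for illustration, not values from the article.

```python
def neurons_recorded(year, base_year=2011, base_count=100, doubling_years=7.0):
    """Projected simultaneously recorded neurons under 7-year doubling."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

print(round(neurons_recorded(2011)))  # 100 (the assumed baseline)
print(round(neurons_recorded(2018)))  # 200 (one doubling period later)
print(round(neurons_recorded(2039)))  # 1600 (four doublings)
```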
Efficient neural codes naturally emerge through gradient descent learning
by Stocker, Alan A.; Kording, Konrad P.; Benjamin, Ari S.
in 631/378/116/1925; 631/378/2613/2615; 631/477/2811
2022
Human sensory systems are more sensitive to common features in the environment than uncommon features. For example, small deviations from the more frequently encountered horizontal orientations can be more easily detected than small deviations from the less frequent diagonal ones. Here we find that artificial neural networks trained to recognize objects also have patterns of sensitivity that match the statistics of features in images. To interpret these findings, we show mathematically that learning with gradient descent in neural networks preferentially creates representations that are more sensitive to common features, a hallmark of efficient coding. This effect occurs in systems with otherwise unconstrained coding resources, and additionally when learning towards both supervised and unsupervised objectives. This result demonstrates that efficient codes can naturally emerge from gradient-like learning.
In animals, sensory systems appear optimized for the statistics of the external world. Here the authors take an artificial psychophysics approach, analysing sensory responses in artificial neural networks, and show why these demonstrate the same phenomenon as natural sensory systems.
Journal Article
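The mechanism described in the abstract above, gradient descent allocating more sensitivity to more frequent features, can be sketched with a toy model. The single linear unit, learning rate, and feature frequencies below are assumptions made for illustration, not the networks studied in the paper.

```python
import random

random.seed(0)

# Toy sketch: a linear unit trained by gradient descent to reconstruct
# each input feature ends up more sensitive (larger weight) to the
# feature that occurs more often. Frequencies and learning rate are
# illustrative assumptions.

w = [0.0, 0.0]                     # weights for a common and a rare feature
lr = 0.01
for _ in range(300):
    x = [1.0 if random.random() < 0.9 else 0.0,   # feature 0: common
         1.0 if random.random() < 0.1 else 0.0]   # feature 1: rare
    for i in range(2):
        err = w[i] * x[i] - x[i]   # gradient of squared reconstruction error
        w[i] -= lr * err * x[i]

print(w[0] > w[1])                 # True: the common feature gains more weight
```

Both weights drift toward the same optimum, but the common feature's weight gets far more gradient updates and so converges first; this faster-earned sensitivity to frequent inputs is the hallmark of efficient coding the paper formalizes.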
Future impact: Predicting scientific success
by Acuna, Daniel E; Allesina, Stefano; Kording, Konrad P
in Animals; Bibliometrics; Biological Evolution
2012
Journal Article