Catalogue Search | MBRL
12 result(s) for "Landgrebe, Jobst"
Why machines will never rule the world : artificial intelligence without fear
by Landgrebe, Jobst, author; Smith, Barry, 1952- author
in Artificial intelligence -- Philosophy; Artificial intelligence -- Social aspects; Singularities (Artificial intelligence) -- Forecasting
2025
"This book's core argument is that an artificial intelligence that could equal or exceed human intelligence - sometimes called 'artificial general intelligence' (AGI) - is for mathematical reasons impossible. It offers two specific reasons for this claim: 1. Human intelligence is a capability of the human brain and central nervous system, which is a complex dynamic system. 2. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer. In supporting their claim, the authors, Jobst Landgrebe and Barry Smith, marshal evidence from mathematics, physics, computer science, philosophy, linguistics, biology, and anthropology, setting up their book around three central questions: What are the essential marks of human intelligence? What is it that researchers try to do when they attempt to achieve "Artificial Intelligence" (AI)? And why, after more than 50 years, are our interactions with AI, for example with our bank's computers, still so unsatisfactory? The First Edition was published the same week that ChatGPT was unleashed onto the world. In this Second Edition, the authors show how their arguments apply to new Large Language Models and bring their other arguments relating to the limits of AI up to date. They show why AI systems are best viewed as pieces of mathematics, which cannot think, feel, or will. They also demolish the idea that, with the help of AI, we could "solve physics" in a way that would allow us to create, in the cloud, a perfect simulation of reality in which we could enjoy digital immortality. Such ideas reveal a lack of understanding of physics, mathematics, human biology, and computers. There is still, as they demonstrate in an updated final chapter, a great deal that AI can achieve which will benefit humanity. But these benefits will be achieved without the aid of systems that are more powerful than humans, and which are as impossible as AI systems that are intrinsically "evil" or able to "will" a takeover of human society.
Key Changes to the Second Edition:
- Shows how the arguments of the First Edition apply also to new Large Language Models
- Adds a treatment of human practical intelligence - of knowing how vs. knowing that - a topic that is ignored by the AI community
- Demonstrates why "AI ethics" should be relabeled "ethics of human uses of AI"
- Adds a new chapter showing the essential limitations of physics, providing a thorough grounding for the arguments of the book
- Demolishes the idea that we might already be living in a simulation
Jobst Landgrebe is a scientist and entrepreneur with a background in philosophy, mathematics, neuroscience, medicine, and biochemistry. Landgrebe is also the founder of Cognotekt, a German AI company which has since 2013 provided working systems used by companies in areas such as insurance claims management, real estate management, and medical billing. After more than 15 years in the AI industry he has developed an exceptional understanding of the limits and potential of AI in the future. Barry Smith is one of the most widely cited contemporary philosophers. He has made influential contributions to the foundations of ontology and data science, especially in the biomedical domain. Most recently, his work has led to the creation of an international standard in the ontology field (ISO/IEC 21838), which is the first example of a piece of philosophy that has been subjected to the ISO standardization process."-- Provided by publisher.
Certifiable AI
2022
Implicit stochastic models, including both ‘deep neural networks’ (dNNs) and the more recent unsupervised foundational models, cannot be explained. That is, it cannot be determined how they work, because the interactions of the millions or billions of terms that are contained in their equations cannot be captured in the form of a causal model. Because users of stochastic AI systems would like to understand how they operate in order to be able to use them safely and reliably, there has emerged a new field called ‘explainable AI’ (XAI). When we examine the XAI literature, however, it becomes apparent that its protagonists have redefined the term ‘explanation’ to mean something else, namely: ‘interpretation’. Interpretations are indeed sometimes possible, but we show that they give at best only a subjective understanding of how a model works. We propose an alternative to XAI, namely certified AI (CAI), and describe how an AI can be specified, realized, and tested in order to become certified. The resulting approach combines ontologies and formal logic with statistical learning to obtain reliable AI systems which can be safely used in technical applications.
Journal Article
Making AI meaningful again
2021
Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s, but this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial intelligence encouraged by these successes, especially in the domain of language processing. We then show an alternative approach to language-centric AI, in which we identify a role for philosophy.
Journal Article
Tumor specimen cold ischemia time impacts molecular cancer drug target discovery
2024
Tumor tissue collections are used to uncover pathways associated with disease outcomes that can also serve as targets for cancer treatment, ideally by comparing the molecular properties of cancer tissues to matching normal tissues. The quality of such collections determines the value of the data and information generated from their analyses including expression and modifications of nucleic acids and proteins. These biomolecules are dysregulated upon ischemia and decompose once the living cells start to decay into inanimate matter. Therefore, ischemia time before final tissue preservation is the most important determinant of the quality of a tissue collection. Here we show the impact of ischemia time on tumor and matching adjacent normal tissue samples for mRNAs in 1664, proteins in 1818, and phosphosites in 1800 cases (tumor and matching normal samples) of four solid tumor types (CRC, HCC, LUAD, and LUSC NSCLC subtypes). In CRC, ischemia times exceeding 15 min impacted 12.5% (mRNA), 25% (protein), and 50% (phosphosites) of differentially expressed molecules in tumor versus normal tissues. This hypoxia- and decay-induced dysregulation increased with longer ischemia times and was observed across tumor types. Interestingly, the proteomics analysis revealed that specimen ischemia time above 15 min is mostly associated with a dysregulation of proteins in the immune-response pathway and less so with metabolic processes. We conclude that ischemia time is a crucial quality parameter for tissue collections used for target discovery and validation in cancer research.
Journal Article
Role for LAMP‐2 in endosomal cholesterol transport
by Sandhoff, Konrad; Eskelinen, Eeva‐Liisa; Willenborg, Marion
in Androstenes - pharmacology; Animals; Antibodies
2011
The mechanisms of endosomal and lysosomal cholesterol traffic are still poorly understood. We showed previously that unesterified cholesterol accumulates in the late endosomes and lysosomes of fibroblasts deficient in both lysosome associated membrane protein‐2 (LAMP‐2) and LAMP‐1, two abundant membrane proteins of late endosomes and lysosomes. In this study we show that in cells deficient in both LAMP‐1 and LAMP‐2 (LAMP−/−), low‐density lipoprotein (LDL) receptor levels and LDL uptake are increased as compared to wild‐type cells. However, there is a defect in esterification of both endogenous and LDL cholesterol. These results suggest that LAMP−/− cells have a defect in cholesterol transport to the site of esterification in the endoplasmic reticulum, likely due to defective export of cholesterol out of late endosomes or lysosomes. We also show that cholesterol accumulates in LAMP‐2 deficient liver and that overexpression of LAMP‐2 retards the lysosomal cholesterol accumulation induced by U18666A. These results point to a critical role for LAMP‐2 in endosomal/lysosomal cholesterol export. Moreover, the late endosomal/lysosomal cholesterol accumulation in LAMP−/− cells was diminished by overexpression of any of the three isoforms of LAMP‐2, but not by LAMP‐1. The LAMP‐2 luminal domain, the membrane‐proximal half in particular, was necessary and sufficient for the rescue effect. Taken together, our results suggest that LAMP‐2, its luminal domain in particular, plays a critical role in endosomal cholesterol transport and that this is distinct from the chaperone‐mediated autophagy function of LAMP‐2.
Journal Article
LAMP-2 deficient mice show depressed cardiac contractile function without significant changes in calcium handling
by Figura, Kurt; Eckardt, Lars; Mleczko, Anna
in Animals; Blotting, Western; Calcium - metabolism
2006
Mutations in the highly glycosylated lysosome associated membrane protein-2 (LAMP-2) cause, as recently shown, familial Danon disease with mental retardation, mild myopathy and fatal cardiomyopathy. The extent and basis of the contractile dysfunction are not completely understood.
In LAMP-2 deficient mice, we investigated cardiac function in vivo using Doppler-echocardiography and contractile function in vitro in isolated myocardial trabeculae.
LAMP-2 deficient mice displayed reduced ejection fraction (EF) (58.9+/-3.4 vs. 80.7+/-5.1%, P<0.05) and reduced cardiac output (8.3+/-3.1 vs. 14.7+/-3.6 ml/min, P<0.05) as compared to wild-type controls. Isolated multicellular muscle preparations from LAMP-2 deficient mice confirmed depressed force development (3.2+/-0.6 vs. 8.4+/-0.9 mN/mm2, P<0.01). All groups showed similar force-frequency behaviour when normalised to baseline force. Post-rest potentiation was significantly depressed at intervals>15 s in LAMP-2 deficient mice (P<0.05). Although attenuated in absolute force development, the normalised inotropic response to increased calcium and beta-adrenoreceptor stimulation was unaltered. Electron microscopic analysis revealed autophagic vacuoles in LAMP-2 deficient cardiomyocytes. Protein analysis showed unaltered levels of SERCA2a, calsequestrin and phospholamban.
Cardiac contractile function in LAMP-2 deficient mice as a model for Danon disease is significantly attenuated. The occurrence of autophagic vacuoles in LAMP-2 deficient myocytes is likely to be causal for the depressed contractile function resulting in an attenuated cardiac pump reserve.
Journal Article
Why machines do not understand: A response to Søgaard
2023
Some defenders of so-called 'artificial intelligence' believe that machines can understand language. In particular, Søgaard has argued in this journal for a thesis of this sort, on the basis of the idea (1) that where there is semantics there is also understanding and (2) that machines are not only capable of what he calls 'inferential semantics', but even that they can (with the help of inputs from sensors) 'learn' referential semantics (Søgaard 2022). We show that he goes wrong because he pays insufficient attention to the difference between language as used by humans and the sequences of inert symbols which arise when language is stored on hard drives or in books in libraries.
Ontologies of common sense, physics and mathematics
2023
The view of nature we adopt in the natural attitude is determined by common sense, without which we could not survive. Classical physics is modelled on this common-sense view of nature, and uses mathematics to formalise our natural understanding of the causes and effects we observe in time and space when we select subsystems of nature for modelling. But in modern physics, we do not go beyond the realm of common sense by augmenting our knowledge of what is going on in nature. Rather, we have measurements that we do not understand, so we know nothing about the ontology of what we measure. We help ourselves by using entities from mathematics, which we fully understand ontologically. But we have no ontology of the reality of modern physics; we have only what we can assert mathematically. In this paper, we describe the ontology of classical and modern physics against this background and show how it relates to the ontology of common sense and of mathematics.
An argument for the impossibility of machine intelligence
2021
Since the noun phrase 'artificial intelligence' (AI) was coined, it has been debated whether humans are able to create intelligence using technology. We shed new light on this question from the point of view of thermodynamics and mathematics. First, we define what it is to be an agent (device) that could be the bearer of AI. Then we show that the mainstream definitions of 'intelligence' proposed by Hutter and others and still accepted by the AI community are too weak even to capture what is involved when we ascribe intelligence to an insect. We then summarise the highly useful definition of basic (arthropod) intelligence proposed by Rodney Brooks, and we identify the properties that an AI agent would need to possess in order to be the bearer of intelligence by this definition. Finally, we show that, from the perspective of the disciplines needed to create such an agent, namely mathematics and physics, these properties are realisable by neither implicit nor explicit mathematical design nor by setting up an environment in which an AI could evolve spontaneously.
Making AI meaningful again
by Smith, Barry; Landgrebe, Jobst
in Artificial intelligence; Machine learning; Natural language processing
2019
Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial intelligence encouraged by these successes, especially in the domain of language processing. We then show an alternative approach to language-centric AI, in which we identify a role for philosophy.