Catalogue Search | MBRL
1,067 result(s) for "Symbolic Learning"
Human-Centered AI
2021
Shneiderman discusses Human-Centered AI (HCAI), a vision of how machines might augment humans, and even encourage our best impulses toward each other, rather than replace us with something supposedly better. HCAI designers recognize that humans are happily and productively woven into social networks. From the HCAI perspective, computers should play a supportive role, amplifying people's ability to work in masterful or extraordinary ways. Although a growing number of people are demanding that AI machines include a "human in the loop," this phrase often implies a grudging acceptance of human control panels.
Journal Article
The Advantage of Abstract Examples in Learning Math
by Sloutsky, Vladimir M.; Kaminski, Jennifer A.; Heckler, Andrew F.
in Children, Education Forum, Games
2008
Undergraduate students may benefit more from learning mathematics through a single abstract, symbolic representation than from learning multiple concrete examples.
Journal Article
FFNSL: Feed-Forward Neural-Symbolic Learner
by Russo, Alessandra; Lobo, Jorge; Cunnington, Daniel
in Artificial Intelligence, Computer Science, Control
2023
Logic-based machine learning aims to learn general, interpretable knowledge in a data-efficient manner. However, labelled data must be specified in a structured logical form. To address this limitation, we propose a neural-symbolic learning framework, called Feed-Forward Neural-Symbolic Learner (FFNSL), that integrates a logic-based machine learning system capable of learning from noisy examples with neural networks, in order to learn interpretable knowledge from labelled unstructured data. We demonstrate the generality of FFNSL on four neural-symbolic classification problems, where different pre-trained neural network models and logic-based machine learning systems are integrated to learn interpretable knowledge from sequences of images. We evaluate the robustness of our framework by using images subject to distributional shifts, for which the pre-trained neural networks may predict incorrectly and with high confidence. We analyse the impact that these shifts have on the accuracy of the learned knowledge and run-time performance, comparing FFNSL to tree-based and pure neural approaches. Our experimental results show that FFNSL outperforms the baselines by learning more accurate and interpretable knowledge with fewer examples.
Journal Article
A Primer on Generative Artificial Intelligence
2024
Many educators and professionals in different industries may need to become more familiar with the basic concepts of artificial intelligence (AI) and generative artificial intelligence (Gen-AI). Therefore, this paper aims to introduce some of the basic concepts of AI and Gen-AI. The approach of this explanatory paper is first to introduce some of the underlying concepts, such as artificial intelligence, machine learning, deep learning, artificial neural networks, and large language models (LLMs), that would allow the reader to better understand generative AI. The paper also discusses some of the applications and implications of generative AI on businesses and education, followed by the current challenges associated with generative AI.
Journal Article
Reduced implication-bias logic loss for neuro-symbolic learning
2024
Integrating logical reasoning and machine learning by approximating logical inference with differentiable operators is a widely used technique in the field of Neuro-Symbolic Learning. However, some differentiable operators could introduce significant biases during backpropagation, which can degrade the performance of Neuro-Symbolic systems. In this paper, we demonstrate that the loss functions derived from fuzzy logic operators commonly exhibit a bias, referred to as Implication Bias. To mitigate this bias, we propose a simple yet efficient method to transform the biased loss functions into Reduced Implication-bias Logic Loss (RILL). Empirical studies demonstrate that RILL outperforms the biased logic loss functions, especially when the knowledge base is incomplete or the supervised training data is insufficient.
Journal Article
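The implication bias described in the abstract above can be illustrated numerically. As an illustrative assumption (the paper covers a family of fuzzy operators), take the Reichenbach implication I(a, b) = 1 − a + a·b for a rule a → b: the derived loss 1 − I(a, b) = a·(1 − b) can always be driven to zero by making the antecedent a false, regardless of the consequent b. A minimal sketch:

```python
# Illustrative sketch (not the paper's code): why a fuzzy-implication
# loss can be "implication-biased". For the rule a -> b with the
# Reichenbach implication I(a, b) = 1 - a + a*b, the loss is
# L = 1 - I(a, b) = a * (1 - b).

def implication_loss(a, b):
    """Loss for the fuzzy rule a -> b (Reichenbach implication)."""
    return a * (1.0 - b)

def grad_a(a, b):
    """dL/da = 1 - b >= 0, so gradient descent always pushes a down."""
    return 1.0 - b

# Even when the consequent b is only half-true, the cheapest way to
# satisfy the rule is to drive the antecedent a toward 0:
a, b, lr = 0.9, 0.5, 0.1
for _ in range(50):
    a -= lr * grad_a(a, b)
    a = max(0.0, min(1.0, a))  # keep a a valid truth value

print(round(a, 2))  # 0.0 -- the antecedent has collapsed
```

Per the abstract, RILL's contribution is a transformation of such biased loss functions; the snippet only demonstrates the degenerate minimum that motivates it.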
An Embedded and Embodied Cognition Review of Instructional Manipulatives
by Pouw, Wim T. J. L.; van Gog, Tamara; Paas, Fred
in Abacuses, Child and School Psychology, Cognition
2014
Recent literature on learning with instructional manipulatives seems to call for a moderate view on the effects of perceptual and interactive richness of instructional manipulatives on learning. This "moderate view" holds that manipulatives' perceptual and interactive richness may compromise learning in two ways: (1) by imposing a very high cognitive load on the learner, and (2) by hindering drawing of symbolic inferences that are supposed to play a key role in transfer (i.e., application of knowledge to new situations in the absence of instructional manipulatives). This paper presents a contrasting view. Drawing on recent insights from Embedded Embodied perspectives on cognition, it is argued that (1) perceptual and interactive richness may provide opportunities for alleviating cognitive load (Embedded Cognition), and (2) transfer of learning is not reliant on decontextualized knowledge but may draw on previous sensorimotor experiences of the kind afforded by perceptual and interactive richness of manipulatives (Embodied Cognition). By negotiating the Embedded Embodied Cognition view with the moderate view, implications for research are derived.
Journal Article
Inclusion of domain-knowledge into GNNs using mode-directed inverse entailment
2022
We present a general technique for constructing Graph Neural Networks (GNNs) capable of using multi-relational domain knowledge. The technique is based on mode-directed inverse entailment (MDIE) developed in Inductive Logic Programming (ILP). Given a data instance e and background knowledge B, MDIE identifies a most-specific logical formula ⊥B(e) that contains all the relational information in B that is related to e. We represent ⊥B(e) by a “bottom-graph” that can be converted into a form suitable for GNN implementations. This transformation allows a principled way of incorporating generic background knowledge into GNNs: we use the term ‘BotGNN’ for this form of graph neural networks. For several GNN variants, using real-world datasets with substantial background knowledge, we show that BotGNNs perform significantly better than both GNNs without background knowledge and a recently proposed simplified technique for including domain knowledge into GNNs. We also provide experimental evidence comparing BotGNNs favourably to multi-layer perceptrons that use features representing a “propositionalised” form of the background knowledge; and BotGNNs to a standard ILP based on the use of most-specific clauses. Taken together, these results point to BotGNNs as capable of combining the computational efficacy of GNNs with the representational versatility of ILP.
Journal Article
A review of computational models of basic rule learning: The neural-symbolic debate and beyond
by Alhama, Raquel G.; Zuidema, Willem
in Behavioral Science and Psychology, Cognitive Psychology, Computer Simulation
2019
We present a critical review of computational models of generalization of simple grammar-like rules, such as ABA and ABB. In particular, we focus on models attempting to account for the empirical results of Marcus et al. (Science, 283(5398), 77–80, 1999). In that study, evidence is reported of generalization behavior by 7-month-old infants, using an Artificial Language Learning paradigm. The authors failed to replicate this behavior in neural network simulations, and claimed that this failure reveals inherent limitations of a whole class of neural networks: those that do not incorporate symbolic operations. A great number of computational models were proposed in follow-up studies, fuelling a heated debate about what is required for a model to generalize. Twenty years later, this debate is still not settled. In this paper, we review a large number of the proposed models. We present a critical analysis of those models in terms of how they contribute to answering the most relevant questions raised by the experiment. After identifying which aspects require further research, we propose a list of desiderata for advancing our understanding of generalization.
Journal Article
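The ABA and ABB patterns named in the abstract above are identity rules over syllable triples: ABA repeats the first token in third position, ABB repeats the second. A minimal sketch (illustrative only, not one of the reviewed models) of what rule-consistent classification looks like:

```python
# Illustrative sketch of the ABA / ABB "grammar-like rules" from the
# Artificial Language Learning paradigm of Marcus et al. (1999).

def pattern(seq):
    """Classify a three-token sequence as 'ABA', 'ABB', or None."""
    x, y, z = seq
    if x == z and x != y:   # first and third match, second differs
        return "ABA"
    if y == z and x != y:   # second and third match, first differs
        return "ABB"
    return None

print(pattern(["ga", "ti", "ga"]))  # ABA
print(pattern(["wo", "fe", "fe"]))  # ABB
# Generalization: novel syllables, same abstract rule
print(pattern(["de", "ko", "de"]))  # ABA
```

The debate the review surveys is about whether, and how, neural networks can exhibit this kind of generalization to novel tokens without an explicit symbolic identity test like the one written out here.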