2,646 result(s) for "Reasoning Data processing."
Commonsense reasoning
To endow computers with common sense is one of the major long-term goals of Artificial Intelligence research. One approach to this problem is to formalize commonsense reasoning using mathematical logic. Commonsense Reasoning is a detailed, high-level reference on logic-based commonsense reasoning. It uses the event calculus, a highly powerful and usable tool that Erik T. Mueller demonstrates is the most effective for the broadest range of applications. He provides an up-to-date work promoting the use of the event calculus for commonsense reasoning, bringing into one place information scattered across many books and papers. Mueller shares the knowledge gained in using the event calculus and extends the literature with detailed event calculus solutions to problems that span many areas of the commonsense world. The book covers key areas of commonsense reasoning, including action, change, defaults, space, and mental states. It is the first full book on commonsense reasoning to use the event calculus, contextualizing it within the framework of commonsense reasoning and introducing it as the best method overall. Whereas existing papers and books examine the formalisms themselves, the book focuses on how to use the event calculus formalism to perform commonsense reasoning, and it includes fully worked-out proofs and circumscriptions for every example.
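The core idea of the event calculus is commonsense inertia: a fluent holds from the moment an event initiates it until some later event terminates it. A rough illustrative sketch in Python (toy events and fluents are hypothetical, and this simulation is far simpler than the logical formalism and circumscriptive proofs in Mueller's book):

```python
# Toy discrete event-calculus-style simulator (illustrative only).
# Initiates(e, f): event e makes fluent f true; Terminates(e, f): makes it false.
initiates = {("turn_on", "light_on"), ("wake_up", "awake")}
terminates = {("turn_off", "light_on"), ("fall_asleep", "awake")}
# Happens(e, t): event e occurs at time t.
happens = {("turn_on", 1), ("turn_off", 3)}

def holds_at(fluent, t):
    """True if `fluent` holds at time t under the law of inertia:
    the last initiating/terminating event before t decides the state."""
    state = False
    for time in range(t):
        for event, when in happens:
            if when == time:
                if (event, fluent) in initiates:
                    state = True
                if (event, fluent) in terminates:
                    state = False
    return state

print(holds_at("light_on", 2))  # True: turned on at time 1, not yet off
print(holds_at("light_on", 4))  # False: turned off at time 3
```

The same inference in the logical formalism would be derived from the event calculus axioms plus circumscription of the Initiates, Terminates, and Happens predicates.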
Neuro-symbolic artificial intelligence: a survey
The goal of the growing discipline of neuro-symbolic artificial intelligence (AI) is to develop AI systems with more human-like reasoning capabilities by combining symbolic reasoning with connectionist learning. We survey the literature on neuro-symbolic AI during the last two decades, including books, monographs, review papers, contribution pieces, opinion articles, foundational workshops/talks, and related PhD theses. Four main features of neuro-symbolic AI are discussed, including representation, learning, reasoning, and decision-making. Finally, we discuss the many applications of neuro-symbolic AI, including question answering, robotics, computer vision, healthcare, and more. Scalability, explainability, and ethical considerations are also covered, as well as other difficulties and limits of neuro-symbolic AI. This study summarizes the current state of the art in neuro-symbolic artificial intelligence.
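The combination the survey describes, connectionist perception feeding a symbolic reasoner, can be caricatured in a few lines. In this sketch every name and the mock "network" are hypothetical stand-ins, not from the survey:

```python
# Toy neuro-symbolic pipeline: a (mocked) neural component outputs label
# probabilities; a symbolic rule base then reasons over the top label.

def neural_perception(image_id):
    # Stand-in for a trained classifier's softmax output (hypothetical data).
    fake_outputs = {
        "img1": {"cat": 0.9, "dog": 0.1},
        "img2": {"dog": 0.8, "cat": 0.2},
    }
    return fake_outputs[image_id]

# Symbolic knowledge: is_a taxonomy rules.
rules = {("cat", "is_a"): "mammal",
         ("dog", "is_a"): "mammal",
         ("mammal", "is_a"): "animal"}

def symbolic_query(label, relation="is_a"):
    """Follow is_a rules transitively to collect all superclasses."""
    chain = []
    while (label, relation) in rules:
        label = rules[(label, relation)]
        chain.append(label)
    return chain

def classify_and_reason(image_id):
    probs = neural_perception(image_id)
    best = max(probs, key=probs.get)        # connectionist step
    return [best] + symbolic_query(best)    # symbolic step

print(classify_and_reason("img1"))  # ['cat', 'mammal', 'animal']
```

Real neuro-symbolic systems differ widely in where the boundary sits (logic as a loss term, differentiable theorem proving, program induction, etc.); the sketch only shows the division of labor.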
Deep Learning with Graph Convolutional Networks: An Overview and Latest Applications in Computational Intelligence
Convolutional neural networks (CNNs) have received widespread attention due to their powerful modeling capabilities and have been successfully applied in natural language processing, image recognition, and other fields. However, traditional CNNs can only handle Euclidean spatial data, whereas many real-life scenarios, such as transportation networks, social networks, and citation networks, are naturally represented as graph data. The creation of graph convolution operators and graph pooling is at the heart of migrating CNNs to graph data analysis and processing. With the advancement of the Internet and related technology, the graph convolutional network (GCN), an innovative technology in artificial intelligence (AI), has received growing attention. GCNs have been widely used in fields such as image processing, intelligent recommender systems, and knowledge graphs, owing to their excellent ability to process non-Euclidean spatial data. At the same time, communication networks have embraced AI technology in recent years, with AI serving as the brain of the future network and enabling comprehensive network intelligence. Many complex communication network problems can be abstracted as graph-based optimization problems and solved with GCNs, thus overcoming the limitations of traditional methods. This survey briefly defines graph-based machine learning, introduces different types of graph networks, summarizes the applications of GCNs in various research fields, analyzes the current research status, and outlines future research directions.
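The graph convolution operator at the heart of this migration is often written, in the widely used Kipf-and-Welling-style formulation, as H' = sigma(D^-1/2 (A + I) D^-1/2 H W): neighbor features are averaged under a normalized adjacency with self-loops, then linearly transformed. A plain-Python sketch on a tiny hypothetical graph (illustrative, not code from the survey):

```python
import math

def gcn_layer(adj, feats, weight):
    """One GCN propagation step: relu(norm(A + I) @ feats @ weight).
    adj: n x n 0/1 adjacency lists; feats: n x f; weight: f x g."""
    n = len(adj)
    # Add self-loops, then symmetric normalization D^-1/2 (A+I) D^-1/2.
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a_hat]
    norm = [[a_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]
    # Aggregate neighbour features: norm @ feats.
    agg = [[sum(norm[i][k] * feats[k][j] for k in range(n))
            for j in range(len(feats[0]))] for i in range(n)]
    # Linear transform + ReLU nonlinearity: relu(agg @ weight).
    return [[max(0.0, sum(agg[i][k] * weight[k][j] for k in range(len(weight))))
             for j in range(len(weight[0]))] for i in range(n)]

# Tiny 3-node path graph 0-1-2 with scalar features and a 1x1 weight.
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
feats = [[1.0], [0.0], [2.0]]
weight = [[1.0]]
out = gcn_layer(adj, feats, weight)
```

Node 1's output mixes its own zero feature with both neighbors' features, which is exactly the locality that lets GCNs generalize convolution to non-Euclidean data.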
Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations
Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked “What vehicle is the person riding?”, computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) to answer correctly that “the person is riding a horse-drawn carriage.” In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 108K images where each image has an average of 35 objects, 26 attributes, and 21 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and question-answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question-answer pairs.
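The relationship annotations described above are triples such as riding(man, carriage). A minimal sketch of how such a scene graph could be stored and queried (a hypothetical structure for illustration, not the actual Visual Genome schema or API):

```python
# One image's annotations as a toy scene graph: objects, per-object
# attributes, and relationship triples (relation, subject, object).
scene = {
    "objects": {"man", "horse", "carriage"},
    "attributes": {"carriage": ["horse-drawn"]},
    "relationships": [("riding", "man", "carriage"),
                      ("pulling", "horse", "carriage")],
}

def what_is_riding(scene, person):
    """Answer 'what is this person riding?' by scanning the triples
    and decorating the answer with the object's attributes."""
    for rel, subj, obj in scene["relationships"]:
        if rel == "riding" and subj == person:
            attrs = scene["attributes"].get(obj, [])
            return " ".join(attrs + [obj])
    return None

print(what_is_riding(scene, "man"))  # 'horse-drawn carriage'
```

A question-answering model trained on such graphs must ground "person" to the man node and traverse the riding edge, which is the structured reasoning the dataset is designed to supervise.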
Science audiences, misinformation, and fake news
Concerns about public misinformation in the United States—ranging from politics to science—are growing. Here, we provide an overview of how and why citizens become (and sometimes remain) misinformed about science. Our discussion focuses specifically on misinformation among individual citizens. However, it is impossible to understand individual information processing and acceptance without taking into account social networks, information ecologies, and other macro-level variables that provide important social context. Specifically, we show how being misinformed is a function of a person’s ability and motivation to spot falsehoods, but also of other group-level and societal factors that increase the chances of citizens to be exposed to correct(ive) information. We conclude by discussing a number of research areas—some of which echo themes of the 2017 National Academies of Sciences, Engineering, and Medicine’s Communicating Science Effectively report—that will be particularly important for our future understanding of misinformation, specifically a systems approach to the problem of misinformation, the need for more systematic analyses of science communication in new media environments, and a (re)focusing on traditionally underserved audiences.
Advances in natural language processing
Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today's researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area.
Managing information asymmetry in public–private relationships undergoing a digital transformation: the role of contractual and relational governance
Purpose: Inter-organisational governance is an important enabler for information processing, particularly in relationships undergoing digital transformation (DT) where partners depend on each other for information in decision-making. Based on information processing theory (IPT), the authors theoretically and empirically investigate how governance mechanisms address information asymmetry (uncertainty and equivocality) arising in capturing, sharing and interpreting information generated by digital technologies.
Design/methodology/approach: IPT is applied to four cases of public–private relationships in the Dutch infrastructure sector that aim to enhance the quantity and quality of information-based decision-making by implementing digital technologies. The investigated relationships are characterised by differing degrees and types of information uncertainty and equivocality. The authors build on rich data sets including archival data, observations, contract documents and interviews.
Findings: Addressing information uncertainty requires invoking contractual control and coordination. Contract clauses should be precise and incentive schemes functional in terms of information requirements. Information equivocality is best addressed by using relational governance. Identifying information requirements and reducing information uncertainty are a prerequisite for the transformation activities that organisations perform to reduce information equivocality.
Practical implications: The study offers insights into the roles of both governance mechanisms in managing information asymmetry in public–private relationships. The study uncovers key activities for gathering, sharing and transforming information when using digital technologies.
Originality/value: This study draws on IPT to study public–private relationships undergoing DT. The study links contractual control and coordination as well as relational governance mechanisms to information-processing activities that organisations deploy to reduce information uncertainty and equivocality.
Astrocyte function from information processing to cognition and cognitive impairment
Astrocytes serve important roles that affect recruitment and function of neurons at the local and network levels. Here we review the contributions of astrocyte signaling to synaptic plasticity, neuronal network oscillations, and memory function. The roles played by astrocytes are not fully understood, but astrocytes seem to contribute to memory consolidation and seem to mediate the effects of vigilance and arousal on memory performance. Understanding the role of astrocytes in cognitive processes may also advance our understanding of how these processes go awry in pathological conditions. Indeed, abnormal astrocytic signaling can cause or contribute to synaptic and network imbalances, leading to cognitive impairment. We discuss evidence for this from animal models of Alzheimer’s disease and multiple sclerosis and from animal studies of sleep deprivation and drug abuse and addiction. Understanding the emerging roles of astrocytes in cognitive function and dysfunction will open up a large array of new therapeutic opportunities.
Volterra et al. review evidence that astrocyte-generated signals participate in recruitment and function of neuronal networks underlying memory performance and that signal abnormalities under pathological conditions contribute to cognitive impairment.
AI-Based Modeling: Techniques, Applications and Research Issues Towards Automation, Intelligent and Smart Systems
Artificial intelligence (AI) is a leading technology of the current age of the Fourth Industrial Revolution (Industry 4.0 or 4IR), with the capability of incorporating human behavior and intelligence into machines or systems. Thus, AI-based modeling is the key to building automated, intelligent, and smart systems that meet today’s needs. To solve real-world issues, various types of AI, such as analytical, functional, interactive, textual, and visual AI, can be applied to enhance the intelligence and capabilities of an application. However, developing an effective AI model is a challenging task due to the dynamic nature of and variation in real-world problems and data. In this paper, we present a comprehensive view of “AI-based Modeling”, covering the principles and capabilities of potential AI techniques that can play an important role in developing intelligent and smart systems in various real-world application areas including business, finance, healthcare, agriculture, smart cities, cybersecurity, and many more. We also emphasize and highlight the research issues within the scope of our study. Overall, the goal of this paper is to provide a broad overview of AI-based modeling that can be used as a reference guide by academics, industry practitioners, and decision-makers in various real-world scenarios and application domains.