573 result(s) for "First-order logic"
Recommender systems based on neuro-symbolic knowledge graph embeddings encoding first-order logic rules
In this paper, we present a knowledge-aware recommendation model based on neuro-symbolic graph embeddings that encode first-order logic rules. Our approach follows the intuition underlying neuro-symbolic AI systems: combine deep learning and symbolic reasoning in a single model to take the best of both paradigms. To this end, we start from a knowledge graph (KG) encoding information about users, ratings, and descriptive properties of the items, and we design a model that combines background knowledge encoded in logical rules mined from the KG with explicit knowledge encoded in the triples of the KG itself to obtain a more precise representation of users and items. Specifically, our model combines: (i) a rule learner that extracts first-order logic rules from the information encoded in the knowledge graph; (ii) a graph embedding module that jointly learns a vector-space representation of users and items based on the triples encoded in the knowledge graph and the rules previously extracted; (iii) a recommendation module that feeds the embeddings to a deep learning architecture providing users with top-k recommendations. In the experimental section, we evaluate the effectiveness of our strategy on three datasets; the results show that combining knowledge graph embeddings with first-order logic rules improves both the predictive accuracy and the novelty of the recommendations. Moreover, our approach outperforms several competitive baselines, confirming the validity of our intuitions.
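The hybrid scoring idea this abstract describes — mixing a learned embedding similarity with a mined first-order rule — can be sketched in a few lines. The toy KG, the rule, and the random embeddings below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

# Illustrative sketch: score a user-item pair by mixing a KG-embedding
# similarity with the firing of a mined first-order rule. The tiny KG,
# the rule, and the random embeddings are assumptions for illustration.

rng = np.random.default_rng(0)
entities = ["u1", "item_a", "item_b", "comedy"]
emb = {e: rng.normal(size=8) for e in entities}

kg = {
    ("u1", "likes", "item_a"),
    ("item_a", "genre", "comedy"),
    ("item_b", "genre", "comedy"),
}

def rule_fires(user, item):
    # Mined rule: likes(U, X) ∧ genre(X, G) ∧ genre(Y, G) ⇒ likes(U, Y)
    for (h, r, liked) in kg:
        if h == user and r == "likes":
            for (h2, r2, g) in kg:
                if h2 == liked and r2 == "genre" and (item, "genre", g) in kg:
                    return True
    return False

def score(user, item, alpha=0.5):
    sim = float(emb[user] @ emb[item])            # embedding component
    return alpha * sim + (1 - alpha) * rule_fires(user, item)  # rule component

print(rule_fires("u1", "item_b"))   # True: item_b shares a genre with a liked item
```

In the paper the two components are learned jointly rather than mixed post hoc, but the sketch shows why a rule can surface novel items the embedding alone would miss.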
A Multimodal AI System: Comparing LLMs and Theorem Proving Systems
This paper discusses a multimodal AI system applied to legal reasoning for tax law. The results given here are very general and apply to systems developed for areas besides tax law. A central goal of this work is to gain a better understanding of the relationships between LLMs (Large Language Models) and automated theorem-proving methodologies. To do this, we suppose (1) two cases for the theorem-proving system: one where it has a countable number of total meanings for its countable number of atoms, and the other where it has an uncountable number of total meanings for its countable number of atoms; and (2) that LLMs can have an uncountable number of token meanings. With this in mind, the results given in this paper use the downward and upward Löwenheim–Skolem theorems and logical model theory to contrast these two AI modalities: one focuses on syntactic proofs, the other on logical semantics based on LLMs. In particular, one modality uses a rule-based first-order logic theorem-proving system to perform legal reasoning. The objective of this theorem-proving system is to provide proofs as evidence of valid legal reasoning when enacted laws are applied to particular situations. These proofs are syntactic structures that can be presented as narrative explanations of how the answer to the legal question was determined. The second modality uses LLMs to analyze and transform a user's tax query so that it can be sent to a first-order logic theorem-proving system to perform its legal reasoning function. The main goal of our application of LLMs is to enhance and simplify user input and output for the theorem-proving system. Using logical model theory, we show how there can exist an equivalence between laws represented in the logic of the theorem-proving system, fixed in time when the system was set up, and new semantics given by LLMs. These results are based on logical model theory and the Löwenheim–Skolem theorems.
A Canonical Model for Constant Domain Basic First-Order Logic
I build a canonical model for constant domain basic first-order logic (BQLCD), the constant domain first-order extension of Visser's basic propositional logic, and use the canonical model to verify that BQLCD satisfies the disjunction and existence properties.
Undecidability of First-Order Modal and Intuitionistic Logics with Two Variables and One Monadic Predicate Letter
We prove that the positive fragment of first-order intuitionistic logic in the language with two individual variables and a single monadic predicate letter, without functional symbols, constants, and equality, is undecidable. This holds true regardless of whether we consider semantics with expanding or constant domains. We then generalise this result to the intervals [QBL, QKC] and [QBL, QFL], where QKC is the logic of the weak law of the excluded middle and QBL and QFL are first-order counterparts of Visser's basic and formal logics, respectively. We also show that, for most "natural" first-order modal logics, the two-variable fragment with a single monadic predicate letter, without functional symbols, constants, and equality, is undecidable, regardless of whether we consider semantics with expanding or constant domains. These include all sublogics of QKTB, QGL, and QGrz, among them QK, QT, QKB, QD, QK4, and QS4.
A Unified Approach to Structural Limits and Limits of Graphs with Bounded Tree-Depth
In this paper we introduce a general framework for the study of limits of relational structures, and of graphs in particular, based on a combination of model theory and (functional) analysis. We show how the various approaches to graph limits fit into this framework and naturally appear as “tractable cases” of a general theory. As an outcome, we provide extensions of known results; we believe this places them in a broader context. The second part of the paper is devoted to the study of sparse structures. First, we consider limits of structures with connected components of bounded diameter and prove that in this case convergence can be “almost” studied component-wise. We also propose the structure of limit objects for convergent sequences of sparse structures. Finally, we consider the specific case of limits of colored rooted trees with bounded height and of graphs with bounded tree-depth, motivated by the role these graphs play as “elementary bricks” in decompositions of sparse graphs, and give an explicit construction of a limit object in this case. This limit object is a graph built on a standard probability space with the property that every first-order definable set of tuples is measurable. This is an example of the general concept of
CONDITIONAL REASONING AND THE SHADOWS IT CASTS ONTO THE FIRST-ORDER LOGIC: THE NELSONIAN CASE
We define a natural notion of standard translation for the formulas of conditional logic which is analogous to the standard translation of modal formulas into first-order logic. We briefly show that this translation works (modulo a lightweight first-order encoding of the conditional models) for the minimal classical conditional logic CK introduced by Brian Chellas in [3]; however, the main result of the article is that a classically equivalent reformulation of these notions (i.e., of standard translation plus theory of conditional models) also faithfully embeds the basic Nelsonian conditional logic N4CK, introduced in [11], into QN4, the paraconsistent variant of Nelson's first-order logic of strong negation. Thus N4CK is the logic induced by the Nelsonian reading of the classical Chellas semantics of conditionals and can, therefore, be considered a faithful analogue of CK on the non-classical basis provided by the propositional fragment of QN4. Moreover, the methods used to prove our main result can be easily adapted to the case of modal logic, which makes it possible to improve an older result [10, Proposition 7] by S. Odintsov and H. Wansing about the standard translation embedding of the Nelsonian modal logic FSK^d into QN4.
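The standard translation of modal formulas into first-order logic that this abstract builds on is the classical recursion ST_x(p) = P(x), ST_x(□φ) = ∀y (R(x,y) → ST_y(φ)). A minimal string-based sketch of that recursion (the tuple encoding and output syntax are our own assumptions, not the article's):

```python
# Standard translation of basic modal formulas into first-order logic:
#   ST_x(p)   = P(x)
#   ST_x(□φ)  = ∀y (R(x,y) → ST_y(φ))
# Formulas are nested tuples; the concrete syntax is illustrative only.

def st(formula, x, depth=0):
    op = formula[0]
    if op == "atom":                 # p  ↦  P(x)
        return f"{formula[1]}({x})"
    if op == "not":
        return f"~{st(formula[1], x, depth)}"
    if op == "and":
        return f"({st(formula[1], x, depth)} & {st(formula[2], x, depth)})"
    if op == "box":                  # □φ ↦ ∀y (R(x,y) → ST_y(φ))
        y = f"y{depth}"              # fresh world variable
        return f"forall {y} (R({x},{y}) -> {st(formula[1], y, depth + 1)})"
    raise ValueError(f"unknown connective: {op}")

print(st(("box", ("and", ("atom", "P"), ("not", ("atom", "Q")))), "x"))
# forall y0 (R(x,y0) -> (P(y0) & ~Q(y0)))
```

The article's contribution is extending this idea from the box modality to the conditional connective and to the Nelsonian (strong-negation) setting.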
An Intuitionistically Complete System of Basic Intuitionistic Conditional Logic
We introduce a basic intuitionistic conditional logic IntCK that we show to be complete both relative to a special type of Kripke models and relative to a standard translation into first-order intuitionistic logic. We show that IntCK stands in a very natural relation to other similar logics, like the basic classical conditional logic CK and the basic intuitionistic modal logic IK. As for the basic intuitionistic conditional logic ICK proposed in Weiss (Journal of Philosophical Logic, 48, 447–469, 2019), IntCK extends its language with a diamond-like conditional modality ◊→, but its (◊→)-free fragment is also a proper extension of ICK. We briefly discuss the resulting gap between the two candidate systems of basic intuitionistic conditional logic and the possible pros and cons of both candidates.
Markov logic networks
Issue Title: Special Issue: Multi-Relational Data Mining and Statistical Relational Learning. We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach.
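The grounding step this abstract describes — one feature per grounding of each weighted formula, with the world's unnormalized probability exp(Σᵢ wᵢ nᵢ) — can be worked through on a toy domain. The formula, the weight, and the world below are invented for illustration; this is not the authors' implementation:

```python
import itertools, math

# Toy grounding of a Markov logic network over a two-person domain.
# Weighted first-order formula (weight 1.1, invented for illustration):
#   Friends(x, y) => (Smokes(x) <=> Smokes(y))

domain = ["anna", "bob"]

world = {  # one possible world: truth values for all ground atoms
    ("Smokes", "anna"): True,
    ("Smokes", "bob"): False,
    ("Friends", "anna", "anna"): False,
    ("Friends", "anna", "bob"): True,
    ("Friends", "bob", "anna"): True,
    ("Friends", "bob", "bob"): False,
}

def ground_formula(x, y):
    # One ground feature per pair (x, y) in the domain.
    return (not world[("Friends", x, y)]) or (
        world[("Smokes", x)] == world[("Smokes", y)]
    )

weight = 1.1
# Unnormalized probability of the world: exp(weight * #true groundings).
n_true = sum(ground_formula(x, y) for x, y in itertools.product(domain, repeat=2))
score = math.exp(weight * n_true)
print(n_true)   # 2 of the 4 groundings are satisfied in this world
```

Normalizing `score` over all 2^6 worlds would give the actual probability; the paper avoids that blow-up by running MCMC only over the ground atoms relevant to a query.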
THE FLUTED FRAGMENT REVISITED
We study the fluted fragment, a decidable fragment of first-order logic with an unbounded number of variables, motivated by the work of W. V. Quine. We show that the satisfiability problem for this fragment has nonelementary complexity, thus refuting an earlier published claim by W. C. Purdy that it is in NExpTime. More precisely, we consider 𝓕𝓛m, the intersection of the fluted fragment and the m-variable fragment of first-order logic, for all m ≥ 1. We show that, for m ≥ 2, this subfragment forces ⌊m/2⌋-tuply exponentially large models, and that its satisfiability problem is ⌊m/2⌋-NExpTime-hard. We further establish that, for m ≥ 3, any satisfiable 𝓕𝓛m-formula has a model of at most (m − 2)-tuply exponential size, whence the satisfiability (= finite satisfiability) problem for this fragment is in (m − 2)-NExpTime. Together with other known complexity results, this provides tight complexity bounds for 𝓕𝓛m for all m ≤ 4.