Catalogue Search | MBRL
Explore the vast range of titles available.
32 result(s) for "68T30"
ON THE DEFINITION OF A CONFOUNDER
2013
The causal inference literature has provided a clear formal definition of confounding expressed in terms of counterfactual independence. The literature has not, however, come to any consensus on a formal definition of a confounder, as it has given priority to the concept of confounding over that of a confounder. We consider a number of candidate definitions arising from various more informal statements made in the literature. We consider the properties satisfied by each candidate definition, principally focusing on (i) whether under the candidate definition control for all "confounders" suffices to control for "confounding" and (ii) whether each confounder in some context helps eliminate or reduce confounding bias. Several of the candidate definitions do not have these two properties. Only one of the candidate definitions considered satisfies both properties. We propose that a "confounder" be defined as a pre-exposure covariate C for which there exists a set of other covariates X such that the effect of the exposure on the outcome is unconfounded conditional on (X, C) but such that for no proper subset of (X, C) is the effect of the exposure on the outcome unconfounded given the subset. We also provide a conditional analogue of the above definition, and we propose that a variable that helps reduce but not eliminate bias be referred to as a "surrogate confounder." These definitions are closely related to those given by Robins and Morgenstern [Comput. Math. Appl. 14 (1987) 869-916]. The implications that hold among the various candidate definitions are discussed.
Journal Article
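Property (ii), that a confounder helps remove bias, can be illustrated with a toy simulation (not from the paper; the variable names, effect sizes, and distributions below are hypothetical): a pre-exposure covariate C that influences both exposure A and outcome Y biases the crude contrast, while stratifying on C recovers the true effect.

```python
import random

random.seed(0)
n = 200_000
data = []
for _ in range(n):
    c = 1 if random.random() < 0.5 else 0            # pre-exposure covariate (confounder)
    a = 1 if random.random() < 0.2 + 0.6 * c else 0  # exposure probability depends on C
    y = 1.0 * a + 2.0 * c + random.gauss(0, 1)       # true causal effect of A on Y is 1.0
    data.append((c, a, y))

def mean_y(rows):
    return sum(y for _, _, y in rows) / len(rows)

# Crude contrast: biased, because C raises both exposure probability and outcome.
crude = (mean_y([r for r in data if r[1] == 1])
         - mean_y([r for r in data if r[1] == 0]))

# Standardized contrast: average the within-C contrasts, weighted by P(C=c).
adjusted = 0.0
for c in (0, 1):
    stratum = [r for r in data if r[0] == c]
    diff = (mean_y([r for r in stratum if r[1] == 1])
            - mean_y([r for r in stratum if r[1] == 0]))
    adjusted += diff * len(stratum) / n

print(f"crude={crude:.2f} adjusted={adjusted:.2f}")
```

Here the crude contrast lands well above the true effect of 1.0, while the C-stratified estimate recovers it, matching the sense in which conditioning on (X, C) renders the effect unconfounded.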
LEARNING HIGH-DIMENSIONAL DIRECTED ACYCLIC GRAPHS WITH LATENT AND SELECTION VARIABLES
2012
We consider the problem of learning causal information between random variables in directed acyclic graphs (DAGs) when allowing arbitrarily many latent and selection variables. The FCI (Fast Causal Inference) algorithm has been explicitly designed to infer conditional independence and causal information in such settings. However, FCI is computationally infeasible for large graphs. We therefore propose the new RFCI algorithm, which is much faster than FCI. In some situations the output of RFCI is slightly less informative, in particular with respect to conditional independence information. However, we prove that any causal information in the output of RFCI is correct in the asymptotic limit. We also define a class of graphs on which the outputs of FCI and RFCI are identical. We prove consistency of FCI and RFCI in sparse high-dimensional settings, and demonstrate in simulations that the estimation performances of the algorithms are very similar. All software is implemented in the R-package pcalg.
Journal Article
Optimization strategy of ideological and political education in colleges and universities based on modern information technology
2024
Data-driven thinking is an important concept, technical resource, and innovative method of the new era, expanding the ways people think about, explain, and deal with problems. Starting from practical realities, this paper adopts data-driven theory to provide technical support and a scientific cognitive approach for ideological and political education in colleges and universities in the new era, and explores data-driven optimization strategies for such education. With the support of big data technology, data-driven ideological and political education explores the trajectories and laws of ideological and political thought and behavior, shifts from emphasizing result orientation to emphasizing data-based prediction, and shifts from focusing on theoretical thinking to in-depth practice, opening up a new avenue for research on ideological and political education in the new era.
Journal Article
An extended knowledge compilation map for conditional preference statements-based and generalized additive utilities-based languages
by Mengin, Jérôme; Fargier, Hélène; Mengel, Stefan
in Artificial Intelligence; Combinatorial analysis; Complex Systems
2024
Conditional preference statements have been used to compactly represent preferences over combinatorial domains. They are at the core of CP-nets and their generalizations, as well as of lexicographic preference trees. Several works have addressed the complexity of certain queries (optimization and dominance in particular). In this paper we extend some of these results and study further queries that have not been addressed so far, such as equivalence, and transformations, such as conditioning and variable elimination, thereby contributing to a knowledge compilation map for languages based on conditional preference statements. We also study the expressiveness and complexity of queries and transformations for generalized additive utilities.
Journal Article
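Dominance, one of the queries studied in this line of work, is easy to state in the simplest lexicographic case. A toy sketch (the variables, values, and the unconditional lexicographic model below are invented for illustration; the languages the paper studies are far more general, allowing conditional statements):

```python
# Variables in decreasing order of importance, each with its preferred value.
order = [("price", "low"), ("brand", "A"), ("color", "red")]

def dominates(x, y):
    # x dominates y iff, at the most important variable where they differ,
    # x takes the preferred value. Identical outcomes do not dominate each other.
    for var, preferred in order:
        if x[var] != y[var]:
            return x[var] == preferred
    return False

a = {"price": "low", "brand": "B", "color": "red"}
b = {"price": "high", "brand": "A", "color": "red"}
result = dominates(a, b)  # price is the most important differing variable
```

Because `price` outranks `brand`, outcome `a` dominates `b` even though `b` has the preferred brand, which is the defining behaviour of lexicographic models.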
A knowledge compilation perspective on queries and transformations for belief tracking
by Scheck, Sergej; Niveau, Alexandre; Zanuttini, Bruno
in Artificial Intelligence; Complex Systems; Computer Science
2024
Nondeterministic planning is the process of computing plans or policies of actions achieving given goals, when there is nondeterministic uncertainty about the initial state and/or the outcomes of actions. This process encompasses many precise computational problems, from classical planning, where there is no uncertainty, to contingent planning, where the agent has access to observations about the current state. Fundamental to these problems is belief tracking, that is, obtaining information about the current state after a history of actions and observations. At an abstract level, belief tracking can be seen as maintaining and querying the current belief state, that is, the set of states consistent with the history. We take a knowledge compilation perspective on these processes, by defining the queries and transformations which pertain to belief tracking. We study them for propositional domains, considering a number of representations for belief states, actions, observations, and goals. In particular, for belief states, we consider explicit propositional representations with and without auxiliary variables, as well as implicit representations by the history itself; and for actions, we consider propositional action theories as well as ground PDDL and conditional STRIPS. For all combinations, we investigate the complexity of relevant queries (for instance, whether an action is applicable at a belief state) and transformations (for instance, revising a belief state by an observation); we also discuss the relative succinctness of representations. Though many results show an expected tradeoff between succinctness and tractability, we identify some interesting combinations. We also discuss the choice of representations by existing planners in light of our study.
Journal Article
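The core belief-tracking step described above, progressing a belief state through a nondeterministic action and filtering by an observation, can be sketched over explicit state sets (the toy domain below is invented; the paper works with propositional and history-based representations, where the interesting complexity questions arise):

```python
def progress(belief, action, observation):
    """Progress a belief state (set of states) through a nondeterministic
    action, then keep only the successors consistent with the observation."""
    successors = set()
    for s in belief:
        successors |= action(s)       # union of possible outcomes per state
    return {s for s in successors if observation(s)}

# Toy domain: a state is a position 0..3; "move" nondeterministically
# steps left or right, clamped to the corridor ends.
def move(s):
    return {max(s - 1, 0), min(s + 1, 3)}

belief = {0, 1, 2, 3}                  # total uncertainty about the initial state
belief = progress(belief, move, lambda s: s != 0)  # sensor: not at position 0
```

With explicit sets the update is straightforward; the tradeoffs the paper maps out come from representing `belief` implicitly (as a formula or as the history itself) while keeping queries like applicability tractable.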
Mconvkgc: a novel multi-channel convolutional model for knowledge graph completion
2024
The incompleteness of the knowledge graph limits its applications to various downstream tasks. To this end, numerous influential knowledge graph embedding models have been presented and have made great achievements in the domain of knowledge graph completion. However, most of these models only pay attention to the extraction of latent knowledge or translational features, and cannot comprehensively capture the surface semantics, latent interactions, and translational characteristics of triples. In this paper, a novel multi-channel convolutional model, MConvKGC, is presented for knowledge graph completion; it has three feature extraction channels, which it employs to simultaneously extract shallow semantics, latent interactions, and translational characteristics, respectively. In addition, MConvKGC adopts an asymmetric convolutional block to comprehensively extract the latent interactions from triples, and processes the generated feature maps with various attention mechanisms to further learn local dependencies between entities and relations. The results of the conducted link prediction experiments on FB15k-237, WN18RR, and UMLS indicate that our proposed MConvKGC shows excellent performance and outperforms previous state-of-the-art KGE models in the majority of cases.
Journal Article
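The "translational characteristics" mentioned above follow the intuition popularized by TransE: for a true triple (h, r, t), the embeddings should satisfy h + r ≈ t. A toy scoring sketch (the entities and 2-d embeddings are made up for illustration; MConvKGC's actual channels are convolutional and are not reproduced here):

```python
def transe_score(h, r, t):
    # Lower is better: L1 distance between (h + r) and t.
    return sum(abs(hv + rv - tv) for hv, rv, tv in zip(h, r, t))

# Hypothetical embeddings, chosen so the true triple translates exactly.
emb = {
    "paris":      [1.0, 0.0],
    "berlin":     [3.0, 2.0],
    "france":     [1.5, 1.0],
    "capital_of": [0.5, 1.0],
}

good = transe_score(emb["paris"], emb["capital_of"], emb["france"])   # true triple
bad  = transe_score(emb["berlin"], emb["capital_of"], emb["france"])  # corrupted head
```

Link prediction then amounts to ranking candidate heads or tails by this score; convolutional models like the one described replace the fixed translation with learned feature extractors.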
Logical perspectives on the foundations of probability
2023
We illustrate how a variety of logical methods and techniques provide useful, though currently underappreciated, tools in the foundations and applications of reasoning under uncertainty. The field is vast, spanning logic, artificial intelligence, statistics, and decision theory. Rather than (hopelessly) attempting a comprehensive survey, we focus on a handful of telling examples. While most of our attention will be devoted to frameworks in which uncertainty is quantified probabilistically, we will also touch upon generalisations of probability measures of uncertainty, which have attracted significant interest in the past few decades.
Journal Article
Semantically realizing discovery and composition for RESTful web services
2024
The processes of service discovery and composition are crucial tasks in application development driven by Web Services. However, with RESTful Web Service replacing SOAP-based Web Service as the dominant service-providing approach, the research on service discovery and composition should also shift its focus from SOAP-based Web Service to RESTful Web Service. The unstructured, resource-oriented and unified interface characteristics of RESTful Web Service pose challenges to its discovery and composition process. In this work, a framework for implementing RESTful Web Service discovery and automatic composition based on semantic technology is proposed. Firstly, the framework uses the OpenAPI Specification (OAS), which is extended by resource attributes, as the RESTful Web Service description specification, and then supports semantic-based matching discovery and automatic composition by attaching the concepts of domain ontology to the extended OAS. Secondly, the framework is fully adapted to REST features and provides a method for building service composition dependencies during registration, which is used to generate composition schemes during the service discovery process. Finally, the framework provides a discovery method that can return RESTful Web services to the requester in the form of single-point services or service composition schemes according to the magnitude of the semantic similarity with the requester’s requirements. We applied the proposed methods to experiment with RESTful Web services in three different fields, and the results show that the methods effectively calculate the similarity between RESTful single-point Web services or composite Web services and service requests with the support of domain ontology.
Journal Article
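The discovery step described above, returning services ranked by semantic similarity to the request, can be sketched in miniature. This is a hypothetical illustration, not the paper's algorithm: here each service advertises a set of domain-ontology concepts and candidates are ranked by Jaccard similarity with the requester's concept set, with a cutoff threshold.

```python
# Hypothetical service registry: name -> set of attached ontology concepts.
services = {
    "weatherByCity":   {"City", "Weather", "Forecast"},
    "cityGeocoder":    {"City", "Coordinates"},
    "forecastByCoord": {"Coordinates", "Weather", "Forecast"},
}

def jaccard(a, b):
    # Set-overlap similarity in [0, 1]; 0.0 for two empty sets.
    return len(a & b) / len(a | b) if a | b else 0.0

def match(request_concepts, threshold=0.3):
    # Rank services by similarity to the request, dropping weak matches.
    scored = [(jaccard(request_concepts, c), name) for name, c in services.items()]
    return [name for score, name in sorted(scored, reverse=True) if score >= threshold]

ranking = match({"City", "Weather"})
```

The framework in the paper goes further by composing services when no single endpoint covers the request; the ranking step above would then score composition schemes alongside single-point services.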
Identification of representative trees in random forests based on a new tree-based distance measure
by König, Inke R.; Westenberger, Ana; Laabs, Björn-Hergen
in Algorithms; Chemistry and Earth Sciences; Classification
2024
In life sciences, random forests are often used to train predictive models. However, gaining any explanatory insight into the mechanics leading to a specific outcome is rather complex, which impedes the implementation of random forests into clinical practice. By simplifying a complex ensemble of decision trees to a single most representative tree, it should become possible to observe common tree structures, the importance of specific features, and variable interactions. Thus, representative trees could also help to understand interactions between genetic variants. Intuitively, representative trees are those with the minimal distance to all other trees, which requires a proper definition of the distance between two trees. Thus, we developed a new tree-based distance measure, which incorporates more of the underlying tree structure than other metrics. We compared our new method with the existing metrics in an extensive simulation study and applied it to predict the age at onset based on a set of genetic risk factors in a clinical data set. In our simulation study we were able to show the advantages of our weighted splitting variable approach. Our real data application revealed that representative trees are not only able to replicate the results from a recent genome-wide association study, but can also give additional explanations of the genetic mechanisms. Finally, we implemented all compared distance measures in R and made them publicly available in the R package timbR (https://github.com/imbs-hl/timbR).
Journal Article
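The "minimal distance to all other trees" criterion is independent of which tree distance is plugged in. A minimal sketch (the 4x4 distance matrix below is made up for illustration; the paper's contribution is the weighted splitting-variable distance itself, implemented in the timbR R package):

```python
# Hypothetical symmetric pairwise distances between the trees of a small forest.
distances = [
    [0.0, 0.2, 0.9, 0.8],
    [0.2, 0.0, 0.7, 0.6],
    [0.9, 0.7, 0.0, 0.3],
    [0.8, 0.6, 0.3, 0.0],
]

def representative_tree(d):
    # The representative tree minimizes the summed distance to all others.
    totals = [sum(row) for row in d]
    return min(range(len(d)), key=totals.__getitem__)

rep = representative_tree(distances)
```

With these numbers, tree 1 has the smallest total distance (1.5) and is selected; swapping in a different tree metric changes the matrix but not the selection rule.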
Cultural heritage digital twin: modeling and representing the visual narrative in Leonardo Da Vinci’s Mona Lisa
by Amelio, Alessia; Zarri, Gian Piero
in Artificial Intelligence; Computational Biology/Bioinformatics; Computational Science and Engineering
2024
In this paper, Artificial Intelligence/Knowledge Representation methods are used for the digital modeling of cultural heritage elements. Accordingly, the new concept of the digital cultural heritage twin is presented as composed of a physical component and an immaterial component of the cultural entity. The former concerns the physical aspects, i.e. style, name of the artist, execution time, dimension, etc. The latter represents the emotional and intangible aspects transmitted by the entity, i.e. emotions, thoughts, opinions. In order to digitally model the physical and immaterial components of the twin, the Narrative Knowledge Representation Language (NKRL) has been formally introduced and described. It is particularly suitable for representing the immaterial aspects of the cultural entity, as it is capable of modeling complex situations and events, behaviours, attitudes, etc. in a simple but rigorous and efficient way. As an experiment, NKRL has been adopted for representing some of the most relevant intangible items of the visual narrative underlying the hidden painting that lies beneath the Mona Lisa (La Gioconda) image painted by Leonardo Da Vinci on the same poplar panel. Real-time application of the resulting knowledge base opens up novel possibilities for the development of virtual objects, chatbots and expert systems, as well as the definition of semantic search platforms related to cultural heritage.
Journal Article