172 results for "Schulte, Oliver"
Deep soccer analytics: learning an action-value function for evaluating soccer players
Given the large pitch, numerous players, limited player turnovers, and sparse scoring, soccer is arguably the most challenging of all the major team sports to analyze. In this work, we develop a new approach to evaluating all types of soccer actions from play-by-play event data. Our approach utilizes a Deep Reinforcement Learning (DRL) model to learn an action-value Q-function. To our knowledge, this is the first action-value function based on DRL methods for a comprehensive set of soccer actions. Our neural architecture captures continuous game-context signals and the sequential features within a play using two stacked LSTM towers, one for the home team and one for the away team. To validate the model, we illustrate both temporal and spatial projections of the learned Q-function and conduct a calibration experiment to study the data fit under different game contexts. Our novel soccer Goal Impact Metric (GIM) applies values from the learned Q-function to measure a player’s overall performance as the aggregate impact of his actions over all the games in a season. To interpret the impact values, a mimic regression tree is built to find the game features that influence the values most. As an application of the GIM metric, we conduct a case study ranking players in the English Football League Championship. Empirical evaluation indicates that GIM is a temporally stable metric, and its correlations with standard measures of soccer success are higher than those computed with other state-of-the-art soccer metrics.
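For illustration, the aggregation step behind a metric like GIM can be sketched in a few lines of Python. The event schema, the per-team layout of the Q-values, and the definition of an action's impact as the change in the acting team's Q-value are assumptions made for this example, not the authors' implementation.

```python
from collections import defaultdict

def goal_impact_metric(events, q_values):
    """Aggregate per-player impact over a set of games (illustrative sketch).

    events   : list of dicts with keys 'player' and 'team', one per event.
    q_values : list of per-team action values aligned with events,
               e.g. {'home': 0.31, 'away': 0.12} (assumed schema).
    Impact of an action = change in the acting team's Q-value relative to
    the previous event; a player's GIM is the sum of these impacts.
    """
    gim = defaultdict(float)
    for i in range(1, len(events)):
        team = events[i]['team']
        impact = q_values[i][team] - q_values[i - 1][team]
        gim[events[i]['player']] += impact
    return dict(gim)

# Toy usage: one player performs the second event, which raises the home
# team's Q-value from 0.20 to 0.35, so the player is credited with +0.15.
events = [{'player': 'A', 'team': 'home'}, {'player': 'A', 'team': 'home'}]
q_values = [{'home': 0.20, 'away': 0.10}, {'home': 0.35, 'away': 0.08}]
print(goal_impact_metric(events, q_values))
```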
Causal Learning with Occam's Razor
Occam's razor directs us to adopt the simplest hypothesis consistent with the evidence. Learning theory provides a precise definition of the inductive simplicity of a hypothesis for a given learning problem. This definition specifies a learning method that implements an inductive version of Occam's razor. As a case study, we apply Occam's inductive razor to causal learning. We consider two causal learning problems: learning a causal graph structure that represents global causal connections among a set of domain variables, and learning context-sensitive causal relationships that hold not globally, but only relative to a context. For causal graph learning, Occam's inductive razor directs us to adopt the model that explains the observed correlations with a minimum number of direct causal connections. For expanding a causal graph structure to include context-sensitive relationships, Occam's inductive razor directs us to adopt the expansion that explains the observed correlations with a minimum number of free parameters, which is equivalent to explaining the correlations with a minimum number of probabilistic logical rules. The paper provides a gentle introduction to the learning-theoretic definition of inductive simplicity and the application of Occam's razor to causal learning.
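The edge-count form of the razor can be phrased as a toy selection rule: among candidate graphs judged consistent with the observed correlations, keep one with the fewest direct causal connections. The sketch below is a simplified illustration with hypothetical inputs (candidates, consistent_with_data); it is not the learning method defined in the paper.

```python
def occam_select(candidates, consistent_with_data):
    """Toy Occam's-razor selection for causal graphs: among candidates that
    explain the observed correlations, return one with the fewest edges."""
    viable = [g for g in candidates if consistent_with_data(g)]
    return min(viable, key=len) if viable else None

# A correlation between X and Y can be explained by a single direct edge or
# by a chain through a mediator Z; the razor prefers the single edge.
candidates = [
    {('X', 'Y')},               # one direct causal connection
    {('X', 'Z'), ('Z', 'Y')},   # two connections via a mediator
]
print(occam_select(candidates, lambda graph: True))   # {('X', 'Y')}
```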
Model-based exception mining for object-relational data
This paper develops model-based exception mining and outlier detection for the case of object-relational data. Object-relational data represent a complex heterogeneous network, which comprises objects of different types, links among these objects, also of different types, and attributes of these links. We follow the well-established exceptional model mining (EMM) framework, which has previously been applied for subgroup discovery in propositional data; our novel contribution is to develop EMM for relational data. EMM leverages machine learning models for exception mining: an object is exceptional to the extent that a model learned for the object's data differs from a model learned for the general population. In relational data, EMM can therefore be used for detecting individual outlier or exceptional objects. We combine EMM with state-of-the-art statistical-relational model discovery methods for constructing a graphical model (a Bayesian network) that compactly represents probabilistic associations in the data. We investigate several outlierness metrics, based on the learned object-relational model, that quantify the extent to which the association pattern of a potential outlier object deviates from that of the whole population. Our method is validated on synthetic data sets and on real-world data sets about soccer and hockey matches, IMDb movies, and mutagenic compounds. Compared to baseline methods, the EMM approach achieved the best detection accuracy when combined with a novel outlierness metric. An empirical evaluation on soccer and movie data shows a strong correlation between our novel outlierness metric and success metrics: individuals that our metric marks out as unusual tend to have unusual success.
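As a rough propositional stand-in for an outlierness metric, the sketch below scores an object by how far its empirical feature distribution diverges from the population model (a KL-divergence estimate). The paper's actual metrics are defined over learned object-relational Bayesian networks; the function name and data layout here are illustrative assumptions.

```python
import math

def outlierness(object_counts, population_probs):
    """Score one object against the population model (illustrative sketch).

    object_counts    : dict feature-value -> count observed for the object.
    population_probs : dict feature-value -> probability under the model
                       learned for the whole population.
    Returns the KL divergence of the object's empirical distribution from
    the population distribution; larger means more exceptional.
    """
    total = sum(object_counts.values())
    score = 0.0
    for value, count in object_counts.items():
        p_obj = count / total
        p_pop = population_probs.get(value, 1e-9)
        score += p_obj * math.log(p_obj / p_pop)
    return score

# A player whose action mix deviates from the league-wide mix scores higher
# than one who matches it exactly.
league = {'pass': 0.7, 'shot': 0.1, 'tackle': 0.2}
print(outlierness({'pass': 20, 'shot': 15, 'tackle': 5}, league))    # > 0
print(outlierness({'pass': 70, 'shot': 10, 'tackle': 20}, league))   # 0.0
```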
The Effect of Renting in Cropland on Livelihood Choices and Agricultural Commercialization: A Case Study from Rural Vietnam
This paper investigates the role of land rental markets in livelihood choices using data from 792 farming households in rural Vietnam. First, we cluster households according to livelihood strategies and estimate the determinants of the respective choice. In a second step, we analyze the contribution of rented land to linking smallholders to output markets. Our results suggest that rented land can provide smallholders with an opportunity to increase their agricultural activities and avoid resorting to less remunerative activities such as agricultural wage labor. Moreover, rented crop area increases the probability of market participation as well as the quantity of sales. Our results point to the need for further liberalization of land rental markets, specifically targeted at households that have previously been excluded from more remunerative livelihood strategies. We therefore recommend supporting these households in gaining access to the assets required to cultivate the additional area.
Learning graphical models for relational data via lattice search
Many machine learning applications that involve relational databases incorporate first-order logic and probability. Relational extensions of graphical models include Parametrized Bayes Nets (Poole in IJCAI, pp. 985-991, 2003), Probabilistic Relational Models (Getoor et al. in Introduction to statistical relational learning, pp. 129-173, 2007), and Markov Logic Networks (MLNs) (Domingos and Richardson in Introduction to statistical relational learning, 2007). Many of the current state-of-the-art algorithms for learning MLNs have focused on relatively small datasets with few descriptive attributes, where predicates are mostly binary and the main task is usually prediction of links between entities. This paper addresses what is in a sense a complementary problem: learning the structure of a graphical model that models the distribution of discrete descriptive attributes given the links between entities in a relational database. Descriptive attributes are usually nonbinary and can be very informative, but they increase the search space of possible candidate clauses. We present an efficient new algorithm for learning a Parametrized Bayes Net that performs a level-wise search through the table join lattice for relational dependencies. From the Bayes net we obtain an MLN structure via a standard moralization procedure for converting directed models to undirected models. Learning MLN structure by moralization is 200-1000 times faster and scores substantially higher in predictive accuracy than benchmark MLN algorithms on five relational databases.
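The moralization step mentioned above is standard and easy to state: marry every pair of parents that share a child, then drop edge directions. A minimal sketch, assuming the Bayes net is given as a child-to-parents dictionary (the relational literals in the example are hypothetical):

```python
from itertools import combinations

def moralize(parents):
    """Moralize a directed model: connect ('marry') every pair of parents
    that share a child, then drop all edge directions.

    parents : dict node -> list of parent nodes.
    Returns a set of undirected edges, each a frozenset of two nodes.
    """
    undirected = set()
    for child, pa in parents.items():
        for p in pa:                         # original edges, undirected
            undirected.add(frozenset((p, child)))
        for p, q in combinations(pa, 2):     # marry co-parents
            undirected.add(frozenset((p, q)))
    return undirected

# Two attributes both influence a link predicate; moralization links them.
bn = {'Friend(X,Y)': ['gender(X)', 'gender(Y)']}
for edge in moralize(bn):
    print(sorted(edge))
```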
A Markov Game model for valuing actions, locations, and team performance in ice hockey
We apply the Markov Game formalism to develop a context-aware approach to valuing player actions, locations, and team performance in ice hockey. The Markov Game formalism uses machine learning and AI techniques to incorporate context and look-ahead. Dynamic programming is applied to learn value functions that quantify the impact of actions on goal scoring. Learning is based on a massive new dataset, from SportLogiq, that contains over 1.3M events in the National Hockey League. The SportLogiq data include the location of an action, which has previously been unavailable in hockey analytics. We give examples showing how the model assigns context- and location-aware values to a large set of 13 action types. Team performance can be assessed as the aggregate value of actions performed by the team’s players, or the aggregate value of states reached by the team. Model validation shows that both total team action value and total team state value are strong predictors of team success, as measured by the team’s average goal ratio.
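The dynamic-programming component can be illustrated with a generic value-iteration sketch over an empirical transition model; the state encoding, rewards, and numbers below are simplified assumptions, not the paper's Markov Game model or the SportLogiq data.

```python
def value_iteration(transitions, rewards, gamma=1.0, iters=100):
    """Dynamic-programming sketch for a state-value function.

    transitions : dict state -> dict next_state -> probability
                  (e.g. estimated from play-by-play event counts).
    rewards     : dict state -> immediate reward (e.g. 1.0 for a goal).
    Returns V(s), the expected future reward starting from each state.
    """
    V = {s: 0.0 for s in transitions}
    for _ in range(iters):
        V = {s: rewards.get(s, 0.0)
                + gamma * sum(p * V.get(s2, rewards.get(s2, 0.0))
                              for s2, p in transitions[s].items())
             for s in transitions}
    return V

# Tiny example: a shot from the slot leads to a goal 20% of the time,
# so the value of that state is roughly 0.2 goals.
transitions = {'shot_slot': {'goal': 0.2, 'turnover': 0.8}, 'turnover': {}}
print(value_iteration(transitions, rewards={'goal': 1.0}))
```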
Fast learning of relational dependency networks
A relational dependency network (RDN) is a directed graphical model widely used for multi-relational data. These networks allow cyclic dependencies, which are necessary to represent relational autocorrelations. We describe an approach for learning both the RDN’s structure and its parameters, given an input relational database: first learn a Bayesian network (BN), then transform the Bayesian network into an RDN. Thus fast Bayesian network learning translates into fast RDN learning. The BN-to-RDN transform comprises a simple, local adjustment of the Bayesian network structure and a closed-form transform of the Bayesian network parameters. This method can learn an RDN for a dataset with a million tuples in minutes. We empirically compare our approach to a state-of-the-art RDN learning approach that applies functional gradient boosting, using six benchmark datasets. Learning RDNs via BNs scales much better to large datasets than learning RDNs with current boosting methods.
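A minimal sketch of the structural half of such a BN-to-RDN transform, under the standard assumption that each node's dependency set in the resulting RDN is its Markov blanket in the learned BN (its parents, its children, and its children's other parents); the closed-form parameter transform from the paper is not reproduced here, and the example literals are hypothetical.

```python
def bn_to_rdn_structure(parents):
    """Derive RDN dependency sets from a Bayesian network structure by taking
    each node's Markov blanket, which may introduce the cyclic dependencies
    an RDN is allowed to contain.

    parents : dict node -> set of parent nodes in the learned BN.
    Returns dict node -> set of nodes it depends on in the RDN.
    """
    nodes = set(parents) | {p for pa in parents.values() for p in pa}
    blanket = {n: set(parents.get(n, ())) for n in nodes}
    for child, pa in parents.items():
        for p in pa:
            blanket[p].add(child)        # add children
            blanket[p] |= (pa - {p})     # add co-parents
    return blanket

# Autocorrelation-style dependency: a person's smoking depends on a friend's.
bn = {'Smokes(Y)': {'Smokes(X)', 'Friend(X,Y)'}}
print(bn_to_rdn_structure(bn))
```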
Modelling relational statistics with Bayes Nets
Class-level models capture relational statistics over object attributes and their connecting links, answering questions such as “what is the percentage of friendship pairs where both friends are women?” Class-level relationships are important in themselves, and they support applications like policy making, strategic planning, and query optimization. We represent class statistics using Parametrized Bayes Nets (PBNs), a first-order logic extension of Bayes nets. Queries about classes require a new semantics for PBNs, as the standard grounding semantics is only appropriate for answering queries about specific ground facts. We propose a novel random selection semantics for PBNs, which does not make reference to a ground model, and supports class-level queries. The parameters for this semantics can be learned using the recent pseudo-likelihood measure (Schulte in SIAM SDM, pp. 462–473, 2011) as the objective function. This objective function is maximized by taking the empirical frequencies in the relational data as the parameter settings. We render the computation of these empirical frequencies tractable in the presence of negated relations by the inverse Möbius transform. Evaluation of our method on four benchmark datasets shows that maximum pseudo-likelihood provides fast and accurate estimates at different sample sizes.
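The inverse Möbius transform step can be illustrated with a small sketch: frequencies that involve negated relationships are recovered from counts over positive-only conjunctions by repeatedly applying #(R = false, rest) = #(R = unspecified, rest) − #(R = true, rest). The table encoding below is an assumption made for the example.

```python
def inverse_moebius(counts, relations):
    """Recover counts involving negated relationships from positive-only counts.

    counts    : dict mapping a tuple of (relation, value) pairs to a count,
                where value is True or '*' ('*' means the relation is
                left unspecified in the conjunction).
    relations : list of relation names to process.
    """
    table = dict(counts)
    for r in relations:
        for key in list(table):
            assignment = dict(key)
            if assignment.get(r) == '*':
                pos = dict(assignment, **{r: True})
                neg = dict(assignment, **{r: False})
                pos_key = tuple(sorted(pos.items()))
                neg_key = tuple(sorted(neg.items()))
                table[neg_key] = table[key] - table.get(pos_key, 0)
    return table

# Toy data: 100 user pairs in total, 30 of which are friends,
# so 70 pairs satisfy the negated relation not-Friend.
counts = {(('Friend', '*'),): 100, (('Friend', True),): 30}
print(inverse_moebius(counts, ['Friend'])[(('Friend', False),)])   # 70
```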
Learning directed relational models with recursive dependencies
Recently, there has been increasing interest in generative models that represent probabilistic patterns over both links and attributes. A common characteristic of relational data is that the value of a predicate often depends on values of the same predicate for related entities. For directed graphical models, such recursive dependencies lead to cycles, which violate the acyclicity constraint of Bayes nets. In this paper we present a new approach to learning directed relational models that utilizes two key concepts: a pseudo-likelihood measure that is well defined for recursive dependencies, and the notion of stratification from logic programming. An issue in modelling recursive dependencies with Bayes nets is redundant edges that increase the complexity of learning. We propose a new normal form that removes this redundancy, and prove that, assuming stratification, the normal-form constraints involve no loss of modelling power. Empirical evaluation compares our approach to learning recursive dependencies with undirected models (Markov Logic Networks). The Bayes net approach is orders of magnitude faster and learns more recursive dependencies, which lead to more accurate predictions.
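To make the pseudo-likelihood idea concrete, here is a generic propositional sketch: each node is scored by its conditional probability given the rest of the instance, so the measure stays well defined even when the dependencies among nodes are cyclic. The relational pseudo-likelihood used in the paper is more involved; the data layout and local models below are illustrative assumptions.

```python
import math

def pseudo_log_likelihood(data, local_probs):
    """Sum of log P(node value | rest of instance) over nodes and instances.

    data        : list of dicts, one value assignment per instance.
    local_probs : dict node -> function(instance) -> probability of that
                  node's value given the other values in the instance.
    """
    return sum(math.log(local_probs[node](row))
               for row in data
               for node in local_probs)

# Toy recursive-style dependency: whether someone smokes depends on whether
# a friend smokes (the same predicate on a related entity).
data = [{'smokes': 1, 'friend_smokes': 1}, {'smokes': 0, 'friend_smokes': 0}]
model = {'smokes': lambda row: 0.8 if row['smokes'] == row['friend_smokes'] else 0.2}
print(pseudo_log_likelihood(data, model))   # 2 * log(0.8)
```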
Learning compact Markov logic networks with decision trees
Statistical-relational learning combines logical syntax with probabilistic methods. Markov Logic Networks (MLNs) are a prominent model class that generalizes both first-order logic and undirected graphical models (Markov networks). The qualitative component of an MLN is a set of clauses and the quantitative component is a set of clause weights. Generative MLNs model the joint distribution of relationships and attributes. A state-of-the-art structure learning method is the moralization approach: learn a set of directed Horn clauses, then convert them to conjunctions to obtain MLN clauses. The directed clauses are learned using Bayes net methods. The moralization approach takes advantage of the high-quality inference algorithms for MLNs and their ability to handle cyclic dependencies. A weakness of moralization is that it leads to an unnecessarily large number of clauses. In this paper we show that using decision trees to represent conditional probabilities in the Bayes net is an effective remedy that leads to much more compact MLN structures. In experiments on benchmark datasets, the decision trees reduce the number of clauses in the moralized MLN by a factor of 5–25, depending on the dataset. The accuracy of predictions is competitive with the models obtained by standard moralization, and in many cases superior.
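The compactness argument can be made concrete with a small sketch: when a conditional probability in the Bayes net is represented as a decision tree, each root-to-leaf path yields a single clause, so the number of clauses tracks the number of leaves rather than the size of a full conditional probability table. The tree encoding and the literals in the example are illustrative assumptions, and the leaf values are shown as probabilities rather than learned clause weights.

```python
def tree_to_clauses(tree, child, path=()):
    """Turn a decision-tree CPD into one conjunctive clause per leaf.

    tree  : either a leaf (a float probability) or a tuple
            (test_literal, subtree_if_true, subtree_if_false).
    child : the literal whose conditional probability the tree represents.
    Yields (clause, leaf probability) pairs.
    """
    if isinstance(tree, float):                 # leaf: emit one clause
        yield (path + (child,), tree)
        return
    literal, if_true, if_false = tree
    yield from tree_to_clauses(if_true, child, path + (literal,))
    yield from tree_to_clauses(if_false, child, path + ('not ' + literal,))

# CPD for Cancer(X) as a small decision tree: three leaves give three clauses,
# versus four rows in the equivalent conditional probability table.
tree = ('Smokes(X)', 0.3, ('Friend(X,Y), Smokes(Y)', 0.15, 0.05))
for clause, prob in tree_to_clauses(tree, 'Cancer(X)'):
    print(' AND '.join(clause), '->', prob)
```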