457 results for "Graphical modeling (Statistics)"
Time-like Graphical Models
The author studies continuous processes indexed by a special family of graphs. Processes indexed by the vertices of graphs are known as probabilistic graphical models. In 2011, Burdzy and Pal proposed a continuous version of graphical models indexed by graphs with an embedded time structure, so-called time-like graphs. The author extends the notion of time-like graphs and establishes properties of processes indexed by them. In particular, the author resolves the conjecture that the distribution of the process indexed by graphs with an infinite number of vertices is unique. The author provides a new result showing that the stochastic heat equation arises as the limit of a sequence of natural Brownian motions on time-like graphs. In addition, the author's treatment of time-like graphical models reveals connections to Markov random fields, martingales indexed by directed sets, and branching Markov processes.
Unifying the Mind
Our ordinary, everyday thinking requires an astonishing range of cognitive activities, yet our cognition seems to take place seamlessly. We move between cognitive processes with ease, and different types of cognition seem to share information readily. In this book, David Danks proposes a novel cognitive architecture that can partially explain two aspects of human cognition: its relatively integrated nature and our effortless ability to focus on the relevant factors in any particular situation. Danks argues that both of these features of cognition are naturally explained if many of our cognitive representations are understood to be structured like graphical models. The computational framework of graphical models is widely used in machine learning, but Danks is the first to offer a book-length account of its use to analyze multiple areas of cognition. Danks demonstrates the usefulness of this approach by reinterpreting a variety of cognitive theories in terms of graphical models. He shows how we can understand much of our cognition -- in particular causal learning, cognition involving concepts, and decision making -- through the lens of graphical models, thus clarifying a range of data from experiments and introspection. Moreover, Danks demonstrates the important role that cognitive representations play in a unified understanding of cognition, arguing that much of our cognition can be explained in terms of different cognitive processes operating on a shared collection of cognitive representations. Danks's account is mathematically accessible, focusing on the qualitative aspects of graphical models and keeping the formal mathematical details separate from the main text.
Ecosystems Knowledge
To analyze complex situations we use everyday analogies that allow us to apply, in an unknown domain, knowledge we have acquired in a known field. In this work the author proposes a modeling and analysis method that uses the ecosystem analogy to embrace the complexity of an area of knowledge. After a history of the ecosystem concept and its derivatives (nature, ecology, environment) from antiquity to the present, the analysis method based on the modeling of socio-semantic ontologies is presented, followed by practical examples of this approach in the areas of software development, digital humanities, Big Data, and, more generally, complex analysis.
Probabilistic Graphical Modeling on Big Data
The rise of Big Data in recent years brings many challenges to modern statistical analysis and modeling. In toxicogenomics, the advancement of high-throughput screening technologies facilitates the generation of massive amounts of biological data, a big data phenomenon in biomedical science. Yet researchers still rely heavily on keyword search and/or literature review to navigate the databases, and analyses are often done at rather small scale. As a result, the rich information of a database has not been fully utilized, particularly the information embedded in the interactions between data points, which is largely ignored and buried. For the past 10 years, probabilistic topic modeling has been recognized as an effective machine learning algorithm for annotating the hidden thematic structure of massive collections of documents. The analogy between a text corpus and large-scale genomic data enables the application of text mining tools, such as probabilistic topic models, to explore hidden patterns in genomic data and, by extension, altered biological functions. In this study, we developed a generalized probabilistic topic model to analyze a toxicogenomics data set consisting of a large number of gene expression profiles from rat livers treated with drugs at multiple doses and time points. We discovered hidden patterns in gene expression associated with the effects of dose and time point of treatment. Finally, we illustrated the ability of our model to identify evidence for a potential reduction in animal use. In online social networks, services have hundreds of millions, sometimes even billions, of monthly active users. These complex and vast social networks are tremendous resources for understanding human interactions. In particular, characterizing the strength of social interactions becomes an essential task for researching or marketing social networks.
Instead of the traditional dichotomy of strong and weak ties, we believe that there are more types of social ties than just two. We use cosine similarity to measure the strength of social ties and apply an incremental Dirichlet process Gaussian mixture model to group ties into different clusters. Compared to other methods, our approach achieves superior classification accuracy on data with ground truth. The incremental algorithm also allows data to be added or deleted in a dynamic social network at minimal computational cost. In addition, it has been shown that the network constraints of individuals can be used to predict their career success. Under our assumption of multiple types of ties, individuals are profiled based on their surrounding relationships. We demonstrate that the network profile of an individual is directly linked to social significance in the real world.
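The cosine-similarity measure of tie strength described in this abstract can be sketched in a few lines. The interaction vectors below are hypothetical (counts of contact with five mutual acquaintances); the clustering step with the incremental Dirichlet process Gaussian mixture model is not reproduced here.

```python
import numpy as np

def tie_strength(u_interactions, v_interactions):
    """Cosine similarity between two users' interaction count vectors."""
    u = np.asarray(u_interactions, dtype=float)
    v = np.asarray(v_interactions, dtype=float)
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

# Hypothetical interaction counts over five shared contacts
alice = [5, 0, 2, 1, 0]
bob = [4, 1, 2, 0, 0]
print(tie_strength(alice, bob))
```

The score lies in [0, 1] for non-negative count vectors, so ties can be ranked on a continuum rather than labeled simply strong or weak.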
Graphical models: representations for learning, reasoning and data mining
Graphical models are of increasing importance in applied statistics, and in particular in data mining. Providing a self-contained introduction to and overview of learning relational, probabilistic, and possibilistic networks from data, this second edition of Graphical Models is thoroughly updated to include the latest research in this burgeoning field, including a new chapter on visualization. The text provides graduate students and researchers with all the necessary background material, including modelling under uncertainty, decomposition of distributions, graphical representation of distributions, and applications relating to graphical models and problems for further research.
Multivariate functional outlier detection
Functional data are occurring more and more often in practice, and various statistical techniques have been developed to analyze them. In this paper we consider multivariate functional data, where for each curve and each time point a p-dimensional vector of measurements is observed. For functional data the study of outlier detection has started only recently, and was mostly limited to univariate curves (p = 1). In this paper we set up a taxonomy of functional outliers, and construct new numerical and graphical techniques for the detection of outliers in multivariate functional data, with univariate curves included as a special case. Our tools include statistical depth functions and distance measures derived from them. The methods we study are affine invariant in p-dimensional space, and do not assume elliptical or any other symmetry.
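Statistical depth functions of the kind this abstract relies on rank observations from the center of a data cloud outward, so outliers show up as low-depth points. As a toy illustration only — spatial (L1) depth on plain bivariate points with simulated data, not the paper's functional setting or its specific depth functions:

```python
import numpy as np

def spatial_depth(x, data):
    """Spatial (L1) depth: 1 - ||mean unit vector from x to the sample||.

    Central points see unit vectors cancel out (depth near 1);
    far-out points see them align (depth near 0).
    """
    diffs = data - x
    norms = np.linalg.norm(diffs, axis=1)
    units = diffs[norms > 0] / norms[norms > 0, None]
    return 1.0 - np.linalg.norm(units.mean(axis=0))

rng = np.random.default_rng(2)
points = rng.standard_normal((50, 2))   # 50 bivariate observations
points[0] = [6.0, 6.0]                  # plant one clear outlier

depths = np.array([spatial_depth(p, points) for p in points])
print(depths.argmin())                  # index of the lowest-depth point
```

Flagging the smallest depths recovers the planted outlier; a multivariate functional version would apply such a score per time point and aggregate along the curve.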
Sparse graphs using exchangeable random measures
Statistical network modelling has focused on representing the graph as a discrete structure, namely the adjacency matrix. When assuming exchangeability of this array—which can aid in modelling, computations and theoretical analysis—the Aldous–Hoover theorem informs us that the graph is necessarily either dense or empty. We instead consider representing the graph as an exchangeable random measure and appeal to the Kallenberg representation theorem for this object. We explore using completely random measures (CRMs) to define the exchangeable random measure, and we show how our CRM construction enables us to achieve sparse graphs while maintaining the attractive properties of exchangeability. We relate the sparsity of the graph to the Lévy measure defining the CRM. For a specific choice of CRM, our graphs can be tuned from dense to sparse on the basis of a single parameter. We present a scalable Hamiltonian Monte Carlo algorithm for posterior inference, which we use to analyse network properties in a range of real data sets, including networks with hundreds of thousands of nodes and millions of edges.
Psychometric network models from time-series and panel data
Researchers in the field of network psychometrics often focus on the estimation of Gaussian graphical models (GGMs)—an undirected network model of partial correlations—between observed variables of cross-sectional data or single-subject time-series data. This assumes that all variables are measured without measurement error, which may be implausible. In addition, cross-sectional data cannot distinguish between within-subject and between-subject effects. This paper provides a general framework that extends GGM modeling with latent variables, including relationships over time. These relationships can be estimated from time-series data or panel data featuring at least three waves of measurement. The model takes the form of a graphical vector-autoregression model between latent variables and is termed the ts-lvgvar when estimated from time-series data and the panel-lvgvar when estimated from panel data. These methods have been implemented in the software package psychonetrics, which is exemplified in two empirical examples, one using time-series data and one using panel data, and evaluated in two large-scale simulation studies. The paper concludes with a discussion on ergodicity and generalizability. Although within-subject effects may in principle be separated from between-subject effects, the interpretation of these results rests on the intensity and the time interval of measurement and on the plausibility of the assumption of stationarity.
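The GGM at the core of this framework encodes partial correlations, which can be read off the inverse covariance (precision) matrix: variable pairs with zero partial correlation share no edge. A minimal sketch on simulated data, using plain matrix algebra rather than the psychonetrics package (the chain structure and the 0.1 edge threshold are illustrative choices):

```python
import numpy as np

# Simulate a chain: x0 -> x1 -> x2, with x3 independent
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))
X[:, 1] += 0.8 * X[:, 0]
X[:, 2] += 0.8 * X[:, 1]

K = np.linalg.inv(np.cov(X, rowvar=False))   # precision matrix
d = np.sqrt(np.diag(K))
partial_corr = -K / np.outer(d, d)           # off-diagonal entries
np.fill_diagonal(partial_corr, 1.0)

# Edges of the (unregularized) GGM: non-negligible partial correlations
edges = [(i, j) for i in range(4) for j in range(i + 1, 4)
         if abs(partial_corr[i, j]) > 0.1]
print(edges)
```

The chain is recovered as edges (0, 1) and (1, 2): x0 and x2 are marginally correlated but conditionally independent given x1, so no (0, 2) edge appears. Real estimation (and the paper's latent-variable extension) would replace the hard threshold with regularization or significance testing.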
Kernel Methods in Machine Learning
We review machine learning methods employing positive definite kernels. These methods formulate learning and estimation problems in a reproducing kernel Hilbert space (RKHS) of functions defined on the data domain, expanded in terms of a kernel. Working in linear spaces of functions has the benefit of facilitating the construction and analysis of learning algorithms while at the same time allowing large classes of functions. The latter include nonlinear functions as well as functions defined on nonvectorial data. We cover a wide range of methods, ranging from binary classifiers to sophisticated methods for estimation with structured data.