37,531 result(s) for "Mathematical analysis Statistical methods."
Estadística práctica para ciencia de datos con R y Python
Statistical methods are a fundamental part of data science, yet few data scientists have advanced training in statistics. Courses and books on basic statistics rarely treat the subject from a data science perspective. The second edition of this book includes detailed Python examples, offers practical guidance on applying statistical methods to data science, shows you how to avoid their misuse, and advises you on what is and is not important. Many data science resources incorporate statistical methods but lack a deeper statistical perspective. If you are familiar with the R or Python programming languages and have some knowledge of statistics, this book fills those gaps in a practical, accessible, and clear way. With this book you will learn: why exploratory data analysis is a key preliminary step in data science; how random sampling can reduce bias and yield a higher-quality dataset, even with big data; how the principles of experimental design provide definitive answers to questions; how to use regression to estimate outcomes and detect anomalies; essential classification techniques for predicting which categories a record belongs to; statistical machine learning methods that "learn" from data; and unsupervised learning methods for extracting meaning from unlabeled data. Peter Bruce is the founder of the Institute for Statistics Education at Statistics.com. Andrew Bruce is a principal research scientist at Amazon with more than 30 years of experience in statistics and data science. Peter Gedeck is a senior data scientist at Collaborative Drug Discovery, where he develops machine learning algorithms to predict properties of potential future drugs.
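The book's point that random sampling reduces bias can be made concrete with a short sketch (an illustration of the principle, not code from the book; the data are hypothetical): a convenience sample drawn from one end of a skewed population badly misestimates its mean, while a simple random sample of the same size does not.

```python
import random
from statistics import mean

random.seed(42)

# Hypothetical population: incomes with a long right tail (mean ~50,000)
population = [random.expovariate(1 / 50_000) for _ in range(100_000)]
pop_mean = mean(population)

# A "convenience" sample of only the largest earners is badly biased...
convenience = sorted(population)[-1_000:]
# ...while a simple random sample of the same size tracks the population mean.
srs = random.sample(population, 1_000)
```

The random sample's error shrinks roughly with the square root of the sample size, regardless of how large the population is, which is why the book stresses sampling quality over sheer data volume.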
Regression models for categorical, count, and related variables : an applied approach
"Social science and behavioral science students and researchers are often confronted with data that are categorical, count a phenomenon, or have been collected over time. Sociologists examining the likelihood of interracial marriage, political scientists studying voting behavior, criminologists counting the number of offenses people commit, health scientists studying the number of suicides across neighborhoods, and psychologists modeling mental health treatment success are all interested in outcomes that are not continuous. Instead, they must measure and analyze these events and phenomena in a discrete manner. This book provides an introduction and overview of several statistical models designed for these types of outcomes--all presented under the assumption that the reader has only a good working knowledge of elementary algebra and has taken introductory statistics and linear regression analysis. Numerous examples from the social sciences demonstrate the practical applications of these models. The chapters address logistic and probit models, including those designed for ordinal and nominal variables, regular and zero-inflated Poisson and negative binomial models, event history models, models for longitudinal data, multilevel models, and data reduction techniques such as principal components and factor analysis. Each chapter discusses how to utilize the models and test their assumptions with the statistical software Stata, and also includes exercise sets so readers can practice using these techniques. Appendices show how to estimate the models in SAS, SPSS, and R; provide a review of regression assumptions using simulations; and discuss missing data. A companion website includes downloadable versions of all the data sets used in the book"--Provided by publisher.
Numerical ecology
The book describes and discusses the numerical methods which are successfully being used for analysing ecological data, using a clear and comprehensive approach. These methods are derived from the fields of mathematical physics, parametric and nonparametric statistics, information theory, numerical taxonomy, archaeology, psychometrics, sociometrics, econometrics and others. This updated, 3rd English edition of the most widely cited book on quantitative analysis of multivariate ecological data relates ecological questions to methods of statistical analysis, with a clear description of complex numerical methods. All methods are illustrated by examples from the ecological literature, so that ecologists can clearly see how to use the methods and approaches in their own research. All calculations are available as R language functions.
Applications of regression for categorical outcomes using R
"This book covers the main models within the GLM (i.e., logistic, Poisson, negative binomial, ordinal, and multinomial). For each model, estimations, interpretations, model fit, diagnostics, and how to convey results graphically are provided. There is a focus on graphic displays of results, as these are a core strength of using R for statistical analysis. Many in the social sciences are transitioning away from Stata, SPSS, and SAS to R, and this book uses statistical models which are relevant to the social sciences. Social Science Applications of Regression for Categorical Outcomes Using R will be useful for graduate students in the social sciences who are looking to expand their statistical knowledge, and for quantitative social scientists due to its ability to act as a practitioner's guide"--Provided by publisher.
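As a taste of the first model family this book covers, here is a minimal sketch (mine, not the book's; in Python rather than R, on hypothetical data) of fitting a one-predictor logistic regression by Newton-Raphson, the iteratively reweighted least squares idea that GLM software implements internally.

```python
import math

def fit_logistic(xs, ys, iters=25):
    """Fit y ~ sigmoid(b0 + b1*x) by Newton-Raphson (toy, one predictor)."""
    b0 = b1 = 0.0
    for _ in range(iters):
        # Current predicted probabilities and IRLS weights p*(1-p)
        ps = [1.0 / (1.0 + math.exp(-(b0 + b1 * x))) for x in xs]
        ws = [p * (1.0 - p) for p in ps]
        # Gradient of the log-likelihood: X^T (y - p)
        g0 = sum(y - p for y, p in zip(ys, ps))
        g1 = sum(x * (y - p) for x, y, p in zip(xs, ys, ps))
        # Hessian X^T W X for the 2-parameter model, inverted in closed form
        a = sum(ws)
        b = sum(w * x for w, x in zip(ws, xs))
        d = sum(w * x * x for w, x in zip(ws, xs))
        det = a * d - b * b
        b0 += (d * g0 - b * g1) / det
        b1 += (a * g1 - b * g0) / det
    return b0, b1

# Hypothetical pass/fail outcomes that become more likely as x grows
xs = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
ys = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]
b0, b1 = fit_logistic(xs, ys)
prob_at_9 = 1.0 / (1.0 + math.exp(-(b0 + b1 * 9)))
```

The fitted slope is positive and the predicted probability rises toward 1 at the high end of x, which is the basic interpretation pattern the book then extends to Poisson, negative binomial, ordinal, and multinomial outcomes.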
OpenMx 2.0: Extended Structural Equation and Statistical Modeling
The new software package OpenMx 2.0 for structural equation and other statistical modeling is introduced and its features are described. OpenMx is evolving in a modular direction and now allows a mix-and-match computational approach that separates model expectations from fit functions and optimizers. Major backend architectural improvements include a move to swappable open-source optimizers such as the newly written CSOLNP. Entire new methodologies such as item factor analysis and state space modeling have been implemented. New model expectation functions including support for the expression of models in LISREL syntax and a simplified multigroup expectation function are available. Ease-of-use improvements include helper functions to standardize model parameters and compute their Jacobian-based standard errors, access to model components through standard R $ mechanisms, and improved tab completion from within the R Graphical User Interface.
Understanding The New Statistics
This is the first book to introduce the new statistics - effect sizes, confidence intervals, and meta-analysis - in an accessible way. It is chock full of practical examples and tips on how to analyze and report research results using these techniques. The book is invaluable to readers interested in meeting the new APA Publication Manual guidelines by adopting the new statistics - which are more informative than null hypothesis significance testing, and becoming widely used in many disciplines. Accompanying the book is the Exploratory Software for Confidence Intervals (ESCI) package, free software that runs under Excel and is accessible at www.thenewstatistics.com. The book's exercises use ESCI's simulations, which are highly visual and interactive, to engage users and encourage exploration. Working with the simulations strengthens understanding of key statistical ideas. There are also many examples, and detailed guidance to show readers how to analyze their own data using the new statistics, and practical strategies for interpreting the results. A particular strength of the book is its explanation of meta-analysis, using simple diagrams and examples. Understanding meta-analysis is increasingly important, even at undergraduate levels, because medicine, psychology and many other disciplines now use meta-analysis to assemble the evidence needed for evidence-based practice. The book's pedagogical program, built on cognitive science principles, reinforces learning: Boxes provide "evidence-based" advice on the most effective statistical techniques. Numerous examples reinforce learning, and show that many disciplines are using the new statistics. Graphs are tied in with ESCI to make important concepts vividly clear and memorable. Opening overviews and end-of-chapter take-home messages summarize key points. Exercises encourage exploration, deep understanding, and practical application.
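The "new statistics" workflow - report an effect size with its confidence interval rather than a bare p-value - can be sketched in a few lines. This is my own illustration on hypothetical group scores, not the book's ESCI software, and it uses the common large-sample normal approximation to the standard error of Cohen's d.

```python
import math
from statistics import NormalDist, mean

def cohens_d_ci(g1, g2, level=0.95):
    """Cohen's d with an approximate normal-theory confidence interval."""
    n1, n2 = len(g1), len(g2)
    m1, m2 = mean(g1), mean(g2)
    # Pooled standard deviation from the two unbiased sample variances
    v1 = sum((x - m1) ** 2 for x in g1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in g2) / (n2 - 1)
    sp = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # Large-sample standard error of d (Hedges & Olkin approximation)
    se = math.sqrt((n1 + n2) / (n1 * n2) + d * d / (2 * (n1 + n2)))
    z = NormalDist().inv_cdf(0.5 + level / 2)
    return d, d - z * se, d + z * se

# Hypothetical scores for a treatment group and a control group
d, lo, hi = cohens_d_ci([5, 6, 7, 8, 9], [3, 4, 5, 6, 7])
```

With these tiny samples the point estimate is a large effect (d about 1.26), but the interval is wide and crosses zero - exactly the kind of honest uncertainty statement the book argues a lone "significant/not significant" verdict conceals.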
Generalized Network Psychometrics: Combining Network and Latent Variable Models
We introduce the network model as a formal psychometric model, conceptualizing the covariance between psychometric indicators as resulting from pairwise interactions between observable variables in a network structure. This contrasts with standard psychometric models, in which the covariance between test items arises from the influence of one or more common latent variables. Here, we present two generalizations of the network model that encompass latent variable structures, establishing network modeling as part of the more general framework of structural equation modeling (SEM). In the first generalization, we model the covariance structure of latent variables as a network. We term this framework latent network modeling (LNM) and show that, with LNM, a unique structure of conditional independence relationships between latent variables can be obtained in an explorative manner. In the second generalization, the residual variance–covariance structure of indicators is modeled as a network. We term this generalization residual network modeling (RNM) and show that, within this framework, identifiable models can be obtained in which local independence is structurally violated. These generalizations allow for a general modeling framework that can be used to fit, and compare, SEM models, network models, and the RNM and LNM generalizations. This methodology has been implemented in the free-to-use software package lvnet, which contains confirmatory model testing as well as two exploratory search algorithms: stepwise search algorithms for low-dimensional datasets and penalized maximum likelihood estimation for larger datasets. We show in simulation studies that these search algorithms perform adequately in identifying the structure of the relevant residual or latent networks. We further demonstrate the utility of these generalizations in an empirical example on a personality inventory dataset.
Intraclass correlation – A discussion and demonstration of basic features
A re-analysis of intraclass correlation (ICC) theory is presented together with Monte Carlo simulations of ICC probability distributions. A partly revised and simplified theory of the single-score ICC is obtained, together with an alternative and simple recipe for its use in reliability studies. Our main, practical conclusion is that in the analysis of a reliability study it is neither necessary nor convenient to start from an initial choice of a specified statistical model. Rather, one may impartially use all three single-score ICC formulas. A near equality of the three ICC values indicates the absence of bias (systematic error), in which case the classical (one-way random) ICC may be used. A consistency ICC larger than the absolute agreement ICC indicates the presence of non-negligible bias; if so, the classical ICC is invalid and misleading. An F-test may be used to confirm whether biases are present. From the resulting model (with or without bias), variances and confidence intervals may then be calculated. In the presence of bias, both the absolute agreement ICC and the consistency ICC should be reported, since they give different and complementary information about the reliability of the method. A clinical example with data from the literature is given.
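The recipe above - compute all three single-score ICCs and compare them - can be sketched numerically. The following is my own minimal illustration, not the authors' code, using the standard ANOVA mean-square forms of the single-score ICCs on hypothetical rater data in which rater 2 scores a constant 2 points above rater 1 (a pure systematic bias).

```python
def single_score_iccs(data):
    """Return (one-way ICC, consistency ICC(C,1), agreement ICC(A,1))
    for an n-subjects x k-raters table, via ANOVA mean squares."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    msr = ss_rows / (n - 1)                      # between-subjects
    msw = (ss_total - ss_rows) / (n * (k - 1))   # within-subjects (one-way)
    msc = ss_cols / (k - 1)                      # between-raters
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual
    icc1 = (msr - msw) / (msr + (k - 1) * msw)
    icc_c = (msr - mse) / (msr + (k - 1) * mse)
    icc_a = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    return icc1, icc_c, icc_a

# Hypothetical ratings: rater 2 = rater 1 + 2
scores = [[1, 3], [2, 4], [3, 5], [4, 6], [5, 7]]
icc1, icc_c, icc_a = single_score_iccs(scores)
```

Here the consistency ICC is a perfect 1.0 while the absolute agreement ICC is only about 0.56; by the recipe above, this gap flags a non-negligible systematic bias between raters, so the classical one-way ICC would be misleading and both two-way values should be reported.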
SARTools: A DESeq2- and EdgeR-Based R Pipeline for Comprehensive Differential Analysis of RNA-Seq Data
Several R packages exist for the detection of differentially expressed genes from RNA-Seq data. The analysis process includes three main steps, namely normalization, dispersion estimation and testing for differential expression. Quality control steps along this process are recommended but not mandatory, and failing to check the characteristics of the dataset may lead to spurious results. In addition, normalization methods and statistical models are not exchangeable across the packages without adequate transformations of which users are often unaware. Thus, dedicated analysis pipelines are needed to include systematic quality control steps and prevent errors from misusing the proposed methods. SARTools is an R pipeline for differential analysis of RNA-Seq count data. It can handle designs involving two or more conditions of a single biological factor, with or without a blocking factor (such as a batch effect or a sample pairing). It is based on DESeq2 and edgeR and is composed of an R package and two R script templates (for DESeq2 and edgeR respectively). By tuning a small number of parameters and executing one of the R scripts, users gain access to the full results of the analysis, including lists of differentially expressed genes and an HTML report that (i) displays diagnostic plots for quality control and model hypothesis checking and (ii) keeps track of the whole analysis process, parameter values and versions of the R packages used. SARTools provides systematic quality controls of the dataset as well as diagnostic plots that help to tune the model parameters. It gives access to the main parameters of DESeq2 and edgeR and prevents untrained users from misusing some functionalities of both packages. By keeping track of all the parameters of the analysis process, it fits the requirements of reproducible research.