1,005,119 results for "Statistical analysis"
Meta-analysis of structural evidence for the Hierarchical Taxonomy of Psychopathology (HiTOP) model
The Hierarchical Taxonomy of Psychopathology (HiTOP) is a classification system that seeks to organize psychopathology using quantitative evidence - yet the current model was established by narrative review. This meta-analysis provides a quantitative synthesis of the literature on transdiagnostic dimensions of psychopathology to evaluate the validity of the HiTOP framework. Published studies estimating factor-analytic models from diagnoses were screened. A total of 120,596 participants from 35 studies assessing 23 diagnoses were included in the meta-analytic models. Data were pooled into a meta-analytic correlation matrix using a random-effects model. Exploratory factor analyses were conducted using the pooled correlation matrix. A hierarchical structure was estimated by extracting one to five factors representing levels of the HiTOP framework, then calculating congruence coefficients between factors at sequential levels. Five transdiagnostic dimensions fit the diagnoses well (comparative fit index = 0.92, root mean square error of approximation = 0.07, and standardized root-mean-square residual = 0.03). Most diagnoses had factor loadings >|0.30| on the expected factors, and congruence coefficients between factors indicated a hierarchical structure consistent with the HiTOP framework. A model closely resembling the HiTOP framework fit the data well, and the placement of diagnoses within transdiagnostic dimensions was largely confirmed, supporting it as a valid structure for conceptualizing and organizing psychopathology. Results also suggest transdiagnostic research should (1) use traits, narrow symptoms, and dimensional measures of psychopathology instead of diagnoses, (2) assess a broader array of constructs, and (3) increase focus on understudied pathologies.
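The hierarchical structure described above rests on congruence coefficients between factors extracted at adjacent levels. As a minimal sketch of that step (the loadings below are hypothetical, not taken from the meta-analysis), Tucker's congruence coefficient between two factor-loading vectors can be computed as:

```python
import math

def congruence(f1, f2):
    """Tucker's congruence coefficient between two factor-loading vectors."""
    num = sum(a * b for a, b in zip(f1, f2))
    den = math.sqrt(sum(a * a for a in f1) * sum(b * b for b in f2))
    return num / den

# Hypothetical loadings of the same four indicators on a factor
# extracted at two adjacent levels of the hierarchy
level_k = [0.72, 0.65, 0.58, 0.10]
level_k_plus_1 = [0.70, 0.66, 0.55, 0.12]
print(round(congruence(level_k, level_k_plus_1), 3))  # ≈ 0.999
```

By a common convention, values near |1| indicate the two factors are essentially the same dimension recurring at adjacent levels, which is what a hierarchy of nested transdiagnostic dimensions predicts.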
Handbook of statistical analysis and data mining applications
The Handbook of Statistical Analysis and Data Mining Applications is a comprehensive professional reference book that guides business analysts, scientists, engineers and researchers (both academic and industrial) through all stages of data analysis, model building and implementation. The Handbook helps one discern the technical and business problem, understand the strengths and weaknesses of modern data mining algorithms, and employ the right statistical methods for practical application. Use this book to address massive and complex datasets with novel statistical approaches and to objectively evaluate analyses and solutions. It has clear, intuitive explanations of the principles and tools for solving problems using modern analytic techniques, and discusses their application to real problems in ways accessible and beneficial to practitioners across industries - from science and engineering to medicine, academia and commerce. This handbook brings together, in a single resource, all the information a beginner will need to understand the tools and issues in data mining to build successful data mining solutions. Written "by practitioners for practitioners," it offers non-technical explanations that build understanding without jargon or equations; tutorials in numerous fields of study that provide step-by-step instruction on using the supplied tools to build models; practical advice from successful real-world implementations; and extensive case studies, examples, MS PowerPoint slides and datasets. A CD-DVD with fully working 90-day software ("Complete Data Miner - QC-Miner - Text Miner") is bound with the book.
The Relative Trustworthiness of Inferential Tests of the Indirect Effect in Statistical Mediation Analysis: Does Method Really Matter?
A content analysis of 2 years of Psychological Science articles reveals inconsistencies in how researchers make inferences about indirect effects when conducting a statistical mediation analysis. In this study, we examined the frequency with which popularly used tests disagree, whether the method an investigator uses makes a difference in the conclusion he or she will reach, and whether there is a most trustworthy test that can be recommended to balance practical and performance considerations. We found that tests agree much more frequently than they disagree, but disagreements are more common when an indirect effect exists than when it does not. We recommend the bias-corrected bootstrap confidence interval as the most trustworthy test if power is of utmost concern, although it can be slightly liberal in some circumstances. Investigators concerned about Type I errors should choose the Monte Carlo confidence interval or the distribution-of-the-product approach, which rarely disagree. The percentile bootstrap confidence interval is a good compromise test.
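The percentile bootstrap confidence interval discussed above can be sketched in a few lines. Everything below is a generic illustration, not the authors' code: the indirect effect a·b is estimated from two OLS regressions (m ~ x for the a-path, y ~ m + x for the b-path), and its interval is read off percentiles of the bootstrap distribution.

```python
import random

def indirect_effect(x, m, y):
    """Sample indirect effect a*b: a from m ~ x, b from y ~ m + x (closed-form OLS)."""
    n = len(x)
    mx, mm, my = sum(x) / n, sum(m) / n, sum(y) / n
    sxx = sum((v - mx) ** 2 for v in x)
    smm = sum((v - mm) ** 2 for v in m)
    sxm = sum((a - mx) * (b - mm) for a, b in zip(x, m))
    smy = sum((a - mm) * (b - my) for a, b in zip(m, y))
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    a_path = sxm / sxx
    b_path = (smy * sxx - sxy * sxm) / (smm * sxx - sxm ** 2)  # partial coeff. on m
    return a_path * b_path

def percentile_boot_ci(x, m, y, reps=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the indirect effect (case resampling)."""
    rng = random.Random(seed)
    n = len(x)
    stats = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(indirect_effect([x[i] for i in idx],
                                     [m[i] for i in idx],
                                     [y[i] for i in idx]))
    stats.sort()
    return stats[int(reps * alpha / 2)], stats[int(reps * (1 - alpha / 2)) - 1]

# Simulated data with a true indirect effect of 0.5 * 0.5 = 0.25
rng = random.Random(42)
x = [rng.gauss(0, 1) for _ in range(100)]
m = [0.5 * v + rng.gauss(0, 1) for v in x]
y = [0.5 * v + rng.gauss(0, 1) for v in m]
lo, hi = percentile_boot_ci(x, m, y)
print(f"95% percentile bootstrap CI: [{lo:.3f}, {hi:.3f}]")
```

If the interval excludes zero, the indirect effect is deemed significant. The bias-corrected variant the article recommends differs only in adjusting the percentile cut-points using the proportion of bootstrap estimates falling below the sample estimate.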
Key management ratios : the 100+ ratios every manager needs to know
Business ratios are the figures that provide management with targets and standards for their organisation. From earnings per share and cash flow to return on investment and sales to fixed assets ratios, this book guides managers through the key ratios at the heart of business practice.
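Two of the ratios such a handbook covers can be written down directly; the figures below are invented for illustration.

```python
def earnings_per_share(net_income, preferred_dividends, avg_outstanding_shares):
    """EPS: income available to common shareholders per common share."""
    return (net_income - preferred_dividends) / avg_outstanding_shares

def sales_to_fixed_assets(net_sales, net_fixed_assets):
    """How many units of sales each unit of fixed assets generates."""
    return net_sales / net_fixed_assets

print(earnings_per_share(1_200_000, 200_000, 500_000))  # 2.0
print(sales_to_fixed_assets(3_000_000, 1_500_000))      # 2.0
```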
An integrated approach to explore the suitability of nitrate-contaminated groundwater for drinking purposes in a semiarid region of India
The main objective of the present study is to perform a risk assessment of groundwater contaminated by nitrate (NO3−) and evaluate the suitability of groundwater for domestic purposes in the Palani region of South India. Thirty groundwater samples were collected in the study area. Groundwater quality parameters such as pH, electrical conductivity, total dissolved solids, total hardness, major cations (Ca2+, Mg2+, Na+, and K+), and major anions (Cl−, SO42−, F−, CO32−, and HCO3−) were used to evaluate drinking water suitability against 2011 World Health Organization (WHO) standards. Piper and Gibbs diagrams for the tested groundwater indicated that the chemical composition of groundwater varied due to the influence of rock-water interactions, evaporation, and reverse ion exchange. According to water quality index (WQI) mapping results, 46.67% of the sample locations were identified as contaminated zones via GIS spatial analysis. Multivariate statistical analysis methods, such as principal component analysis, cluster analysis, and the Pearson correlation matrix, were applied to better understand the relationships between water quality parameters. The results demonstrated that 40% of the samples could be identified as highly affected zones in the study region due to high nitrate concentrations. The noncarcinogenic health risks among men, women, and children reached 40, 50, and 53%, respectively, indicating that children and women were at higher risk than men in the study region. The major sources of contamination included discharge from households, uncovered septic tanks, leachate from waste dump sites, and excess use of fertilizers in the agricultural sector. Furthermore, integrating the nitrate health hazard method with the conventional indexing approach ensures that groundwater reliability can be guaranteed, contamination can be explored, and appropriate remedial measures can be implemented.
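A water quality index of the kind used in such studies is typically the weighted arithmetic WQI. The sketch below shows the arithmetic only; the permissible limits are illustrative placeholders, not the study's actual WHO (2011) values, and each weight is taken inversely proportional to its limit, a common convention.

```python
# Illustrative permissible limits (mg/L); NOT the study's WHO 2011 values
limits = {"TDS": 500.0, "total_hardness": 300.0, "nitrate": 50.0, "chloride": 250.0}
weights = {p: 1.0 / s for p, s in limits.items()}  # w_i proportional to 1/S_i

def wqi(sample):
    """Weighted arithmetic WQI for measured concentrations keyed like `limits`."""
    quality = {p: 100.0 * c / limits[p] for p, c in sample.items()}  # rating q_i
    num = sum(weights[p] * quality[p] for p in sample)
    den = sum(weights[p] for p in sample)
    return num / den

# A sample sitting exactly at every limit scores WQI ≈ 100,
# the usual boundary above which water is rated unfit for drinking
print(wqi({"TDS": 500.0, "total_hardness": 300.0, "nitrate": 50.0, "chloride": 250.0}))
```

Mapping each sampling location's WQI in a GIS layer is what yields contaminated-zone percentages like the 46.67% reported above.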
The Oxford handbook of functional data analysis
\"As technology progresses, we are able to handle larger and larger datasets. At the same time, monitoring devices such as electronic equipment and sensors (for registering images, temperature, etc.) have become more and more sophisticated. This high-tech revolution offers the opportunity to observe phenomena in an increasingly accurate way by producing statistical units sampled over a finer and finer grid, with the measurement points so close that the data can be considered as observations varying over a continuum. Such continuous (or functional) data may occur in biomechanics (e.g. human movements), chemometrics (e.g. spectrometric curves), econometrics (e.g. the stock market index), geophysics (e.g. spatio-temporal events such as El Nino or time series of satellite images), or medicine (electro-cardiograms/electro-encephalograms). It is well known that standard multivariate statistical analyses fail with functional data. However, the great potential for applications has encouraged new methodologies able to extract relevant information from functional datasets. This Handbook aims to present a state of the art exploration of this high-tech field, by gathering together most of major advances in this area. Leading international experts have contributed to this volume with each chapter giving the key original ideas and comprehensive bibliographical information. The main statistical topics (classification, inference, factor-based analysis, regression modelling, resampling methods, time series, random processes) are covered in the setting of functional data. The twin challenges of the subject are the practical issues of implementing new methodologies and the theoretical techniques needed to expand the mathematical foundations and toolbox. The volume therefore mixes practical, methodological and theoretical aspects of the subject, sometimes within the same chapter. 
As a consequence, this book should appeal to a wide audience of engineers, practitioners and graduate students, as well as academic researchers, not only in statistics and probability but also in the numerous related application areas\"-- Provided by publisher.
Best practices for your confirmatory factor analysis: A JASP and lavaan tutorial
Confirmatory factor analysis (CFA) is a fundamental method for evaluating the internal structural validity of measurement instruments. In most CFA applications, the measurement model serves as a means to an end rather than an end in itself. Selecting the appropriate model requires prior validity evidence, and items are typically assessed on an ordinal scale, as is common in the applied social sciences. However, textbooks on structural equation modeling (SEM) often overlook this specific case, focusing instead on applications estimable using maximum likelihood (ML). Unfortunately, several popular commercial SEM software packages lack suitable solutions for handling this 'typical CFA', leading to confusion and suboptimal decision-making when conducting CFA in this context. This article contributes conceptually to this ongoing discussion by presenting a set of guidelines for conducting a typical CFA, drawing from recent empirical research. We provide a practical contribution by introducing and developing a tutorial example within the JASP and lavaan software platforms. Supplementary materials such as videos, files, and scripts are freely available.