10,346 results for "Performance Statistical methods."
Demand driven performance: using smart metrics
"Learn how to implement demand driven metrics for vast improvement in measuring performance. Demand Driven Performance details why the outdated forms of measurement are inappropriate for current circumstances and reveals an elegant set of global and local metrics to fit today's demand driven world. The book shows how to minimize the organizational and supply chain conflicts that impede flow and, eventually, corporate success. Metrics are used to create a benchmark for measuring improvement and to identify and focus on those improvements that are most needed and that have the highest ROI. However, the world has fundamentally changed in terms of delivering value and driving strong financial performance and growth. The continued use of outdated metrics is driving companies in the wrong direction, giving them false signals, putting their personnel into conflict at all levels of the organization, and wreaking havoc in the supply chain. This book offers solutions to remedy these issues. It:
• Defines a new demand driven approach for measuring total organizational performance and the corresponding local metrics that integrate with those measures
• Advocates a systems approach to measuring improvement, and shows how conventional metrics are no longer appropriate
• Focuses on reliability, stability, speed/velocity, strategic contribution, local operating expense, and local improvement waste
• Includes a case study that demonstrates the processes in the book and provides the technology and tools needed to achieve a demand driven system" -- Provided by publisher.
Data analysis in sport
"Making sense of sports performance data can be a challenging task but is nevertheless an essential part of performance analysis investigations. Focusing on techniques used in the analysis of sport performance, this book introduces the fundamental principles of data analysis, explores the most important tools used in data analysis, and offers guidance on the presentation of results. The book covers key topics such as:
• The purpose of data analysis, from statistical analysis to algorithmic processing
• Commercial packages for performance and data analysis, including Focus, Sportscode, Dartfish, Prozone, Excel, SPSS and Matlab
• Effective use of statistical procedures in sport performance analysis
• Analysing data from manual notation systems, player tracking systems and computerized match analysis systems
• Creating visually appealing 'dashboard' interfaces for presenting data
• Assessing reliability
The book includes worked examples from real sport, offering clear guidance to the reader and bringing the subject to life. This book is invaluable reading for any student, researcher or analyst working in sport performance or undertaking a sport-related research project or methods course." -- Provided by publisher.
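Assessing reliability in notation analysis typically means checking agreement between observers coding the same events, and Cohen's kappa is a standard chance-corrected agreement statistic for that purpose. A minimal sketch (the event labels and codings below are invented for illustration, not taken from the book):

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    labels = np.union1d(a, b)
    # Observed proportion of agreement
    p_o = np.mean(a == b)
    # Expected agreement under independent marginal label frequencies
    p_e = sum(np.mean(a == k) * np.mean(b == k) for k in labels)
    return (p_o - p_e) / (1.0 - p_e)

# Two observers coding the same 10 match events (hypothetical data)
obs1 = ["pass", "pass", "shot", "tackle", "pass", "shot", "pass", "tackle", "pass", "shot"]
obs2 = ["pass", "pass", "shot", "pass",   "pass", "shot", "shot", "tackle", "pass", "shot"]
print(round(cohens_kappa(obs1, obs2), 3))  # → 0.672
```

Raw percentage agreement here is 80%, but kappa corrects for the agreement two raters would reach by chance given how often each uses each label.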
MisLeading Indicators
Decision makers in business and government are more reliant than ever on measurements, such as business performance indicators, bond ratings, Six Sigma indicators, stock ratings, opinion polls, and market research. Yet many popular statistical and business books and courses relating to measurement are based on flawed principles, leading managers to the wrong conclusions, and ultimately, the wrong decisions. misLeading Indicators: How to Reliably Measure Your Business provides something unique and invaluable: trustworthy tools for judging measurements. Each chapter illustrates the four key principles for reliable measurements: sufficient background information, accuracy and precision, reasonable inferences, and reality checks in different situations. After the three fundamental methods of measuring are defined, the authors expand to the application and interpretation of measurements in specific areas, including business performance, risk management, process control, finance, and economics. This book supplies essential information for managers in business and government who depend on accurate information to run their organizations, as well as the consultants who advise them.
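The "accuracy and precision" principle separates systematic bias from random scatter; a measurement can fail on either axis independently. A quick simulation illustrating the distinction (the instruments, bias, and spread values are hypothetical, not from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 100.0

# Instrument A: precise but inaccurate (tight spread, systematic +5 bias)
a = rng.normal(loc=true_value + 5.0, scale=0.5, size=10_000)
# Instrument B: accurate but imprecise (centred on truth, wide spread)
b = rng.normal(loc=true_value, scale=8.0, size=10_000)

bias_a, spread_a = a.mean() - true_value, a.std()
bias_b, spread_b = b.mean() - true_value, b.std()
print(f"A: bias={bias_a:+.2f}, spread={spread_a:.2f}")  # large bias, small spread
print(f"B: bias={bias_b:+.2f}, spread={spread_b:.2f}")  # small bias, large spread
```

Averaging many readings shrinks B's error but does nothing for A's: no amount of repetition fixes a systematic bias, which is why the book's "reality check" principle matters.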
misLeading Indicators
This book reveals the hidden and potentially misleading nature of measurements, empowering readers to avoid making critical business decisions that are harmful, unreasonable, unwarranted, or plain wrong.
Methodological Aspects for Controlling the Processes that Secure the Reliability of Aviation Engineering
Methodological aspects for controlling the processes that secure the reliability of a complicated machine are examined for aviation engineering. They include: the topicality of the problem and the main problems; the characteristics of the processes for securing the reliability of aviation engineering as control objects; a way of forming the system for controlling the processes that secure the reliability of aviation engineering; and a way to analyze statistical methods for controlling the reliability and the probabilistic-statistical performance of thinning flows of random events (failures detected in flight and causing on-ground operational failures, aircraft replacement, and the total number of failures and damage detected in flight and on the ground). The aim of the investigation is to increase the reliability and safety of flights and the efficiency of aviation engineering operation.
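The "thinning flows of random events" can be read in terms of a standard result: if each event of a Poisson flow with rate λ is independently retained with probability p, the retained events again form a Poisson flow with rate p·λ. A minimal simulation of that property (the rate and retention probability are illustrative, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

lam = 4.0   # mean failures detected per period (illustrative)
p = 0.25    # probability a detected failure leads to an on-ground consequence
periods = 200_000

# Counts of detected failures per period, then independent thinning
detected = rng.poisson(lam, size=periods)
retained = rng.binomial(detected, p)

# The thinned flow is again Poisson with rate p * lam:
print(retained.mean())  # ≈ p * lam = 1.0
print(retained.var())   # ≈ p * lam too (Poisson: mean equals variance)
```

The mean-equals-variance check is a quick diagnostic that the thinned counts still behave like a Poisson flow, which is what makes such decompositions of failure streams tractable.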
Frequentist Consistency of Variational Bayes
A key challenge for modern Bayesian statistics is how to perform scalable inference of posterior distributions. To address this challenge, variational Bayes (VB) methods have emerged as a popular alternative to the classical Markov chain Monte Carlo (MCMC) methods. VB methods tend to be faster while achieving comparable predictive performance. However, there are few theoretical results around VB. In this article, we establish frequentist consistency and asymptotic normality of VB methods. Specifically, we connect VB methods to point estimates based on variational approximations, called frequentist variational approximations, and we use the connection to prove a variational Bernstein-von Mises theorem. The theorem leverages the theoretical characterizations of frequentist variational approximations to understand asymptotic properties of VB. In summary, we prove that (1) the VB posterior converges to the Kullback-Leibler (KL) minimizer of a normal distribution, centered at the truth and (2) the corresponding variational expectation of the parameter is consistent and asymptotically normal. As applications of the theorem, we derive asymptotic properties of VB posteriors in Bayesian mixture models, Bayesian generalized linear mixed models, and Bayesian stochastic block models. We conduct a simulation study to illustrate these theoretical results. Supplementary materials for this article are available online.
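A small numerical illustration of the consistency being proved (a textbook mean-field example, not the authors' code): coordinate-ascent VB (CAVI) for a Gaussian with unknown mean and precision under a Normal-Gamma prior. As n grows, the variational posterior mean settles on the true mean and its variance shrinks:

```python
import numpy as np

def cavi_normal(x, mu0=0.0, lam0=1.0, a0=1.0, b0=1.0, iters=50):
    """Mean-field VB for x_i ~ N(mu, 1/tau), prior mu ~ N(mu0, 1/(lam0*tau)),
    tau ~ Gamma(a0, b0). Uses q(mu) = N(mu_n, 1/lam_n), q(tau) = Gamma(a_n, b_n)."""
    x = np.asarray(x)
    n, xbar = len(x), x.mean()
    mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)  # fixed by the update equations
    a_n = a0 + (n + 1) / 2.0
    b_n, lam_n = b0, lam0                        # initial values, refined below
    for _ in range(iters):
        e_tau = a_n / b_n                          # E_q[tau]
        lam_n = (lam0 + n) * e_tau
        e_mu, e_mu2 = mu_n, mu_n**2 + 1.0 / lam_n  # E_q[mu], E_q[mu^2]
        b_n = b0 + 0.5 * (np.sum(x**2) - 2 * e_mu * np.sum(x) + n * e_mu2
                          + lam0 * (e_mu2 - 2 * mu0 * e_mu + mu0**2))
    return mu_n, lam_n, a_n, b_n

rng = np.random.default_rng(1)
true_mu = 2.0
x = rng.normal(true_mu, 1.0, size=5_000)
mu_n, lam_n, a_n, b_n = cavi_normal(x)
print(mu_n)         # close to true_mu = 2.0
print(1.0 / lam_n)  # variational posterior variance shrinks roughly as 1/n
```

This mirrors the article's conclusion in miniature: the variational posterior over mu behaves like a normal distribution concentrating at the truth as the sample grows.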
Machine learning and deep learning—A review for ecologists
The popularity of machine learning (ML), deep learning (DL) and artificial intelligence (AI) has risen sharply in recent years. Despite this spike in popularity, the inner workings of ML and DL algorithms are often perceived as opaque, and their relationship to classical data analysis tools remains debated. Although it is often assumed that ML and DL excel primarily at making predictions, ML and DL can also be used for analytical tasks traditionally addressed with statistical models. Moreover, most recent discussions and reviews on ML focus mainly on DL, failing to synthesise the wealth of ML algorithms with different advantages and general principles. Here, we provide a comprehensive overview of the field of ML and DL, starting by summarizing its historical developments, existing algorithm families, differences to traditional statistical tools, and universal ML principles. We then discuss why and when ML and DL models excel at prediction tasks and where they could offer alternatives to traditional statistical methods for inference, highlighting current and emerging applications for ecological problems. Finally, we summarize emerging trends such as scientific and causal ML, explainable AI, and responsible AI that may significantly impact ecological data analysis in the future. We conclude that ML and DL are powerful new tools for predictive modelling and data analysis. The superior performance of ML and DL algorithms compared to statistical models can be explained by their higher flexibility and automatic data‐dependent complexity optimization. However, their use for causal inference is still disputed as the focus of ML and DL methods on predictions creates challenges for the interpretation of these models. Nevertheless, we expect ML and DL to become an indispensable tool in ecology and evolution, comparable to other traditional statistical tools.
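The point that ML's predictive edge comes from flexibility can be made concrete with a toy comparison on synthetic data (my own example, not from the review): an ordinary least-squares line versus a k-nearest-neighbours regressor on a nonlinear signal.

```python
import numpy as np

rng = np.random.default_rng(7)

# Nonlinear ground truth with noise
x = rng.uniform(-3, 3, size=400)
y = np.sin(2 * x) + rng.normal(0, 0.2, size=400)
x_tr, y_tr, x_te, y_te = x[:300], y[:300], x[300:], y[300:]

# Statistical baseline: ordinary least squares (a straight line)
slope, intercept = np.polyfit(x_tr, y_tr, 1)
mse_lin = np.mean((y_te - (slope * x_te + intercept)) ** 2)

# Flexible ML alternative: k-nearest-neighbours regression
def knn_predict(x_train, y_train, x_query, k=10):
    preds = []
    for q in np.atleast_1d(x_query):
        nearest = np.argsort(np.abs(x_train - q))[:k]  # k closest training points
        preds.append(y_train[nearest].mean())
    return np.array(preds)

mse_knn = np.mean((y_te - knn_predict(x_tr, y_tr, x_te)) ** 2)
print(f"linear MSE: {mse_lin:.3f}, kNN MSE: {mse_knn:.3f}")
```

The linear model is misspecified for the sinusoidal signal, while kNN adapts its fit to the data — the "data-dependent complexity" the review credits for ML's predictive performance. The flip side, also noted in the review, is that kNN's fit yields no interpretable coefficients for inference.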
Using MicrobiomeAnalyst for comprehensive statistical, functional, and meta-analysis of microbiome data
MicrobiomeAnalyst is an easy-to-use, web-based platform for comprehensive analysis of common data outputs generated from current microbiome studies. It enables researchers and clinicians with little or no bioinformatics training to explore a wide variety of well-established methods for microbiome data processing, statistical analysis, functional profiling and comparison with public datasets or known microbial signatures. MicrobiomeAnalyst currently contains four modules: Marker-gene Data Profiling (MDP), Shotgun Data Profiling (SDP), Projection with Public Data (PPD), and Taxon Set Enrichment Analysis (TSEA). This protocol will first introduce the MDP module by providing a step-wise description of how to prepare, process and normalize data; perform community profiling; identify important features; and conduct correlation and classification analysis. We will then demonstrate how to perform predictive functional profiling and introduce several unique features of the SDP module for functional analysis. The last two sections will describe the key steps involved in using the PPD and TSEA modules for meta-analysis and visual exploration of the results. In summary, MicrobiomeAnalyst offers a one-stop shop that enables microbiome researchers to thoroughly explore their preprocessed microbiome data via intuitive web interfaces. The complete protocol can be executed in ~70 min. This protocol details MicrobiomeAnalyst, a user-friendly, web-based platform for comprehensive statistical, functional, and meta-analysis of microbiome data.
Small Telescopes: Detectability and the Evaluation of Replication Results
This article introduces a new approach for evaluating replication results. It combines effect-size estimation with hypothesis testing, assessing the extent to which the replication results are consistent with an effect size big enough to have been detectable in the original study. The approach is demonstrated by examining replications of three well-known findings. Its benefits include the following: (a) differentiating "unsuccessful" replication attempts (i.e., studies yielding p > .05) that are too noisy from those that actively indicate the effect is undetectably different from zero, (b) "protecting" true findings from underpowered replications, and (c) arriving at intuitively compelling inferences in general and for the revisited replications in particular.
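The core computation behind the approach is d33%, the effect size the original study had 33% power to detect, followed by a test of whether the replication estimate is significantly below it. A rough sketch using a normal approximation to the two-sample t-test (the sample sizes and replication estimate here are made up, not from the article):

```python
from scipy.stats import norm

def d_33(n_per_group, alpha=0.05):
    """Effect size (Cohen's d) a two-sample study had 33% power to detect,
    via a normal approximation to the two-sample t-test."""
    se = (2.0 / n_per_group) ** 0.5      # approximate SE of d
    z_crit = norm.ppf(1 - alpha / 2)
    # power = 1/3  =>  P(Z > z_crit - d/se) = 1/3  =>  solve for d
    return se * (z_crit - norm.ppf(2.0 / 3.0))

# Hypothetical numbers: original n = 20 per group; replication observed d and n
d33 = d_33(20)
d_rep, n_rep = 0.10, 80
se_rep = (2.0 / n_rep) ** 0.5
# One-sided test: is the replication effect significantly below d33%?
z = (d_rep - d33) / se_rep
p_small = norm.cdf(z)
print(f"d33% = {d33:.3f}, p(effect < d33%) = {p_small:.3f}")
```

With n = 20 per group this gives d33% ≈ 0.48: the "small telescope" of the original study could not have detected anything smaller, so a replication effect significantly below that threshold argues the original could not have reliably observed the effect it reported.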