2,943 results for "Confidence limits"
Understanding The New Statistics
This is the first book to introduce the new statistics - effect sizes, confidence intervals, and meta-analysis - in an accessible way. It is chock full of practical examples and tips on how to analyze and report research results using these techniques. The book is invaluable to readers interested in meeting the new APA Publication Manual guidelines by adopting the new statistics - which are more informative than null hypothesis significance testing and are becoming widely used in many disciplines. Accompanying the book is the Exploratory Software for Confidence Intervals (ESCI) package, free software that runs under Excel and is accessible at www.thenewstatistics.com. The book's exercises use ESCI's simulations, which are highly visual and interactive, to engage users and encourage exploration. Working with the simulations strengthens understanding of key statistical ideas. There are also many examples, detailed guidance to show readers how to analyze their own data using the new statistics, and practical strategies for interpreting the results. A particular strength of the book is its explanation of meta-analysis, using simple diagrams and examples. Understanding meta-analysis is increasingly important, even at undergraduate levels, because medicine, psychology, and many other disciplines now use meta-analysis to assemble the evidence needed for evidence-based practice. The book's pedagogical program, built on cognitive science principles, reinforces learning: boxes provide "evidence-based" advice on the most effective statistical techniques; numerous examples reinforce learning and show that many disciplines are using the new statistics; graphs are tied in with ESCI to make important concepts vividly clear and memorable; opening overviews and end-of-chapter take-home messages summarize key points; exercises encourage exploration, deep understanding, and practical application.
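The central move of the "new statistics" described above is reporting a confidence interval rather than only a significance test. As a minimal sketch (this is an illustration, not part of the ESCI software), a two-sided t-based interval for a sample mean can be computed as:

```python
import math
from statistics import fmean, stdev
from scipy.stats import t

def mean_ci(data, conf=0.95):
    """Two-sided t-based confidence interval for a sample mean."""
    n = len(data)
    m = fmean(data)
    se = stdev(data) / math.sqrt(n)              # standard error of the mean
    half = t.ppf(1 - (1 - conf) / 2, df=n - 1) * se
    return m - half, m + half
```

For example, `mean_ci(range(1, 11))` returns an interval centred on the sample mean 5.5; widening `conf` widens the interval.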
The number of people with glaucoma worldwide in 2010 and 2020
Aim: To estimate the number of people with open angle glaucoma (OAG) and angle closure glaucoma (ACG) in 2010 and 2020. Methods: A review of published data with use of prevalence models. Data from population-based studies of age-specific prevalence of OAG and ACG that satisfied standard definitions were used to construct prevalence models for OAG and ACG by age, sex, and ethnicity, weighting data proportional to the sample size of each study. Models were combined with UN world population projections for 2010 and 2020 to derive the estimated number with glaucoma. Results: There will be 60.5 million people with OAG and ACG in 2010, increasing to 79.6 million by 2020, and of these, 74% will have OAG. Women will comprise 55% of OAG, 70% of ACG, and 59% of all glaucoma in 2010. Asians will represent 47% of those with glaucoma and 87% of those with ACG. Bilateral blindness will be present in 4.5 million people with OAG and 3.9 million people with ACG in 2010, rising to 5.9 and 5.3 million people in 2020, respectively. Conclusions: Glaucoma is the second leading cause of blindness worldwide, disproportionately affecting women and Asians.
The Validity of Benchmark Dose Limit Analysis for Estimating Permissible Accumulation of Cadmium
Cadmium (Cd) is a toxic metal pollutant that accumulates, especially in the proximal tubular epithelial cells of the kidneys, where it causes tubular cell injury, cell death, and a reduction in glomerular filtration rate (GFR). Diet is the main Cd exposure source in non-occupationally exposed and non-smoking populations. The present study aimed to evaluate the reliability of a tolerable Cd intake of 0.83 μg/kg body weight/day and its corresponding toxicity threshold level of 5.24 μg/g creatinine. The PROAST software was used to calculate the lower 95% confidence bound of the benchmark dose (BMDL) values of Cd excretion (ECd) associated with injury to kidney tubular cells, a defective tubular reabsorption of filtered proteins, and a reduction in the estimated GFR (eGFR). Data were from 289 males and 445 females, mean age of 48.1 years, of which 42.8% were smokers, while 31.7% had hypertension and 9% had chronic kidney disease (CKD). The BMDL value of ECd associated with kidney tubular cell injury was 0.67 ng/L of filtrate in both men and women. Therefore, an environmental Cd exposure producing ECd of 0.67 ng/L filtrate could be considered the Cd accumulation level below which renal effects are likely to be negligible. A reduction in eGFR and CKD may follow when ECd rises from 0.67 to 1 ng/L of filtrate. These adverse health effects occur at body burdens lower than those associated with ECd of 5.24 µg/g creatinine, thereby arguing that current health guideline values do not provide sufficient health protection.
Reliability Assessment for Small-Sample Accelerated Life Tests with Normal Distribution
A significant challenge in the accelerated life test (ALT) is the reliance on large sample sizes and multiple stress levels, which results in high costs and long test durations. To address this issue, this paper develops a new reliability assessment method for small-sample ALTs with normal distribution (or lognormal distribution) and censoring. This method enables a high-confidence evaluation of the percentile lifetime (reliable lifetime) under normal operating stress level using censored data from only two accelerated stress levels. Firstly, a relationship is established between the percentile lifetime at normal stress level and the distribution parameters at accelerated stress levels. Subsequently, an initial estimate of the percentile lifetime is obtained from failure data, and its confidence is then refined using a Bayesian update with the nonfailures. Finally, an exact one-sided lower confidence limit (LCL) for the percentile lifetime and reliability is determined. This paper derives an analytical formula for LCLs under Type-II censoring scenarios and further extend the method to accommodate Type-I censored and general incomplete data. The Monte Carlo simulations and case studies show that, the proposed methods significantly reduce the required sample size and testing duration while offering superior theoretical rigor and accuracy than the conventional methods.
Fuzzy Evaluation Model for Operational Performance of Air Cleaning Equipment
Global warming has led to the continuous deterioration of the living environment, in which air quality directly affects human health. In addition, the severity of the COVID-19 pandemic has further increased the attention to indoor air quality. Indoor clean air quality is not only related to human health but also to the quality of the manufacturing environment of clean rooms for numerous high-tech processes, such as semiconductors and packaging. This paper proposes a comprehensive model for evaluating, analyzing, and improving the operational performance of air cleaning equipment. Firstly, three operational performance evaluation indexes were established: the number of dust particles, the number of colonies, and the number of microorganisms. Secondly, the 100(1 − α)% upper confidence limits of these three operational performance evaluation indexes were derived to construct a fuzzy testing model. Meanwhile, the accumulated value of ϕ was used to derive the evaluation decision-making value. The proposed model can help companies identify the key quality characteristics that need to be improved. Furthermore, the competitiveness of cooperative enterprises towards smart manufacturing can be strengthened, so that enterprises can not only fulfill their social responsibilities while developing the economy but also take into account the sustainable development of enterprises and the environment.
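The fuzzy-evaluation machinery of the paper is not reproduced here, but the kind of one-sided upper confidence limit it builds on can be illustrated for a count-type index such as the number of dust particles: the exact Poisson UCL follows from the chi-square link (this sketch is an assumption about the index model, not the paper's exact formula):

```python
from scipy.stats import chi2

def poisson_ucl(count, alpha=0.05):
    """Exact one-sided 100(1 - alpha)% upper confidence limit for a
    Poisson mean, given an observed count (e.g. dust particles)."""
    return chi2.ppf(1 - alpha, 2 * (count + 1)) / 2
```

With zero observed particles, the 95% UCL is about 3 (the familiar "rule of three"); an equipment evaluation would compare such a UCL against the specification limit rather than the raw count.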
Recommendations for a Quantitative Description of Joint Orientation Data
A comprehensive joint orientation data collection program is necessary to ensure a sufficient degree of confidence in constructing a structural model. This paper investigates the influence of the number of joints sampled on the characterisation of the joint dip and dip direction. In a series of numerical experiments, Discrete Fracture Networks (DFNs) were used to model joint set populations. The average joint properties and the variation of the orientation cluster are obtained by simulating vertical and inclined boreholes in three different DFN models. The confidence level in the joint orientation data is calculated with the confidence limit method for different drilling densities. For comparable joint orientation data variability, a similar trend is observed between the confidence level and the number of joints sampled. Based on that trend, a series of recommendations were developed to estimate the number of joints to sample for a single joint set in order to reach a targeted level of confidence in the dip and dip direction data. Depending on the project requirements, the use of a range of levels of confidence and degrees of precision in the recommendations can provide greater flexibility in design decisions. The proposed recommendations can be used to optimize the planning of geotechnical drilling campaigns for new mining projects or to review an existing structural database for an ongoing project, identifying gaps in the data.
Decision-Making Model of Performance Evaluation Matrix Based on Upper Confidence Limits
A performance evaluation matrix (PEM) is an evaluation tool for assessing customer satisfaction and the importance of service items across various services. In addition, inferences based on point estimates of sample data can increase the risk of misjudgment due to sampling errors. Thus, this paper creates a decision-making model for a performance evaluation matrix based on upper confidence limits to provide various service operating systems for performance evaluation and decision making. The concept is that, through the gap between customer satisfaction and the level of importance of each service item, we are able to identify critical-to-quality (CTQ) service items requiring improvement. Many studies have indicated that customer satisfaction and the importance of service items follow a beta distribution, and based on the two parameters of this distribution, the proposed indices for customer satisfaction and the importance of service items are standardized. The vertical axis of a PEM represents the importance index; the horizontal axis represents the satisfaction index. Since these two indices have unknown parameters, this paper uses the upper confidence limit of the satisfaction index to identify the CTQ service items and the upper confidence limit of the importance index to determine the order of improvement priority for each service item. This paper then establishes a decision-making model for a PEM based on the above-mentioned decision-making rules. Since all decision-making rules proposed in this paper are established through upper confidence limits, the risk of misjudgment caused by sampling errors can be reduced. Finally, this article uses a practical example to illustrate how to use a PEM to find the CTQ service items and determine their order of improvement priority.
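Taking the abstract's beta-distribution assumption at face value, an upper confidence limit for a satisfaction index can be sketched with a method-of-moments beta fit and a parametric bootstrap. The function name, defaults, and bootstrap approach are illustrative assumptions, not the paper's derivation:

```python
import random
import statistics as st

def satisfaction_ucl(scores, conf=0.95, reps=2000, seed=1):
    """Parametric-bootstrap upper confidence limit for the mean
    satisfaction index, modelling scores in (0,1) as beta-distributed."""
    m, v = st.fmean(scores), st.variance(scores)
    k = m * (1 - m) / v - 1          # method-of-moments beta fit
    a, b = m * k, (1 - m) * k
    rng = random.Random(seed)
    n = len(scores)
    means = sorted(
        st.fmean(rng.betavariate(a, b) for _ in range(n))
        for _ in range(reps)
    )
    return means[int(conf * reps)]   # upper conf-quantile of the means
```

A service item whose satisfaction UCL still falls below its importance index would then be flagged as CTQ, mirroring the decision rule described above.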
Estimation of the Six Sigma Quality Index
The measurement of the process capability is a key part of quantitative quality control, and process capability indices are statistical measures of the process capability. The Six Sigma level represents the maximum achievable process capability, and many enterprises have implemented Six Sigma improvement strategies. In recent years, many studies have investigated Six Sigma quality indices, including Qpk. However, Qpk contains two unknown parameters, namely δ and γ, which are difficult to use in process control. Therefore, whether a process quality reaches the k sigma level must be statistically inferred. Moreover, deriving the sampling distribution needed for the upper confidence limits of Qpk is statistically challenging. We address these two difficulties in the present study and propose a methodology to solve them. Boole's inequality, De Morgan's theorem, and linear programming were integrated to derive the confidence intervals of Qpk, and then the upper confidence limits were used to perform hypothesis testing. This study includes a case study of the semiconductor assembly process to verify the feasibility of the proposed method.
Prevalence of stress, anxiety, depression among the general population during the COVID-19 pandemic: a systematic review and meta-analysis
Background The COVID-19 pandemic has had a significant impact on public mental health. Therefore, monitoring and oversight of the population's mental health during crises such as a pandemic is an immediate priority. The aim of this study is to analyze the existing research works and findings in relation to the prevalence of stress, anxiety, and depression in the general population during the COVID-19 pandemic. Method In this systematic review and meta-analysis, articles that have focused on stress and anxiety prevalence among the general population during the COVID-19 pandemic were searched in the Science Direct, Embase, Scopus, PubMed, Web of Science (ISI) and Google Scholar databases, without a lower time limit and until May 2020. In order to perform a meta-analysis of the collected studies, the random effects model was used, and the heterogeneity of studies was investigated using the I² index. Moreover, data analysis was conducted using the Comprehensive Meta-Analysis (CMA) software. Results The prevalence of stress in 5 studies with a total sample size of 9074 is obtained as 29.6% (95% confidence interval: 24.3–35.4), the prevalence of anxiety in 17 studies with a sample size of 63,439 as 31.9% (95% confidence interval: 27.5–36.7), and the prevalence of depression in 14 studies with a sample size of 44,531 people as 33.7% (95% confidence interval: 27.5–40.6). Conclusion COVID-19 not only causes physical health concerns but also results in a number of psychological disorders. The spread of the new coronavirus can impact the mental health of people in different communities. Thus, it is essential to preserve the mental health of individuals and to develop psychological interventions that can improve the mental health of vulnerable groups during the COVID-19 pandemic.
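The pooling described in this abstract (a random-effects model with I² heterogeneity) can be sketched without the CMA software. Below is a minimal DerSimonian-Laird implementation on the logit scale; the input data are illustrative, not the study's:

```python
import math

def pooled_prevalence(events, totals, z=1.96):
    """DerSimonian-Laird random-effects pooling of prevalences on the
    logit scale; returns pooled prevalence, 95% CI, and I^2 (%)."""
    logits, variances = [], []
    for e, n in zip(events, totals):
        p = e / n
        logits.append(math.log(p / (1 - p)))
        variances.append(1 / (n * p * (1 - p)))      # var of the logit
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, logits)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, logits))
    df = len(logits) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    w_re = [1 / (v + tau2) for v in variances]       # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_re, logits)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))

    def inv_logit(x):
        return 1 / (1 + math.exp(-x))

    return inv_logit(mu), (inv_logit(mu - z * se), inv_logit(mu + z * se)), i2
```

For example, pooling three hypothetical studies with 30/100, 35/100, and 25/100 cases yields a pooled prevalence near 30% with a confidence interval around it, the same shape of result the abstract reports for stress, anxiety, and depression.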