Catalogue Search | MBRL
Explore the vast range of titles available.
4,063 result(s) for "Statistische Methode" (German: "statistical method")
Applied Time Series Econometrics
2004, 2006, 2009
Time series econometrics is a rapidly evolving field. In particular, the cointegration revolution has had a substantial impact on applied analysis. As a result, no single textbook has managed to cover the full range of methods in current use and explain how to proceed in applied domains. This gap in the literature motivates the present volume. The methods are sketched out, reminding the reader of the ideas underlying them and giving sufficient background for empirical work. The treatment can also be used as a textbook for a course on applied time series econometrics. Topics include: unit root and cointegration analysis, structural vector autoregressions, conditional heteroskedasticity, and nonlinear and nonparametric time series models. Crucial to empirical work is the software that is available for analysis. New methodology is typically only gradually incorporated into existing software packages. Therefore a flexible Java interface has been created, allowing readers to replicate the applications and conduct their own analyses.
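As a quick illustration of one method the book covers, unit root testing, here is a minimal sketch of an augmented Dickey-Fuller test. It uses Python's statsmodels on simulated data as an illustrative stand-in; the book itself works through the Java interface mentioned above.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
# A simulated random walk: a textbook example of a unit-root process.
y = np.cumsum(rng.normal(size=500))

stat, pvalue, usedlag, nobs, crit, icbest = adfuller(y)
print(f"ADF statistic: {stat:.3f}, p-value: {pvalue:.3f}")
# A large p-value means the unit-root null cannot be rejected,
# consistent with the series being a random walk.
```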
Statistical Models
by Freedman, David A.
in Bootstrap (Statistics), Linear models (Statistics), Mathematical statistics
2009, 2012
This lively and engaging book explains the things you have to know in order to read empirical papers in the social and health sciences, as well as the techniques you need to build statistical models of your own. The discussion in the book is organized around published studies, as are many of the exercises. Relevant journal articles are reprinted at the back of the book. Freedman makes a thorough appraisal of the statistical methods in these papers and in a variety of other examples. He illustrates the principles of modelling, and the pitfalls. The discussion shows you how to think about the critical issues - including the connection (or lack of it) between the statistical models and the real phenomena. The book is written for advanced undergraduates and beginning graduate students in statistics, as well as students and professionals in the social and health sciences.
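As an illustration of one of this title's subjects, the bootstrap, here is a minimal sketch that bootstraps a confidence interval for a regression slope. Plain NumPy on simulated data; purely illustrative, not the book's own examples.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)  # true slope is 2

def slope(x, y):
    """Ordinary least-squares slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Resample (x, y) pairs with replacement and re-estimate the slope.
boot = np.array([slope(x[idx], y[idx])
                 for idx in (rng.integers(0, n, size=n) for _ in range(2000))])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"slope = {slope(x, y):.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```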
Statistical control in correlational studies: 10 essential recommendations for organizational researchers
by Becker, Thomas E., Breaugh, James A., Spector, Paul E.
in correlational studies, Organizational behavior, Organizational research
2016
Statistical control is widely used in correlational studies with the intent of providing more accurate estimates of relationships among variables, offering more conservative tests of hypotheses, or ruling out alternative explanations for empirical findings. However, the use of control variables can produce uninterpretable parameter estimates, erroneous inferences, irreplicable results, and other barriers to scientific progress. As a result, methodologists have provided a great deal of advice regarding the use of statistical control, to the point that researchers might have difficulty sifting through and prioritizing the available suggestions. We integrate and condense this literature into a set of 10 essential recommendations that are generally applicable and which, if followed, would substantially enhance the quality of published organizational research. We provide explanations, qualifications, and examples following each recommendation.
Journal Article
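To make the notion of statistical control concrete, here is a toy sketch in Python (statsmodels assumed): the estimate for a focal predictor shifts once a correlated confounder is included. All variable names and data are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
confound = rng.normal(size=n)          # hypothetical confounder, e.g. job tenure
focal = confound + rng.normal(size=n)  # focal predictor, correlated with it
outcome = 1.0 * confound + 0.2 * focal + rng.normal(size=n)

# Without the control, the focal coefficient absorbs the confounder's effect.
m1 = sm.OLS(outcome, sm.add_constant(focal)).fit()
# With the control, the estimate is close to the true value of 0.2.
m2 = sm.OLS(outcome, sm.add_constant(np.column_stack([focal, confound]))).fit()
print(f"uncontrolled: {m1.params[1]:.2f}, controlled: {m2.params[1]:.2f}")
```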
Robust methods in biostatistics
2009
Robust statistics is an extension of classical statistics that specifically takes into account the fact that the underlying models used to describe data are only approximate. Its basic philosophy is to produce statistical procedures that remain stable when the data do not exactly match the postulated models, as is the case, for example, with outliers. Robust Methods in Biostatistics proposes robust alternatives to common methods used in statistics in general and in biostatistics in particular, and illustrates their use on many biomedical datasets. The methods introduced include robust estimation, testing, model selection, and model checking and diagnostics. They are developed for the following general classes of models: linear regression, generalized linear models, linear mixed models, marginal longitudinal data models, and the Cox survival analysis model. The methods are introduced both at a theoretical and an applied level within the framework of each general class of models, with particular emphasis on practical data analysis. This book is of particular use for research students, applied statisticians, and practitioners in the health field interested in more stable statistical techniques. An accompanying website provides R code for computing all of the methods described, as well as for analyzing all the datasets used in the book.
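As a small taste of the robust alternatives the book proposes, here is a sketch of a Huber M-estimator for linear regression via statsmodels' RLM. The book's own companion code is in R; Python is used here purely for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.normal(size=100)
y = 1.5 * x + rng.normal(size=100)
y[:5] += 15  # inject a few outliers

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()  # pulled toward the outliers
huber = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
print(f"OLS slope:   {ols.params[1]:.3f}")
print(f"Huber slope: {huber.params[1]:.3f}")  # closer to the true 1.5
```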
Text as Data
2019
An ever-increasing share of human interaction, communication, and culture is recorded as digital text. We provide an introduction to the use of text as an input to economic research. We discuss the features that make text different from other forms of data, offer a practical overview of relevant statistical methods, and survey a variety of applications.
Journal Article
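A minimal sketch of the first step in most text-as-data pipelines, converting raw documents into a document-term count matrix, using scikit-learn's CountVectorizer on an invented toy corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "prices rose as demand increased",
    "demand fell and prices fell",
    "the central bank raised rates",
]
vec = CountVectorizer()
dtm = vec.fit_transform(docs)  # sparse n_docs x n_terms count matrix
print(vec.get_feature_names_out())
print(dtm.toarray())
```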
Influential Observations and Inference in Accounting Research
by Leone, Andrew J., Minutti-Meza, Miguel, Wasley, Charles E.
in Accounting, Efficacy, Financial reporting
2019
Accounting studies often encounter observations with extreme values that can influence coefficient estimates and inferences. Two widely used approaches to addressing influential observations in accounting studies are winsorization and truncation. While expedient, both depend on researcher-selected cutoffs, applied on a variable-by-variable basis, which, unfortunately, can alter legitimate data points. We compare the efficacy of winsorization, truncation, influence diagnostics (Cook's Distance), and robust regression at identifying influential observations. Replication of three published accounting studies shows that the choice of method affects estimates and inferences. Simulation evidence shows that winsorization and truncation are ineffective at identifying influential observations. While influence diagnostics and robust regression both outperform winsorization and truncation, overall, robust regression outperforms the other methods. Since robust regression is a theoretically appealing and easily implementable approach based on a model's residuals, we recommend that future accounting studies consider using robust regression, or at least report sensitivity tests using robust regression.
Journal Article
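The sketch below contrasts two of the approaches the article compares, an influence diagnostic (Cook's Distance) and robust regression, on simulated data with one planted influential observation (statsmodels assumed):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(size=200)
x[0], y[0] = 6.0, -10.0  # plant one influential observation

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()
cooks_d, _ = fit.get_influence().cooks_distance
print("flagged:", np.where(cooks_d > 4 / len(x))[0])  # common 4/n rule of thumb

robust = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
print(f"OLS slope {fit.params[1]:.3f} vs robust slope {robust.params[1]:.3f}")
```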
Clustering of secondary school students in Portugal
2020
This paper analyzes a dataset on secondary schools in Portugal. Modern data analysis and mathematical statistics methods allow researchers and university staff to uncover hidden dependencies in data about students. In the original data competition for which the dataset was released, the main goal was to explain final exam grades by means of a student's social and behavioral attributes. This paper approaches that question in a new way: a clustering technique divides the students into a few groups, and a separate mathematical model of the final grade is fitted for each cluster. The models thus gain a degree of individuality while preserving generality. Results of models constructed for the whole dataset are compared with those constructed for each cluster. The same technique can be applied to other datasets with different sets of features. From these results, staff can draw conclusions about how to work with each cluster of students in an individual manner.
Journal Article
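A minimal sketch of the paper's general approach, clustering observations and then fitting a separate model per cluster, using scikit-learn on invented stand-in data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
features = rng.normal(size=(300, 4))  # stand-ins for social/behavioral scores
grades = features @ np.array([1.0, -0.5, 0.3, 0.0]) + rng.normal(size=300)

# Cluster the students, then fit one grade model per cluster.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
for k in range(3):
    mask = labels == k
    model = LinearRegression().fit(features[mask], grades[mask])
    r2 = model.score(features[mask], grades[mask])
    print(f"cluster {k}: n = {mask.sum()}, R^2 = {r2:.2f}")
```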
Too Big to Fail: Large Samples and the p-Value Problem
by Lin, Mingfeng, Lucas, Henry C., Shmueli, Galit
in Confidence intervals, Electronic commerce, Hypotheses
2013
The Internet has provided IS researchers with the opportunity to conduct studies with extremely large samples, frequently well over 10,000 observations. There are many advantages to large samples, but researchers using statistical inference must be aware of the p-value problem associated with them. In very large samples, p-values go quickly to zero, and solely relying on p-values can lead the researcher to claim support for results of no practical significance. In a survey of large sample IS research, we found that a significant number of papers rely on a low p-value and the sign of a regression coefficient alone to support their hypotheses. This research commentary recommends a series of actions the researcher can take to mitigate the p-value problem in large samples and illustrates them with an example of over 300,000 camera sales on eBay. We believe that addressing the p-value problem will increase the credibility of large sample IS research as well as provide more insights for readers.
Journal Article
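The simulation below reproduces the p-value problem the commentary describes: with a very large sample, a practically negligible difference between two groups yields a p-value near zero. Numbers are illustrative, not from the eBay data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n = 300_000  # per group; the same order of magnitude as the eBay example
a = rng.normal(loc=100.0, scale=10, size=n)
b = rng.normal(loc=100.2, scale=10, size=n)  # true effect of 0.02 SD

t, p = stats.ttest_ind(a, b)
d = (b.mean() - a.mean()) / np.sqrt((a.var() + b.var()) / 2)  # Cohen's d
print(f"p = {p:.2e}, effect size d = {d:.4f}")
# p is essentially zero while the standardized effect is ~0.02:
# statistically significant, practically meaningless.
```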
Improving reporting standards for polygenic scores in risk prediction studies
by Kinnear, Kim, Janssens, A. Cecile J. W., Khera, Amit V.
in 631/208/1516, 631/208/205, 631/208/2489
2021
Polygenic risk scores (PRSs), which often aggregate results from genome-wide association studies, can bridge the gap between initial discovery efforts and clinical applications for the estimation of disease risk using genetics. However, there is notable heterogeneity in the application and reporting of these risk scores, which hinders the translation of PRSs into clinical care. Here, in a collaboration between the Clinical Genome Resource (ClinGen) Complex Disease Working Group and the Polygenic Score (PGS) Catalog, we present the Polygenic Risk Score Reporting Standards (PRS-RS), in which we update the Genetic Risk Prediction Studies (GRIPS) Statement to reflect the present state of the field. Drawing on the input of experts in epidemiology, statistics, disease-specific applications, implementation and policy, this comprehensive reporting framework defines the minimal information that is needed to interpret and evaluate PRSs, especially with respect to downstream clinical applications. Items span detailed descriptions of study populations, statistical methods for the development and validation of PRSs and considerations for the potential limitations of these scores. In addition, we emphasize the need for data availability and transparency, and we encourage researchers to deposit and share PRSs through the PGS Catalog to facilitate reproducibility and comparative benchmarking. By providing these criteria in a structured format that builds on existing standards and ontologies, the use of this framework in publishing PRSs will facilitate translation into clinical care and progress towards defining best practice.
An updated set of reporting standards for the development, interpretation and evaluation of polygenic risk scores is presented, which should aid the translation of these scores into clinical applications.
Journal Article
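For orientation, a minimal sketch of how a polygenic risk score is computed: a weighted sum of risk-allele counts across variants. Genotypes and weights below are invented; real scores use thousands of variants with published weights such as those in the PGS Catalog.

```python
import numpy as np

rng = np.random.default_rng(7)
n_people, n_variants = 5, 100
# Allele counts (0, 1, or 2) per person per variant; invented data.
genotypes = rng.integers(0, 3, size=(n_people, n_variants))
weights = rng.normal(scale=0.05, size=n_variants)  # per-variant effect sizes

prs = genotypes @ weights                 # one score per person
prs_std = (prs - prs.mean()) / prs.std()  # scores are often standardized
print(prs_std)
```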
Presidential Address: The Scientific Outlook in Financial Economics
2017
Given the competition for top journal space, there is an incentive to produce "significant" results. With the combination of unreported tests, lack of adjustment for multiple tests, and direct and indirect p-hacking, many of the results being published will fail to hold up in the future. In addition, there are basic issues with the interpretation of statistical significance. Increasing thresholds may be necessary, but still may not be sufficient: if the effect being studied is rare, even t > 3 will produce a large number of false positives. Here I explore the meaning and limitations of a p-value. I offer a simple alternative (the minimum Bayes factor). I present guidelines for a robust, transparent research culture in financial economics. Finally, I offer some thoughts on the importance of risk-taking (from the perspective of authors and editors) to advance our field.
Journal Article
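The "simple alternative" the address proposes, the minimum Bayes factor, has a common closed form under normality, MBF = exp(-t^2/2). The sketch below converts conventional p-values into this bound (two-sided test and large-sample normal approximation assumed):

```python
import numpy as np
from scipy import stats

def min_bayes_factor(p):
    """Minimum Bayes factor for a two-sided p-value (normal approximation)."""
    z = stats.norm.isf(p / 2)  # |z| corresponding to the two-sided p-value
    return np.exp(-z**2 / 2)

for p in (0.05, 0.01, 0.001):
    print(f"p = {p}: MBF = {min_bayes_factor(p):.4f}")
# p = 0.05 gives an MBF of roughly 0.15, far weaker evidence against
# the null than "1 in 20" suggests.
```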