Catalogue Search | MBRL
Explore the vast range of titles available.
3,113 result(s) for "Selection procedures"
A two-step, test-guided Mokken scale analysis, for nonclustered and clustered data
by Koopman, Letty; van der Ark, L. Andries; Zijlstra, Bonne J. H.
in Algorithms; Humans; Medicine
2022
Purpose
Mokken scale analysis (MSA) is an attractive scaling procedure for ordinal data. MSA is frequently used in health-related quality of life research. Two of MSA's prime features are the scalability coefficients and the automated item selection procedure (AISP). The AISP partitions a (large) set of items into scales based on the observed item scores; the resulting scales can be used as measurement instruments. There exist two issues in MSA: First, point estimates, standard errors, and test statistics for scalability coefficients are inappropriate for clustered item scores, which are omnipresent in quality of life research data. Second, the AISP insufficiently takes sampling fluctuation of Mokken’s scalability coefficients into account.
Methods
We solved both issues by providing point estimates and standard errors for the scalability coefficients for clustered data and by implementing a Wald-based significance test in the AISP algorithm, resulting in a test-guided AISP (T-AISP) that is available for both nonclustered and clustered test scores.
Results
We integrated the T-AISP into a two-step, test-guided MSA for scale construction, to guide the analysis for nonclustered and clustered data. The first step is performing a T-AISP and selecting the final scale(s). For clustered data, within-group dependency is investigated on the final scale(s). In the second step, the strength of the scale(s) is determined and further analyses are performed. The procedure was demonstrated on clustered item scores obtained from administering a questionnaire on quality of life in schools to 639 students nested in 30 classrooms.
Conclusions
We developed a two-step, test-guided MSA for scale construction that takes into account sampling fluctuation of all scalability coefficients and that can be applied to item scores obtained by a nonclustered or clustered sampling design.
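Since the abstract centers on scalability coefficients, a minimal sketch may help fix ideas. The function below computes Loevinger's H for dichotomous items under the standard definition (observed versus expected Guttman errors, summed over item pairs); the name scalability_H and the toy data are illustrative only, and the paper's clustered-data estimators and the T-AISP itself are substantially more involved (a dedicated implementation such as the R package mokken would be used in practice).

```python
import numpy as np
from itertools import combinations

def scalability_H(X):
    """Loevinger's H for dichotomous item scores (rows = respondents,
    columns = items): 1 minus the ratio of observed to expected Guttman
    errors, summed over all item pairs."""
    n, k = X.shape
    obs_err, exp_err = 0.0, 0.0
    for i, j in combinations(range(k), 2):
        # order the pair so that `easy` has the higher proportion-correct
        easy, hard = (i, j) if X[:, i].mean() >= X[:, j].mean() else (j, i)
        # Guttman error: passing the harder item while failing the easier one
        obs_err += np.sum((X[:, easy] == 0) & (X[:, hard] == 1))
        exp_err += n * (1 - X[:, easy].mean()) * X[:, hard].mean()
    return 1.0 - obs_err / exp_err

# Toy data: 5 respondents x 3 items forming a perfect Guttman scale (H = 1)
X = np.array([[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0], [1, 1, 0]])
print(round(scalability_H(X), 3))
```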
Journal Article
Why would Romanian migrants from Western Europe return to their country of origin?
by Homocianu, Daniel; Plopeanu, Aurelian-Petruș
in Applied Sociology; binary logistic regressions; Business
2020
After conducting a survey among Romanian individuals living abroad, we analyze the particular influences on their intentions to return to their country of origin. Using data mining classifiers, Lasso variable selection procedures, and binary logistic regressions on data collected in 2018 in several Western European countries, we found that what matters most for the intention to return is a plan to start a business in Romania in the near future, a finding useful for articulating appropriate policies. Other variables, corresponding to attachment to Romania, adaptation to the current host country (including perceived local discrimination), economic reasons, and voting behaviour, also influence the intention to return. Romanians who went abroad to Latin countries of Western Europe and who plan to start a business at home are more likely to return to Romania than those who went to non-Latin countries.
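The variable-selection step described here, Lasso combined with binary logistic regression, can be sketched with an L1-penalized logistic model; the synthetic data, penalty strength C, and all names below are illustrative assumptions, not the paper's actual survey variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the survey: 500 respondents, 20 candidate predictors,
# y = 1 if the respondent intends to return
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

# The L1 (Lasso-type) penalty shrinks weak predictors' coefficients to exactly zero
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
model.fit(X, y)
retained = np.flatnonzero(model[-1].coef_.ravel())
print("retained predictor indices:", retained)
```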
Journal Article
Tracking of nociceptive thresholds using adaptive psychophysical methods
2014
Psychophysical thresholds reflect the state of the underlying nociceptive mechanisms. For example, noxious events can activate endogenous analgesic mechanisms that increase the nociceptive threshold. Therefore, tracking thresholds over time facilitates the investigation of the dynamics of these underlying mechanisms. Threshold tracking techniques should use efficient methods for stimulus selection and threshold estimation. This study compares, in simulation and in human psychophysical experiments, the performance of different combinations of adaptive stimulus selection procedures and threshold estimation methods. Monte Carlo simulations were first performed to compare the bias and precision of threshold estimates produced by three different stimulus selection procedures (simple staircase, random staircase, and minimum entropy procedure) and two estimation methods (logistic regression and Bayesian estimation). Logistic regression and Bayesian estimation resulted in similar precision only when the prior probability distributions were chosen appropriately. The minimum entropy and simple staircase procedures achieved the highest precision, while the random staircase procedure was the least sensitive to different procedure-specific settings. Next, the simple staircase and random staircase procedures, in combination with logistic regression, were compared in a human subject study (n = 30). Electrocutaneous stimulation was used to track the nociceptive perception threshold before, during, and after a cold pressor task, which served as the conditioning stimulus. With both procedures, habituation was detected, as well as changes induced by the conditioning stimulus. However, the random staircase procedure achieved a higher precision. We recommend using the random staircase over the simple staircase procedure, in combination with logistic regression, for nonstationary threshold tracking experiments.
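A minimal sketch of the simple staircase procedure compared in this study: intensity steps down after each detection and up after each miss, so trials concentrate around the threshold. The observer model, step size, and trial count below are illustrative assumptions; the study's electrocutaneous setup and estimation details are more elaborate.

```python
import numpy as np

def simple_staircase(respond, start=1.0, step=0.1, n_trials=40):
    """Simple staircase: step the intensity down after each detection and
    up after each miss, returning the list of (intensity, detected) trials."""
    intensity, history = start, []
    for _ in range(n_trials):
        detected = respond(intensity)
        history.append((intensity, detected))
        intensity += -step if detected else step
    return history

# Illustrative observer whose detections follow a logistic psychometric curve
rng = np.random.default_rng(1)
true_threshold, slope = 0.6, 10.0

def observer(x):
    return rng.random() < 1.0 / (1.0 + np.exp(-slope * (x - true_threshold)))

trials = simple_staircase(observer)
# A logistic regression on the (intensity, detected) pairs, as in the study,
# would then estimate the threshold; here we just report the mean intensity.
print(round(float(np.mean([i for i, _ in trials])), 2))
```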
Journal Article
Consistent Moment Selection Procedures for Generalized Method of Moments Estimation
This paper considers a generalized method of moments (GMM) estimation problem in which one has a vector of moment conditions, some of which are correct and some incorrect. The paper introduces several procedures for consistently selecting the correct moment conditions. The procedures also can consistently determine whether there is a sufficient number of correct moment conditions to identify the unknown parameters of interest. The paper specifies moment selection criteria that are GMM analogues of the widely used BIC and AIC model selection criteria. (The latter is not consistent.) The paper also considers downward and upward testing procedures. All of the moment selection procedures discussed in this paper are based on the minimized values of the GMM criterion function for different vectors of moment conditions. The procedures are applicable in time-series and cross-sectional contexts. Application of the results of the paper to instrumental variables estimation problems yields consistent procedures for selecting instrumental variables.
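As a rough sketch of the BIC-analogue idea described here: penalize the minimized GMM J-statistic by ln(n) per overidentifying restriction, then pick the candidate moment set that minimizes the criterion. The function and the example numbers are illustrative assumptions; the paper's exact criteria and regularity conditions differ in detail.

```python
import math

def gmm_bic(j_stat, n_moments, n_params, n_obs):
    """BIC-analogue moment selection criterion: the minimized GMM
    J-statistic minus a bonus of ln(n) per overidentifying restriction
    (|c| - p). Among candidate moment sets, pick the one minimizing this."""
    return j_stat - (n_moments - n_params) * math.log(n_obs)

# e.g. compare a 6-moment and an 8-moment candidate for a 3-parameter model:
# the larger set wins despite a higher J-statistic, because it uses more
# correct restrictions.
print(gmm_bic(4.1, 6, 3, 500), gmm_bic(12.7, 8, 3, 500))
```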
Journal Article
Sequential selection procedures and false discovery rate control
by Wager, Stefan; Tibshirani, Robert; Chouldechova, Alexandra
in Discovery; equations; False discovery rate
2016
We consider a multiple‐hypothesis testing setting where the hypotheses are ordered and one is only permitted to reject an initial contiguous block H1, …, Hk of hypotheses. A rejection rule in this setting amounts to a procedure for choosing the stopping point k. This setting is inspired by the sequential nature of many model selection problems, where choosing a stopping point or a model is equivalent to rejecting all hypotheses up to that point and none thereafter. We propose two new testing procedures and prove that they control the false discovery rate in the ordered testing setting. We also show how the methods can be applied to model selection by using recent results on p‐values in sequential model selection settings.
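One concrete rule in this ordered-testing setting, drawn from the closely related literature rather than necessarily from this paper's own procedures, is the ForwardStop rule: reject H1 through Hk for the largest k whose running average of -log(1 - p_i) stays below the target level alpha. A sketch:

```python
import numpy as np

def forward_stop(pvals, alpha=0.1):
    """ForwardStop for ordered hypotheses: reject H1..Hk for the largest k
    at which the running mean of -log(1 - p_i) is still below alpha."""
    y = -np.log1p(-np.asarray(pvals, dtype=float))  # -log(1 - p_i)
    running_mean = np.cumsum(y) / np.arange(1, len(y) + 1)
    below = np.flatnonzero(running_mean <= alpha)
    return 0 if below.size == 0 else int(below[-1]) + 1  # rejections k

print(forward_stop([0.001, 0.01, 0.04, 0.30, 0.50]))  # rejects the first 3
```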
Journal Article
Robust Variable Selection With Exponential Squared Loss
2013
Robust variable selection procedures through penalized regression have been gaining increased attention in the literature. They can be used to perform variable selection and are expected to yield robust estimates. However, to the best of our knowledge, the robustness of those penalized regression procedures has not been well characterized. In this article, we propose a class of penalized robust regression estimators based on exponential squared loss. The motivation for this new procedure is that it enables us to characterize its robustness in a way that has not been done for the existing procedures, while its performance is near optimal and superior to some recently developed methods. Specifically, under defined regularity conditions, our estimators are √n-consistent and possess the oracle property. Importantly, we show that our estimators can achieve the highest asymptotic breakdown point of 1/2 and that their influence functions are bounded with respect to outliers in either the response or the covariate domain. We performed simulation studies to compare our proposed method with some recent methods, using the oracle method as the benchmark and considering common sources of influential points. Our simulation studies reveal that our proposed method performs similarly to the oracle method in terms of the model error and the positive selection rate even in the presence of influential points. In contrast, other existing procedures have a much lower noncausal selection rate. Furthermore, we reanalyze the Boston Housing Price Dataset and the Plasma Beta-Carotene Level Dataset, which are commonly used examples for regression diagnostics of influential points. Our analysis reveals discrepancies between our robust method and the other penalized regression methods, underscoring the importance of developing and applying robust penalized regression methods.
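The exponential squared loss itself is easy to sketch: it behaves like squared error near zero but is bounded above, which is what caps an outlier's influence. A minimal illustration, with gamma and the sample residuals as assumed values (the paper tunes gamma data-adaptively):

```python
import numpy as np

def exp_squared_loss(residual, gamma=1.0):
    """Exponential squared loss 1 - exp(-r^2 / gamma): approximately r^2 / gamma
    for small residuals, but bounded by 1, so a gross outlier contributes at
    most 1 to the objective."""
    return 1.0 - np.exp(-residual**2 / gamma)

r = np.linspace(-5, 5, 11)
print(np.round(exp_squared_loss(r), 3))  # flat tails => bounded influence
```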
Journal Article
Case Selection Techniques in Case Study Research: A Menu of Qualitative and Quantitative Options
2008
How can scholars select cases from a large universe for in-depth case study analysis? Random sampling is not typically a viable approach when the total number of cases to be selected is small, so attention to purposive modes of sampling is needed. Yet, while the existing qualitative literature offers a wide range of suggestions for case selection, most techniques discussed require in-depth familiarity with each case. Seven case selection procedures are considered, each of which facilitates a different strategy for within-case analysis. The case selection procedures considered focus on typical, diverse, extreme, deviant, influential, most similar, and most different cases. For each case selection procedure, quantitative approaches are discussed that meet the goals of the approach while still requiring only information that can reasonably be gathered for a large number of cases.
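One common quantitative operationalization of the "typical" and "deviant" categories in such a menu is to fit a model over the full universe of cases and rank cases by absolute residual; the data and model below are illustrative stand-ins, not the paper's own procedures:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative universe of 200 cases with three measured covariates
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -0.5, 0.2]) + rng.normal(size=200)

# Rank cases by how well a cross-case model explains them
residuals = np.abs(y - LinearRegression().fit(X, y).predict(X))
typical_case = int(np.argmin(residuals))  # best explained by the model
deviant_case = int(np.argmax(residuals))  # least explained by the model
print(typical_case, deviant_case)
```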
Journal Article
Development of Machine Learning Models with Blood‐based Digital Biomarkers for Diagnosis of Alzheimer's Disease
2025
Background
Alzheimer's disease (AD) involves complex alterations in biological pathways, making comprehensive blood biomarkers crucial for accurate and earlier diagnosis. However, the cost and operational complexity of methods using blood-based biomarkers significantly limit their availability in clinical practice.
Methods
We developed low-cost, convenient machine learning models with digital biomarkers (MLDB) using plasma spectra data to detect AD or mild cognitive impairment (MCI) against healthy controls (HCs) and to discriminate AD from different types of neurodegenerative disease. We included 1,324 individuals: 293 with amyloid-beta-positive AD, 151 with mild cognitive impairment (MCI), 106 with dementia with Lewy bodies (DLB), 106 with frontotemporal dementia (FTD), 135 with progressive supranuclear palsy (PSP), and 533 healthy controls (HCs).
Results
A random forest classifier and feature selection procedures were used to select digital biomarkers. MLDB achieved areas under the curve (AUCs) of 0.92 (AD vs HC; sensitivity 88.2%, specificity 84.1%), 0.89 (MCI vs HC; sensitivity 88.8%, specificity 86.4%), 0.83 (AD vs DLB; sensitivity 77.2%, specificity 74.6%), 0.80 (AD vs FTD; sensitivity 74.2%, specificity 72.4%), and 0.93 (AD vs PSP; sensitivity 76.1%, specificity 75.7%). Digital biomarkers distinguishing AD from HC were negatively correlated with plasma p-tau217 (r = -0.22, p < 0.05) and GFAP (r = -0.09, p < 0.05).
Conclusion
The ATR-FTIR (Attenuated Total Reflectance-Fourier Transform Infrared) plasma spectra features can identify AD-related pathological changes. These spectral features serve as digital biomarkers, significantly aiding in the early screening and diagnosis of AD.
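The modeling pipeline named in the Results, a random forest classifier plus feature selection, can be sketched as follows; the synthetic data stands in for plasma spectra, and every parameter is an illustrative assumption rather than the study's setting:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for plasma spectra: wide features, binary AD-vs-HC label
X, y = make_classification(n_samples=400, n_features=300, n_informative=20,
                           random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0)
selector = SelectFromModel(rf).fit(X, y)   # keep high-importance features
X_sel = selector.transform(X)

# Note: in a real study, feature selection would sit inside the CV loop
# to avoid information leakage into the AUC estimate.
auc = cross_val_score(rf, X_sel, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC on selected features: {auc:.2f}")
```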
Journal Article
Current recommendations for procedure selection in class I and II obesity developed by an expert modified Delphi consensus
by Shanu N. Kothari; Farah Husain; Sérgio Santoro
in Artificial intelligence; Bariatric surgery; Class I and II obesity; Consensus; Metabolic surgery; Procedure selection
2024
Journal Article
Bayesian Model Selection in High-Dimensional Settings
2012
Standard assumptions incorporated into Bayesian model selection procedures result in procedures that are not competitive with commonly used penalized likelihood methods. We propose modifications of these methods by imposing nonlocal prior densities on model parameters. We show that the resulting model selection procedures are consistent in linear model settings when the number of possible covariates p is bounded by the number of observations n, a property that has not been extended to other model selection procedures. In addition to consistently identifying the true model, the proposed procedures provide accurate estimates of the posterior probability that each identified model is correct. Through simulation studies, we demonstrate that these model selection procedures perform as well or better than commonly used penalized likelihood methods in a range of simulation settings. Proofs of the primary theorems are provided in the Supplementary Material that is available online.
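A common concrete example of a nonlocal prior, from the same line of work though not necessarily this paper's exact choice, is the first-order moment (pMOM) density, which multiplies a normal density by beta^2 so that it vanishes at beta = 0 and thereby penalizes negligible effects:

```python
import numpy as np
from scipy.stats import norm

def pmom_density(beta, tau=1.0, sigma2=1.0):
    """First-order moment (pMOM) nonlocal prior: a N(0, tau*sigma2) density
    multiplied by beta^2 / (tau*sigma2), so it integrates to one but is
    exactly zero at beta = 0, penalizing negligible regression effects."""
    scale = tau * sigma2
    return (beta**2 / scale) * norm.pdf(beta, loc=0.0, scale=np.sqrt(scale))

b = np.linspace(-3.0, 3.0, 7)
print(np.round(pmom_density(b), 3))  # vanishes at beta = 0
```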
Journal Article