21,646 result(s) for "Regression coefficients"
Spatial Homogeneity Pursuit of Regression Coefficients for Large Datasets
Spatial regression models have been widely used to describe the relationship between a response variable and some explanatory variables over a region of interest, taking into account the spatial dependence of the observations. In many applications, relationships between response variables and covariates are expected to exhibit complex spatial patterns. We propose a new approach, referred to as spatially clustered coefficient (SCC) regression, to detect spatially clustered patterns in the regression coefficients. It incorporates spatial neighborhood information through a carefully constructed regularization to automatically detect change points in space and to achieve computational scalability. Our numerical studies suggest that SCC works very effectively, capturing not only clustered coefficients, but also smoothly varying coefficients because of its strong local adaptivity. This flexibility allows researchers to explore various spatial structures in regression coefficients. We also establish theoretical properties of SCC. We use SCC to explore the relationship between the temperature and salinity of sea water in the Atlantic basin; this can provide important insights about the evolution of individual water masses and the pathway and strength of meridional overturning circulation in oceanography. Supplementary materials for this article, including a standardized description of the materials available for reproducing the work, are available as an online supplement.
Measurement bias and effect restoration in causal inference
This paper highlights several areas where graphical techniques can be harnessed to address the problem of measurement errors in causal inference. In particular, it discusses the control of unmeasured confounders in parametric and nonparametric models and the computational problem of obtaining bias-free effect estimates in such models. We derive new conditions under which causal effects can be restored by observing proxy variables of unmeasured confounders, with or without external studies.
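The matrix-inversion step behind effect restoration can be sketched numerically. Assuming the misclassification matrix P(W | U) of a proxy W for an unmeasured binary confounder U is known (say, from an external study), the distribution of U is recovered by solving a linear system; every number below is hypothetical:

```python
import numpy as np

# Hypothetical error model for a noisy proxy W of a binary confounder U:
# M[w, u] = P(W = w | U = u), assumed known from an external study.
M = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# Observed marginal distribution of the proxy W.
p_w = np.array([0.55, 0.45])

# Effect-restoration step: since p_w = M @ p_u, invert to recover P(U).
p_u = np.linalg.solve(M, p_w)
```

The same inversion idea extends to joint distributions over treatment, outcome, and proxy, which is what makes bias-free effect estimates computable in these models.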
Methods to quantify variable importance: implications for the analysis of noisy ecological data
Determining the importance of independent variables is of practical relevance to ecologists and managers concerned with allocating limited resources to the management of natural systems. Although techniques that identify the explanatory variables with the largest influence on the response variable are needed to design management actions effectively, the use of various indices to evaluate variable importance is poorly understood. Using Monte Carlo simulations, we compared six indices commonly used to evaluate variable importance: zero-order correlations, partial correlations, semipartial correlations, standardized regression coefficients, Akaike weights, and independent effects. We simulated four scenarios to evaluate the indices under progressively more complex circumstances that included correlation between explanatory variables, as well as a spurious variable that was correlated with other explanatory variables but not with the dependent variable. No index performed perfectly under all circumstances, but partial correlations and Akaike weights performed poorly in all cases. Zero-order correlation was the only measure that detected the presence of a spurious variable, whereas only independent effects assigned overlap areas correctly once the spurious variable was removed. We therefore recommend using zero-order correlations to eliminate predictor variables with correlations near zero, followed by the use of independent effects to assign overlap areas and rank variable importance.
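Two of the six indices — zero-order correlations and standardized regression coefficients — are easy to compute with plain least squares; the sketch below uses simulated data with two correlated predictors (all numbers are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + 0.8 * rng.normal(size=n)      # predictor correlated with x1
y = 2.0 * x1 + 1.0 * x2 + rng.normal(size=n)  # x1 has the larger true effect
X = np.column_stack([x1, x2])

# Zero-order correlations: corr(x_j, y), ignoring the other predictors.
zero_order = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])

# Standardized regression coefficients: OLS on z-scored variables.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
yz = (y - y.mean()) / y.std()
beta_std, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
```

Here both indices agree that x1 matters more; the point of the simulations above is that they can disagree once spurious correlated variables enter the model.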
Elasticity Coefficients as Effects Measures in Model Formulations
When dealing with mathematical or statistical models involving one or more explanatory (independent) variables, one often wants to determine the effects of such variables on a response (dependent) variable. In the case of linear regression models, one such effects measure is the standardized regression coefficient used in various statistical software packages and discussed in regression textbooks. However, since strong reservations have been expressed against the use of standardized regression coefficients because of their various limitations, the working hypothesis behind the present research was that an alternative measure without such limitations ought to be explored. Consequently, elasticity coefficients are formulated and proposed as both relative and absolute measures of this kind. While standardized regression coefficients lack any convenient interpretation, the proposed elasticity coefficients have the particularly desirable property of a logical and intuitively appealing interpretation: the relative change in the value of the response variable as a consequence of a relative change (of 1 percent) in the value of one or more of the explanatory variables. These elasticity measures have the flexibility of being applicable to individual or to all explanatory variables and to individual or to all observations or data sets. A numerical example illustrates the use of these new measures, and a comparison between values of the standardized regression coefficients and those of the corresponding elasticity coefficients, based on reported data from various sources, is provided. The form of the elasticity coefficients for a variety of different types of models is also presented, and statistical inference is discussed.
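A minimal sketch of the proposed measure, assuming the elasticity is evaluated at the sample means of a fitted linear model (simulated data; the exact formulation in the article may differ):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.uniform(5, 15, size=n)
x2 = rng.uniform(50, 150, size=n)
y = 10 + 3.0 * x1 + 0.2 * x2 + rng.normal(scale=2.0, size=n)

# OLS fit with an intercept column.
X = np.column_stack([np.ones(n), x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Elasticity at the sample means: approximate percent change in y
# per 1 percent change in x_j, holding the other covariates fixed.
elasticity = b[1:] * X[:, 1:].mean(axis=0) / y.mean()
```

Unlike standardized coefficients, each value reads directly as "a 1% increase in x_j changes y by about elasticity[j] percent."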
Parametric modeling of quantile regression coefficient functions
Estimating the conditional quantiles of outcome variables of interest is common in many research areas, and quantile regression is foremost among the methods used. The coefficients of a quantile regression model depend on the order of the quantile being estimated; for example, the coefficients for the median are generally different from those of the 10th centile. In this article, we describe an approach to modeling the regression coefficients as parametric functions of the order of the quantile. This approach may offer advantages in terms of parsimony and efficiency, and may expand the potential of statistical modeling. Goodness-of-fit measures and testing procedures are discussed, and the results of a simulation study are presented. We apply the method to analyze the data that motivated this work. The described method is implemented in the qrcm R package.
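The core idea can be illustrated without the qrcm package itself. With a single binary covariate, the quantile-regression slope at order p is just the difference of the two group quantile functions, and that curve can then be fitted with a parametric form; here the form is linear in the standard normal quantile, which matches the simulated location-scale design (all details are illustrative assumptions):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(2)
n = 2000
x = rng.integers(0, 2, size=n)                             # binary covariate
y = 1.0 + 2.0 * x + (1.0 + 1.5 * x) * rng.normal(size=n)   # slope varies with the quantile

# With binary x, the quantile-regression coefficients at order p are
# beta0(p) = Q_{y|x=0}(p) and beta1(p) = Q_{y|x=1}(p) - Q_{y|x=0}(p).
p_grid = np.linspace(0.1, 0.9, 17)
beta1 = np.quantile(y[x == 1], p_grid) - np.quantile(y[x == 0], p_grid)

# Parametric model for the coefficient function: beta1(p) = theta0 + theta1 * z(p),
# where z(p) is the standard normal quantile (true values: theta0 = 2, theta1 = 1.5).
z = np.array([NormalDist().inv_cdf(p) for p in p_grid])
A = np.column_stack([np.ones_like(z), z])
theta, *_ = np.linalg.lstsq(A, beta1, rcond=None)
```

Two parameters now summarize the entire family of quantile regressions over p, which is the parsimony the abstract refers to.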
Parametric Modeling of Quantile Regression Coefficient Functions with Censored and Truncated Data
Quantile regression coefficient functions describe how the coefficients of a quantile regression model depend on the order of the quantile. A method for parametric modeling of quantile regression coefficient functions was discussed in a recent article. The aim of the present work is to extend the existing framework to censored and truncated data. We propose an estimator and derive its asymptotic properties. We discuss goodness-of-fit measures, present simulation results, and analyze the data that motivated this article. The described estimator has been implemented in the R package qrcm.
Effect Sizes and Statistical Methods for Meta-Analysis in Higher Education
Quantitative meta-analysis is a very useful, yet underutilized, technique for synthesizing research findings in higher education. Meta-analytic inquiry can be more challenging in higher education than in other fields of study as a result of (a) concerns about the use of regression coefficients as a metric for comparing the magnitude of effects across studies, and (b) the non-independence of observations that occurs when a single study contains multiple effect sizes. This methodological note discusses these two important issues and provides concrete suggestions for addressing them. First, meta-analysis scholars have concluded that standardized regression coefficients, which are often provided in higher education manuscripts, constitute an appropriate metric of effect size. Second, hierarchical linear modeling (HLM) analyses provide an effective method for conducting meta-analytic research while accounting for the non-independence of observations, and HLM is generally superior to other proposed methods that attempt to remedy this same problem. A discussion of how to implement these techniques appropriately is provided.
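For context, the baseline computation that HLM improves on — pooling standardized regression coefficients across independent studies with a DerSimonian-Laird random-effects model — is short; all effect sizes and variances here are hypothetical:

```python
import numpy as np

# Hypothetical standardized regression coefficients from five studies
# and their sampling variances.
effects = np.array([0.35, 0.10, 0.42, 0.05, 0.28])
variances = np.array([0.004, 0.009, 0.006, 0.012, 0.005])
k = len(effects)

# Fixed-effect pooling and the Q heterogeneity statistic.
w = 1.0 / variances
pooled_fe = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - pooled_fe) ** 2)

# DerSimonian-Laird estimate of the between-study variance tau^2.
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooled estimate: weights shrink toward equality as tau^2 grows.
w_re = 1.0 / (variances + tau2)
pooled_re = np.sum(w_re * effects) / np.sum(w_re)
```

This treats every effect size as independent; the HLM approach the note recommends replaces this flat pooling with study-level random effects so that multiple effect sizes from one study are not overcounted.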
Parametric modelling of M-quantile regression coefficient functions with application to small area estimation
Small area estimation methods can be used to obtain reliable estimates of a parameter of interest within an unplanned domain or subgroup of the population for which only a limited sample size is available. A standard approach to small area estimation is to use a linear mixed model in which the heterogeneity between areas is accounted for by area-level effects. An alternative solution, which has gained popularity in recent years, is to use M-quantile regression models. This approach requires much weaker assumptions than the standard linear mixed model and enables the computation of outlier-robust estimators of the area means. We introduce a new framework for M-quantile regression in which the model coefficients, β(τ), are described by (flexible) parametric functions of τ. We illustrate the advantages of this approach and its application to small area estimation. Using the European Union Survey on Income and Living Conditions data, we estimate the average equivalized household income in three Italian regions. The paper is accompanied by an R package, Mqrcm, that implements the necessary procedures for estimation, inference and prediction.
Thermal error modeling of the spindle based on multiple variables for the precision machine tool
Thermal error, especially that caused by the thermal expansion of the spindle in the axial direction, seriously impacts the accuracy of precision machine tools. Thermal error compensation, based on a thermal error model with high accuracy and robustness, is an effective and economical way to reduce this impact and enhance accuracy. Generally, thermal error models are built only on temperatures at selected points in the spindle system. However, the thermal error is also closely related to other working parameters. Through theoretical analysis, simulation, and experimental testing, this paper finds that thermal error is determined by multiple variables, such as the temperature, the spindle rotation speed, the historical spindle temperature, the historical thermal error, and the time lag between the present and previous times. To examine the performance of thermal error models based on multiple variables, two common methods are used for modeling: the multiple regression method and the back-propagation network. The data for modeling are collected from experiments conducted on the spindle of a precision machine tool under various working conditions. The modeling results demonstrate that models established on multiple variables have better accuracy and robustness, and that filtering the data before modeling can further improve model performance. Therefore, models based on multiple variables with good accuracy and robustness can be very useful for further thermal error compensation. In addition, a relative importance analysis of the multiple variables based on standardized regression coefficients reveals the influence of each variable on the thermal error. The ranking of coefficients can also be used as a new criterion for optimal temperature variable selection in future research.
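A sketch of the multiple-variable idea — regressing the thermal error on present and historical (lagged) variables, then ranking them by standardized regression coefficients — with simulated spindle data (the dynamics and coefficients below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 300
t = np.arange(T)
temp = 20 + 5 * np.sin(t / 30) + rng.normal(scale=0.3, size=T)  # spindle temperature
speed = 3000 + 500 * np.sin(t / 50 + 1)                          # rotation speed
err = np.zeros(T)
for k in range(1, T):  # thermal error with a strong dependence on its own history
    err[k] = 0.7 * err[k - 1] + 0.02 * temp[k] + 1e-5 * speed[k] + rng.normal(scale=0.01)

# Design matrix: present temperature and speed, historical temperature,
# and historical thermal error.
X = np.column_stack([temp[1:], speed[1:], temp[:-1], err[:-1]])
y = err[1:]

# Standardized regression coefficients for relative importance.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
yz = (y - y.mean()) / y.std()
beta_std, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
ranking = np.argsort(-np.abs(beta_std))  # most influential variable first
```

In this toy system the historical thermal error dominates the ranking, mirroring the paper's finding that models restricted to present temperatures miss important predictors.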
Deep facial recognition after medical alterations
Medical alterations to the facial region introduce skin-consistency deviations among images of the same individual, making face recognition after medical alterations more complicated than in regular circumstances. Cosmetic surgery can enable medical identity theft, which makes security a serious concern and human identification after medical alterations a critical challenge. Because prevailing techniques for human identification after surgery perform poorly, cosmetic therapies currently defeat facial recognition. Deep neural models have not previously been used to recognize surgically altered faces. The proposed approach uses a deep feed-forward neural network for this task. The innovation lies in the weight update during back-propagation, which reduces computational complexity and thus requires less training. While calculating the error gradient during the weight update, the trace of the inverse Hessian matrix is evaluated instead of the entire matrix, whose computation would be cumbersome; the trace reveals vital facial features essential for recognition. Training deep models is computationally expensive, but our scheme reduces this complexity. Rank-1 recognition rates (RR) are obtained empirically by bootstrap sampling with a 95% confidence interval on the plastic surgery facial dataset. The RR values of 97.89% and 98.24% obtained for global and local surgeries are the best reported to date in the literature. Because this dataset is unbalanced, unbiased metrics (F-score and R, the regression coefficient) are analyzed in addition to the biased metrics (RR and MSE, the mean squared error). The recognition results obtained are equivalent to those of existing deep models, which are computationally expensive and require large processing power.