Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
6 result(s) for "zero-truncated Poisson regression"
Model averaging and muddled multimodel inferences
2015
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty, but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales; therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effect sizes or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates, or equivalently the t statistics on unstandardized estimates, also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.
Journal Article
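As a rough illustration of the standardization step this abstract recommends, the sketch below computes partial standard deviations under the VIF-based definition usually attributed to Bring (1994); the helper name, data matrix, and finite-sample factor are assumptions of this sketch, not code from the paper.

```python
import numpy as np

def partial_sds(X):
    """Partial standard deviations of the columns of X (n observations x p
    predictors): s*_j = s_j * sqrt(1 / VIF_j) * sqrt((n - 1) / (n - p))."""
    n, p = X.shape
    sds = X.std(axis=0, ddof=1)
    vifs = np.empty(p)
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ coef
        r2 = 1.0 - resid.var() / y.var()      # R^2 of x_j on the other predictors
        vifs[j] = 1.0 / (1.0 - r2)            # variance inflation factor
    return sds * np.sqrt(1.0 / vifs) * np.sqrt((n - 1) / (n - p))

# Coefficients rescaled as beta_j * s*_j are commensurate across models,
# unlike beta_j * s_j, which is the point the abstract above emphasizes.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
X[:, 2] = X[:, 0] + 0.1 * rng.normal(size=50)  # induce multicollinearity
print(partial_sds(X))
```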
Lagrangian Zero Truncated Poisson Distribution: Properties, Regression Model and Applications
by Shibu, Damodaran; Chesneau, Christophe; Maya, Radhakumari
in Datasets, Dispersion, Likelihood ratio
2022
In this paper, we construct a new Lagrangian discrete distribution, named the Lagrangian zero truncated Poisson distribution (LZTPD). It can be presented as a generalization of the zero truncated Poisson distribution (ZTPD) and an alternative to the intervened Poisson distribution (IPD), which was elaborated for modelling both over-dispersed and under-dispersed count datasets. The mathematical aspects of the LZTPD are thoroughly investigated, and its connections to other discrete distributions are examined. Further, we define a finite mixture of LZTPDs and establish its identifiability condition along with some distributional aspects. Statistical work is then performed. The maximum likelihood and method of moments approaches are used to estimate the unknown parameters of the LZTPD. Simulation studies are also undertaken to assess the long-term performance of the estimates. The significance of the additional parameter in the LZTPD is tested using a generalized likelihood ratio test. Moreover, we propose a new count regression model named the Lagrangian zero truncated Poisson regression model (LZTPRM), whose parameters are estimated by maximum likelihood. Two real-world datasets are considered to demonstrate the LZTPD's applicability, and healthcare data are analyzed to demonstrate the LZTPRM's superiority.
Journal Article
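The abstract does not reproduce the LZTPD's probability mass function, so the sketch below only fits the baseline it generalizes, the ordinary zero-truncated Poisson, by maximum likelihood via its standard moment equation; the sample counts are made up for illustration.

```python
import numpy as np
from scipy.optimize import brentq

def ztp_mle(counts):
    """MLE of lambda for the ordinary zero-truncated Poisson:
    P(X = k) = lam**k * exp(-lam) / (k! * (1 - exp(-lam))), k >= 1.
    The likelihood equation reduces to lam / (1 - exp(-lam)) = mean(counts)."""
    xbar = np.mean(counts)
    f = lambda lam: lam / (1.0 - np.exp(-lam)) - xbar
    # The left-hand side increases from 1, so a root exists whenever xbar > 1.
    return brentq(f, 1e-8, 10.0 * xbar)

counts = [1, 1, 2, 1, 3, 2, 1, 4, 2, 1]   # made-up truncated counts
print(ztp_mle(counts))                     # estimated lambda of the parent Poisson
```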
Crash Prediction Modeling for Horizontal Curves on Two-Lane, Two-Way Rural Highways Based on Consistency and Self-Explaining Characteristics Using Zero-Truncated Data
by Naderan, Ali; Saffarzadeh, Mahmoud; Ghorbani, Mehran
in Civil Engineering, Consistency, Data collection
2023
Consistency and self-explaining characteristics play important roles in road safety performance, especially at rural highway curves. This study takes into account the effect of several contributing factors associated with consistency and self-explaining characteristics. The scope is limited to crash frequencies at horizontal curves of Two-Lane, Two-Way Rural Highways (TLTWRHs). The main contribution is to simultaneously consider traffic, geometry, consistency, and self-explaining variables, as novel parameters, for a set of 224 selected horizontal curves of TLTWRHs in Iran. Curves with at least one fatal crash during the three-year period from 2018 to 2020 were selected as the case study. The collected data were zero-truncated and under-dispersed. The modeling was carried out using the Poisson, Zero-Truncated Poisson (ZTP), and Conway-Maxwell-Poisson (COM-Poisson) regression models, and the results were analyzed. The results showed that the COM-Poisson regression model can be used effectively for under-dispersed zero-truncated crash data and demonstrated strong relationships between crash frequency and the consistency variables: the ratio of curve radius to the average radius of the adjacent curves (the alignment consistency variable) and the difference between expected and existing superelevation (the vehicle stability consistency variable). Furthermore, the findings indicated that enhancing the Field of View (FOV), one of the self-explaining characteristics of roads, is an effective low-cost approach for improving road safety on TLTWRH horizontal curves compared to the other measures. Moreover, the results confirmed that constructing self-explaining TLTWRH horizontal curves is four times more effective, in terms of crash reduction, than improving their consistency, while improving curve self-explaining characteristics is 33% more effective than improving curve superelevation.
Journal Article
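As background for the COM-Poisson choice motivated above (under-dispersed, zero-truncated crash counts), here is a minimal sketch of the zero-truncated Conway-Maxwell-Poisson pmf under the usual series normalization; the series cut-off and example parameters are assumptions of this sketch, not values from the study.

```python
import numpy as np
from scipy.special import gammaln

def ztcmp_logpmf(k, lam, nu, kmax=200):
    """Log-pmf of a zero-truncated Conway-Maxwell-Poisson distribution.
    Untruncated CMP: P(X = j) proportional to lam**j / (j!)**nu, with
    normalizing series Z(lam, nu); nu > 1 gives under-dispersion, nu = 1 the
    Poisson, nu < 1 over-dispersion.  Truncation at zero renormalizes by Z - 1."""
    j = np.arange(kmax + 1)
    log_terms = j * np.log(lam) - nu * gammaln(j + 1)
    log_z_minus_1 = np.logaddexp.reduce(log_terms[1:])  # log(Z - 1), series cut at kmax
    k = np.asarray(k, dtype=float)
    return k * np.log(lam) - nu * gammaln(k + 1) - log_z_minus_1

# Illustrative under-dispersed case (nu > 1); values are not from the study.
print(np.exp(ztcmp_logpmf(np.arange(1, 6), lam=2.0, nu=1.5)))
```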
Ratio Plot and Ratio Regression with Applications to Social and Medical Sciences
2016
We consider count data modeling, in particular, the zero-truncated case as it arises naturally in capture–recapture modeling as the marginal distribution of the count of identifications of the members of a target population. Whereas in wildlife ecology these distributions are often of a well-defined type, this is less the case for social and medical science applications since study types are often entirely observational. Hence, in these applications, violations of the assumptions underlying closed capture–recapture are more likely to occur than in carefully designed capture–recapture experiments. As a consequence, the marginal count distribution might be rather complex. The purpose of this note is to sketch some of the major ideas in the recent developments in ratio plotting and ratio regression designed to explore the pattern of the distribution underlying the capture process. Ratio plotting and ratio regression are based upon considering the ratios of neighboring probabilities which can be estimated by ratios of observed frequencies. Frequently, these ratios show patterns which can be easily modeled by a regression model. The fitted regression model is then used to predict the frequency of hidden zero counts. Particular attention is given to regression models corresponding to the negative binomial, multiplicative binomial and the Conway–Maxwell–Poisson distribution.
Journal Article
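A small sketch of the ratio-plot and ratio-regression idea summarized above, assuming a simple least-squares line on the frequency ratios and made-up capture frequencies; the authors' own regression models (negative binomial, multiplicative binomial, Conway-Maxwell-Poisson) are more elaborate than this straight-line version.

```python
import numpy as np

def ratio_regression_f0(freqs_from_one):
    """Ratio plot / ratio regression sketch for zero-truncated counts.
    freqs_from_one = [f1, f2, ...] are observed frequencies of counts 1, 2, ...
    The ratios r_k = (k + 1) * f_{k+1} / f_k are constant in k for a Poisson and
    roughly linear in k for a negative binomial, so a fitted line extrapolated
    to k = 0 predicts r_0 = f_1 / f_0 and hence the hidden frequency f_0."""
    f = np.asarray(freqs_from_one, dtype=float)
    k = np.arange(1, len(f))                 # k = 1, ..., K - 1
    r = (k + 1) * f[1:] / f[:-1]             # observable ratios
    slope, intercept = np.polyfit(k, r, deg=1)
    f0_hat = f[0] / intercept                # r_0 = f_1 / f_0  =>  f_0 = f_1 / r_0
    return r, f0_hat

# Made-up frequencies of units identified 1..5 times in a capture-recapture study.
ratios, f0 = ratio_regression_f0([120, 45, 20, 9, 4])
print(ratios, round(f0))                     # f0 estimates the number of hidden units
```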
Determinants of intensity of utilization of Baobab products in Kenya
by Kiprotich, Collins; Kavoi, M. Muendo; Mithöfer, Dagmar
in Adansonia digitata, affordability, Agricultural economics
2019
The baobab tree is central to the livelihoods of the majority of rural communities living in marginal areas of Kenya in the wake of climate change, low agricultural productivity, and falling food security. This study examined factors influencing the intensity of utilization of baobab products in Kenya. Data on socio-economic and demographic characteristics and attitudes toward baobab pulp were collected from 353 consumers in rural and urban markets. Descriptive statistics were used to describe consumer characteristics. Zero-truncated Poisson regression was used to analyze factors influencing the intensity of utilization of baobab products. Exploratory factor analysis was used to assess the attitudes of consumers toward baobab pulp. The model results revealed that education level (p < 0.01) and household size (p < 0.01) negatively influenced the intensity of utilization, while years of product usage (p < 0.01) and awareness level (p < 0.01) had a positive influence. Exploratory factor analysis generated four factors that explained 61.16% of the total variance. The "Availability, affordability, and income value" factor had the highest factor loading in the analysis, while the "Trust and nutritive value" factor had the second highest loading. The study findings recommend strategies that could promote baobab utilization. These include ensuring that baobab products are available, accessible, and affordable. Likewise, sustained product packaging, certification, and labeling are essential. Other promotional approaches include community nutritional training and information dissemination through both formal and informal education.
Journal Article
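A generic maximum-likelihood sketch of the zero-truncated Poisson regression used in this study; the covariates are simulated stand-ins and the helper name is hypothetical, so nothing here reflects the actual survey data or code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def fit_ztp_regression(X, y):
    """Zero-truncated Poisson regression by maximum likelihood (generic sketch).
    Model: y_i >= 1, log(lambda_i) = x_i' beta, and
    P(y_i = k) = lambda_i**k * exp(-lambda_i) / (k! * (1 - exp(-lambda_i)))."""
    X = np.column_stack([np.ones(len(y)), X])          # prepend an intercept
    y = np.asarray(y, dtype=float)

    def negloglik(beta):
        lam = np.exp(X @ beta)
        return -np.sum(y * np.log(lam) - lam - gammaln(y + 1)
                       - np.log1p(-np.exp(-lam)))

    res = minimize(negloglik, x0=np.zeros(X.shape[1]), method="BFGS")
    return res.x                                       # [intercept, slopes...]

# Simulated stand-in data: strictly positive counts and two covariates.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 1 + rng.poisson(np.exp(0.4 + 0.3 * X[:, 0]))       # crude positive counts, not a true ZTP sample
print(fit_ztp_regression(X, y))
```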
Regression modeling of one-inflated positive count data
by Hassanzadeh, Fatemeh; Kazemi, Iraj
in Computer simulation, Economic models, Economic Theory/Quantitative Economics/Mathematical Methods
2017
This paper extends regression modeling of positive count data to deal with an excessive proportion of one counts. In particular, we propose one-inflated positive (OIP) regression models and present some of their properties. Stochastic hierarchical representations of the one-inflated positive Poisson and negative binomial regression models are also obtained. It is illustrated that the standard OIP model may be inadequate in the presence of one-inflation and a lack of independence. Thus, to take into account the inherent correlation of responses, a class of two-level OIP regression models with subject heterogeneity effects is introduced. A simulation study is conducted to highlight theoretical aspects. Results show that when one-inflation or over-dispersion in the data-generating process is ignored, parameter estimates are inefficient and statistically reliable findings are missed. Finally, we analyze a real data set taken from a length-of-hospital-stay study to illustrate the usefulness of our proposed models.
Journal Article
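For orientation, a sketch of one common way to write a one-inflated positive Poisson pmf (a structural point mass at one mixed with a zero-truncated Poisson); the exact parameterization in the paper may differ, and all parameter values below are illustrative.

```python
import numpy as np
from scipy.special import gammaln

def oipp_logpmf(y, lam, omega):
    """One-inflated positive Poisson log-pmf under one common formulation:
    with probability omega a count is a structural one; otherwise it comes from
    a zero-truncated Poisson(lam).  Parameterizations in the literature vary."""
    y = np.asarray(y, dtype=float)
    log_ztp = (y * np.log(lam) - lam - gammaln(y + 1)
               - np.log1p(-np.exp(-lam)))              # zero-truncated Poisson log-pmf
    pmf = (1.0 - omega) * np.exp(log_ztp)
    pmf = np.where(y == 1, omega + pmf, pmf)           # extra mass on the one count
    return np.log(pmf)

# Illustrative values only: the inflation parameter omega shifts mass onto y = 1.
print(np.exp(oipp_logpmf(np.arange(1, 6), lam=2.5, omega=0.3)))
```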