Catalogue Search | MBRL
Explore the vast range of titles available.
9,729 result(s) for "Estimate reliability"
Dynamic population mapping using mobile phone data
2014
During the past few decades, technologies such as remote sensing, geographical information systems, and global positioning systems have transformed the way the distribution of human population is studied and modeled in space and time. However, the mapping of populations remains constrained by the logistics of censuses and surveys. Consequently, spatially detailed changes across scales of days, weeks, or months, or even year to year, are difficult to assess, limiting the application of human population maps in situations in which timely information is required, such as disasters, conflicts, or epidemics. Mobile phones (MPs) now have an extremely high penetration rate across the globe, and analyzing the spatiotemporal distribution of MP calls geolocated to the tower level may overcome many limitations of census-based approaches, provided that the use of MP data is properly assessed and calibrated. Using datasets of more than 1 billion MP call records from Portugal and France, we show how spatially and temporally explicit estimations of population densities can be produced at national scales, and how these estimates compare with outputs produced using alternative human population mapping methods. We also demonstrate how maps of human population changes can be produced over multiple timescales while preserving the anonymity of MP users. With similar data being collected every day by MP network providers across the world, the prospect of being able to map contemporary and changing human population distributions over relatively short intervals exists, paving the way for new applications and a near real-time understanding of patterns and processes in human geography.
Journal Article
A Simple Way to Estimate Bid-Ask Spreads from Daily High and Low Prices
2012
We develop a bid-ask spread estimator from daily high and low prices. Daily high (low) prices are almost always buy (sell) trades. Hence, the high-low ratio reflects both the stock's variance and its bid-ask spread. Although the variance component of the high-low ratio is proportional to the return interval, the spread component is not. This allows us to derive a spread estimator as a function of high-low ratios over 1-day and 2-day intervals. The estimator is easy to calculate, can be applied in a variety of research areas, and generally outperforms other low-frequency estimators.
Journal Article
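The closed form sketched in this abstract can be written down directly. Below is a minimal Python sketch, assuming the estimator takes the standard Corwin-Schultz (2012) form: beta built from squared single-day log high-low ratios, gamma from the squared log ratio over the 2-day interval. The function name and example prices are illustrative, not from the paper.

```python
import math

def high_low_spread(h1, l1, h2, l2):
    """Estimate the proportional bid-ask spread from two consecutive
    days' high (h) and low (l) prices, using the high-low ratio
    decomposition described in the abstract."""
    # beta: sum of squared single-day log high-low ratios
    beta = math.log(h1 / l1) ** 2 + math.log(h2 / l2) ** 2
    # gamma: squared log high-low ratio over the 2-day interval
    gamma = math.log(max(h1, h2) / min(l1, l2)) ** 2
    k = 3 - 2 * math.sqrt(2)
    alpha = (math.sqrt(2 * beta) - math.sqrt(beta)) / k - math.sqrt(gamma / k)
    # map alpha back to a proportional spread
    return 2 * (math.exp(alpha) - 1) / (1 + math.exp(alpha))

# Two days with identical, fully overlapping ranges: the 2-day range
# equals the 1-day range, and the estimate collapses to ~ln(h/l).
print(high_low_spread(101, 99, 101, 99))  # ~0.02, i.e. a 2% spread
```

Negative spread estimates can occur on real data and are typically truncated at zero in applied work.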
Getting serious about test–retest reliability: a critique of retest research and some recommendations
2014
Purpose: To focus attention on the need for rigorous and carefully designed test–retest reliability assessments for new patient-reported outcomes, and to encourage retest researchers to be thoughtful, ambitious, and creative in their retest efforts.
Methods: The paper outlines key challenges that confront retest researchers, calls attention to some limitations in meeting those challenges, and describes some strategies to improve retest research.
Results: Modest retest coefficients are often reported as acceptable, and many important decisions, such as the retest interval, appear not to be evidence-based. Retest assessments are seldom undertaken before a measure has been finalized, which rules out using retest data to select strong, reproducible items.
Conclusions: Strategies for improving retest research include seeking input from patients or experts regarding the stability of the construct to support decisions about the retest interval; analyzing item-level retest data to identify items to revise or discard; establishing a priori standards of acceptability for reliability coefficients; using large, heterogeneous, and representative retest samples; and collecting follow-up data to better understand consistent and inconsistent responses over time.
Journal Article
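As a concrete (hypothetical) instance of the paper's recommendation to set acceptability standards before seeing the data, here is a minimal Python sketch of a Pearson retest coefficient. Real studies often prefer the intraclass correlation; the scores and the 0.70 bar below are invented for illustration.

```python
import math
import statistics

def retest_coefficient(time1, time2):
    """Pearson correlation between scores from two administrations of
    the same measure: one common (if imperfect) retest coefficient."""
    mx, my = statistics.fmean(time1), statistics.fmean(time2)
    cov = sum((x - mx) * (y - my) for x, y in zip(time1, time2))
    sx = math.sqrt(sum((x - mx) ** 2 for x in time1))
    sy = math.sqrt(sum((y - my) ** 2 for y in time2))
    return cov / (sx * sy)

# A priori standard of acceptability, fixed in advance as the paper
# recommends; 0.70 is a commonly cited (but debatable) bar.
ACCEPTABLE = 0.70
time1 = [12, 15, 9, 20, 17, 11]   # scores at first administration
time2 = [13, 14, 10, 19, 18, 12]  # scores at retest
r = retest_coefficient(time1, time2)
print(r >= ACCEPTABLE)
```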
Missing and spurious interactions and the reconstruction of complex networks
2009
Network analysis is currently used in a myriad of contexts, from identifying potential drug targets to predicting the spread of epidemics and designing vaccination strategies and from finding friends to uncovering criminal activity. Despite the promise of the network approach, the reliability of network data is a source of great concern in all fields where complex networks are studied. Here, we present a general mathematical and computational framework to deal with the problem of data reliability in complex networks. In particular, we are able to reliably identify both missing and spurious interactions in noisy network observations. Remarkably, our approach also enables us to obtain, from those noisy observations, network reconstructions that yield estimates of the true network properties that are more accurate than those provided by the observations themselves. Our approach has the potential to guide experiments, to better characterize network data sets, and to drive new discoveries.
Journal Article
How social influence can undermine the wisdom of crowd effect
2011
Social groups can be remarkably smart and knowledgeable when their averaged judgements are compared with the judgements of individuals. Galton [Galton F (1907) Nature 75:7] already found evidence that the median estimate of a group can be more accurate than estimates of experts. This wisdom of crowd effect was recently supported by examples from stock markets, political elections, and quiz shows [Surowiecki J (2004) The Wisdom of Crowds]. In contrast, we demonstrate by experimental evidence (N = 144) that even mild social influence can undermine the wisdom of crowd effect in simple estimation tasks. In the experiment, subjects could reconsider their responses to factual questions after having received average or full information about the responses of other subjects. We compare subjects' convergence of estimates and improvements in accuracy over five consecutive estimation periods with a control condition in which no information about others' responses was provided. Although groups are initially "wise," knowledge about the estimates of others narrows the diversity of opinions to such an extent that it undermines the wisdom of crowd effect in three different ways. The "social influence effect" diminishes the diversity of the crowd without improving its collective error. The "range reduction effect" moves the position of the truth to peripheral regions of the range of estimates, so that the crowd becomes less reliable in providing expertise for external observers. The "confidence effect" boosts individuals' confidence after convergence of their estimates despite a lack of improved accuracy. Examples of the revealed mechanism range from misled elites to the recent global financial crisis.
Journal Article
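The Galton-style baseline, the median of independent estimates landing closer to the truth than the typical individual, can be shown with a toy Python example (all numbers invented):

```python
import statistics

# Independent estimates of a quantity whose true value is 100.
truth = 100
estimates = [60, 85, 90, 105, 120, 150, 70, 95, 130, 110]

median_error = abs(statistics.median(estimates) - truth)
mean_individual_error = statistics.fmean(abs(e - truth) for e in estimates)

# The group's median lands on the truth even though the average
# individual is off by more than 20 units.
print(median_error, mean_individual_error)
```

Social influence breaks exactly the independence this toy example relies on: once estimates converge, the median inherits the group's shared bias.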
Strong Oracle Optimality of Folded Concave Penalized Estimation
2014
Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions, and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains: it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this theoretical gap, open for over a decade, we provide a unified theory that shows explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges; namely, it produces the same estimator in the next iteration. The general theory is demonstrated on four classical sparse estimation problems: sparse linear regression, sparse logistic regression, sparse precision matrix estimation, and sparse quantile regression.
Journal Article
Error Analysis of Satellite Precipitation Products in Mountainous Basins
by Nikolopoulos, Efthymios I.; Borga, Marco; Anagnostou, Emmanouil N.
in Centroids; Error analysis; Estimate reliability
2014
Accurate quantitative precipitation estimation over mountainous basins is of great importance because of their susceptibility to hazards such as flash floods, shallow landslides, and debris flows, triggered by heavy precipitation events (HPEs). In situ observations over mountainous areas are limited, but currently available satellite precipitation products can potentially provide the precipitation estimation needed for hydrological applications. In this study, four widely used satellite-based precipitation products [Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42, version 7 (3B42-V7), and in near–real time (3B42-RT); Climate Prediction Center (CPC) morphing technique (CMORPH); and Precipitation Estimation from Remotely Sensed Imagery Using Artificial Neural Networks (PERSIANN)] are evaluated with respect to their performance in capturing the properties of HPEs over different basin scales. Evaluation is carried out over the upper Adige River basin (eastern Italian Alps) for an 8-yr period (2003–10). Basin-averaged rainfall derived from a dense rain gauge network in the region is used as a reference. Satellite precipitation error analysis is performed for warm (May–August) and cold (September–December) season months as well as for different quantile ranges of basin-averaged precipitation accumulations. Three error metrics and a score system are introduced to quantify the performances of the various satellite products. Overall, no single precipitation product can be considered ideal for detecting and quantifying HPEs. Results show better consistency between gauges and the two 3B42 products, particularly during warm season months that are associated with high-intensity convective events. All satellite products are shown to have a magnitude-dependent error ranging from overestimation at low precipitation regimes to underestimation at high precipitation accumulations; this effect is more pronounced in the CMORPH and PERSIANN products.
Journal Article
Estimating divergence times in large molecular phylogenies
by Filipski, Alan; Kumar, Sudhir; Tamura, Koichiro
in autocorrelation; Bayesian analysis; Bayesian theory
2012
Molecular dating of species divergences has become an important means to add a temporal dimension to the Tree of Life. Increasingly larger datasets encompassing greater taxonomic diversity are becoming available to generate molecular timetrees by using sophisticated methods that model rate variation among lineages. However, the practical application of these methods is challenging because of the exorbitant calculation times required by current methods for contemporary data sizes, the difficulty in correctly modeling the rate heterogeneity in highly diverse taxonomic groups, and the lack of reliable clock calibrations and their uncertainty distributions for most groups of species. Here, we present a method that estimates relative times of divergences for all branching points (nodes) in very large phylogenetic trees without assuming a specific model for lineage rate variation or specifying any clock calibrations. The method (RelTime) performed better than existing methods when applied to very large computer-simulated datasets where evolutionary rates were varied extensively among lineages by following autocorrelated and uncorrelated models. On average, RelTime completed calculations 1,000 times faster than the fastest Bayesian method, with an even greater speed difference for larger numbers of sequences. This speed and accuracy will enable molecular dating analysis of very large datasets. Relative time estimates will be useful for determining the relative ordering and spacing of speciation events, identifying lineages with significantly slower or faster evolutionary rates, diagnosing the effect of selected calibrations on absolute divergence times, and estimating absolute times of divergence when highly reliable calibration points are available.
Journal Article
Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling
by Prelec, Drazen; Loewenstein, George; John, Leslie K.
in Biological and medical sciences; Bleeding time; Correlations
2012
Cases of clear scientific misconduct have received significant media attention recently, but less flagrantly questionable research practices may be more prevalent and, ultimately, more damaging to the academic enterprise. Using an anonymous elicitation format supplemented by incentives for honest reporting, we surveyed over 2,000 psychologists about their involvement in questionable research practices. The impact of truth-telling incentives on self-admissions of questionable research practices was positive, and this impact was greater for practices that respondents judged to be less defensible. Combining three different estimation methods, we found that the percentage of respondents who have engaged in questionable practices was surprisingly high. This finding suggests that some questionable practices may constitute the prevailing research norm.
Journal Article
Balancing on the Creative Highwire: Forecasting the Success of Novel Ideas in Organizations
2016
Betting on the most promising new ideas is key to creativity and innovation in organizations, but predicting the success of novel ideas can be difficult. To select the best ideas, creators and managers must excel at creative forecasting, the skill of predicting the outcomes of new ideas. Using both a field study of 339 professionals in the circus arts industry and a lab experiment, I examine the conditions for accurate creative forecasting, focusing on the effect of creators' and managers' roles. In the field study, creators and managers forecasted the success of new circus acts with audiences, and the accuracy of these forecasts was assessed using data from 13,248 audience members. Results suggest that creators were more accurate than managers when forecasting about others' novel ideas, but not their own. This advantage over managers was undermined when creators previously had poor ideas that were successful in the marketplace anyway. Results from the lab experiment show that creators' advantage over managers in predicting success may be tied to the emphasis on both divergent thinking (idea generation) and convergent thinking (idea evaluation) in the creator role, while the manager role emphasizes only convergent thinking. These studies highlight that creative forecasting is a critical bridge linking creativity and innovation, shed light on the importance of roles in creative forecasting, and advance theory on why creative success is difficult to sustain over time.
Journal Article