1,575 result(s) for "False positive results"
Healthcare professionals' experiences of caring for women with false‐positive screening test results in the National Health Service Breast Screening Programme
Background Understanding healthcare professionals' (HCPs) experiences of caring for women with false‐positive screening test results in the National Health Service Breast Screening Programme (NHSBSP) is important for reducing the impact of such results. Methods Interviews were undertaken with 12 HCPs from a single NHSBSP unit, including advanced radiographer practitioners, breast radiographers, breast radiologists, clinical nurse specialists (CNSs), and a radiology healthcare assistant. Data were analysed thematically using Template Analysis. Results Two themes were produced: (1) Gauging and navigating women's anxiety during screening assessment was an inevitable and necessary task for all participants. CNSs were perceived as particularly adept at this, while breast radiographers reported a lack of adequate formal training. (2) Controlling the delivery of information to women (including amount, type and timing of information). HCPs reported various communication strategies to facilitate women's information processing and retention during a distressing time. Conclusions Women's anxiety could be reduced through dedicated CNS support, but this should not replace support from other HCPs. Breast radiographers may benefit from more training to emotionally support recalled women. While HCPs emphasised taking a patient‐centred communication approach, the use of other strategies (e.g., standardised scripts) and the constraints of the ‘one‐stop shop’ model pose challenges to such an approach. Patient and Public Contribution During the study design, two Patient and Public Involvement members (women with false‐positive‐breast screening test results) were consulted to gain an understanding of patient perspectives and experiences of being recalled specifically in the NHSBSP. Their feedback informed the formulations of the research aim, objectives and the direction of the interview guide.
Digestive enzymes of fungal origin as a relevant cause of false positive Aspergillus antigen testing in intensive care unit patients
Background Galactomannan antigen (GM) testing is widely used in the diagnosis of invasive aspergillosis (IA). Digestive enzymes play an important role in enzyme substitution therapy in exocrine pancreatic insufficiency. As digestive enzymes of fungal origin like Nortase contain enzymes from Aspergillus, a false-positive result of the test might be possible because of cross-reacting antigens of the cell wall of the producing fungi. We, therefore, asked whether the administration of fungal enzymes is a relevant cause of false-positive GM antigen test results. Methods Patients with a positive GM antigen test between January 2016 and April 2020 were included in the evaluation and divided into two groups: group 1 (Nortase therapy) and group 2 (no Nortase therapy). In addition, dissolved Nortase samples were analyzed in vitro for GM and β-1,3-D-glucan. For statistical analysis, the chi-squared and Mann‒Whitney U tests were used. Results Sixty-five patients were included in this evaluation (30 patients receiving Nortase and 35 patients not receiving Nortase). The overall false positivity rate of GM testing was 43.1%. Notably, false-positive results were detected significantly more often in the Nortase group (73.3%) than in the control group (17.1%, p < 0.001). While the positive predictive value of GM testing was 0.83 in the control group, there was a dramatic decline to 0.27 in the Nortase group. In vitro analysis proved that the Nortase enzyme preparation was highly positive for the fungal antigens GM and β-1,3-D-glucan. Conclusions Our data demonstrate that the administration of digestive enzymes of fungal origin like Nortase leads to a significantly higher rate of false-positive GM test results compared to that in patients without digestive enzyme treatment.
The False Positive Risk: A Proposal Concerning What to Do About p-Values
It is widely acknowledged that the biomedical literature suffers from a surfeit of false positive results. Part of the reason for this is the persistence of the myth that observation of p < 0.05 is sufficient justification to claim that you have made a discovery. It is hopeless to expect users to change their reliance on p-values unless they are offered an alternative way of judging the reliability of their conclusions. If the alternative method is to have a chance of being adopted widely, it will have to be easy to understand and to calculate. One such proposal is based on calculation of the false positive risk (FPR). It is suggested that p-values and confidence intervals should continue to be given, but that they should be supplemented by a single additional number that conveys the strength of the evidence better than the p-value. This number could be the minimum FPR (that calculated on the assumption of a prior probability of 0.5, the largest value that can be assumed in the absence of hard prior data). Alternatively, one could specify the prior probability that it would be necessary to believe in order to achieve an FPR of, say, 0.05.
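The FPR arithmetic described in the abstract can be sketched as follows. This is a minimal illustration under the simpler "p-less-than" interpretation with an assumed power of 0.8; the paper's preferred "p-equals" calculation yields larger values.

```python
def false_positive_risk(p, power, prior):
    """False positive risk: among tests significant at level p, the
    fraction that arise from true-null hypotheses ('p-less-than' form)."""
    true_pos = power * prior        # significant AND a real effect exists
    false_pos = p * (1 - prior)     # significant AND the null is true
    return false_pos / (false_pos + true_pos)

# Minimum FPR: assume the most optimistic defensible prior, 0.5
min_fpr = false_positive_risk(p=0.05, power=0.8, prior=0.5)
print(round(min_fpr, 3))  # 0.059

# With a more sceptical prior the risk grows sharply
print(round(false_positive_risk(p=0.05, power=0.8, prior=0.1), 2))  # 0.36
```

Note how p = 0.05 never corresponds to a 5% risk of a false positive: even at the most favourable prior the risk is higher, and it climbs steeply as the hypothesis becomes less plausible a priori.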
Multicollinearity: How common factors cause Type 1 errors in multivariate regression
Research summary: In multivariate regression analyses of correlated variables, we sometimes observe pairs of estimated beta coefficients large in absolute magnitude and opposite in sign. T-statistics are also large, suggesting meaningful findings. I found 64 recently published Strategic Management Journal articles with results exhibiting these characteristics. In this article, I demonstrate that such results may be Type 1 errors (false positives): If regressors are correlated via an unobservable common factor, estimated beta coefficients will misleadingly tend toward infinite magnitudes in opposite directions, even if the variables' real effects are small and of the same sign. Diagnostics such as Variance Inflation Factors (VIF) will misleadingly validate Type 1 errors as legitimate results. After establishing general results via mathematical analysis and simulation, I provide guidelines for detection and mitigation. Managerial summary: This article demonstrates mathematically how regression analyses with correlated independent variables may generate beta coefficients of opposite sign to the variables' true effects. To assess the likelihood of this possibility, I propose that: if (a) absolute correlation of two independent variables is about ±0.3 or more (smaller correlations may be problematic for large data sets), (b) the two variables have beta coefficients of opposite sign, if correlated positively, and of the same sign, if correlated negatively, and (c) the bivariate correlation of one independent variable with the dependent variable is of the opposite sign from the beta coefficient, then the beta might be a false positive. To facilitate such analysis, authors should provide complete correlation tables, including dependent variables, interaction terms, and quadratic terms.
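The mechanism described in the research summary can be reproduced with a small simulation, under assumed parameters: an unobserved common factor drives both regressors, whose true effects are small (0.1) and of the same sign.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 500
betas = []
for _ in range(reps):
    f = rng.normal(size=n)              # unobservable common factor
    x1 = f + 0.3 * rng.normal(size=n)   # regressors correlated via f
    x2 = f + 0.3 * rng.normal(size=n)
    y = 0.1 * x1 + 0.1 * x2 + rng.normal(size=n)  # small, same-sign effects
    X = np.column_stack([np.ones(n), x1, x2])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)     # OLS fit
    betas.append(b[1:])
betas = np.array(betas)

r = np.corrcoef(x1, x2)[0, 1]                      # regressor correlation
opp = np.mean(betas[:, 0] * betas[:, 1] < 0)       # opposite-sign estimates
print(f"corr(x1,x2)={r:.2f}, opposite-sign pairs in {opp:.0%} of fits")
```

With the correlation this high, a large share of fits yields coefficient pairs of opposite sign and inflated magnitude, even though both true effects are +0.1, which is exactly the pattern the article warns may be read as a meaningful finding.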
False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant
In this article, we accomplish two things. First, we show that despite empirical psychologists' nominal endorsement of a low rate of false-positive findings (< .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.
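The inflation mechanism can be sketched with a small Monte Carlo, assuming just one researcher degree of freedom (a second, undisclosed outcome variable) and a normal-approximation test; the paper's own simulations combine several such degrees of freedom and find even larger inflation.

```python
import numpy as np

rng = np.random.default_rng(1)

def significant(a, b):
    """Two-sample z-test (normal approximation): True if 'significant' at ~.05."""
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return abs(a.mean() - b.mean()) / se > 1.96

n, sims = 30, 4000
naive_hits = flexible_hits = 0
for _ in range(sims):
    # The null is true: both groups come from the same distribution,
    # but the researcher has measured two independent outcome variables.
    g1a, g2a = rng.normal(size=n), rng.normal(size=n)   # outcome A
    g1b, g2b = rng.normal(size=n), rng.normal(size=n)   # outcome B
    naive_hits += significant(g1a, g2a)                 # one pre-registered test
    flexible_hits += significant(g1a, g2a) or significant(g1b, g2b)

print(f"pre-registered: {naive_hits / sims:.3f}, "
      f"flexible reporting: {flexible_hits / sims:.3f}")
```

One undisclosed outcome variable alone roughly doubles the false-positive rate; combining flexible sample size, covariates, and condition-dropping pushes it far higher, as the article's simulations show.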
Comprehensive investigation of sources of misclassification errors in routine HIV testing in Zimbabwe
Introduction Misclassification errors have been reported in rapid diagnostic HIV tests (RDTs) in sub‐Saharan African countries. These errors can lead to missed opportunities for prevention‐of‐mother‐to‐child‐transmission (PMTCT), early infant diagnosis and adult HIV‐prevention, unnecessary lifelong antiretroviral treatment (ART) and wasted resources. Few national estimates or systematic quantifications of sources of errors have been produced. We conducted a comprehensive assessment of possible sources of misclassification errors in routine HIV testing in Zimbabwe. Methods RDT‐based HIV test results were extracted from routine PMTCT programme records at 62 sites during national antenatal HIV surveillance in 2017. Positive‐ (PPA) and negative‐percent agreement (NPA) for HIV RDT results and the false‐HIV‐positivity rate for people with previous HIV‐positive results (“known‐positives”) were calculated using results from external quality assurance testing done for HIV surveillance purposes. Data on indicators of quality management systems, RDT kit performance under local climatic conditions and user/clerical errors were collected using HIV surveillance forms, data‐loggers and a Smartphone camera application (7 sites). Proportions of cases with errors were compared for tests done in the presence/absence of potential sources of errors. Results NPA was 99.9% for both pregnant women (N = 17224) and male partners (N = 2173). PPA was 90.0% (N = 1187) and 93.4% (N = 136) for women and men respectively. 3.5% (N = 1921) of known‐positive individuals on ART were HIV negative. Humidity and temperature exceeding manufacturers’ recommendations, particularly in storerooms (88.6% and 97.3% respectively), and premature readings of RDT output (56.0%) were common. False‐HIV‐negative cases, including interpretation errors, occurred despite staff training and good algorithm compliance, and were not reduced by existing external or internal quality assurance procedures. PPA was lower when testing room humidity exceeded 60% (88.0% vs. 93.3%; p = 0.007). Conclusions False‐HIV‐negative results were still common in Zimbabwe in 2017 and could be reduced with HIV testing algorithms that use RDTs with higher sensitivity under real‐world conditions and greater practicality under busy clinic conditions, and by strengthening proficiency testing procedures in external quality assurance systems. New false‐HIV‐positive RDT results were infrequent but earlier errors in testing may have resulted in large numbers of uninfected individuals being on ART.
Lessons From Pinocchio
Deception researchers widely acknowledge that cues to deception—observable behaviors that may differ between truthful and deceptive messages—tend to be weak. Nevertheless, several deception cues have been reported with unusually large effect sizes, and some researchers have advocated the use of such cues as tools for detecting deceit and assessing credibility in practical contexts. By examining data from empirical deception-cue research and using a series of Monte Carlo simulations, I demonstrate that many estimated effect sizes of deception cues may be greatly inflated by publication bias, small numbers of estimates, and low power. Indeed, simulations indicate the informational value of the present deception literature is quite low, such that it is not possible to determine whether any given effect is real or a false positive. I warn against the hazards of relying on potentially illusory cues to deception and offer some recommendations for improving the state of the science of deception.
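The inflation argument can be sketched with a hedged simulation: assume a small true effect, small samples, and a filter that only "publishes" significant results. All numbers below are illustrative, not estimates from the deception literature.

```python
import numpy as np

rng = np.random.default_rng(2)

true_d = 0.1           # small true effect (Cohen's d), typical of weak cues
n, studies = 20, 2000  # small samples, many attempted studies

published = []
for _ in range(studies):
    liars = rng.normal(true_d, 1, n)   # e.g. scores of deceptive senders
    truth = rng.normal(0.0, 1, n)      # scores of truthful senders
    sp = np.sqrt((liars.var(ddof=1) + truth.var(ddof=1)) / 2)
    d = (liars.mean() - truth.mean()) / sp   # observed effect size
    t = d * np.sqrt(n / 2)                   # two-sample t statistic
    if abs(t) > 2.02:                        # ~p < .05 for df = 38
        published.append(d)                  # only significant results appear

print(f"true d = {true_d}, "
      f"mean |published d| = {np.mean(np.abs(published)):.2f} "
      f"({len(published)}/{studies} studies published)")
```

Because a study of this size can only reach significance when the observed effect is several times the true one, the published literature reports effects far larger than d = 0.1, which is the inflation the article demonstrates for deception cues.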
A Scientific Approach to Entrepreneurial Decision Making: Evidence from a Randomized Control Trial
A classical approach to collecting and elaborating information to make entrepreneurial decisions combines search heuristics, such as trial and error, effectuation, and confirmatory search. This paper develops a framework for exploring the implications of a more scientific approach to entrepreneurial decision making. The panel sample of our randomized control trial includes 116 Italian startups and 16 data points over a period of about one year. Both the treatment and control groups receive 10 sessions of general training on how to obtain feedback from the market and gauge the feasibility of their idea. We teach the treated startups to develop frameworks for predicting the performance of their idea and conduct rigorous tests of their hypotheses, very much as scientists do in their research. We let the firms in the control group instead follow their intuitions about how to assess their idea, which has typically produced fairly standard search heuristics. We find that entrepreneurs who behave like scientists perform better, are more likely to pivot to a different idea, and are not more likely to drop out than the control group in the early stages of the startup. These results are consistent with the main prediction of our theory: a scientific approach improves precision—it reduces the odds of pursuing projects with false positive returns and increases the odds of pursuing projects with false negative returns. This paper was accepted by Marie Thursby, entrepreneurship and innovation.
Improving Ecological Inference by Predicting Individual Ethnicity from Voter Registration Records
In both political behavior research and voting rights litigation, turnout and vote choice for different racial groups are often inferred using aggregate election results and racial composition. Over the past several decades, many statistical methods have been proposed to address this ecological inference problem. We propose an alternative method to reduce aggregation bias by predicting individual-level ethnicity from voter registration records. Building on the existing methodological literature, we use Bayes's rule to combine the Census Bureau's Surname List with various information from geocoded voter registration records. We evaluate the performance of the proposed methodology using approximately nine million voter registration records from Florida, where self-reported ethnicity is available. We find that it is possible to reduce the false positive rate among Black and Latino voters to 6% and 3%, respectively, while maintaining the true positive rate above 80%. Moreover, we use our predictions to estimate turnout by race and find that our estimates yield substantially less bias and root mean squared error than standard ecological inference estimates. We provide open-source software to implement the proposed methodology.
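The Bayes's-rule combination can be illustrated with hypothetical numbers. The categories and probabilities below are invented for illustration (the actual method uses the Census Bureau's Surname List and geocoded registration records), and surname and location are assumed conditionally independent given race.

```python
# P(race | surname) -- the kind of figure the Census Surname List provides
p_race_given_surname = {"white": 0.05, "black": 0.90, "other": 0.05}

# P(neighborhood | race) -- hypothetical, from geocoded block-level data
p_geo_given_race = {"white": 0.002, "black": 0.0005, "other": 0.001}

# Bayes's rule: posterior over race, given surname AND neighborhood
unnorm = {r: p_race_given_surname[r] * p_geo_given_race[r]
          for r in p_race_given_surname}
total = sum(unnorm.values())
posterior = {r: v / total for r, v in unnorm.items()}
print(posterior)  # black: 0.75, white: ~0.17, other: ~0.08
```

Even with these made-up numbers the point is visible: the neighborhood information shifts the surname-only probability (0.90) substantially, and combining both sources is what lets the method trade false positives against true positives.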
The impact of false positive COVID-19 results in an area of low prevalence
False negative results in COVID-19 testing are well recognised and frequently discussed. False positive results, while less common and less frequently discussed, still have several adverse implications, including potential exposure of a non-infected person to the virus in a cohorted area. Although false positive results are proportionally greater in low prevalence settings, the consequences are significant at all times and potentially of greater significance in high-prevalence settings. We evaluated COVID-19 results in one area during a period of low prevalence. The consequences of these results are discussed and implications for these results in both high and low prevalence settings are considered. We also provide recommendations to minimise the risk and impact of false-positive results.
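The prevalence effect the abstract turns on follows directly from the positive predictive value formula. A short sketch, assuming an illustrative 90% sensitivity and 99% specificity (not the characteristics of any assay in the study):

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value: P(truly infected | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for prev in (0.001, 0.01, 0.20):
    print(f"prevalence {prev:.1%}: PPV = {ppv(prev, 0.90, 0.99):.1%}")
# prevalence 0.1%:  PPV ≈ 8.3%  -- most positives are false
# prevalence 1.0%:  PPV ≈ 47.6%
# prevalence 20.0%: PPV ≈ 95.7%
```

At 0.1% prevalence, over nine in ten positive results from this hypothetical test are false, even with 99% specificity, which is why the proportion of false positives is greatest precisely in low-prevalence settings.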