Search Results

16,535 results for "Data coding"
Quality of Cancer-Related Clinical Coding in Primary Care in North Central London: Mixed Methods Quality Improvement Project
The North Central London (NCL) Cancer Alliance carried out a quality improvement (QI) project to fill a distinct knowledge gap regarding the quality of clinically coded data in a primary care electronic health care record system across the whole cancer pathway. This study aims to establish the quality of cancer-related clinical coding in NCL primary care, encompassing both quantitative measures (eg, coding completeness and diversity) and qualitative dimensions such as clinical relevance and workflow alignment. This was a mixed methods QI project in which we combined an observational dataset review and qualitative data from stakeholder interviews, workshops, and discussions. In the dataset review, we evaluated completeness, diversity, validation, and granularity in cancer clinical coding along the patient cancer pathway, which was split into three domains: (1) patient characteristics and risk factors, (2) cancer screening attendance, and (3) living with cancer. It was conducted in NCL primary care electronic health record systems, covering a population of over 1.4 million adults across 5 boroughs. Cancer-related clinical coding in NCL primary care revealed significant gaps despite high completeness for ethnicity (912,679/1,055,083, 86.5%) and language (898,023/1,307,601, 68.7%). Employment status (29,848/1,229,644, 2.4%) and family history of cancer (183,424/1,236,580, 14.8%) were underrecorded, with wide variation in coding practices. Screening data showed good alignment with national datasets for cervical and bowel screening but fragmented and inconsistent breast screening data due to a lack of standardized codes. Cancer diagnosis coding was incomplete (4604/5260, 87.5% recorded), and treatment and staging data were almost entirely absent, limiting proactive management of long-term consequences. Stakeholder input highlighted inconsistent template use, limited data updates, and insufficient incentives as key barriers to better coding. 
The QI project has provided a detailed insight into the many dimensions of cancer coding and sheds light on many factors that underpin variation and coding preference. We offer a number of recommendations. The prioritized ones include the need for a cancer clinical coding data framework for primary care supported by appropriate funding and incentivization; improvements in the breast screening pathway and its interface with primary care; improvements in the quality of secondary care information that is sent to primary care; and dissemination of the importance of coding of cancer activity in primary care.
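The completeness figures quoted in the abstract are simple recorded-over-eligible percentages. A minimal sketch in plain Python (the function name is ours; the counts are copied from the abstract):

```python
# Completeness as used in the dataset review: the share of the eligible
# population with a recorded code. Counts below are quoted from the abstract.

def completeness_pct(recorded, eligible):
    """Percentage of the eligible population with the code recorded."""
    return 100.0 * recorded / eligible

ethnicity_pct = completeness_pct(912_679, 1_055_083)    # quoted as 86.5%
employment_pct = completeness_pct(29_848, 1_229_644)    # quoted as 2.4%
```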
Standardizing Data Collection in Traumatic Brain Injury
Collaboration among investigators, centers, countries, and disciplines is essential to advancing the care for traumatic brain injury (TBI). It is thus important that we “speak the same language.” Great variability, however, exists in data collection and coding of variables in TBI studies, confounding comparisons between and analysis across different studies. Randomized controlled trials can never address the many uncertainties concerning treatment approaches in TBI. Pooling data from different clinical studies and high-quality observational studies combined with comparative effectiveness research may provide excellent alternatives in a cost-efficient way. Standardization of data collection and coding is essential to this end. Common data elements (CDEs) are presented for demographics and clinical variables applicable across the broad spectrum of TBI. Most recommendations represent a consensus derived from clinical practice. Some recommendations concern novel approaches, for example, assessment of the intensity of therapy in severely injured patients. Up to three levels of detail for coding data elements were developed: basic, intermediate, and advanced, with the greatest level of detail attained in the advanced version. More detailed codings can be collapsed into the basic version. Templates were produced to summarize coding formats, explanation of choices, and recommendations for procedures. Endorsement of the recommendations has been obtained from many authoritative organizations. The development of CDEs for TBI should be viewed as a continuing process; as more experience is gained, refinement and amendments will be required. This proposed process of standardization will facilitate comparative effectiveness research and encourage high-quality meta-analysis of individual patient data.
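The basic/intermediate/advanced idea, where a detailed coding collapses deterministically into its basic counterpart, can be sketched as a lookup table. The mapping below is invented for illustration and is NOT the actual TBI CDE scheme:

```python
# Hypothetical illustration of collapsing an advanced (detailed) code into
# its basic parent element. These codes are invented, not real TBI CDEs.

ADVANCED_TO_BASIC = {
    "pupil.left.reactive.brisk": "pupil.reactive",
    "pupil.left.reactive.sluggish": "pupil.reactive",
    "pupil.left.nonreactive": "pupil.nonreactive",
}

def collapse_to_basic(advanced_code):
    """Map a detailed coding onto the basic version of the element."""
    return ADVANCED_TO_BASIC[advanced_code]
```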
Fuzzy Clustering and Kernel PCA-Based High-Dimensional Imbalanced Data Integration with Octree Encoding
Due to the high-dimensional and unbalanced characteristics of national economic accounting data, the data contain a large amount of redundant information, which leads to problems such as boundary shift and integration overfitting when the data are integrated, and increases the difficulty of subsequent data integration. For this reason, a fuzzy clustering-based method for integrating high-dimensional unbalanced national accounts data is proposed. Kernel principal component analysis is first used to reduce the dimensionality of the data, lowering its complexity and sparsity while preserving as much of the original information as possible. A fuzzy clustering algorithm is then used to cluster the data: fuzzy clustering allows each data point to belong to multiple clusters simultaneously, with a membership degree expressing the strength of its association with each cluster. Deviation maximization is introduced to optimize the fuzzy clustering, ensuring that the distance between cluster centers is as large as possible while the distance between data points within the same cluster is as small as possible. Based on context-free grammar rules and conversion functions, the national economic accounting data are converted into hesitant fuzzy linguistic data and the optimal attribute weight vector is obtained. The distances between different categories and the minimum distance are calculated, and the objective function is used to detect repulsion between unknown and known classes. Lagrange multipliers are used to solve the objective function and obtain the optimal cluster centers, from which the clustering of the national economic accounting data is completed and the different categories of accounting data are obtained. 
According to the experimental results, the data integration imbalance of the proposed method ranges from 1.68% to 32.85%, and the total number of samples fluctuates between 139 and 5136. All three indicators of the integrated data exceed 0.88. Coding of real cases verifies the method's ability to encode high-dimensional imbalanced national economic accounting data.
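The membership-based clustering step this abstract describes is, at its core, the standard fuzzy c-means update. A minimal pure-Python sketch on synthetic 1-D data (initial centers, fuzzifier m, and iteration count are arbitrary choices here; the paper's deviation-maximization and hesitant-fuzzy extensions are not reproduced):

```python
# Standard fuzzy c-means sketch: each point belongs to every cluster with a
# membership weight that sums to 1 across clusters. Data are synthetic.

def fuzzy_cmeans(points, centers, m=2.0, iters=50):
    for _ in range(iters):
        # Membership update: u[i][k] = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        u = []
        for x in points:
            dists = [abs(x - c) + 1e-12 for c in centers]  # avoid div-by-zero
            u.append([
                1.0 / sum((dk / dj) ** (2.0 / (m - 1.0)) for dj in dists)
                for dk in dists
            ])
        # Center update: weighted mean with weights u^m
        centers = [
            sum((u[i][k] ** m) * points[i] for i in range(len(points)))
            / sum(u[i][k] ** m for i in range(len(points)))
            for k in range(len(centers))
        ]
    return centers, u

pts = [1.0, 1.2, 0.9, 8.0, 8.2, 7.9]          # two obvious groups
centers, memberships = fuzzy_cmeans(pts, centers=[0.0, 10.0])
```

In a full pipeline matching the abstract, a kernel PCA projection of the high-dimensional data would feed into `pts` instead of raw values.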
Rhetorical code studies : discovering arguments in and around code
"In Rhetorical Code Studies, Kevin Brock explores how software code serves as a means of meaningful communication through which amateur and professional software developers construct arguments--arguments that are not only made up of logical procedures but also of implicit and explicit claims about how a given program works (or should work). These claims appear as procedures and as conventional discourse in the form of code comments and in email messages, forum posts, and other venues for conversation with other developers. To investigate the rhetorical qualities of code, Brock extends ongoing conversations in rhetoric and composition on software by turning to a number of case examples ranging from large, well-known projects like Mozilla Firefox to small-scale programs like the "FizzBuzz" test common in many programming job interviews. These examples, which involve specific examination of code texts as well as the contexts surrounding their composition, demonstrate the variety and depth of rhetorical activity taking place in and around code, from individual differences in style to changes in large-scale community norms"-- Provided by publisher.
Coding Algorithms for Defining Comorbidities in ICD-9-CM and ICD-10 Administrative Data
Objectives: Implementation of the International Statistical Classification of Disease and Related Health Problems, 10th Revision (ICD-10) coding system presents challenges for using administrative data. Recognizing this, we conducted a multistep process to develop ICD-10 coding algorithms to define Charlson and Elixhauser comorbidities in administrative data and assess the performance of the resulting algorithms. Methods: ICD-10 coding algorithms were developed by "translation" of the ICD-9-CM codes constituting Deyo's (for Charlson comorbidities) and Elixhauser's coding algorithms and by physicians' assessment of the face-validity of selected ICD-10 codes. The process of carefully developing ICD-10 algorithms also produced modified and enhanced ICD-9-CM coding algorithms for the Charlson and Elixhauser comorbidities. We then used data on in-patients aged 18 years and older in ICD-9-CM and ICD-10 administrative hospital discharge data from a Canadian health region to assess the comorbidity frequencies and mortality prediction achieved by the original ICD-9-CM algorithms, the enhanced ICD-9-CM algorithms, and the new ICD-10 coding algorithms. Results: Among 56,585 patients in the ICD-9-CM data and 58,805 patients in the ICD-10 data, frequencies of the 17 Charlson comorbidities and the 30 Elixhauser comorbidities remained generally similar across algorithms. The new ICD-10 and enhanced ICD-9-CM coding algorithms either matched or outperformed the original Deyo and Elixhauser ICD-9-CM coding algorithms in predicting in-hospital mortality. For the Charlson comorbidities, the C-statistic was 0.842 for Deyo's original ICD-9-CM coding algorithm, 0.860 for the ICD-10 coding algorithm, and 0.859 for the enhanced ICD-9-CM coding algorithm; for the Elixhauser comorbidities, it was 0.868 for the original ICD-9-CM coding algorithm, 0.870 for the ICD-10 coding algorithm, and 0.878 for the enhanced ICD-9-CM coding algorithm. 
Conclusions: These newly developed ICD-10 and ICD-9-CM comorbidity coding algorithms produce similar estimates of comorbidity prevalence in administrative data, and may outperform existing ICD-9-CM coding algorithms.
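Comorbidity coding algorithms of this kind are essentially prefix matches from diagnosis codes to comorbidity flags. A Python sketch, using a tiny illustrative subset of ICD-10 prefixes rather than the full published algorithm:

```python
# Illustrative prefix-based comorbidity flagging from ICD-10 diagnosis
# codes, in the spirit of the algorithms described above. The prefix lists
# are a small invented subset, NOT the complete published code lists.

COMORBIDITY_ICD10_PREFIXES = {
    "congestive_heart_failure": ["I50"],
    "renal_disease": ["N18", "N19"],
    "diabetes_uncomplicated": ["E109", "E119"],  # subset only
}

def flag_comorbidities(icd10_codes):
    """Return the comorbidity categories matched by any listed code."""
    found = set()
    for code in icd10_codes:
        normalized = code.replace(".", "").upper()
        for name, prefixes in COMORBIDITY_ICD10_PREFIXES.items():
            if any(normalized.startswith(p) for p in prefixes):
                found.add(name)
    return found
```

Binary flags produced this way are then summed (with weights, for the Charlson index) to predict outcomes such as in-hospital mortality.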
Coder Reliability and Misclassification in the Human Coding of Party Manifestos
The Comparative Manifesto Project (CMP) provides the only time series of estimated party policy positions in political science and has been extensively used in a wide variety of applications. Recent work (e.g., Benoit, Laver, and Mikhaylov 2009; Klingemann et al. 2006) focuses on nonsystematic sources of error in these estimates that arise from the text generation process. Our concern here, by contrast, is with error that arises during the text coding process since nearly all manifestos are coded only once by a single coder. First, we discuss reliability and misclassification in the context of hand-coded content analysis methods. Second, we report results of a coding experiment that used trained human coders to code sample manifestos provided by the CMP, allowing us to estimate the reliability of both coders and coding categories. Third, we compare our test codings to the published CMP “gold standard” codings of the test documents to assess accuracy and produce empirical estimates of a misclassification matrix for each coding category. Finally, we demonstrate the effect of coding misclassification on the CMP's most widely used index, its left-right scale. Our findings indicate that misclassification is a serious and systemic problem with the current CMP data set and coding process, suggesting the CMP scheme should be significantly simplified to address reliability issues.
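The reliability quantities in such a coding experiment, a misclassification (confusion) matrix against a gold-standard coding and chance-corrected agreement (Cohen's kappa), can be computed directly from paired codings. A minimal sketch; the category labels are hypothetical and the CMP's actual scheme is not reproduced:

```python
# Misclassification matrix and Cohen's kappa between a gold-standard coding
# and a test coder's coding of the same text units. Labels are invented.

def confusion_matrix(gold, test, categories):
    """Rows = gold category, columns = test coder's category."""
    idx = {c: i for i, c in enumerate(categories)}
    mat = [[0] * len(categories) for _ in categories]
    for g, t in zip(gold, test):
        mat[idx[g]][idx[t]] += 1
    return mat

def cohens_kappa(gold, test):
    """Chance-corrected agreement between two codings of the same units."""
    n = len(gold)
    cats = sorted(set(gold) | set(test))
    p_obs = sum(g == t for g, t in zip(gold, test)) / n
    p_exp = sum((gold.count(c) / n) * (test.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)

gold = ["econ", "econ", "welfare", "welfare"]
test = ["econ", "welfare", "welfare", "welfare"]
```

Row-normalizing the confusion matrix gives empirical misclassification probabilities per category, which is what the authors use to propagate coding error into the left-right scale.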
Quality of Diagnosis and Procedure Coding in ICD-10 Administrative Data
Objectives: The International Classification of Disease, 10th Revision (ICD-10) was introduced worldwide beginning in the late 1990s. Because there have been no published data on the quality of coding using ICD-10, the aim of our analysis is to assess the quality of ICD-10 coding in routinely collected hospital discharge data from Australia, which began using ICD-10 in 1998. Methods: Audit data from the years 1998-1999 (n = 7004) and 2000-2001 (n = 7631), excluding same-day chemotherapy and dialysis cases, were used in data analysis. Quality measures included prevalence comparisons, sensitivity, positive predictive value (PPV), and the kappa statistic. Results: Comparison of the audit sample to public hospital discharges showed little difference in age and gender, with audited cases more likely to be overnight stays. There was no difference in the median number of hospital assigned diagnosis and procedure codes per discharge. Agreement of the principal diagnosis code was 85% at the 3-digit level and 79% at the 4-digit level in 1998-1999; this rate had improved to 87% and 81% in 2000-2001. Principal procedure code agreement was 85% in 1998-1999 and 83% in 2000-2001 at the 5-digit level, and 81% and 80% at the 7-digit level, respectively. Specific major diagnoses, comorbid diagnoses, major procedures, and minor procedures showed good-to-excellent coding quality. Conclusions: The transition to ICD-10 has occurred with no loss of data quality, with data showing a high level of reliability and adherence to coding standards. When consideration is given to the nature of the analysis, administrative data can provide highly reliable population-based estimates of hospitalization rates.
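The agreement measures named in the methods, sensitivity and positive predictive value between hospital-assigned and audit-assigned codes, reduce to set overlaps. A sketch, assuming each code is represented by the set of discharge IDs carrying it (IDs and names are our own):

```python
# Audit agreement measures: treat each diagnosis code as the set of
# discharge IDs it was assigned to by the auditor vs. the hospital coders.

def sensitivity(audit_ids, hospital_ids):
    """Share of auditor-assigned discharges the hospital also coded."""
    return len(audit_ids & hospital_ids) / len(audit_ids)

def ppv(audit_ids, hospital_ids):
    """Share of hospital-coded discharges confirmed by the auditor."""
    return len(audit_ids & hospital_ids) / len(hospital_ids)

audit = {101, 102, 103, 104}      # discharges the auditor gave the code
hospital = {102, 103, 104, 105}   # discharges the hospital gave the code
```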