53,100 result(s) for "Data retrieval"
Discovering knowledge in data: an introduction to data mining
DANIEL T. LAROSE received his PhD in statistics from the University of Connecticut. An associate professor of statistics at Central Connecticut State University, he developed and directs Data Mining@CCSU, the world's first online master of science program in data mining. He has also worked as a data mining consultant for Connecticut-area companies. He is currently working on the next two books of his three-volume series on Data Mining: Data Mining Methods and Models and Data Mining the Web: Uncovering Patterns in Web Content, scheduled to be published in 2005 and 2006, respectively.
Data science for dummies
The book begins by explaining large data sets and data formats, including sample Python code for manipulating data. It explains how to work with relational databases and unstructured data, including NoSQL, and then moves into preparing data for analysis by cleaning it up, or "munging" it. From there, the book explains data visualization techniques and types of data sets. Part II is all about supervised machine learning, including regression techniques and model validation techniques. Part III explains unsupervised machine learning, including clustering and recommendation engines. Part IV overviews big data processing, including MapReduce, Hadoop, Dremel, Storm, and Spark. The book finishes with real-world applications of data science and how data science fits into organizations.
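The "munging" step the blurb mentions can be illustrated in a few lines of plain Python; the field names and cleanup rules below are hypothetical examples, not taken from the book:

```python
# A minimal illustration of data "munging": cleaning raw records before analysis.
# The fields and rules here are made up for the sketch.

raw_rows = [
    {"name": " Alice ", "age": "34", "city": "NYC"},
    {"name": "Bob", "age": "", "city": "nyc"},
    {"name": "Carol", "age": "n/a", "city": "Boston"},
]

def munge(row):
    """Trim whitespace, normalize city names, and coerce age to int or None."""
    age_text = row["age"].strip().lower()
    return {
        "name": row["name"].strip(),
        "age": int(age_text) if age_text.isdigit() else None,
        "city": row["city"].strip().title(),
    }

clean_rows = [munge(r) for r in raw_rows]
```

Real pipelines typically do the same with pandas, but the principle is identical: decide on a canonical form for each field and map every raw record into it.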
Retrieval of individual patient data depended on study characteristics: a randomized controlled trial
The aim of the study was to examine the effect of providing a financial incentive to authors of randomized clinical trials (RCTs) to obtain individual patient data (IPD). This was a parallel-group RCT with authors identified in the RCTs eligible for two systematic reviews. The authors were randomly allocated to the intervention group (financial incentive with several contact approaches) or the control group (the same contact approaches without the incentive). The studied outcomes were the proportion of authors who provided IPD, the time to obtain IPD, and the completeness of the IPD received. Of the 129 authors contacted, 37 suggested or contacted a person or funder providing relevant details or expressed interest in collaborating, whereas 45 directed us to contact a person or funder, lacked resources or time, did not have ownership of or approval to share the IPD, or claimed the IPD was too old. None of the authors shared their IPD. We contacted 17 sponsors and received two complete IPD datasets from one sponsor. The time to obtain IPD was more than 1 year after a sponsor's positive response. Common barriers included study identification, data ownership, limited data access, and required IPD licenses. IPD sharing may depend on study characteristics, including funding type, study size, study risk of bias, and treatment effect, but not on providing a financial incentive.
From Data Silos to Health Records Without Borders: A Systematic Survey on Patient-Centered Data Interoperability
The widespread use of electronic health records (EHRs) and healthcare information systems (HISs) has led to isolated data silos across healthcare providers, and current interoperability standards such as FHIR cannot address some scenarios. For instance, FHIR cannot retrieve patients’ health records if they are stored by multiple healthcare providers with diverse interoperability standards, or with the same standard but different implementation guides. FHIR and similar standards prioritize institutional interoperability rather than patient-centered interoperability. We explored the challenges in transforming fragmented data silos into patient-centered data interoperability. This research comprehensively reviewed 56 notable studies to analyze the challenges and approaches in patient-centered interoperability through qualitative and quantitative analyses. We classified the challenges into four domains and categorized common features of the propositions to patient-centered interoperability into six categories: EMR integration, EHR usage, FHIR adaptation, blockchain application, semantic interoperability, and personal data retrieval. Our results indicated that “using blockchain” (48%) and “personal data retrieval” (41%) emerged as the most cited features. The Jaccard similarity analysis revealed a strong synergy between blockchain and personal data retrieval (0.47) and recommends their integration as a robust approach to achieving patient-centered interoperability. Conversely, gaps exist between semantic interoperability and personal data retrieval (0.06) and between FHIR adaptation and personal data retrieval (0.08), depicting research opportunities to develop unique contributions for both combinations. Our data-driven insights provide a roadmap for future research and innovation.
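The Jaccard similarity the survey reports (e.g., 0.47 between blockchain and personal data retrieval) is simply |A ∩ B| / |A ∪ B| over the sets of studies exhibiting each feature; a minimal sketch, with made-up study IDs rather than the survey's actual data:

```python
def jaccard(a, b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two sets (0.0 if both empty)."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical study IDs tagged with each feature (illustrative only).
blockchain = {"S01", "S02", "S03", "S05"}
personal_data_retrieval = {"S02", "S03", "S04", "S05"}

score = jaccard(blockchain, personal_data_retrieval)  # 3 shared / 5 total = 0.6
```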
Website scraping with Python: using BeautifulSoup and Scrapy
"Closely examine website scraping and data processing: the technique of extracting data from websites in a format suitable for further analysis. You'll review which tools to use, and compare their features and efficiency. Focusing on BeautifulSoup4 and Scrapy, this concise, focused book highlights common problems and suggests solutions that readers can implement on their own. Website Scraping with Python starts by introducing and installing the scraping tools and explaining the features of the full application that readers will build throughout the book. You'll see how to use BeautifulSoup4 and Scrapy individually or together to achieve the desired results. Because many sites use JavaScript, you'll also employ Selenium with a browser emulator to render these sites and make them ready for scraping. By the end of this book, you'll have a complete scraping application to use and rewrite to suit your needs. As a bonus, the author shows you options for deploying your spiders into the cloud to relieve your computer of long-running scraping tasks." --Back cover.
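The book's examples use BeautifulSoup4 and Scrapy; as a dependency-free sketch of the same core idea (pulling link targets out of HTML markup), the standard library's HTMLParser suffices:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href attribute of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# A hypothetical page fragment, not an example from the book.
page = '<html><body><a href="/book/1">One</a> <a href="/book/2">Two</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
```

BeautifulSoup4 wraps this kind of traversal in a far more convenient tree API (e.g., finding all `<a>` tags in one call), which is why the book reaches for it instead.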
A Fine-Grained Attribute Based Data Retrieval with Proxy Re-Encryption Scheme for Data Outsourcing Systems
Attribute based encryption is suitable for data protection in data outsourcing systems such as cloud computing. However, leveraging encryption may constrain some routine operations over the encrypted data, particularly in the field of data retrieval. This paper presents attribute based data retrieval with proxy re-encryption (ABDR-PRE) to provide both fine-grained access control and retrieval over ciphertexts. The proposed scheme achieves fine-grained data access management by adopting the KP-ABE mechanism: a delegator can generate the re-encryption key and search indexes for the ciphertexts to be shared over the target delegatee's attributes. Throughout the process of data sharing, the data are transferred as ciphertexts, so the server and unauthorized users cannot acquire the sensitive information of the encrypted data, protecting privacy and confidentiality. By security analysis, the proposed scheme meets the security requirements of confidentiality, keyword semantic security, and collusion attack resistance.
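The proxy re-encryption idea at the heart of the scheme can be sketched with a deliberately insecure XOR toy (this is not ABDR-PRE and omits all of the attribute-based machinery): the re-encryption key k_A ⊕ k_B lets the proxy convert the delegator's ciphertext into one the delegatee can decrypt, without the proxy ever seeing the plaintext or either party's key alone.

```python
import secrets

def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Toy one-time-pad "encryption" -- insecure, purely to illustrate the PRE flow.
message = b"patient-record-1"
k_alice = secrets.token_bytes(len(message))   # delegator's key
k_bob = secrets.token_bytes(len(message))     # delegatee's key

c_alice = xor_bytes(message, k_alice)         # ciphertext under Alice's key
rk = xor_bytes(k_alice, k_bob)                # re-encryption key k_A ^ k_B
c_bob = xor_bytes(c_alice, rk)                # proxy re-encrypts without decrypting
recovered = xor_bytes(c_bob, k_bob)           # Bob decrypts with his own key
```

Real PRE schemes achieve the same one-way transformation with public-key constructions (e.g., over bilinear pairings), where the re-encryption key reveals neither underlying secret key.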
Research on Automatic Extraction Methods for Connection Information of MEP System Equipment Based on BIM
This paper introduces an automated method for extracting connection information of MEP system equipment based on BIM. By parsing IFC files, this method identifies devices and extracts connection information, enhancing the accuracy and efficiency of data retrieval. Experimental results indicate that this method significantly reduces the time required for information extraction compared to traditional manual approaches. The findings provide new digital tools for the design and management of MEP systems, potentially advancing the HVAC industry towards more intelligent solutions.
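IFC files are plain STEP text, in which every entity occupies a line like `#31=IFCPUMP(...);`. A toy sketch of the extraction step the paper automates (the entity instances below are made up, and a real pipeline would use a dedicated IFC parser such as IfcOpenShell rather than a regex):

```python
import re

# Hypothetical IFC (STEP) fragment: two MEP devices plus a relationship entity
# connecting them.
ifc_snippet = """\
#30=IFCFLOWSEGMENT('2ab%',#12,'Duct-01',$,$,#40,#41,$);
#31=IFCPUMP('3cd%',#12,'Pump-01',$,$,#42,#43,$);
#50=IFCRELCONNECTSELEMENTS('4ef%',#12,$,$,$,#30,#31);
"""

# Each STEP entity line starts with "#<id>=<TYPE>(".
ENTITY = re.compile(r"#(\d+)=([A-Z0-9]+)\(")

entities = {int(m.group(1)): m.group(2) for m in ENTITY.finditer(ifc_snippet)}
```

Connection information then comes from walking the relationship entities (here `IFCRELCONNECTSELEMENTS`) and resolving the `#30`/`#31` references back to the devices they join.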