15,497 result(s) for "data interoperability"
IoT in Healthcare: Achieving Interoperability of High-Quality Data Acquired by IoT Medical Devices
It is an undeniable fact that Internet of Things (IoT) technologies have become a milestone advancement in the digital healthcare domain, since the number of IoT medical devices has grown exponentially, and it is anticipated that by 2020 there will be over 161 million of them connected worldwide. In this era of continuous growth, IoT healthcare faces various challenges, such as the collection, quality estimation, interpretation, and harmonization of the data derived from the existing huge numbers of heterogeneous IoT medical devices. Although various approaches have been developed for solving each of these challenges individually, none proposes a holistic approach for achieving data interoperability between high-quality data derived from heterogeneous devices. For that reason, this manuscript presents a mechanism for effectively addressing the intersection of these challenges. Through this mechanism, the datasets of the different devices are first collected and then cleaned. Subsequently, the cleaning results are used to capture the overall data quality of each dataset, in combination with measurements of the availability and reliability of the device that produced it. Only the high-quality data are kept and translated into a common format, ready for further use. The proposed mechanism is evaluated through a specific scenario, producing reliable results and achieving data interoperability with 100% accuracy and data quality with more than 90% accuracy.
Interoperability analysis of IFC-based data exchange between heterogeneous BIM software
Traditionally, one-to-one interaction between heterogeneous software tools has been the most common method for multi-disciplinary collaboration in building projects, resulting in numerous data interfaces, different data formats, and inefficient collaboration. As the prevalence of Building Information Modeling (BIM) increases in building projects, it is expected that the exchange of Industry Foundation Classes (IFC)-based data can take place smoothly between heterogeneous BIM software. However, interoperability issues frequently occur during bidirectional data exchanges using IFC. Hence, a data interoperability experiment, including architectural, structural and MEP models from a practical project, was conducted to analyze these issues during data import and re-export between heterogeneous software. According to the results, the fundamental causes of interoperability issues can be summarized as follows: (a) software tools cannot properly interpret several objects belonging to other disciplines due to differences in domain knowledge; (b) software tools have diverse methods of representing the same geometry, properties and relations, leading to inconsistent model data. Furthermore, this paper presents a suggested method for improving existing bidirectional data sharing and exchange: BIM software tools export models in IFC format, and these IFC models are imported into a common IFC-based BIM platform for data interoperability.
Data Interoperability in Context: The Importance of Open-Source Implementations When Choosing Open Standards
Following the proposal by Tsafnat et al (2024) to converge on three open health data standards, this viewpoint offers a critical reflection on their proposed alignment of openEHR, Fast Health Interoperability Resources (FHIR), and Observational Medical Outcomes Partnership (OMOP) as default data standards for clinical care and administration, data exchange, and longitudinal analysis, respectively. We argue that open standards are a necessary but not sufficient condition to achieve health data interoperability. The ecosystem of open-source software needs to be considered when choosing an appropriate standard for a given context. We discuss two specific contexts, namely standardization of (1) health data for federated learning, and (2) health data sharing in low- and middle-income countries. Specific design principles, practical considerations, and implementation choices for these two contexts are described, based on ongoing work in both areas. In the case of federated learning, we observe convergence toward OMOP and FHIR, where the two standards can effectively be used side-by-side given the availability of mediators between the two. In the case of health information exchanges in low- and middle-income countries, we see a strong convergence toward FHIR as the primary standard. We propose practical guidelines for context-specific adaptation of open health data standards.
Using a Diverse Test Suite to Assess Large Language Models on Fast Health Care Interoperability Resources Knowledge: Comparative Analysis
Recent natural language processing breakthroughs, particularly with the emergence of large language models (LLMs), have demonstrated remarkable capabilities on general knowledge benchmarks. However, there is limited data on the performance and understanding of these models in relation to the Fast Healthcare Interoperability Resources (FHIR) standard. The complexity and specialized nature of FHIR present challenges for LLMs, which are typically trained on broad datasets and may have a limited understanding of the nuances required for domain-specific tasks. Improving health data interoperability can greatly benefit the use of clinical data and interaction with electronic health records. This study presents the Fast Healthcare Interoperability Resources (FHIR) Workbench, a comprehensive suite of datasets designed to evaluate the ability of LLMs to understand and apply the FHIR standard. In total, 4 evaluation datasets were created to assess the FHIR knowledge and capabilities of LLMs. These tasks include multiple-choice questions on general FHIR concepts and the FHIR Representational State Transfer (REST) application programming interface, as well as correctly identifying the resource type and generating FHIR resources from unstructured clinical patient notes. In addition, we evaluate open-source LLMs, such as Qwen 2.5 Coder and DeepSeek-V3, and commercial LLMs, including GPT-4o and Gemini 2, on these tasks in a zero-shot setting. To provide context for interpreting LLM performance, a subset of the datasets was human-evaluated by recruiting 6 participants with varying levels of FHIR expertise. Our evaluation across multiple FHIR tasks revealed nuanced performance metrics. Commercial models demonstrated exceptional capabilities, with GPT-4o achieving a 0.9990 F1-score on the FHIR-ResourceID task, 0.9400 on the FHIR-QA task, and 0.9267 on the FHIR-RESTQA task. 
Open-source models also demonstrated strong performance, with DeepSeek-v3 achieving 0.9400 on FHIR-QA, 0.9400 on FHIR-RESTQA, and 0.9142 on FHIR-ResourceID. Qwen 2.5 Coder-7B-Instruct demonstrated high accuracy, scoring 0.9533 on FHIR-QA and 0.8920 on FHIR-ResourceID. However, all models struggled with the Note2FHIR task, with performance ranging from 0.0382 (OLMo) to a maximum of 0.3633 (GPT-4.5-preview), highlighting the significant challenge of converting unstructured clinical text into FHIR-compliant resources. Human participants achieved accuracy scores ranging from 0.50 to 1.0 across the first 3 tasks. This study highlights the competitive performance of both open-source models, such as Qwen and DeepSeek, and commercial models, such as GPT-4o and Gemini, in FHIR-related tasks. While open-source models are advancing rapidly, commercial models still have an advantage for specific, complex tasks. The FHIR Workbench offers a valuable platform for evaluating the capabilities of these models and promoting improvements in health data interoperability.
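The Note2FHIR task described above asks models to turn unstructured clinical text into FHIR-compliant resources. As a purely illustrative sketch of what such a structured output looks like, here is a minimal FHIR R4 Patient resource built as a JSON-serializable dictionary; the note text and field values are hypothetical, not taken from the study's datasets:

```python
import json

# Hypothetical unstructured note (illustrative only, not from the FHIR Workbench).
clinical_note = "Jane Doe, female, born 1984-03-12, presents with mild fever."

# A minimal FHIR R4 Patient resource: the structured target a Note2FHIR-style
# task expects a model to produce from the note above.
patient_resource = {
    "resourceType": "Patient",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "gender": "female",
    "birthDate": "1984-03-12",
}

# FHIR resources are exchanged as JSON; serialize for transmission or scoring.
print(json.dumps(patient_resource, indent=2))
```

Identifying the correct `resourceType` for a given payload is essentially what the FHIR-ResourceID task measures, while Note2FHIR requires generating the entire structure, which helps explain the large gap in model scores between the two tasks.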
Collection and Processing of Data from Wrist Wearable Devices in Heterogeneous and Multiple-User Scenarios
Over recent years, we have witnessed the development of mobile and wearable technologies to collect data from human vital signs and activities. Nowadays, wrist wearables that include sensors (e.g., heart rate, accelerometer, pedometer) providing valuable data are common in the market. We are working on the analytic exploitation of this kind of data to support learners and teachers in educational contexts. More precisely, sleep and stress indicators are defined to assist teachers and learners in the regulation of their activities. During this development, we have identified interoperability challenges related to the collection and processing of data from wearable devices. Different vendors adopt specific approaches to the way data can be collected from wearables into third-party systems, which hinders developments such as the one we are carrying out. This paper contributes to identifying key interoperability issues in this kind of scenario and proposes guidelines to solve them. Given these topics, this work is situated in the context of the standardization activities being carried out in the Internet of Things and Machine-to-Machine domains.
From Data Silos to Health Records Without Borders: A Systematic Survey on Patient-Centered Data Interoperability
The widespread use of electronic health records (EHRs) and healthcare information systems (HISs) has led to isolated data silos across healthcare providers, and current interoperability standards like FHIR cannot address some scenarios. For instance, they cannot retrieve patients’ health records if those records are stored by multiple healthcare providers using diverse interoperability standards, or the same standard with different implementation guides. FHIR and similar standards prioritize institutional interoperability rather than patient-centered interoperability. We explored the challenges in transforming fragmented data silos into patient-centered data interoperability. This research comprehensively reviewed 56 notable studies to analyze the challenges and approaches in patient-centered interoperability through qualitative and quantitative analyses. We classified the challenges into four domains and categorized common features of the proposals for patient-centered interoperability into six categories: EMR integration, EHR usage, FHIR adaptation, blockchain application, semantic interoperability, and personal data retrieval. Our results indicated that “using blockchain” (48%) and “personal data retrieval” (41%) emerged as the most cited features. The Jaccard similarity analysis revealed a strong synergy between blockchain and personal data retrieval (0.47), suggesting their integration as a robust approach to achieving patient-centered interoperability. Conversely, gaps exist between semantic interoperability and personal data retrieval (0.06) and between FHIR adaptation and personal data retrieval (0.08), depicting research opportunities to develop unique contributions for both combinations. Our data-driven insights provide a roadmap for future research and innovation.
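The Jaccard similarity scores cited in the survey above measure overlap between the sets of studies exhibiting each feature. A minimal sketch of the computation, using made-up study identifiers rather than the survey's actual data:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |A ∩ B| / |A ∪ B|, defined as 0.0 for two empty sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical study IDs tagged with each feature (illustrative, not the survey's data).
blockchain_studies = {"S01", "S02", "S03", "S05", "S08"}
retrieval_studies = {"S02", "S03", "S05", "S09"}

# Intersection {S02, S03, S05} has 3 elements; union has 6 → 0.5.
print(jaccard(blockchain_studies, retrieval_studies))  # 0.5
```

A score near 0.47, as reported for blockchain and personal data retrieval, thus means nearly half of the studies mentioning either feature mention both, while scores like 0.06 indicate the two feature sets rarely co-occur.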
Semantic and Syntactic Interoperability for Agricultural Open-Data Platforms in the Context of IoT Using Crop-Specific Trait Ontologies
In recent years, Internet-of-Things (IoT)-based applications have been used in various domains such as health, industry and agriculture. Considerable amounts of data in diverse formats are collected from wireless sensor networks (WSNs) integrated into IoT devices. Semantic interoperability of data gathered from IoT devices is generally achieved using existing sensor ontologies. However, crop-specific trait ontologies—which include site-specific parameters concerning hazelnut as a particular agricultural product—can also be used to link domain-specific variables to sensor measurement values. This research seeks to address how to use crop-specific trait ontologies for linking site-specific parameters to sensor measurement values. A data-integration approach for semantic and syntactic interoperability is proposed to achieve this objective. An open-data platform is developed and its usability is evaluated to justify the viability of the proposed approach. Furthermore, this research shows how to use web services and APIs to achieve syntactic interoperability of sensor data in the agriculture domain.
Data integration for infrastructure asset management in small to medium-sized water utilities
Water utilities collect, store and manage vast data sets using many information systems (IS). For infrastructure asset management (IAM) planning those data need to be processed and transformed into information. However, information management efficiency often falls short of desired results. This happens particularly in municipalities where management is structured according to local government models. Along with the existing IS at the utilities' disposal, engineers and managers take their decisions based on information that is often incomplete, inaccurate or out-of-date. One of the main challenges faced by asset managers is integrating the several, often conflicting, sources of information available on the infrastructure, its condition and performance, and the various predictive analyses that can assist in prioritizing projects or interventions. This paper presents an overview of the IS used by Portuguese water utilities and discusses how data from different IS can be integrated in order to support IAM.
Ubiquitous Health Profile (UHPr): a big data curation platform for supporting health data interoperability
The lack of interoperable healthcare data presents a major challenge towards achieving ubiquitous health care. The plethora of diverse medical standards, rather than common standards, is widening the interoperability gap. While many organizations are working towards a standardized solution, there is a need for an alternate strategy that can intelligently mediate among a variety of medical systems not complying with any mainstream healthcare standard, while utilizing the benefits of several standard-merging initiatives, to eventually create digital health personas. The existence and efficiency of such a platform depend upon the underlying storage and processing engine, which must acquire, manage and retrieve the relevant medical data. In this paper, we present the Ubiquitous Health Profile (UHPr), a multi-dimensional data storage solution in a semi-structured data curation engine, which provides foundational support for archiving heterogeneous medical data and achieving partial data interoperability in the healthcare domain. Additionally, we present the evaluation results of this proposed platform in terms of its timeliness, accuracy, and scalability. Our results indicate that the UHPr is able to retrieve an error-free comprehensive medical profile of a single patient from a set of slightly over 116.5 million serialized medical fragments for 390,101 patients, while maintaining a good scalability ratio between the amount of data and its retrieval speed.
Towards recommendations for metadata and data handling in plant phenotyping
Recent methodological developments in plant phenotyping, as well as the growing importance of its applications in plant science and breeding, are resulting in a fast accumulation of multidimensional data. There is great potential for expediting both discovery and application if these data are made publicly available for analysis. However, collection and storage of phenotypic observations is not yet sufficiently governed by standards that would ensure interoperability among data providers and precisely link specific phenotypes and associated genomic sequence information. This lack of standards is mainly a result of a large variability of phenotyping protocols, the multitude of phenotypic traits that are measured, and the dependence of these traits on the environment. This paper discusses the current situation of standardization in the area of phenomics, points out the problems and shortages, and presents the areas that would benefit from improvement in this field. In addition, the foundations of the work that could revise the situation are proposed, and practical solutions developed by the authors are introduced.