Catalogue Search | MBRL
226,435 result(s) for "Data exchange"
Enabling Secure Data Exchange through the IOTA Tangle for IoT Constrained Devices
by Castanier, Fabien; Carelli, Alberto; Palmieri, Andrea
in Confidentiality; cybersecurity; Data analysis
2022
Internet-of-Things (IoT) and sensor technologies have enabled the collection of data in a distributed fashion for analysis and evidence-based decision making. However, security concerns regarding the source, confidentiality, and integrity of the data arise. Today, the most common method of protecting data transmission in sensor systems is Transport Layer Security (TLS) or its datagram counterpart (DTLS), but an alternative exists based on Distributed Ledger Technology (DLT) that promises strong security, ease of use, and the potential for large-scale integration of heterogeneous sensor systems. A DLT such as the IOTA Tangle offers great potential to improve sensor data exchange. This paper presents L2Sec, a cryptographic protocol able to secure data exchanged over the IOTA Tangle. The protocol is suitable for implementation on constrained devices, such as common IoT devices, leading to greater scalability. The first experimental results demonstrate the effectiveness of the approach and advocate for the integration of a hardware secure element to improve the overall security of the protocol. The L2Sec source code is released as an open-source repository on GitHub.
Journal Article
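The abstract above does not detail L2Sec's message format; as a minimal, hypothetical sketch of one building block it alludes to (integrity and authenticity of sensor payloads via a shared-key MAC, a standard practice on constrained devices, not the L2Sec protocol itself):

```python
import hashlib
import hmac

def tag_message(key: bytes, payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so a receiver can verify the payload."""
    return payload + hmac.new(key, payload, hashlib.sha256).digest()

def verify_message(key: bytes, message: bytes) -> bytes:
    """Return the payload if the tag checks out; raise otherwise."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time comparison
        raise ValueError("authentication failed")
    return payload

shared_key = b"\x01" * 32  # illustrative pre-shared device key
msg = tag_message(shared_key, b'{"temp": 21.5}')
assert verify_message(shared_key, msg) == b'{"temp": 21.5}'
```

Any single flipped bit in transit makes verification fail, which is the property a ledger-published sensor record needs before confidentiality is even considered.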
The Cryptographic Key Distribution System for IoT Systems in the MQTT Environment
2023
The Internet of Things (IoT) is a very abundant source of data, as well as a source of many vulnerabilities. A significant challenge is preparing security solutions that protect IoT nodes’ resources and the data they exchange. The difficulty usually stems from the insufficient resources of these nodes in terms of computing power, memory size, energy resources, and wireless link performance. The paper presents the design and a demonstrator of a system for symmetric cryptographic Key Generating, Renewing, and Distributing (KGRD). The system uses the TPM 2.0 hardware module to support cryptographic procedures, including creating trust structures, key generation, and securing the node’s exchange of data and resources. Clusters of sensor nodes and traditional systems can use the KGRD system to secure data exchange in the federated cooperation of systems with IoT-derived data sources. The transmission medium for exchanging data between KGRD system nodes is the Message Queuing Telemetry Transport (MQTT) service, which is commonly used in IoT networks.
Journal Article
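The KGRD abstract mentions key generation and renewal but not the mechanism; as a hedged sketch of one standard technique for the same goal (counter-based HMAC key derivation from a provisioned master secret, illustrative only and not the TPM-backed KGRD design), nodes sharing a master secret can derive each session key independently, so only an epoch number ever needs to travel over MQTT:

```python
import hashlib
import hmac

def derive_session_key(master: bytes, epoch: int, context: bytes = b"demo-ctx") -> bytes:
    """Derive a fresh 32-byte session key for a given renewal epoch.

    Renewing a key is just bumping the epoch; the key itself is never
    transmitted, only re-derived by every holder of `master`.
    """
    info = context + epoch.to_bytes(8, "big")
    return hmac.new(master, info, hashlib.sha256).digest()

master = b"\x02" * 32                      # illustrative provisioned secret
k1 = derive_session_key(master, 1)
k2 = derive_session_key(master, 2)
assert k1 != k2 and len(k1) == 32          # each epoch yields a distinct key
```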
Empowering End-of-Life Vehicle Decision Making with Cross-Company Data Exchange and Data Sovereignty via Catena-X
by Lindow, Kai; Manoury, Marvin Michael; Mügge, Janine
in Automobile industry; Collaboration; Data exchange
2023
The mobility sector is the world’s second-largest producer of energy-related CO2 emissions, and it is facing a global resource shortage. The demand for circular products, the use of secondary materials in future vehicles, and the need for sustainable business models in the mobility sector are all increasing. However, a transparent, end-to-end data exchange throughout the entire value network is missing, which hinders an efficient circular economy. Relevant information on the vehicle, its components, and its materials at the end of the product life cycle is often missing. In this context, this paper presents a decision support system based on Digital Twin data for a circular economy solution as a software application. It was developed within the German research project Catena-X following an integrated approach of user-centered design, the V-model, and the Scaled Agile Framework. By combining these methodological approaches, customer-oriented solutions were developed and continuously improved at each stage of development to shorten the time-to-market. Catena-X is based on Gaia-X principles. In Gaia-X, the necessary core services are developed, and contract negotiation for data exchange and usage policies is enabled and implemented. The decision support system provides important information about the exact composition and condition of the vehicle, its components, and its materials. Thus, it helps to improve efficiency, sustainability, and the implementation of the circular economy. The decision support system was tested and validated with a use case that provided Digital Twin data on the end-of-life vehicle.
Journal Article
Developing a data pricing framework for data exchange
by Majumdar, Rupsa; Gurtoo, Anjula; Maileckal, Minnu
in Business and Management; Data attributes; Data exchange
2025
Despite the emergence of data markets such as Windows Azure Marketplace and the India Urban Data Exchange (IUDX), comprehensive frameworks to determine data pricing and/or parameters for profit maximization remain a gap. Data valuation is often guided by the sellers, ignoring the interests of the buyers. This information asymmetry results in lopsided pricing: data sellers fail to price optimally, and buyers are unable to optimize their purchasing decisions, reinforcing the need for a structured data pricing framework. The paper reviews the literature and applies the stages reported by Ritchie and Spencer (in: Bryman, Burgess (eds) Analysing qualitative data, Routledge, London, 1994) for applied policy research to determine the main approaches to data pricing and to develop a comprehensive pricing framework. Literature selection on pricing attributes and content analysis classify data pricing models into five broad but distinct themes based on the pricing method, namely data characteristics-based pricing, quality-based pricing, query-based pricing, privacy-based pricing, and organizational value-based pricing. Application of the Ritchie and Spencer stages identifies eight factors, namely customer need, customer-assigned value, market maturity, market structure, usable data, data quality, seller reputation, and seller objectives, as defining and intersecting with the five pricing models. A framework is hence developed to guide data pricing. Thereby, the paper creates a platform for prescribing data pricing formulas.
Journal Article
Statistical analysis of enhanced SDEx encryption method based on BLAKE3 hash function
2025
This paper presents a statistical analysis of the enhanced SDEx (Secure Data Exchange) encryption method, using a version that incorporates two session keys. This method has not previously been combined with the BLAKE3 hash function. The statistical analysis was conducted using the NIST Statistical Test Suite. Several real-world sample files were encrypted using the proposed method and then subjected to statistical analysis through selected tests from the NIST suite. These tests aimed to determine whether the resulting ciphertexts meet the criteria for pseudorandomness. Additionally, compression tests were performed using WinRAR, which confirmed that the ciphertexts are not compressible.
Journal Article
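The NIST Statistical Test Suite applied in the SDEx paper is an external tool; as an illustration of the simplest check in that suite (the frequency or "monobit" test, reimplemented here from the SP 800-22 definition, not the authors' code):

```python
import math
import os

def monobit_p_value(data: bytes) -> float:
    """NIST SP 800-22 frequency (monobit) test.

    A p-value >= 0.01 is consistent with the bit stream being balanced
    between zeros and ones, a necessary (not sufficient) property of a
    pseudorandom-looking ciphertext.
    """
    bits = "".join(f"{byte:08b}" for byte in data)
    n = len(bits)
    s = sum(1 if b == "1" else -1 for b in bits)  # map bits to +/-1 and sum
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

assert monobit_p_value(b"\xaa" * 512) == 1.0   # perfectly balanced bit pattern
assert monobit_p_value(b"\x00" * 512) < 0.01   # constant stream clearly fails
p = monobit_p_value(os.urandom(4096))          # stand-in for a real ciphertext
```

The full suite runs fifteen such tests; a ciphertext that fails even this one is clearly not pseudorandom.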
A survey of Life Cycle Inventory database implementations and architectures, and recommendations for new database initiatives
by Lawrence, Ramon; Marcolin, Barbara; Pelletier, Nathan
in Agribusiness; Agricultural production; Applications programs
2020
Purpose: Barriers to interoperability pose a serious challenge in the domain of LCI database development and prevent effective data sharing between LCA practitioners. Through analysis of existing LCI database resources and technology, a set of recommendations was developed to promote interoperability, and thus utility, in developing new LCI data resources, including the forthcoming Canadian Agri-food Life Cycle Data Centre.
Methods: A review of published literature, user documentation, gray-literature technical reports, and conference proceedings, as well as interviews with LCI database experts, was used to determine current common/best practices in the LCI database field. On this basis, core interoperability criteria and barriers were identified, and potential solutions were explored. A set of minimum, generalizable recommendations for LCI database development was then identified, aimed at ensuring that these core interoperability criteria are met. These recommendations were further informed by current popular practice in computer science and database-driven web application fields.
Results and discussion: Data exchange format and nomenclature were identified as core interoperability criteria. Both the ILCD and EcoSpold formats and nomenclatures were found to be in widespread use, and are recommended for support to ensure interoperability with the most widely used LCI databases. It was also found that technical implementations of LCI databases largely follow practice in the database-driven web application field, which likely has little effect on interoperability between databases. In addition, third-party data providers and networks such as OpenLCA Nexus, ecoinvent, and the UNEP GLAD network were identified as potential solutions to interoperability challenges, allowing for greater distribution and interoperability of data with minimal expense for early adoption.
Conclusion: A variety of potential solutions were found for interoperability concerns in LCI database development. A final set of five recommendations for format, nomenclature, third-party providers, third-party initiatives, and technical implementation was developed, which can help to ensure a minimum level of interoperability in the development of new LCI database resources. These recommendations will be implemented in the development of the Canadian Agri-Food Life Cycle Data Centre.
Journal Article
Exploring Stakeholders’ Perceptions of Electronic Personal Health Records for Mobile Populations Living in Disadvantaged Circumstances: A Multi-Country Feasibility Study in Denmark, Ghana, Kenya, and The Netherlands
by Tensen, Paulien; Owusu-Dabo, Ellis; Gaifém, Francisca
in Continuity of care; Data exchange; Decision making
2025
(1) Background: Mobile populations living in disadvantaged circumstances often face disrupted continuity of care due to incomplete or inaccessible health records. This feasibility study explored the perceived usefulness of Electronic Personal Health Records (EPHRs) in enhancing access to and continuity of care for mobile populations across Denmark, Ghana, Kenya, and The Netherlands. (2) Methods: A qualitative study using ninety semi-structured interviews with multi-level stakeholders, ranging from policymakers to mobile individuals, recruited through purposive and convenience sampling. The interview guides were informed by the Technology Acceptance Model (TAM), and the analysis by the Unified Theory of Acceptance and Use of Technology (UTAUT). (3) Results: Stakeholders highlighted the value of improved medical data sharing and ownership and considered EPHRs promising for enhancing care continuity and efficiency. Key concerns included limited digital and health literacy, and data security and privacy, underscoring the need for education and safeguards against inappropriate data sharing. Due to differences in digital readiness and privacy guidelines, a one-size-fits-all EPHR is unlikely to succeed. (4) Conclusions: EPHRs are considered valuable tools to enhance care continuity and increase patient ownership, but they face technical, structural, and social challenges, including data security and varying levels of digital (health) literacy. Successful implementation requires context-sensitive, co-created solutions supported by strong policy frameworks.
Journal Article
Semantic Interoperability of Electronic Health Records: Systematic Review of Alternative Approaches for Enhancing Patient Information Availability
by Palojoki, Sari; Vuokko, Riikka; Lehtonen, Lasse
in Data exchange; Data integrity; Data models
2024
Semantic interoperability facilitates the exchange of and access to health data that are being documented in electronic health records (EHRs) with various semantic features. The main goals of semantic interoperability development entail patient data availability and use in diverse EHRs without a loss of meaning. Internationally, current initiatives aim to enhance semantic development of EHR data and, consequently, the availability of patient data. Interoperability between health information systems is among the core goals of the European Health Data Space regulation proposal and the World Health Organization's Global Strategy on Digital Health 2020-2025.
To achieve integrated health data ecosystems, stakeholders need to overcome challenges of implementing semantic interoperability elements. To research the available scientific evidence on semantic interoperability development, we defined the following research questions: What are the key elements of and approaches for building semantic interoperability integrated in EHRs? What kinds of goals are driving the development? and What kinds of clinical benefits are perceived following this development?
Our research questions focused on key aspects and approaches for semantic interoperability and on possible clinical and semantic benefits of these choices in the context of EHRs. Therefore, we performed a systematic literature review in PubMed by defining our study framework based on previous research.
Our analysis consisted of 14 studies where data models, ontologies, terminologies, classifications, and standards were applied for building interoperability. All articles reported clinical benefits of the selected approach to enhancing semantic interoperability. We identified 3 main categories: increasing the availability of data for clinicians (n=6, 43%), increasing the quality of care (n=4, 29%), and enhancing clinical data use and reuse for varied purposes (n=4, 29%). Regarding semantic development goals, data harmonization and developing semantic interoperability between different EHRs was the largest category (n=8, 57%). Enhancing health data quality through standardization (n=5, 36%) and developing EHR-integrated tools based on interoperable data (n=1, 7%) were the other identified categories. The results were closely coupled with the need to build usable and computable data out of heterogeneous medical information that is accessible through various EHRs and databases (eg, registers).
When heading toward semantic harmonization of clinical data, more experiences and analyses are needed to assess how applicable the chosen solutions are for the semantic interoperability of health care data. Instead of promoting a single approach, semantic interoperability should be assessed through several levels of semantic requirements. A dual-model or multimodel approach may be usable to address different semantic interoperability issues during development. The objectives of semantic interoperability must be achieved in diffuse and disconnected clinical care environments. Therefore, approaches for enhancing clinical data availability should be well prepared, thought out, and justified to meet economically sustainable, long-term outcomes.
Journal Article
Mining User Perspectives: Multi Case Study Analysis of Data Quality Characteristics
2025
With the growth of digital economies, data quality forms a key factor in enabling use and delivering value. Existing research defines quality through technical benchmarks or provider-led frameworks. Our study shifts the focus to actual users. Thirty-seven distinct data quality dimensions identified through a comprehensive review of the literature provide limited applicability for practitioners seeking actionable guidance. To address the gap, in-depth interviews with senior professionals from 25 organizations were conducted, representing sectors such as computer science and technology; finance; environmental, social, and governance; and urban infrastructure. Data are analysed using content analysis methodology with two-level coding, supported by NVivo R1 software. Several newer perspectives emerged. Firstly, data quality is not simply about accuracy or completeness; rather, it depends on suitability for real-world tasks. Secondly, trust grows with data transparency: knowing where the data come from and the nature of the data processing matters as much as the data per se. Thirdly, users are open to paying for data, provided the data are clean, reliable, and ready to use. These and other results suggest data users focus on a narrower, more practical set of priorities considered essential in actual workflows. Rethinking quality from a consumer’s perspective offers a practical path to building credible and accessible data ecosystems. This study is particularly useful for data platform designers, policymakers, and organisations aiming to strengthen data quality and trust in data exchange ecosystems.
Journal Article
Clinical Amyloid Typing by Proteomics: Performance Evaluation and Data Sharing between Two Centres
by Canetti, Diana; Gilbertson, Janet A.; Mauri, Pierluigi
in Algorithms; amyloid proteomics; Amyloidosis
2021
Amyloidosis is a relatively rare human disease caused by the deposition of abnormal protein fibres in the extracellular space of various tissues, impairing their normal function. Proteomic analysis of patients’ biopsies, developed by Dogan and colleagues at the Mayo Clinic, has become crucial for clinical diagnosis and for identifying the amyloid type. Currently, the proteomic approach is routinely used at the National Amyloidosis Centre (NAC, London, UK) and the Istituto di Tecnologie Biomediche-Consiglio Nazionale delle Ricerche (ITB-CNR, Milan, Italy). Both centres are members of the European Proteomics Amyloid Network (EPAN), which was established with the aim of sharing and discussing best practice in the application of amyloid proteomics. One of the EPAN’s activities was to evaluate the quality and confidence of the results achieved using different software and algorithms for protein identification. In this paper, we report the comparison of proteomics results obtained by sharing NAC proteomics data with the ITB-CNR centre. Mass spectrometric raw data were analysed using different software platforms, including Mascot, Scaffold, Proteome Discoverer, Sequest, and bespoke algorithms developed for accurate and immediate amyloid protein identification. Our study showed a high concordance of the obtained results, suggesting a good accuracy of the different bioinformatics tools used in the respective centres. In conclusion, inter-centre data exchange is a worthwhile approach for testing and validating the performance of software platforms and the accuracy of results, and is particularly important where the proteomics data contribute to a clinical diagnosis.
Journal Article