Catalogue Search | MBRL
Explore the vast range of titles available.
201,478 result(s) for "DATA QUALITY"
Identifying and managing data quality requirements: a design science study in the field of automated driving
by Knauss, Eric; Pradhan, Shameer Kumar; Heyn, Hans-Martin
in Advanced driver assistance systems; Autonomous vehicles; Data
2024
Good data quality is crucial for any data-driven system's effective and safe operation. For safety-critical systems, the significance of data quality is even higher, since incorrect or low-quality data may cause fatal faults. However, there are challenges in identifying and managing data quality. In particular, there is no accepted process for defining and continuously testing data quality against what is necessary for operating the system. This gap is problematic because even safety-critical systems are becoming increasingly dependent on data. Here, we propose a Candidate Framework for Data Quality Assessment and Maintenance (CaFDaQAM) to systematically manage data quality and related requirements, based on design science research. The framework is constructed from an advanced driver assistance system (ADAS) case study, drawing on empirical data from a literature review, focus groups, and design workshops. The proposed framework consists of four components: a Data Quality Workflow, a List of Data Quality Challenges, a List of Data Quality Attributes, and Solution Candidates. Together, the components act as tools for data quality assessment and maintenance. The candidate framework and its components were validated in a focus group.
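To make the idea of "data quality attributes with continuous testing" concrete, here is a minimal Python sketch. Everything in it (the DataQualityAttribute class, the evaluate function, the completeness check and its threshold) is illustrative and invented for this example, not taken from the paper.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DataQualityAttribute:
    name: str                       # e.g. "completeness", "timeliness"
    check: Callable[[list], float]  # returns a score in [0, 1]
    threshold: float                # minimum acceptable score

def evaluate(records: list, attributes: list[DataQualityAttribute]) -> dict:
    """Run every attribute check and flag scores below their threshold."""
    report = {}
    for attr in attributes:
        score = attr.check(records)
        report[attr.name] = {"score": score, "ok": score >= attr.threshold}
    return report

# Example attribute: completeness = share of records with no missing fields.
completeness = DataQualityAttribute(
    name="completeness",
    check=lambda rs: sum(all(v is not None for v in r.values()) for r in rs) / len(rs),
    threshold=0.95,
)

print(evaluate([{"speed": 12.3, "lane": None}, {"speed": 11.8, "lane": 2}],
               [completeness]))  # completeness = 0.5 -> flagged as not ok
```

Running such checks continuously against incoming data is one way a workflow like the one the paper describes could surface quality violations early.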
Journal Article
Spatial data quality : from process to decisions
This book provides an up-to-date overview of research being done in the field of spatial data quality, which looks at understanding, measuring, describing, and communicating information about the imperfections of geographic data used by GIS and other mapping software. It presents results from a number of current research projects in this area, from the assessment of data accuracy to legal aspects relating to the quality of geographic information.--From publisher's description.
Data Quality Management in the Internet of Things
2021
Nowadays, IoT is used in more and more application areas, and the importance of IoT data quality is widely recognized by practitioners and researchers. Requirements for data and its quality vary across applications and organizations in different contexts. Many methodologies and frameworks include techniques for defining, assessing, and improving data quality. However, given the diversity of requirements, choosing the appropriate technique for an IoT system can be a challenge. This paper surveys data quality frameworks and methodologies for IoT data, as well as related international standards, comparing them in terms of data types, data quality definitions, dimensions and metrics, and the choice of assessment dimensions. The survey is intended to help narrow down the possible choices of IoT data quality management techniques.
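As an illustration of the kind of metrics such frameworks define, the Python sketch below computes two commonly cited dimensions, completeness and timeliness, for a small sensor stream. The formulas are one simple choice among many; the surveyed frameworks define these dimensions in varying ways, and none of the names below come from the paper.

```python
from datetime import datetime, timedelta

def completeness(readings, expected_count):
    """Fraction of expected readings that actually arrived."""
    return min(len(readings), expected_count) / expected_count

def timeliness(readings, max_age=timedelta(seconds=30), now=None):
    """Fraction of readings younger than max_age."""
    now = now or datetime.utcnow()
    fresh = sum(1 for r in readings if now - r["ts"] <= max_age)
    return fresh / len(readings) if readings else 0.0

now = datetime.utcnow()
readings = [{"ts": now - timedelta(seconds=s), "value": 21.5} for s in (5, 12, 90)]
print(completeness(readings, expected_count=6))  # 0.5: half the samples missing
print(timeliness(readings, now=now))             # ~0.67: one stale reading
```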
Journal Article
The creation, management, and use of data quality information for life cycle assessment
2018
Purpose
Despite growing access to data, questions of "best fit" data and the appropriate use of results in supporting decision making still plague the life cycle assessment (LCA) community. This discussion paper addresses revisions to assessing data quality captured in a new US Environmental Protection Agency guidance document, as well as additional recommendations on data quality creation, management, and use in LCA databases and studies.
Approach
Existing data quality systems and approaches in LCA were reviewed and tested. The evaluations resulted in a revision to a commonly used pedigree matrix, for which flow- and process-level data quality indicators are described, scoring criteria are clarified, and further guidance on interpretation is given.
Discussion
Increased training for practitioners on data quality application and its limits is recommended, along with a multi-faceted approach to data quality assessment that uses the pedigree method alongside uncertainty analysis in result interpretation. A method of data quality score aggregation is proposed, and recommendations are made for using data quality scores in existing data, enabling improved use of such scores when interpreting LCA results. Roles for data generators, data repositories, and data users in LCA data quality management are described. Guidance is provided on using data with quality scores from other systems alongside data scored with the new system. The new pedigree matrix and the recommended aggregation procedure can now be implemented in openLCA software.
Future work
Additional ways in which data quality assessment might be improved and expanded are described. Interoperability efforts in LCA data should focus on descriptors that enable users to score data quality, rather than on translating existing scores. Also needed are data quality indicators for additional dimensions of LCA data, and automation of data quality scoring through metadata extraction and comparison to goal and scope.
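The paper proposes a specific score-aggregation procedure (implemented in openLCA) that is not reproduced here. The Python sketch below shows only a generic, hypothetical flow-weighted averaging of pedigree-style scores (1 = best, 5 = worst) to make the idea of aggregation concrete; it is not the paper's method.

```python
def aggregate_pedigree(flows):
    """Aggregate per-flow pedigree scores, weighting by flow magnitude.

    flows: list of (magnitude, {indicator: score 1..5}) tuples.
    Returns {indicator: flow-weighted mean score}.
    """
    total = sum(mag for mag, _ in flows)
    agg = {}
    for mag, scores in flows:
        for indicator, score in scores.items():
            agg[indicator] = agg.get(indicator, 0.0) + score * mag / total
    return agg

flows = [
    (10.0, {"reliability": 2, "temporal": 1}),  # large, well-documented flow
    (1.0,  {"reliability": 5, "temporal": 4}),  # small, poorly documented flow
]
print(aggregate_pedigree(flows))
# {'reliability': ~2.27, 'temporal': ~1.27}: dominated by the large flow
```

Weighting by magnitude reflects the intuition that the quality of a dominant flow matters more to the result than that of a minor one; the paper's actual procedure may differ.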
Journal Article
Facilitating harmonized data quality assessments. A data quality framework for observational health research data collections with software implementations in R
by Struckmann, Stephan; Huebner, Marianne; Sauerbrei, Willi
in Blood pressure; Comorbidity; Data collection
2021
Background
No standards exist for the handling and reporting of data quality in health research. This work introduces a data quality framework for observational health research data collections with supporting software implementations to facilitate harmonized data quality assessments.
Methods
Developments were guided by the evaluation of an existing data quality framework and literature reviews. Functions for the computation of data quality indicators were written in R. The concept and implementations are illustrated based on data from the population-based Study of Health in Pomerania (SHIP).
Results
The data quality framework comprises 34 data quality indicators. These target four aspects of data quality: compliance with pre-specified structural and technical requirements (integrity); presence of data values (completeness); inadmissible or uncertain data values and contradictions (consistency); unexpected distributions and associations (accuracy). R functions calculate data quality metrics based on the provided study data and metadata, and R Markdown reports are generated. Guidance on the concept and tools is available through a dedicated website.
Conclusions
The presented data quality framework is the first of its kind for observational health research data collections that links a formal concept to implementations in R. The framework and tools facilitate harmonized data quality assessments in pursuit of transparent and reproducible research. Application scenarios include data quality monitoring while a study is carried out, as well as initial data analysis before substantive scientific analyses begin, but the developments are also relevant beyond research.
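The paper's implementations are in R; as a language-neutral illustration, the Python sketch below shows what one minimal indicator in each of the four dimensions could look like for a small record set. The variables, admissible limits, and thresholds are invented for the example and do not come from the framework's 34 indicators.

```python
records = [
    {"id": 1, "sbp": 124.0, "dbp": 81.0},
    {"id": 2, "sbp": None,  "dbp": 79.0},
    {"id": 3, "sbp": 310.0, "dbp": 95.0},  # inadmissible systolic value
]
metadata = {"sbp": {"min": 60, "max": 260}, "dbp": {"min": 30, "max": 150}}

# Integrity: every record has exactly the structurally required fields.
integrity = all(set(r) == {"id", "sbp", "dbp"} for r in records)

# Completeness: share of non-missing values per variable.
completeness = {v: sum(r[v] is not None for r in records) / len(records)
                for v in ("sbp", "dbp")}

# Consistency: all observed values inside the admissible metadata limits.
def admissible(v, x):
    return x is None or metadata[v]["min"] <= x <= metadata[v]["max"]
consistency = {v: all(admissible(v, r[v]) for r in records) for v in ("sbp", "dbp")}

# Accuracy (crude proxy): flag a variable whose mean is implausibly far
# from an expected value; the expected mean and tolerance are invented here.
vals = [r["sbp"] for r in records if r["sbp"] is not None]
accuracy_flag = abs(sum(vals) / len(vals) - 125) > 50

print(integrity, completeness, consistency, accuracy_flag)
```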
Journal Article
Power quality enhancement using Artificial Intelligence techniques
\"This text discusses sensitivity parametric analysis for the single tuned filter parameters and presents an optimization-based method for solving the allocation problem of the distributed generation units and capacitor banks in distribution systems. It also highlights the importance of artificial intelligence techniques such as water cycle algorithms in solving power quality problems such as over-voltage and harmonic distortion. Features: Presents a sensitivity parametric analysis for the single tuned filter parameters. Discusses optimization-based methods for solving the allocation problem of the distributed generation units and capacitor banks in distribution systems. Highlights the importance of artificial intelligence techniques (water cycle algorithm) for solving power quality problems such as over-voltage and harmonic distortion. Showcases a procedure for harmonic mitigation in active distribution systems using the single tuned harmonic filters. Helps in learning how to determine the optimal planning of the single tuned filters to mitigate the harmonic distortion in distorted systems. It will serve as an ideal reference text for graduate students and academic researchers in the fields of electrical engineering, electronics and communication engineering, Power systems planning and analysis\"-- Provided by publisher.
Big data quality framework: a holistic approach to continuous quality management
by Dssouli, Rachida; Serhani, Mohamed Adel; Bouhaddioui, Chafik
in Attributes; Big Data; Big data quality
2021
Big Data is an essential research area for governments, institutions, and private agencies to support their analytics decisions. Big Data concerns every aspect of data: how it is collected, processed, and analyzed to generate value-added, data-driven insights and decisions. Degradation in data quality may have unpredictable consequences, in which case confidence in the data and its source is lost. In the Big Data context, data characteristics such as volume, multiple heterogeneous data sources, and fast data generation increase the risk of quality degradation and require efficient mechanisms to check data worthiness. However, ensuring Big Data Quality (BDQ) is a costly and time-consuming process, since excessive computing resources are required. Maintaining quality throughout the Big Data lifecycle requires quality profiling and verification before processing decisions are made. A BDQ Management Framework is proposed that enhances pre-processing activities while strengthening data control. The proposed framework uses a new concept called the Big Data Quality Profile, which captures the quality outline, requirements, attributes, dimensions, scores, and rules. Using the framework's Big Data profiling and sampling components, a fast and efficient data quality estimation is initiated before and after an intermediate pre-processing phase. The framework's exploratory profiling component plays the initial role in quality profiling: it uses a set of predefined quality metrics to evaluate important data quality dimensions and generates quality rules by applying various pre-processing activities and their related functions. These rules mainly target the Data Quality Profile and yield quality scores for the selected quality attributes. The framework implementation and dataflow management across the various quality management processes are discussed, and the paper concludes with ongoing work on framework evaluation and deployment to support quality evaluation decisions.
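To make the "quality profile plus sampling" idea concrete, the Python sketch below estimates a quality score on a random sample before committing to full pre-processing. Class and field names are illustrative and not taken from the paper, and completeness stands in for whatever dimensions a real profile would track.

```python
import random
from dataclasses import dataclass, field

@dataclass
class QualityProfile:
    requirements: dict                  # e.g. {"completeness": 0.95}
    scores: dict = field(default_factory=dict)

    def meets_requirements(self):
        return all(self.scores.get(dim, 0.0) >= need
                   for dim, need in self.requirements.items())

def estimate_scores(dataset, sample_size=1_000, seed=0):
    """Estimate quality dimensions on a random sample, not the full data."""
    random.seed(seed)
    sample = random.sample(dataset, min(sample_size, len(dataset)))
    complete = sum(all(v is not None for v in row.values()) for row in sample)
    return {"completeness": complete / len(sample)}

data = [{"x": i if i % 10 else None} for i in range(100_000)]  # every 10th missing
profile = QualityProfile(requirements={"completeness": 0.95})
profile.scores = estimate_scores(data)
print(profile.scores, profile.meets_requirements())
# ~0.90 completeness on the sample -> fails the 0.95 requirement
```

Sampling keeps the estimation cheap relative to scanning the full dataset, which is the cost concern the abstract raises.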
Journal Article