Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
337 result(s) for "Datenmanagement"
Improving Data Quality in Crowdsourced Data for Indonesian Election Monitor: A Case Study in KawalPilpres
by
Gunawan, F
,
Ruldeviyani, Y
2020
ICT has made the democratic process more transparent and has enabled citizens to participate in elections. However, public trust is a mandatory requirement for a good democratic transition. Participation in monitoring the election process can be organised as a crowdsourcing effort to improve public trust in the election result; this article examines such an effort through a case study of KawalPilpres, which monitored the 2019 Indonesian presidential election. Because trust is the key success factor for a monitoring effort, data quality becomes a necessity. Data quality is assessed using Loshin's maturity assessment and analysed using Loshin's improvement strategies. Based on our assessments, the three top categories for improvement are governance, expectations, and policies of data quality management.
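As a rough illustration of the maturity-ranking idea behind this abstract (the category list, the numeric scores, and the 1-5 scale are assumptions for the example, not figures from the study), the sketch below rates data quality management categories and flags the lowest-scoring ones as improvement priorities:

```python
# Minimal sketch: rank data quality management categories by an assumed
# maturity score (1 = initial, 5 = optimised) and pick the weakest ones
# as improvement targets. Scores are illustrative placeholders.
scores = {
    "governance": 2,
    "expectations": 2,
    "policies": 3,
    "standards": 4,
    "technology": 4,
}

def improvement_priorities(scores, top_n=3):
    """Return the top_n lowest-maturity categories."""
    return sorted(scores, key=scores.get)[:top_n]

print(improvement_priorities(scores))  # ['governance', 'expectations', 'policies']
```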
Journal Article
The importance of interpretability and visualization in machine learning for applications in medicine and health care
by
Vellido, Alfredo
in
Machine Learning
,
Artificial Intelligence
,
Computational Biology/Bioinformatics
2020
In a short period of time, many areas of science have made a sharp transition towards data-dependent methods. In some cases, this process has been enabled by simultaneous advances in data acquisition and the development of networked system technologies. This new situation is particularly clear in the life sciences, where data overabundance has sparked a flurry of new methodologies for data management and analysis. This can be seen as a perfect scenario for the use of machine learning and computational intelligence techniques to address problems in which more traditional data analysis approaches might struggle. However, this scenario also poses some serious challenges. One of them is model interpretability and explainability, especially for complex nonlinear models. In some areas such as medicine and health care, not addressing such a challenge might seriously limit the chances of adoption, in real practice, of computer-based systems that rely on machine learning and computational intelligence methods for data analysis. In this paper, we reflect on recent investigations about the interpretability and explainability of machine learning methods and discuss their impact on medicine and health care. We pay specific attention to one of the ways in which interpretability and explainability in this context can be addressed, which is through data and model visualization. We argue that, beyond improving model interpretability as a goal in itself, we need to integrate medical experts in the design of data analysis interpretation strategies. Otherwise, machine learning is unlikely to become a part of routine clinical and health care practice.
Journal Article
A review of diagnostic and prognostic capabilities and best practices for manufacturing
by
Weiss, Brian A
,
Helu, Moneer
,
Vogl, Gregory W
in
Advanced manufacturing technologies
,
Best practice
,
Cost benefit analysis
2019
Prognostics and health management (PHM) technologies reduce time and costs for maintenance of products or processes through efficient and cost-effective diagnostic and prognostic activities. PHM systems use real-time and historical state information of subsystems and components to provide actionable information, enabling intelligent decision-making for improved performance, safety, reliability, and maintainability. However, PHM is still an emerging field, and much of the published work has been either too exploratory or too limited in scope. Future smart manufacturing systems will require PHM capabilities that overcome current challenges, while meeting future needs based on best practices, for implementation of diagnostics and prognostics. This paper reviews the challenges, needs, methods, and best practices for PHM within manufacturing systems, including PHM system development across numerous areas highlighted by diagnostics, prognostics, dependability analysis, data management, and business considerations. Based on current capabilities, PHM systems are shown to benefit from open-system architectures, cost-benefit analyses, method verification and validation, and standards.
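As a toy illustration of the diagnostic idea this abstract describes (the signal, baseline, and tolerance below are assumptions made for the example, not values from the review), a minimal health check might compare recent readings against a healthy baseline and flag degradation when the deviation grows:

```python
# Minimal sketch of a threshold-based diagnostic: compare recent sensor
# readings against a healthy baseline and flag degradation beyond a tolerance.
# Baseline, tolerance, and readings are illustrative placeholders.
from statistics import mean

HEALTHY_BASELINE = 0.82   # assumed nominal efficiency of a component
TOLERANCE = 0.05          # allowed deviation before raising an alert

def diagnose(recent_readings):
    """Return a simple health verdict from recent state information."""
    deviation = HEALTHY_BASELINE - mean(recent_readings)
    if deviation > TOLERANCE:
        return f"DEGRADED (deviation {deviation:.3f})"
    return f"HEALTHY (deviation {deviation:.3f})"

print(diagnose([0.81, 0.80, 0.79]))  # HEALTHY
print(diagnose([0.75, 0.74, 0.73]))  # DEGRADED
```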
Journal Article
Construction of English Network Teaching Platform Relying on Computer Big Data
2021
With the growth of data and information applications in recent years, a large number of Internet-based English teaching databases have been established, and the requirements for data management and collection have matured. Internet companies need to build large database systems on top of the collected data so that English teaching big data can be applied and queried effectively. However, because the quality and efficiency of current data collection are not high enough, collection is difficult and database construction becomes extremely challenging, so the personnel involved must find targeted data collection methods. At the same time, because access volumes for English teaching big data are extremely large, reaching millions of visits at peak times, the collection and curation of the data are put to a severe test. Reasonable data management and applications are therefore needed to build a data management system that supports the English teaching database in completing query and management tasks and in optimising database load balancing.
Journal Article
The Materials Data Facility: Data Services to Advance Materials Science Research
by
Chard, K.
,
Tuecke, S.
,
Blaiszik, B.
in
Access control
,
Chemistry/Food Science
,
Cloud computing
2016
With increasingly strict data management requirements from funding agencies and institutions, expanding focus on the challenges of research replicability, and growing data sizes and heterogeneity, new data needs are emerging in the materials community. The Materials Data Facility (MDF) operates two cloud-hosted services, data publication and data discovery, with features that promote open data sharing, self-service data publication and curation, and data reuse, layered with powerful data discovery tools. The data publication service simplifies the process of copying data to a secure storage location, assigning the data a citable persistent identifier, and recording custom (e.g., material-, technique-, or instrument-specific) and automatically extracted metadata in a registry, while the data discovery service will provide advanced search capabilities (e.g., faceting, free-text range querying, and full-text search) against the registered data and metadata. The MDF services empower individual researchers, research projects, and institutions to (I) publish research datasets, regardless of size, from local storage, institutional data stores, or cloud storage, without involvement of third-party publishers; (II) build, share, and enforce extensible domain-specific custom metadata schemas; (III) interact with published data and metadata via representational state transfer (REST) application program interfaces (APIs) to facilitate automation, analysis, and feedback; and (IV) access a data discovery model that allows researchers to search, interrogate, and eventually build on existing published data. We describe MDF's design, current status, and future plans.
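As a purely hypothetical sketch of the kind of REST interaction the abstract describes (the base URL, endpoint path, and query parameters are invented for illustration and are not the actual MDF API), a programmatic metadata search could look like this:

```python
# Hypothetical sketch of querying a data-discovery REST API for published
# datasets. The URL and parameters are placeholders, not the real MDF endpoints.
import requests

BASE_URL = "https://example.org/discovery/api"  # placeholder service

def search_datasets(query, material=None, limit=10):
    """Free-text search with an optional material facet."""
    params = {"q": query, "limit": limit}
    if material:
        params["material"] = material
    response = requests.get(f"{BASE_URL}/search", params=params, timeout=30)
    response.raise_for_status()
    return response.json()

# Example usage (hypothetical): find published datasets mentioning 'band gap' for TiO2.
# results = search_datasets("band gap", material="TiO2")
```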
Journal Article
GIS- Based Screening Model of Coastal City Karachi for Plantation of Biofuel Source
2020
Geospatial techniques support decision making, diversified data management, and critical analysis. Jatropha curcas is a biodiesel crop that tolerates saline water environments. This study maps suitable plantation sites for the biodiesel energy crop using meteorological parameters and satellite imagery from ASTER GDEM and Landsat 8. Thematic layers of soil adjacent to existing vegetation, topographical elevation, slope, land surface temperature, and humidity are created and analyzed together with soil types, bareness index, and stream orders. Site suitability for plantation is a function of these variables, which are found to be favorable in the study area. Jatropha curcas plantation in Karachi may contribute to local economic prosperity and help maintain a heat sink for the industrialized city.
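As a schematic illustration of the multi-layer suitability analysis described above (the layer names, weights, threshold, and random values are assumptions for the example, not the study's data), normalized thematic layers can be combined in a weighted overlay to give one suitability score per cell:

```python
# Minimal sketch of a weighted-overlay suitability model: each thematic layer
# is a normalized raster (0..1, higher = more suitable) and the layers are
# combined with illustrative weights into a single suitability score per cell.
import numpy as np

rng = np.random.default_rng(0)
shape = (4, 4)  # tiny toy grid standing in for real raster data

layers = {
    "slope": rng.random(shape),
    "land_surface_temperature": rng.random(shape),
    "soil_moisture": rng.random(shape),
}
weights = {"slope": 0.4, "land_surface_temperature": 0.3, "soil_moisture": 0.3}

suitability = sum(weights[name] * raster for name, raster in layers.items())
suitable_mask = suitability > 0.6  # illustrative threshold

print(np.round(suitability, 2))
print("suitable cells:", int(suitable_mask.sum()))
```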
Journal Article
From recursive to dynamic: An algorithm for dealing with a problem
2021
One of the problems in data science is data management, especially for big data, and it is indirectly related to algorithms in computer science. Pairing big data with an appropriate algorithm to produce good output therefore requires characterization and explanation. This paper addresses one such case by contrasting the recursive and dynamic programming principles and, through that comparison, offering choices as answers to the problem.
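As a standard illustration of the recursive-versus-dynamic contrast this abstract refers to (Fibonacci is a textbook example chosen here as an assumption, not necessarily the problem studied in the paper), the sketch below places a naive recursive solution next to a bottom-up dynamic programming version:

```python
# Naive recursion recomputes the same subproblems (exponential time), while
# the dynamic programming version solves each subproblem once (linear time).
def fib_recursive(n):
    """Plain recursion: re-solves the same subproblems many times."""
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_dynamic(n):
    """Bottom-up dynamic programming: each subproblem solved once."""
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(n - 1):
        prev, curr = curr, prev + curr
    return curr

assert fib_recursive(20) == fib_dynamic(20) == 6765
```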
Journal Article
High resolution mass spectrometry-based non-target screening can support regulatory environmental monitoring and chemicals management
by
Farmen, Eivind
,
Tornero, Victoria
,
Slobodnik, Jaroslav
in
Contaminants
,
Data management
,
Data retrieval
2019
Non-target screening (NTS), including suspect screening with high resolution mass spectrometry, has already shown its feasibility in detecting and identifying emerging contaminants, which subsequently triggered exposure-mitigating measures. NTS has a large potential for tasks such as effective evaluation of regulations for safe marketing of substances and products, prioritization of substances for monitoring programmes, and assessment of environmental quality. To achieve this, further development of NTS methodology is required, including: (i) harmonized protocols and quality requirements, (ii) infrastructures for efficient data management, data evaluation and data sharing, and (iii) sufficient resources and appropriately trained personnel in the research and regulatory communities in Europe. Recommendations for achieving these three requirements are outlined in the following discussion paper. In particular, in order to facilitate compound identification, it is recommended that the relevant information for interpretation of mass spectra, as well as the compounds' usage and production tonnages, should be made accessible to the scientific community (via open-access databases). For many purposes, NTS should be implemented in combination with effect-based methods to focus on toxic chemicals.
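As a simplified illustration of the suspect-screening step mentioned above (the suspect list and tolerance are placeholder values, and real workflows rely on much more evidence, such as retention time and fragmentation spectra), the sketch below matches measured masses against a suspect list within a ppm tolerance:

```python
# Toy suspect screening: match measured monoisotopic masses against a suspect
# list within a mass tolerance expressed in parts per million (ppm).
# Masses and names below are illustrative placeholders.
SUSPECTS = {
    "carbamazepine": 236.0950,
    "diclofenac": 295.0167,
}

def match_suspects(measured_masses, tolerance_ppm=5.0):
    """Return (measured_mass, suspect_name) pairs within the ppm tolerance."""
    hits = []
    for mz in measured_masses:
        for name, exact in SUSPECTS.items():
            if abs(mz - exact) / exact * 1e6 <= tolerance_ppm:
                hits.append((mz, name))
    return hits

print(match_suspects([236.0955, 300.1234]))  # [(236.0955, 'carbamazepine')]
```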
Journal Article
Research on Object-oriented Management Accounting Dynamic Budget Management based on Big Data
2020
The advent of the big data era means that much work will be eliminated or left behind if it does not follow the trend of reform. This article begins with a simple metaphor to illustrate the importance of management accounting in the enterprise, then points out four problems that management accounting currently faces in enterprise development, puts forward the many advantages of applying big data to management accounting, and illustrates them with examples, so that the study unites theory and practice; the discussion is conducted from the perspective of big data analysis.
Journal Article