13,038 result(s) for "Big data Data processing."
Big Data: Big Data Analysis, Issues and Challenges and Technologies
Data generated at an exponential rate has resulted in big data. This data has many characteristics and consists of structured, unstructured, and semi-structured formats. It contains valuable information for different types of stakeholders based on their needs; however, it is not possible to meet these needs with traditional tools and techniques. Here, big data technologies play a crucial role in handling, storing, and processing this tremendous amount of data in real time. Big data analytics is used to extract meaningful information or patterns from voluminous data. It can be further divided into four types: text analytics, audio analytics, video analytics, and social media analytics. Big data analytics, when followed by the big data analysis process, plays a significant role in generating meaningful information from big data. The big data analysis process consists of data acquisition, data storage, data management, data analytics, and finally data visualization. However, it is not simple and brings many challenges that need to be resolved. This paper presents the issues and challenges related to big data, prominent characteristics of big data, big data analytics, the big data analysis process, and technologies used for processing massive data.
Marine big data
\"As the volume of marine big data has increased dramatically, one of the main concerns is how to fully exploit the value of such data in the development of marine economy and marine science and technology. The book covers data acquisition, feature classification, processing and applications of marine big data in evaluation and decision-making, using case studies such as storm surge and marine oil spill disaster\"-- Provided by publisher.
Performance Analysis of IoT-Based Sensor, Big Data Processing, and Machine Learning Model for Real-Time Monitoring System in Automotive Manufacturing
With the increase in the amount of data captured during the manufacturing process, monitoring systems are becoming important factors in decision making for management. Current technologies such as Internet of Things (IoT)-based sensors can be considered a solution for providing efficient monitoring of the manufacturing process. In this study, a real-time monitoring system that utilizes IoT-based sensors, big data processing, and a hybrid prediction model is proposed. Firstly, an IoT-based sensor that collects temperature, humidity, accelerometer, and gyroscope data was developed. IoT-generated sensor data from the manufacturing process is real-time, high-volume, and unstructured. The proposed big data processing platform utilizes Apache Kafka as a message queue, Apache Storm as a real-time processing engine, and MongoDB to store the sensor data from the manufacturing process. Secondly, for the proposed hybrid prediction model, Density-Based Spatial Clustering of Applications with Noise (DBSCAN)-based outlier detection and Random Forest classification were used to remove outlier sensor data and to provide fault detection during the manufacturing process, respectively. The proposed model was evaluated and tested on an automotive manufacturing assembly line in Korea. The results showed that IoT-based sensors and the proposed big data processing system are sufficiently efficient to monitor the manufacturing process. Furthermore, the proposed hybrid prediction model has better fault prediction accuracy than other models given the sensor data as input. The proposed system is expected to support management by improving decision-making and will help prevent unexpected losses caused by faults during the manufacturing process.
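The hybrid prediction step this abstract describes (DBSCAN to discard outlier sensor readings, then Random Forest for fault classification) can be sketched with scikit-learn on synthetic data. The feature layout, DBSCAN parameters, and fault distribution below are illustrative assumptions, not the paper's actual configuration:

```python
# Minimal sketch of DBSCAN outlier removal followed by Random Forest fault
# classification, on synthetic sensor-like data (four features standing in
# for temperature, humidity, accelerometer, gyroscope). All parameters here
# are assumptions for illustration.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated readings: 500 normal samples and 50 shifted "fault" samples.
normal = rng.normal(0.0, 1.0, size=(500, 4))
faulty = rng.normal(3.0, 1.0, size=(50, 4))
X = np.vstack([normal, faulty])
y = np.array([0] * 500 + [1] * 50)  # 0 = normal, 1 = fault

# Step 1: DBSCAN labels sparse points as noise (-1); drop them as outliers.
labels = DBSCAN(eps=1.5, min_samples=5).fit_predict(X)
mask = labels != -1
X_clean, y_clean = X[mask], y[mask]

# Step 2: Random Forest learns to separate normal from fault samples.
X_tr, X_te, y_tr, y_te = train_test_split(
    X_clean, y_clean, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"kept {mask.sum()} of {len(X)} samples, test accuracy {accuracy:.2f}")
```

In a streaming deployment such as the one the paper describes, the same two steps would run inside the processing layer (e.g. a Storm topology consuming from Kafka) rather than on a static array.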
A review on COVID-19 forecasting models
The novel coronavirus (COVID-19) has spread to more than 200 countries worldwide, leading to more than 36 million confirmed cases as of October 10, 2020. As such, several machine learning models that can forecast the outbreak globally have been released. This work presents a review and brief analysis of the most important machine learning forecasting models against COVID-19. The study consists of two parts. In the first, a detailed scientometric analysis, an influential tool for bibliometric studies, is performed on COVID-19 data from the Scopus and Web of Science databases, addressing keywords and subject areas. In the second, the classification of machine learning forecasting models, criteria evaluation, and a comparison of solution approaches are discussed. The conclusion and discussion are provided as the final sections of this study.
DolphinNext: a distributed data processing platform for high throughput genomics
Background: The emergence of high throughput technologies that produce vast amounts of genomic data, such as next-generation sequencing (NGS), is transforming biological research. The dramatic increase in the volume of data, along with the variety and continuous change of data processing tools, algorithms, and databases, makes analysis the main bottleneck for scientific discovery. The processing of high throughput datasets typically involves many different computational programs, each of which performs a specific step in a pipeline. Given the wide range of applications and organizational infrastructures, there is a great need for highly parallel, flexible, portable, and reproducible data processing frameworks. Several platforms currently exist for the design and execution of complex pipelines. Unfortunately, current platforms lack the necessary combination of parallelism, portability, flexibility, and/or reproducibility that is required by the current research environment. To address these shortcomings, workflow frameworks that provide a platform to develop and share portable pipelines have recently arisen. We complement these new platforms by providing a graphical user interface to create, maintain, and execute complex pipelines. Such a platform simplifies robust and reproducible workflow creation for non-technical users as well as providing a robust platform to maintain pipelines for large organizations.
Results: To simplify development, maintenance, and execution of complex pipelines, we created DolphinNext. DolphinNext facilitates building and deployment of complex pipelines using a modular approach implemented in a graphical interface that relies on the powerful Nextflow workflow framework by providing:
1. A drag-and-drop user interface that visualizes pipelines and allows users to create pipelines without familiarity with the underlying programming languages.
2. Modules to execute and monitor pipelines in distributed computing environments such as high-performance clusters and/or the cloud.
3. Reproducible pipelines with version tracking and stand-alone versions that can be run independently.
4. Modular process design with process revisioning support to increase reusability and pipeline development efficiency.
5. Pipeline sharing with GitHub and automated testing.
6. Extensive reports with R Markdown and Shiny support for interactive data visualization and analysis.
Conclusion: DolphinNext is a flexible, intuitive, web-based data processing and analysis platform that enables creating, deploying, sharing, and executing complex Nextflow pipelines with extensive revisioning and interactive reporting to enhance reproducible results.
Data, now bigger and better!
\"Data is too big to be left to the data analysts! Here, Prickly Paradigm brings together five researchers whose work is deeply informed by anthropology, understood as more than a basket of ethnographic methods like participants observation and interviewing. The value of anthropology lies also in its conceptual frameworks, frameworks that are comparative as well as field-based. Kinship! Gifts! Everything old is new when the anthropological archive washes over 'big data'. Bringing together anthropology's classic debates and contemporary interventions, this book counters the future-oriented hype and speculation so characteristic of discussions regarding big data. By drawing as well on long experience in industry contexts, the contributors provide analytical provocations that can help reframe what may prove to be some of the most important shifts in technology and society in the first half of the twenty-first century\"--Back cover.
Towards design and implementation of Industry 4.0 for food manufacturing
Today’s factories are considered smart ecosystems in which humans, machines, and devices interact with each other for efficient manufacturing of products. Industry 4.0 is a suite of enabler technologies for such smart ecosystems that allow the transformation of industrial processes. When implemented, Industry 4.0 technologies have a huge impact on the efficiency, productivity, and profitability of businesses. The adoption and implementation of Industry 4.0, however, require overcoming a number of practical challenges, in most cases due to the lack of modernisation and automation in place with traditional manufacturers. This paper presents a first-of-its-kind case study of moving a traditional food manufacturer, still using machinery more than one hundred years old, a common occurrence for small- and medium-sized businesses, to adopt Industry 4.0 technologies. The paper reports the challenges we encountered during the transformation process and in the development stage. The paper also presents a smart production control system that we developed by utilising AI, machine learning, Internet of Things, big data analytics, cyber-physical systems, and cloud computing technologies. The system provides novel data collection, information extraction, and intelligent monitoring services, enabling improved efficiency and consistency as well as reduced operational cost. The platform has been developed in real-world settings offered by an Innovate UK-funded project and has been integrated into the company’s existing production facilities. In this way, the company has not been required to replace old machinery outright, but rather has adapted the existing machinery to an entirely new way of operating. The proposed approach and the lessons outlined can benefit similar food manufacturing industries and other SME industries.