2,150 result(s) for "COMPUTERS / Database Administration"
Data science strategy for dummies
All the answers to your data science questions. Over half of all businesses are using data science to generate insights and value from big data. How are they doing it? Data Science Strategy For Dummies answers all your questions about how to build a data science capability from scratch, starting with the "what" and the "why" of data science and covering what it takes to lead and nurture a top-notch team of data scientists. With this book, you'll learn how to incorporate data science as a strategic function into any business, large or small. Find solutions to your real-life challenges as you uncover the stories and value hidden within data.
  • Learn exactly what data science is and why it's important
  • Adopt a data-driven mindset as the foundation to success
  • Understand the processes and common roadblocks behind data science
  • Keep your data science program focused on generating business value
  • Nurture a top-quality data science team
In non-technical language, Data Science Strategy For Dummies outlines new perspectives and strategies to effectively lead analytics and data science functions to create real value.
Principles of big data : preparing, sharing, and analyzing complex information
Principles of Big Data helps readers avoid the common mistakes that endanger all Big Data projects. By stressing simple, fundamental concepts, this book teaches readers how to organize large volumes of complex data, and how to achieve data permanence when the content of the data is constantly changing.
Moving objects databases
Moving Objects Databases is the first uniform treatment of moving objects databases, the technology that supports GPS and RFID. It focuses on the modeling and design of data from moving objects — such as people, animals, vehicles, hurricanes, forest fires, oil spills, armies, or other objects — as well as the storage, retrieval, and querying of that very voluminous data. It includes homework assignments at the end of each chapter, exercises throughout the text that students can complete as they read, and a solutions manual in the back of the book. This book is intended for graduate or advanced undergraduate students. It is also recommended for computer scientists and database systems engineers and programmers in government, industry, and academia, as well as professionals from other disciplines, e.g., geography, geology, soil science, hydrology, urban and regional planning, mobile computing, bioterrorism, and homeland security.
  • Demonstrates through many practical examples and illustrations how new concepts and techniques are used to integrate time and space in database applications.
  • Provides exercises and solutions in each chapter to enable the reader to explore recent research results in practice.
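The core query problem the book describes — retrieving an object's position at an arbitrary time from voluminous sampled data — can be illustrated with a minimal sketch. The representation (a trajectory as timestamped samples, answered by linear interpolation) is a toy stand-in for the book's data model; the function name and data below are hypothetical:

```python
from bisect import bisect_right

def position_at(trajectory, t):
    """Interpolate an object's (x, y) position at time t.

    trajectory: list of (time, x, y) samples sorted by time.
    """
    times = [p[0] for p in trajectory]
    if t <= times[0]:
        return trajectory[0][1:]      # before first sample: clamp
    if t >= times[-1]:
        return trajectory[-1][1:]     # after last sample: clamp
    i = bisect_right(times, t)        # first sample strictly after t
    (t0, x0, y0), (t1, x1, y1) = trajectory[i - 1], trajectory[i]
    f = (t - t0) / (t1 - t0)          # fraction of the way between samples
    return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))

# a vehicle sampled at t=0, 10, 20
traj = [(0, 0.0, 0.0), (10, 10.0, 0.0), (20, 10.0, 10.0)]
print(position_at(traj, 5))   # (5.0, 0.0)
print(position_at(traj, 15))  # (10.0, 5.0)
```

A real moving-objects database adds spatio-temporal indexing so such queries avoid scanning whole trajectories.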
Redash V5 Quick Start Guide
Data exploration and visualization is vital to Business Intelligence, the backbone of almost every enterprise or organization. Redash is a querying and visualization tool developed to simplify how marketing and business development departments are exposed to data. If you want to learn to create interactive dashboards with Redash, explore.
Blockchain Technology and Applications
Blockchain is an emerging technology that can radically improve transaction security in banking, supply chain, and other transaction networks. It's estimated that Blockchain will generate $3.1 trillion in new business value by 2030. Essentially, it provides the basis for a dynamic distributed ledger that can be applied to save time when recording transactions between parties, remove costs associated with intermediaries, and reduce risks of fraud and tampering. This book explores the fundamentals and applications of Blockchain technology. Readers will learn about the decentralized peer-to-peer network, distributed ledger, and the trust model that defines Blockchain technology. They will also be introduced to the basic components of Blockchain (transaction, block, block header, and the chain), its operations (hashing, verification, validation, and consensus model), underlying algorithms, and essentials of trust (hard fork and soft fork). Private and public Blockchain networks similar to Bitcoin and Ethereum will be introduced, as will the concepts of Smart Contracts, Proof of Work and Proof of Stake, and cryptocurrencies, including Facebook's Libra. Also, the book will address the relationship between Blockchain technology, Internet of Things (IoT), Artificial Intelligence (AI), Cybersecurity, Digital Transformation and Quantum Computing. Readers will understand the inner workings and applications of this disruptive technology and its potential impact on all aspects of the business world and society. A look at the future trends of Blockchain Technology will be presented in the book.
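The block/block-header/chain components and the hashing-and-verification operations mentioned above can be sketched in a few lines. This is a deliberately minimal illustration, not Bitcoin's or Ethereum's actual block format; all field names here are made up:

```python
import hashlib
import json

def sha(obj):
    """Canonical SHA-256 digest of any JSON-serializable object."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_hash):
    """Build a block whose header commits to its transactions and its predecessor."""
    header = {"prev_hash": prev_hash, "tx_root": sha(transactions)}
    return {"header": header, "transactions": transactions, "hash": sha(header)}

def verify_chain(chain):
    """Recompute every hash and check each block links to its predecessor."""
    for i, blk in enumerate(chain):
        if blk["header"]["tx_root"] != sha(blk["transactions"]):
            return False                      # transactions were tampered with
        if blk["hash"] != sha(blk["header"]):
            return False                      # header was tampered with
        if i > 0 and blk["header"]["prev_hash"] != chain[i - 1]["hash"]:
            return False                      # chain link broken
    return True

genesis = make_block(["coinbase"], "0" * 64)
chain = [genesis, make_block(["alice->bob:5"], genesis["hash"])]
print(verify_chain(chain))                    # True
chain[1]["transactions"].append("mallory->mallory:999")
print(verify_chain(chain))                    # False: tx_root no longer matches
```

Because each header commits to the previous block's hash, altering any historical transaction invalidates every subsequent block — the tamper-evidence property the book describes.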
Activity learning : discovering, recognizing, and predicting human behavior from sensor data
Defines the notion of an activity model learned from sensor data and presents key algorithms that form the core of the field. Activity Learning: Discovering, Recognizing and Predicting Human Behavior from Sensor Data provides an in-depth look at computational approaches to activity learning from sensor data. Each chapter is constructed to provide practical, step-by-step information on how to analyze and process sensor data. The book discusses techniques for activity learning that include the following:
  • Discovering activity patterns that emerge from behavior-based sensor data
  • Recognizing occurrences of predefined or discovered activities in real time
  • Predicting the occurrences of activities
The techniques covered can be applied to numerous fields, including security, telecommunications, healthcare, smart grids, and home automation. An online companion site enables readers to experiment with the techniques described in the book, and to adapt or enhance the techniques for their own use. With an emphasis on computational approaches, Activity Learning: Discovering, Recognizing, and Predicting Human Behavior from Sensor Data provides graduate students and researchers with an algorithmic perspective to activity learning.
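Pipelines like the one described (discover patterns in sensor data, then recognize them in real time) commonly begin with sliding-window feature extraction. A minimal illustrative sketch — the data is made up and the threshold is hand-picked rather than learned, unlike the algorithms the book presents:

```python
from statistics import mean, stdev

def windows(stream, size, step):
    """Yield fixed-size sliding windows over a sensor stream."""
    for start in range(0, len(stream) - size + 1, step):
        yield stream[start:start + size]

def features(window):
    """Summarize one window: mean and standard deviation of the readings."""
    return (mean(window), stdev(window))

# toy accelerometer-magnitude stream: low variance = "still", high = "walking"
stream = [1.0, 1.01, 0.99, 1.0, 1.02, 2.5, 0.2, 2.8, 0.1, 2.6]
labels = []
for w in windows(stream, size=5, step=5):
    m, s = features(w)
    labels.append("walking" if s > 0.5 else "still")
print(labels)  # ['still', 'walking']
```

In practice the per-window features feed a trained classifier instead of a fixed threshold, but the windowing step is the same.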
Benchmarking of eight recurrent neural network variants for breath phase and adventitious sound detection on a self-developed open-access lung sound database—HF_Lung_V1
A reliable, remote, and continuous real-time respiratory sound monitor with automated respiratory sound analysis ability is urgently required in many clinical scenarios—such as in monitoring disease progression of coronavirus disease 2019—to replace conventional auscultation with a handheld stethoscope. However, a robust computerized respiratory sound analysis algorithm for breath phase detection and adventitious sound detection at the recording level has not yet been validated in practical applications. In this study, we developed a lung sound database (HF_Lung_V1) comprising 9,765 audio files of lung sounds (duration of 15 s each), 34,095 inhalation labels, 18,349 exhalation labels, 13,883 continuous adventitious sound (CAS) labels (comprising 8,457 wheeze labels, 686 stridor labels, and 4,740 rhonchus labels), and 15,606 discontinuous adventitious sound labels (all crackles). We conducted benchmark tests using long short-term memory (LSTM), gated recurrent unit (GRU), bidirectional LSTM (BiLSTM), bidirectional GRU (BiGRU), convolutional neural network (CNN)-LSTM, CNN-GRU, CNN-BiLSTM, and CNN-BiGRU models for breath phase detection and adventitious sound detection. We also conducted a performance comparison between the LSTM-based and GRU-based models, between unidirectional and bidirectional models, and between models with and without a CNN. The results revealed that these models exhibited adequate performance in lung sound analysis. The GRU-based models outperformed, in terms of F1 scores and areas under the receiver operating characteristic curves, the LSTM-based models in most of the defined tasks. Furthermore, all bidirectional models outperformed their unidirectional counterparts. Finally, the addition of a CNN improved the accuracy of lung sound analysis, especially in the CAS detection tasks.
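The gated-recurrent-unit update at the heart of the GRU-based models above can be shown with a deliberately tiny, scalar-valued sketch. The weights are hand-picked for illustration (not the paper's trained models), and the bidirectional pass simply pairs forward and backward states per timestep, as BiGRU layers do:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, p):
    """One GRU update for scalar input/state; p holds the weights and biases."""
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])          # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])          # reset gate
    h_tilde = math.tanh(p["wh"] * x + p["uh"] * (r * h) + p["bh"])
    return (1 - z) * h + z * h_tilde                          # blend old and new

def run(seq, p, reverse=False):
    """Run the GRU over a sequence, optionally right-to-left."""
    h, states = 0.0, []
    for x in (reversed(seq) if reverse else seq):
        h = gru_step(x, h, p)
        states.append(h)
    return states[::-1] if reverse else states                # realign to time order

# hypothetical weights; a bidirectional pass pairs forward/backward states
p = dict(wz=0.5, uz=0.3, bz=0.0, wr=0.5, ur=0.3, br=0.0, wh=1.0, uh=0.5, bh=0.0)
seq = [0.1, 0.9, 0.2]
bi = list(zip(run(seq, p), run(seq, p, reverse=True)))
print(len(bi), len(bi[0]))  # 3 2: one (forward, backward) state pair per timestep
```

The backward states give each timestep access to future context, which is one plausible reason the bidirectional models in the study outperformed their unidirectional counterparts.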
Data Sharing in the Post-Genomic World: The Experience of the International Cancer Genome Consortium (ICGC) Data Access Compliance Office (DACO)
ICGC and the Development of Controlled Access Policies
Controlled access mechanisms may be viewed as the product of dual imperatives: 1) the legal and ethical requirements of regulators and research ethics committees, as well as research funders and study participants, to protect the confidentiality of data from re-identification and misuse by third parties; and 2) pressure, largely from within the science community, to protect data-producing investigators from acts of free riding by other members of the community (e.g., by ensuring they are properly acknowledged in publications and that no parasitic patents are deposited on the data by subsequent data users). Early models of databases having a two-tiered open/controlled access system included the database of Genotypes and Phenotypes (dbGaP) at the US National Institutes of Health (http://www.ncbi.nlm.nih.gov/gap), the Wellcome Trust Case Control Consortium (WTCCC) (http://www.wtccc.org.uk/), the Malaria Genomic Epidemiology Network (MalariaGEN) (http://www.malariagen.net/), and the European Genome-phenome Archive (EGA) (https://www.ebi.ac.uk/ega/).
Adaptive and Scalable Database Management with Machine Learning Integration: A PostgreSQL Case Study
The increasing complexity of managing modern database systems, particularly in terms of optimizing query performance for large datasets, presents significant challenges that traditional methods often fail to address. This paper proposes a comprehensive framework for integrating advanced machine learning (ML) models within the architecture of a database management system (DBMS), with a specific focus on PostgreSQL. Our approach leverages a combination of supervised and unsupervised learning techniques to predict query execution times, optimize performance, and dynamically manage workloads. Unlike existing solutions that address specific optimization tasks in isolation, our framework provides a unified platform that supports real-time model inference and automatic database configuration adjustments based on workload patterns. A key contribution of our work is the integration of ML capabilities directly into the DBMS engine, enabling seamless interaction between the ML models and the query optimization process. This integration allows for the automatic retraining of models and dynamic workload management, resulting in substantial improvements in both query response times and overall system throughput. Our evaluations using the Transaction Processing Performance Council Decision Support (TPC-DS) benchmark dataset at scale factors of 100 GB, 1 TB, and 10 TB demonstrate a reduction of up to 42% in query execution times and a 74% improvement in throughput compared with traditional approaches. Additionally, we address challenges such as potential conflicts in tuning recommendations and the performance overhead associated with ML integration, providing insights for future research directions. 
This study is motivated by the need for autonomous tuning mechanisms to manage large-scale, heterogeneous workloads while answering key research questions, such as the following: (1) How can machine learning models be integrated into a DBMS to improve query optimization and workload management? (2) What performance improvements can be achieved through dynamic configuration tuning based on real-time workload patterns? Our results suggest that the proposed framework significantly reduces the need for manual database administration while effectively adapting to evolving workloads, offering a robust solution for modern large-scale data environments.
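The supervised prediction of query execution times described above can be approximated in spirit by fitting observed runtime against an optimizer statistic such as estimated rows scanned. The sketch below uses ordinary least squares on made-up numbers; it is an illustrative toy, not the authors' framework:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y ≈ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# hypothetical training log: (estimated rows scanned, observed runtime in ms)
rows    = [1_000, 5_000, 10_000, 50_000]
runtime = [12.0, 55.0, 110.0, 540.0]

a, b = fit_linear(rows, runtime)
predict = lambda r: a * r + b           # predicted runtime for a new query
print(f"{predict(20_000):.0f} ms")      # ≈ 217 ms on this toy data
```

A production system like the one the paper proposes would use far richer features (plan shape, join count, buffer-cache state) and retrain continuously as the workload drifts, but the prediction step reduces to the same fit-then-infer loop.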