Search Results

Filters:
  • Discipline
  • Is Peer Reviewed
  • Series Title
  • Reading Level
  • Year (From / To)
  • More Filters: Content Type, Item Type, Is Full-Text Available, Subject, Publisher, Source, Donor, Language, Place of Publication, Contributors, Location
746,878 results for "DATA BASES"
The data revolution: a critical analysis of big data, open data and data infrastructures
\"Our world is becoming ever more data-driven, transforming how business is conducted, governance enacted, and knowledge produced. Yet, the nature of data and the scope and implications of the changes taking place are not always clear. The Data Revolution is a must read for anyone interested in why data have become so important in the contemporary era.\" -- Back cover.
Identifiers for the 21st century: How to design, provision, and reuse persistent identifiers to maximize utility and impact of life science data
In many disciplines, data are highly decentralized across thousands of online databases (repositories, registries, and knowledgebases). Wringing value from such databases depends on the discipline of data science and on the humble bricks and mortar that make integration possible; identifiers are a core component of this integration infrastructure. Drawing on our experience and on work by other groups, we outline 10 lessons we have learned about the identifier qualities and best practices that facilitate large-scale data integration. Specifically, we propose actions that identifier practitioners (database providers) should take in the design, provision and reuse of identifiers. We also outline the important considerations for those referencing identifiers in various circumstances, including by authors and data generators. While the importance and relevance of each lesson will vary by context, there is a need for increased awareness about how to avoid and manage common identifier problems, especially those related to persistence and web-accessibility/resolvability. We focus strongly on web-based identifiers in the life sciences; however, the principles are broadly relevant to other disciplines.
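The paper's lessons concern identifier design rather than any one API, but a small, concrete illustration of the resolution infrastructure it discusses is easy to give. The sketch below (Python, using the public identifiers.org resolver; the GO term is an arbitrary example, not one from the paper) follows a compact identifier to its current landing page.

```python
import requests

def resolve_curie(curie: str) -> str:
    """Follow the identifiers.org resolver to the current landing URL
    for a compact identifier such as 'GO:0006915'."""
    resp = requests.get(f"https://identifiers.org/{curie}",
                        allow_redirects=True, timeout=10)
    resp.raise_for_status()
    return resp.url  # the provider page the identifier currently resolves to

if __name__ == "__main__":
    # A Gene Ontology term: the CURIE stays stable even if the
    # provider's URL scheme changes, which is the paper's point.
    print(resolve_curie("GO:0006915"))
```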
The foundations of operational resilience: assessing the ability to operate in an anti-access/area denial (A2/AD) environment: the analytical framework, lexicon, and characteristics of the Operational Resilience Analysis Model (ORAM)
\"Although much work has been done considering the issue of airbase resilience especially in the Asia-Pacific region these studies have typically focused on a single aspect of the problem (such as hardening or runway repair) but have not considered the issues in total. There is a need to view the issue more holistically, especially given the strategic implications of U.S. power projection in anti-access/area denial (A2/AD) environments. The authors of this report developed a modeling framework and lexicon for conducting a detailed analysis of future Air Force operational resilience in an A2/AD environment; the analysis itself focused on different regions (Pacific, Southwest Asia, etc.) to bound the problem and identify a robust set of strategic assumptions and planning requirements. The study was set within the context of efforts to rebalance the joint force in the Asia-Pacific region. This report describes the Operational Resilience Analysis Model (ORAM) built for this effort, which was used to evaluate the impact of different courses of action from an operational standpoint. The authors explain the ORAM model, discuss the inputs that go into modeling Blue (friendly) and Red (enemy) capabilities, and illustrate the model using a simple notional case. They conclude with some suggestions for follow-on work to improve the functionality of ORAM and to address data uncertainties in the model\"--Publisher's website.
Data Literacy in Practice
Accelerate your journey to smarter decision making by mastering the fundamentals of data literacy and developing the mindset to work confidently with data.

Key Features:
• Get a solid grasp of data literacy fundamentals to support your next steps in your career
• Learn how to work with data and extract meaningful insights to take the right actions
• Apply your knowledge to real-world business intelligence projects

Book Description: Data is more than a mere commodity in our digital world. It is the ebb and flow of our modern existence. Individuals, teams, and enterprises working with data can unlock a new realm of possibilities. And the resultant agility, growth, and inevitable success have one origin: data literacy. This comprehensive guide is written by two data literacy pioneers, each with a thorough footprint in the commercial data and analytics world and lecturing at top universities in the US and the Netherlands. Complete with best practices, practical models, and real-world examples, Data Literacy in Practice will help you start making your data work for you by building your understanding of data literacy basics and accelerating your journey to independently uncovering insights. You'll learn the four-pillar model that underpins all data and analytics and explore concepts such as measuring data quality, setting up a pragmatic data management environment, choosing the right graphs for your readers, and questioning your insights. By the end of the book, you'll be equipped with a combination of skills and mindset, as well as with tools and frameworks, that will allow you to find insights and meaning within your data for data-informed decision making.

What you will learn:
• Start your data literacy journey with simple and actionable steps
• Apply the four-pillar model for organizations to transform data into insights
• Discover which skills you need to work confidently with data
• Visualize data and create compelling visual data stories
• Measure, improve, and leverage your data to meet organizational goals
• Master the process of drawing insights, asking critical questions, and actioning your insights
• Discover the right steps to take when you analyze insights

Who this book is for: This book is for data analysts, data professionals, and data teams starting or wanting to accelerate their data literacy journey. If you're looking to develop the skills and mindset you need to work independently with data, as well as a solid knowledge base of the tools and frameworks, you'll find this book useful.
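The book's four-pillar model is not reproduced in this blurb, so the sketch below illustrates just one listed topic, measuring data quality, with a simple per-column completeness score in pandas. The table and threshold are invented for the example and are not from the book.

```python
import pandas as pd

def completeness_report(df: pd.DataFrame) -> pd.Series:
    """Fraction of non-missing values per column: one basic
    data-quality measure among the many such a book covers."""
    return df.notna().mean().sort_values()

if __name__ == "__main__":
    # Hypothetical customer table with gaps
    df = pd.DataFrame({
        "customer_id": [1, 2, 3, 4],
        "email": ["a@x.com", None, "c@x.com", None],
        "signup_date": ["2024-01-05", "2024-02-11", None, "2024-03-02"],
    })
    report = completeness_report(df)
    print(report)
    print("Columns below 75% complete:", list(report[report < 0.75].index))
```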
Research applications of primary biodiversity databases in the digital age
Our world is in the midst of unprecedented change: climate shifts and sustained, widespread habitat degradation have led to dramatic declines in biodiversity rivaling historical extinction events. At the same time, new approaches to publishing and integrating previously disconnected data resources promise to help provide the evidence needed for more efficient and effective conservation and management. Stakeholders have invested considerable resources to contribute to online databases of species occurrences. However, estimates suggest that only 10% of biocollections are available in digital form. The biocollections community must therefore continue to promote digitization efforts, which in part requires demonstrating compelling applications of the data. Our overarching goal is therefore to determine trends in use of mobilized species occurrence data since 2010, as online systems have grown and now provide over one billion records. To do this, we characterized 501 papers that use openly accessible biodiversity databases. Our standardized tagging protocol was based on key topics of interest, including: database(s) used, taxa addressed, general uses of data, other data types linked to species occurrence data, and data quality issues addressed. We found that the most common uses of online biodiversity databases have been to estimate species distribution and richness, to outline data compilation and publication, and to assist in developing species checklists or describing new species. Only 69% of papers in our dataset addressed one or more aspects of data quality, which is low considering common errors and biases known to exist in opportunistic datasets. Globally, we find that biodiversity databases are still in the initial stages of data compilation. Novel and integrative applications are restricted to certain taxonomic groups and regions with higher numbers of quality records. Continued data digitization, publication, enhancement, and quality control efforts are necessary to make biodiversity science more efficient and relevant in our fast-changing environment.
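As a minimal sketch of the kind of occurrence-data retrieval the surveyed papers perform, the example below queries the public GBIF occurrence search API (GBIF being one of the major aggregators behind the billion-plus records the abstract mentions). The species name is an arbitrary example; the study's own tagging protocol is not reproduced here.

```python
import requests

GBIF_SEARCH = "https://api.gbif.org/v1/occurrence/search"

def fetch_occurrences(species: str, limit: int = 5) -> list[dict]:
    """Return a few occurrence records for a scientific name from GBIF."""
    resp = requests.get(GBIF_SEARCH,
                        params={"scientificName": species, "limit": limit},
                        timeout=30)
    resp.raise_for_status()
    return resp.json().get("results", [])

if __name__ == "__main__":
    for rec in fetch_occurrences("Danaus plexippus"):
        # Opportunistic records often lack coordinates: one of the
        # data-quality issues the study tags in published analyses.
        print(rec.get("decimalLatitude"), rec.get("decimalLongitude"),
              rec.get("year"), rec.get("basisOfRecord"))
```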
THINGS: A database of 1,854 object concepts and more than 26,000 naturalistic object images
In recent years, the use of a large number of object concepts and naturalistic object images has been growing strongly in cognitive neuroscience research. Classical databases of object concepts are based mostly on a manually curated set of concepts. Further, databases of naturalistic object images typically consist of single images of objects cropped from their background, or a large number of naturalistic images of varying quality, requiring elaborate manual image curation. Here we provide a set of 1,854 diverse object concepts sampled systematically from concrete picturable and nameable nouns in the American English language. Using these object concepts, we conducted a large-scale web image search to compile a database of 26,107 high-quality naturalistic images of those objects, with 12 or more object images per concept and all images cropped to square size. Using crowdsourcing, we provide higher-level category membership for the 27 most common categories and validate them by relating them to representations in a semantic embedding derived from large text corpora. Finally, by feeding images through a deep convolutional neural network, we demonstrate that they exhibit high selectivity for different object concepts, while at the same time preserving variability of different object images within each concept. Together, the THINGS database provides a rich resource of object concepts and object images and offers a tool for both systematic and large-scale naturalistic research in the fields of psychology, neuroscience, and computer science.
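The validation step described last in the abstract, passing images through a deep convolutional neural network and checking concept selectivity, can be sketched generically. The network, file names, and similarity measure below are illustrative choices, not the authors' pipeline.

```python
import torch
from torchvision import models
from PIL import Image

# Pretrained backbone; drop the classification head to get features.
weights = models.ResNet18_Weights.DEFAULT
backbone = torch.nn.Sequential(*list(models.resnet18(weights=weights).children())[:-1])
backbone.eval()
preprocess = weights.transforms()  # the matching resize/normalize pipeline

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    """512-d feature vector for one image."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return backbone(x).flatten()

if __name__ == "__main__":
    # Hypothetical paths: two images of one concept, one of another.
    a1, a2, b1 = (embed(p) for p in ["aardvark_01.jpg", "aardvark_02.jpg", "abacus_01.jpg"])
    cos = torch.nn.functional.cosine_similarity
    # Selectivity in the abstract's sense: within-concept similarity
    # should exceed between-concept similarity.
    print("within :", cos(a1, a2, dim=0).item())
    print("between:", cos(a1, b1, dim=0).item())
```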
The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English
The RAVDESS is a validated multimodal database of emotional speech and song. The database is gender balanced, consisting of 24 professional actors vocalizing lexically-matched statements in a neutral North American accent. Speech includes calm, happy, sad, angry, fearful, surprise, and disgust expressions, and song contains calm, happy, sad, angry, and fearful emotions. Each expression is produced at two levels of emotional intensity, with an additional neutral expression. All conditions are available in face-and-voice, face-only, and voice-only formats. The set of 7356 recordings were each rated 10 times on emotional validity, intensity, and genuineness. Ratings were provided by 247 individuals who were characteristic of untrained research participants from North America. A further set of 72 participants provided test-retest data. High levels of emotional validity and test-retest intrarater reliability were reported. Corrected accuracy and composite "goodness" measures are presented to assist researchers in the selection of stimuli. All recordings are made freely available under a Creative Commons license and can be downloaded at https://doi.org/10.5281/zenodo.1188976.
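The abstract does not spell out the file naming scheme, but RAVDESS distributions document a seven-field numeric filename convention; the parser below assumes that convention (verify against the Zenodo record before relying on it).

```python
# Decode a RAVDESS-style filename such as "03-01-06-01-02-01-12.wav".
# Field order and codes follow the dataset's published documentation;
# treat them as an assumption and check against the Zenodo record.

EMOTIONS = {"01": "neutral", "02": "calm", "03": "happy", "04": "sad",
            "05": "angry", "06": "fearful", "07": "disgust", "08": "surprised"}

def parse_ravdess(filename: str) -> dict:
    stem = filename.rsplit(".", 1)[0]
    modality, channel, emotion, intensity, statement, repetition, actor = stem.split("-")
    return {
        "vocal_channel": {"01": "speech", "02": "song"}[channel],
        "emotion": EMOTIONS[emotion],
        "intensity": {"01": "normal", "02": "strong"}[intensity],
        "actor": int(actor),
        "actor_sex": "male" if int(actor) % 2 else "female",  # odd = male
    }

if __name__ == "__main__":
    print(parse_ravdess("03-01-06-01-02-01-12.wav"))
    # -> fearful speech, normal intensity, actor 12 (female)
```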
High Performance PostgreSQL for Rails: Reliable, Scalable, Maintainable Database Applications
Build faster, more reliable Rails apps by taking the best advanced PostgreSQL and Active Record capabilities and using them to solve your application scale and growth challenges. Gain the skills needed to comfortably work with multi-terabyte databases and with complex Active Record, SQL, and specialized indexes. Develop your skills with PostgreSQL on your laptop, then take them into production, while keeping everything in sync. Make slow queries fast, perform any schema or data migration without errors, and use scaling techniques like read/write splitting, partitioning, and sharding to meet demanding workload requirements, from Internet-scale consumer apps to enterprise SaaS.

Deepen your firsthand knowledge of high-scale PostgreSQL databases and Ruby on Rails applications with dozens of practical and hands-on exercises. Unlock the mysteries surrounding complex Active Record. Make any schema or data migration change confidently, without downtime. Grow your experience with modern and exclusive PostgreSQL features like SQL Merge, Returning, and Exclusion constraints. Put advanced capabilities like Full Text Search and Publish Subscribe mechanisms built into PostgreSQL to work in your Rails apps. Improve the quality of the data in your database, using the advanced and extensible system of types and constraints to reduce and eliminate application bugs. Tackle complex topics like how to improve query performance using specialized indexes. Discover how to effectively use built-in database functions and write your own, administer replication, and make the most of partitioning and foreign data wrappers. Use more than 40 well-supported open source tools to extend and enhance PostgreSQL and Ruby on Rails. Gain invaluable insights into database administration by conducting advanced optimizations, including high-impact database maintenance, all while solving real-world operational challenges. Take your new skills into production today and then take your PostgreSQL and Rails applications to a whole new level of reliability and performance.

What You Need:
• A computer running macOS, Linux, or Windows with WSL2
• PostgreSQL version 16, installed by package manager, compiled, or running with Docker
• An Internet connection
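As one small illustration of the book's "specialized indexes" theme, the sketch below creates a PostgreSQL partial index and asks the planner whether it is used. The table, columns, and connection string are hypothetical, and the book itself works in Ruby/Active Record rather than the Python shown here.

```python
import psycopg2

# Hypothetical connection string and schema: an `orders` table where
# only a small fraction of rows are still 'pending'.
conn = psycopg2.connect("dbname=shop_development")
conn.autocommit = True

with conn.cursor() as cur:
    # A partial index covers only the rows a hot query actually touches,
    # so it stays small and cheap to maintain.
    cur.execute("""
        CREATE INDEX IF NOT EXISTS idx_orders_pending_created_at
        ON orders (created_at)
        WHERE status = 'pending'
    """)
    cur.execute("""
        EXPLAIN SELECT * FROM orders
        WHERE status = 'pending' AND created_at > now() - interval '1 day'
    """)
    for (line,) in cur.fetchall():
        print(line)  # expect an Index Scan using the partial index

conn.close()
```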
Big data, data mining and machine learning
With big data analytics come big insights into profitability. Big data is big business. But having the data and the computational power to process it isn't nearly enough to produce meaningful results. Big Data, Data Mining, and Machine Learning: Value Creation for Business Leaders and Practitioners is a complete resource for technology and marketing executives looking to cut through the hype and produce real results that hit the bottom line. Providing an engaging, thorough overview of the current state of big data analytics and the growing trend toward high performance computing architectures, the book is a detail-driven look into how big data analytics can be leveraged to foster positive change and drive efficiency. With continued exponential growth in data and ever more competitive markets, businesses must adapt quickly to gain every competitive advantage available. Big data analytics can serve as the linchpin for initiatives that drive business, but only if the underlying technology and analysis are fully understood and appreciated by engaged stakeholders. This book provides a view into the topic that executives, managers, and practitioners require, and includes:
• A complete overview of big data and its notable characteristics
• Details on high performance computing architectures for analytics, massively parallel processing (MPP), and in-memory databases
• Comprehensive coverage of data mining, text analytics, and machine learning algorithms
• A discussion of explanatory and predictive modeling, and how they can be applied to decision-making processes
Big Data, Data Mining, and Machine Learning provides technology and marketing executives with the complete resource that has been notably absent from the veritable libraries of published books on the topic. Take control of your organization's big data analytics to produce real results with a resource that is comprehensive in scope and light on hyperbole.
Machine learning reduced workload with minimal risk of missing studies: development and evaluation of a randomized controlled trial classifier for Cochrane Reviews
This study developed, calibrated, and evaluated a machine learning classifier designed to reduce study identification workload in Cochrane for producing systematic reviews. A machine learning classifier for retrieving randomized controlled trials (RCTs) was developed (the "Cochrane RCT Classifier"), with the algorithm trained using a data set of title–abstract records from Embase, manually labeled by the Cochrane Crowd. The classifier was then calibrated using a further data set of similar records manually labeled by the Clinical Hedges team, aiming for 99% recall. Finally, the recall of the calibrated classifier was evaluated using records of RCTs included in Cochrane Reviews that had abstracts of sufficient length to allow machine classification. The Cochrane RCT Classifier was trained using 280,620 records (20,454 of which reported RCTs). A classification threshold was set using 49,025 calibration records (1,587 of which reported RCTs), and our bootstrap validation found the classifier had recall of 0.99 (95% confidence interval 0.98–0.99) and precision of 0.08 (95% confidence interval 0.06–0.12) in this data set. The final, calibrated RCT classifier correctly retrieved 43,783 (99.5%) of 44,007 RCTs included in Cochrane Reviews but missed 224 (0.5%). Older records were more likely to be missed than those more recently published. The Cochrane RCT Classifier can reduce manual study identification workload for Cochrane Reviews, with a very low and acceptable risk of missing eligible RCTs. This classifier now forms part of the Evidence Pipeline, an integrated workflow deployed within Cochrane to help improve the efficiency of the study identification processes that support systematic review production.

Highlights:
• Systematic review processes need to become more efficient.
• Machine learning is sufficiently mature for real-world use.
• A machine learning classifier was built using data from Cochrane Crowd.
• It was calibrated to achieve very high recall.
• It is now live and in use in Cochrane review production systems.
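The classifier itself is not published in this abstract, but its calibration step, choosing a score threshold that keeps recall at roughly 99% on labeled data, is a generic procedure. The sketch below demonstrates it on synthetic scores; it is not the Cochrane team's code.

```python
import numpy as np

def threshold_for_recall(scores, labels, target_recall=0.99):
    """Largest score threshold whose recall on the labeled calibration
    set still meets the target (as in the abstract's 99% goal)."""
    pos = np.sort(scores[labels == 1])
    # Leaving the lowest-scoring (1 - target) fraction of positives
    # below the cutoff loses at most that share of true RCTs.
    k = int(np.floor((1 - target_recall) * len(pos)))
    return pos[k]  # classify as RCT when score >= this value

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic calibration set: rare positives with higher scores.
    labels = (rng.random(50_000) < 0.03).astype(int)
    scores = rng.normal(loc=labels * 2.0, scale=1.0)
    t = threshold_for_recall(scores, labels)
    flagged = scores >= t
    recall = flagged[labels == 1].mean()
    workload = flagged.mean()
    print(f"threshold={t:.3f} recall={recall:.3f} screen fraction={workload:.3f}")
```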