Search Results

Filters
  • Discipline
  • Is Peer Reviewed
  • Reading Level
  • Content Type
  • Year (From / To)
  • More Filters: Item Type, Is Full-Text Available, Subject, Country Of Publication, Publisher, Source, Language, Place of Publication, Contributors, Location
186 result(s) for "Bioacoustics Research."
Measuring factors affecting honey bee
Soybean (Glycine max (L.) Merr.) is an important agricultural crop around the world, and previous studies suggest that honey bees (Apis mellifera Linnaeus) can be a component for optimizing soybean production through pollination. Determining when bees are present in soybean fields is critical for assessing pollination activity and identifying periods when bees are absent so that bee-toxic pesticides may be applied. There are currently several methods for detecting pollinator activity, but these existing methods have substantial limitations, including the bias of pan trapping against large bees and the limited duration of observation possible using manual techniques. This study aimed to develop a new method for detecting honey bees in soybean fields using bioacoustics monitoring. Microphones were placed in soybean fields to record the audible wingbeats of foraging bees. Foraging activity was identified using the wingbeat frequency of honey bees (234 ± 14 Hz) through a combination of algorithmic and manual approaches. A total of 243 honey bees were detected over 10 days of recording in 4 soybean fields. Bee activity was significantly greater in blooming fields than in non-blooming fields. Temperature had no significant effect on bee activity, but bee activity differed significantly between soybean varieties, suggesting that soybean attractiveness to honey bees is heavily dependent on varietal characteristics. Refinement of bioacoustics methods, particularly through the incorporation of machine learning, could provide a practical tool for measuring the activity of honey bees and other flying insects in soybeans as well as other crops and ecosystems.
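
The detection step described above (flagging recording windows whose dominant low-frequency peak falls inside the reported 234 ± 14 Hz wingbeat band) can be sketched as follows. This is a minimal NumPy/SciPy illustration, not the study's actual pipeline; the file name, 1 s window length, and 100–600 Hz search band are assumptions.

```python
# Minimal sketch of wingbeat-band detection, not the study's pipeline.
# Assumptions: a WAV input, 1 s analysis windows, and a 100-600 Hz search band.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

TARGET_HZ = 234.0     # reported mean honey bee wingbeat frequency
HALF_BAND_HZ = 14.0   # reported spread, used here as a band half-width

def candidate_bee_windows(path, win_seconds=1.0):
    """Return start times (s) of windows whose dominant low-frequency peak
    falls inside the honey bee wingbeat band."""
    rate, audio = wavfile.read(path)
    if audio.ndim > 1:                 # mix multi-channel audio down to mono
        audio = audio.mean(axis=1)
    freqs, times, sxx = spectrogram(audio, fs=rate, nperseg=int(rate * win_seconds))

    band = (freqs >= 100.0) & (freqs <= 600.0)   # plausible insect-flight band
    hits = []
    for i, t in enumerate(times):
        peak = freqs[band][np.argmax(sxx[band, i])]
        if abs(peak - TARGET_HZ) <= HALF_BAND_HZ:
            hits.append(float(t))
    return hits
```

As in the study, windows flagged this way would still need manual review before being counted as bee detections.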
Validation prediction
Automated recognition is increasingly used to extract species detections from audio recordings; however, the time required to manually review each detection can be prohibitive. We developed a flexible protocol called “validation prediction” that uses machine learning to predict whether recognizer detections are true or false positives and can be applied to any recognizer type, ecological application, or analytical approach. Validation prediction uses a predictable relationship between recognizer score and the energy of an acoustic signal but can also incorporate any other ecological or spectral predictors (e.g., time of day, dominant frequency) that will help separate true from false-positive recognizer detections. First, we documented the relationship between recognizer score and the energy of an acoustic signal for two different recognizer algorithm types (hidden Markov models and convolutional neural networks). Next, we demonstrated our protocol using a case study of two species, the Common Nighthawk (Chordeiles minor) and Ovenbird (Seiurus aurocapilla). We reduced the number of detections that required validation by 75.7% and 42.9%, respectively, while retaining at least 98% of the true-positive detections. Validation prediction substantially improves the efficiency of using automated recognition on acoustic data sets. Our method can be of use to wildlife monitoring and research programs and will facilitate using automated recognition to mine bioacoustic data sets.
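
The core of the protocol, as described, is a classifier trained on a manually validated subset that predicts whether each remaining detection is a true positive, with a probability cutoff chosen to retain roughly 98% of known true positives. The sketch below illustrates that idea with a logistic regression on recognizer score and signal energy; the column names and scikit-learn usage are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch of "validation prediction": train a classifier on a manually
# validated subset of recognizer detections, then only send low-confidence detections
# for human review. Column names ('score', 'energy', 'is_true') are assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

FEATURES = ["score", "energy"]   # recognizer score and signal energy, per the abstract

def fit_validation_model(validated: pd.DataFrame) -> LogisticRegression:
    """validated: detections a human has already reviewed, with an 'is_true' 0/1 column."""
    return LogisticRegression().fit(validated[FEATURES].to_numpy(),
                                    validated["is_true"].to_numpy())

def choose_threshold(model, validated: pd.DataFrame, recall_target=0.98) -> float:
    """Highest probability cutoff that still retains the target share of known
    true positives in the validated subset (the ~98% retention in the abstract)."""
    p = model.predict_proba(validated[FEATURES].to_numpy())[:, 1]
    true_p = np.sort(p[validated["is_true"].to_numpy() == 1])
    cut_index = int(np.floor((1.0 - recall_target) * len(true_p)))
    return float(true_p[cut_index])

def triage(model, threshold, unvalidated: pd.DataFrame):
    """Split remaining detections into auto-accepted vs. needs-manual-review."""
    p = model.predict_proba(unvalidated[FEATURES].to_numpy())[:, 1]
    return unvalidated[p >= threshold], unvalidated[p < threshold]
```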
Many but not all deep neural network audio models capture brain responses and exhibit correspondence between model stages and brain regions
Models that predict brain responses to stimuli provide one measure of understanding of a sensory system and have many potential applications in science and engineering. Deep artificial neural networks have emerged as the leading such predictive models of the visual system but are less explored in audition. Prior work provided examples of audio-trained neural networks that produced good predictions of auditory cortical fMRI responses and exhibited correspondence between model stages and brain regions, but left it unclear whether these results generalize to other neural network models and, thus, how to further improve models in this domain. We evaluated model-brain correspondence for publicly available audio neural network models along with in-house models trained on 4 different tasks. Most tested models outpredicted standard spectrotemporal filter-bank models of auditory cortex and exhibited systematic model-brain correspondence: Middle stages best predicted primary auditory cortex, while deep stages best predicted non-primary cortex. However, some state-of-the-art models produced substantially worse brain predictions. Models trained to recognize speech in background noise produced better brain predictions than models trained to recognize speech in quiet, potentially because hearing in noise imposes constraints on biological auditory representations. The training task influenced the prediction quality for specific cortical tuning properties, with best overall predictions resulting from models trained on multiple tasks. The results generally support the promise of deep neural networks as models of audition, though they also indicate that current models do not explain auditory cortical responses in their entirety.
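
The model-brain comparison summarized here generally follows a standard recipe: regress measured voxel responses on a model stage's stimulus activations, score the predictions on held-out stimuli, and compare that score across stages and brain regions. The sketch below illustrates that generic recipe with cross-validated ridge regression; it is not the study's exact analysis pipeline.

```python
# Generic sketch of stage-to-voxel predictivity: regress voxel responses on a model
# stage's stimulus activations and score held-out correlation. Illustrates the common
# recipe only; it is not the study's specific analysis.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def stage_predictivity(activations, voxels, alpha=1.0, n_splits=5):
    """activations: (n_stimuli, n_units) features from one model stage.
    voxels: (n_stimuli, n_voxels) measured responses to the same stimuli.
    Returns the median held-out Pearson correlation across voxels."""
    corrs = []
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(activations):
        fit = Ridge(alpha=alpha).fit(activations[train], voxels[train])
        pred = fit.predict(activations[test])
        for v in range(voxels.shape[1]):
            corrs.append(np.corrcoef(pred[:, v], voxels[test, v])[0, 1])
    return float(np.median(corrs))
```

Comparing this score stage by stage against different cortical regions is what reveals the middle-stage/primary versus deep-stage/non-primary pattern described above.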
High levels of cryptic species diversity uncovered in Amazonian frogs
One of the greatest challenges for biodiversity conservation is the poor understanding of species diversity. Molecular methods have dramatically improved our ability to uncover cryptic species, but the magnitude of cryptic diversity remains unknown, particularly in diverse tropical regions such as the Amazon Basin. Uncovering cryptic diversity in amphibians is particularly pressing because amphibians are going extinct globally at an alarming rate. Here, we use an integrative analysis of two independent Amazonian frog clades, Engystomops toadlets and Hypsiboas treefrogs, to test whether species richness is underestimated and, if so, by how much. We sampled intensively in six countries with a focus in Ecuador (Engystomops: 252 individuals from 36 localities; Hypsiboas: 208 individuals from 65 localities) and combined mitochondrial DNA, nuclear DNA, morphological, and bioacoustic data to detect cryptic species. We found that in both clades, species richness was severely underestimated, with more undescribed species than described species. In Engystomops, the two currently recognized species are actually five to seven species (a 150–250% increase in species richness); in Hypsiboas, two recognized species represent six to nine species (a 200–350% increase). Our results suggest that Amazonian frog biodiversity is much more severely underestimated than previously thought.
Future directions for soundscape ecology: The importance of ornithological contributions
Building upon the rich legacies of bioacoustics and animal communication, soundscape ecology represents a new perspective through which ecologists can use the acoustic properties of ecosystems to understand the complex interactions of organisms, geophysical dynamics, and human activities. In this paper, we focus on the potential benefits of a soundscape approach for enhancing ornithological research and of ornithological perspectives for advancing the nascent field of soundscape ecology. We first summarize 4 major grounding principles of soundscape ecology in relation to avian ecology, evolution, and behavior. We then propose 3 research objectives that we envision as future directions for soundscape ecology: development of (1) soundscape metrics and interpretation, (2) understanding of soundscape drivers, and (3) soundscape-based disturbance indicators. Ornithological contributions can help advance the field of soundscape ecology to obtain these research objectives across various spatial, temporal, and organizational scales. Detailed ornithological knowledge can aid in the improvement of soundscape databases, interpretation of soundscape metrics, and validation of soundscape theories. Such contributions should also invite input from other taxonomic-group specialists, further enriching soundscape ecology. Reciprocally, soundscape approaches can enrich ornithology by offering an acoustic-based theoretical framework grounded in a broad ecological context, hosting soundscape collections from diverse ecosystems, and advancing acoustic methodologies. This paper is intended to stimulate further discussion and collaboration between ornithologists, soundscape ecologists, and any researchers studying sound in an ecological context in order to enhance research in these important domains of ecology.
The Acoustic Index User's Guide: A practical manual for defining, generating and understanding current and future acoustic indices
Ecoacoustics, the study of environmental sound, is a rapidly growing discipline offering ecological insights at scales ranging from individual organisms to whole ecosystems. Substantial methodological developments over the last 15 years have streamlined extraction of ecological information from audio recordings. One widely used set of methods are acoustic indices, which offer numerical summaries of the spectral, temporal and amplitude patterns in audio recordings. Currently, the specifics of each index's background, methodology and the soundscape patterns they are designed to summarise, are spread across multiple sources. Critically, details of index calculation are sometimes scarce, making it challenging for users to understand how index values are generated. Discrepancies in understanding can lead to misuse of acoustic indices or reporting of spurious results. This hinders ecological inference, replicability and discourages adoption of these tools for conservation and ecosystem monitoring, where they might otherwise provide useful insight. Here we present the Acoustic Index User's Guide—an interactive RShiny web app that defines and deconstructs eight of the most commonly used acoustic indices to facilitate consistent application across the discipline. We break the acoustic indices calculations down into easy‐to‐follow steps to better enable practical application and critical interpretation of acoustic indices. We demonstrate typical soundscape patterns using a suite of 91 example audio recordings: 66 real‐world soundscapes from terrestrial, aquatic and subterranean systems around the world, and 25 synthetic files demonstrating archetypal soundscape patterns. Our interpretation figures signpost specific soundscape patterns likely to be reflected in acoustic indices' values. This RShiny app is a living resource; additional acoustic indices will be added in the future through collaboration with authors of pre‐existing and new indices. The app also serves as a best‐practice template for the information required when publishing new acoustic indices, so that authors can facilitate the widest possible understanding and uptake of their indices. In turn, improved understanding of acoustic indices will aid effective hypothesis generation, application and interpretation in ecological research, ecosystem monitoring and conservation management.
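
As an example of the kind of calculation the guide deconstructs, the sketch below computes a deliberately simplified version of one widely used index, the Acoustic Complexity Index: for each frequency bin of a spectrogram, the summed absolute amplitude change between adjacent time frames divided by that bin's total amplitude, summed over bins. It omits the temporal sub-windows and band limits used in dedicated ecoacoustics packages, so it illustrates the structure of the calculation rather than serving as a reference implementation.

```python
# Deliberately simplified Acoustic Complexity Index: per frequency bin, summed
# |amplitude change| between adjacent frames divided by the bin's total amplitude,
# then summed over bins. No sub-windowing or band limits, so values will differ
# from dedicated ecoacoustics packages.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def simple_aci(path, nperseg=512):
    rate, audio = wavfile.read(path)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)                      # mix to mono
    _, _, sxx = spectrogram(audio, fs=rate, nperseg=nperseg, mode="magnitude")
    diffs = np.abs(np.diff(sxx, axis=1)).sum(axis=1)    # temporal change, per frequency bin
    totals = sxx.sum(axis=1) + 1e-12                    # guard against silent bins
    return float((diffs / totals).sum())
```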
Integrating AI models into ecological research workflows: The case of terrestrial bioacoustics
Data collected by autonomous sensors, including camera traps and acoustic recorders, have enormous potential to generate new scientific insights in ecology and related fields. Modern machine learning and AI classification methods are critical to analysing these often immense data streams. Accordingly, considerable effort has been dedicated to building AI models that accurately detect and classify species and events of interest in these data. These AI models, however, form only one part of a larger research framework that is needed to answer ecological questions using sensor data. We argue that a deep understanding of this research context is required to develop and apply appropriate AI models that can support scientific advances in ecology and evolution. In this manuscript, we contextualize the use of AI methods in autonomous biodiversity surveys, focusing on terrestrial bioacoustics as a case study, by discussing six sequential areas that together form a research project: hardware, field deployments, data management, detection and classification using AI and related models, statistical analysis and ecological insight. For each area, we briefly highlight several ways in which decisions made in that area can constrain, support or interact with the development and application of AI models. We conclude with several suggestions for better development and integration of AI models into ecological research, including the need for additional research at the interface of AI models and statistical analysis, the question of achieving human‐level performance with AI models and the sources of future methodological advances in AI for ecology.
BirdNET can be as good as experts for acoustic bird monitoring in a European city
BirdNET has become a leading tool for recognising bird species in audio recordings. However, its applicability in ecological research has been questioned over the sometimes large number of species falsely identified. Using species-specific confidence thresholds has been identified as a powerful approach to solving this issue. However, determining these thresholds is time and resource-consuming. While optimising the parameter setting of the algorithm could be an alternative strategy, the effect of parameter settings on the algorithm’s performance is not well understood. Here, we compared the species identification of BirdNET against expert identification using an acoustic dataset from a single site in Munich, Germany. The performance of BirdNET was evaluated using three performance metrics: precision, recall, and F1-score, using 24 combinations of the parameters: week, sensitivity, and overlap at four temporal aggregations (pooling of data across time intervals). We found that BirdNET performance varied widely depending on parameter settings (0.46–0.84). When given more data (higher temporal aggregation) and with tuned parameters, BirdNET came close to matching the expert identification (F1 score = 0.84). While BirdNET missed five species of the 23 species identified by the experts, our confirmation test revealed that BirdNET also found one species missed by the experts. To understand how each parameter affects F1 score, we trained linear mixed effects models. Our models showed that the confidence threshold had the strongest effect on the F1 score (p < 0.001) and significantly interacted with temporal aggregation, sensitivity, and overlap. Our results showed that while there are still limitations, using appropriate parameter settings, aggregating results over longer periods and undertaking some basic validation, BirdNET can yield results comparable to experts without the need for time-consuming estimation of species-specific thresholds.
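
The evaluation described above reduces to comparing the set of species BirdNET reports within an aggregation window (after applying a confidence threshold) against the set the experts identified, then computing precision, recall, and F1. A minimal sketch of that comparison follows; the set-based representation and variable names are assumptions for illustration, not the authors' code.

```python
# Sketch of the evaluation: compare the species BirdNET reports in an aggregation
# window (after thresholding) against the experts' species list for the same window.
# Plain Python sets are an illustrative simplification of the per-interval comparison.

def precision_recall_f1(birdnet_species: set, expert_species: set):
    true_pos = len(birdnet_species & expert_species)
    false_pos = len(birdnet_species - expert_species)
    false_neg = len(expert_species - birdnet_species)
    precision = true_pos / (true_pos + false_pos) if (true_pos + false_pos) else 0.0
    recall = true_pos / (true_pos + false_neg) if (true_pos + false_neg) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```

Repeating this for each combination of week, sensitivity, overlap, confidence threshold, and temporal aggregation yields the kind of F1 comparison (0.46–0.84) reported above.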
What is soundscape ecology? An introduction and overview of an emerging new science
We summarize the foundational elements of a new area of research we call soundscape ecology. The study of sound in landscapes is based on an understanding of how sound, from various sources—biological, geophysical and anthropogenic—can be used to understand coupled natural-human dynamics across different spatial and temporal scales. Useful terms, such as soundscapes, biophony, geophony and anthrophony, are introduced and defined. The intellectual foundations of soundscape ecology are described—those of spatial ecology, bioacoustics, urban environmental acoustics and acoustic ecology. We argue that soundscape ecology differs from the humanities driven focus of acoustic ecology although soundscape ecology will likely need its rich vocabulary and conservation ethic. An integrative framework is presented that describes how climate, land transformations, biodiversity patterns, timing of life history events and human activities create the dynamic soundscape. We also summarize what is currently known about factors that control temporal soundscape dynamics and variability across spatial gradients. Several different phonic interactions (e.g., how anthrophony affects biophony) are also described. Soundscape ecology tools that will be needed are also discussed along with the several ways in which soundscapes need to be managed. This summary article helps frame the other more application-oriented papers that appear in this special issue.