3,486 results for "Bird vocalization"
Identification, Analysis and Characterization of Base Units of Bird Vocal Communication: The White Spectacled Bulbul (Pycnonotus xanthopygos) as a Case Study
Animal vocal communication is a broad and multi-disciplinary field of research. Studying various aspects of communication can provide key elements for understanding animal behavior, evolution, and cognition. Given the large amount of acoustic data accumulated from automated recorders, for which manual annotation and analysis is impractical, there is a growing need to develop algorithms and automatic methods for analyzing and identifying animal sounds. In this study we developed an automatic detection and analysis system based on audio signal processing algorithms and deep learning that is capable of processing and analyzing large volumes of data without human bias. We selected the White Spectacled Bulbul (Pycnonotus xanthopygos) as our bird model because it has a complex vocal communication system with a large repertoire which is used by both sexes, year-round. It is a common, widespread passerine in Israel, which is relatively easy to locate and record in a broad range of habitats. Like many passerines, the bulbul's vocal communication consists of two primary hierarchies of utterances, syllables and words. To extract each of these units’ characteristics, the fundamental frequency contour was modeled using a low degree Legendre polynomial, enabling it to capture the different patterns of variation from different vocalizations, so that each pattern could be effectively expressed using very few coefficients. In addition, a mel-spectrogram was computed for each unit, and several features were extracted both in the time-domain (e.g. zero-crossing rate and energy) and frequency-domain (e.g. spectral centroid and spectral flatness). We applied both linear and non-linear dimensionality reduction algorithms on feature vectors and validated the findings that were obtained manually, namely by listening and examining the spectrograms visually. 
Using these algorithms, we show that the bulbul has a complex vocabulary of more than 30 words, that multiple syllables are combined into different words, and that a particular syllable can appear in several words. Using our system, researchers will be able to analyze hundreds of hours of audio recordings, obtain objective evaluations of repertoires, and identify and distinguish between different vocal units, thus gaining a broad perspective on bird vocal communication.
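The unit-level features described in the abstract can be sketched in plain NumPy. This is an illustrative reconstruction, not the authors' code: the helper names (`legendre_fit`, `zero_crossing_rate`, `spectral_centroid`) are hypothetical, and the toy contour stands in for a real F0 track extracted from recordings.

```python
import numpy as np

def legendre_fit(f0_contour, degree=3):
    """Fit a low-degree Legendre polynomial to a fundamental-frequency
    contour resampled onto [-1, 1]; the few coefficients summarize
    the contour's shape compactly."""
    x = np.linspace(-1.0, 1.0, len(f0_contour))
    return np.polynomial.legendre.legfit(x, f0_contour, degree)

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ (time-domain)."""
    return np.mean(np.abs(np.diff(np.signbit(frame).astype(int))))

def spectral_centroid(frame, sr):
    """Magnitude-weighted mean frequency of the frame's spectrum
    (frequency-domain)."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return np.sum(freqs * mag) / np.sum(mag)

# Toy example: a rising-then-falling F0 contour compresses to 4 coefficients.
contour = 2000 + 500 * np.sin(np.linspace(0, np.pi, 100))
coeffs = legendre_fit(contour, degree=3)
```

A degree-3 fit yields only four coefficients per unit, which is what makes the representation cheap enough to cluster and compare across thousands of syllables.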
Avian Song over Time: Variability and Stability
This review examines the dynamics of bird song over time, analyzing the rate of change in individual and population repertoires and the factors affecting that rate. The available data indicate very long periods of persistence of vocal patterns (song types) in songbird populations. The rate of change in population and individual repertoires is higher in species with an open-ended song-imprinting period than in species with a fixed period. The repertoire of song types in large populations inhabiting vast, continuous habitats is more stable than in small, isolated populations occupying structurally fragmented habitats. The most common vocal patterns are the most conserved from year to year, while rare variants often disappear from the population repertoire over time. Abnormal climatic phenomena that cause significant changes in the age composition of populations contribute to rapid changes in dialects. Cases of rapid synchronous changes in the vocal repertoires of individuals in local populations, as well as in populations separated by great distances, are considered in detail. The causes of these changes require further research; the most likely candidates are an exchange of vocal models at wintering grounds or the simultaneous arrival of a large number of migrants in the study populations, which, in species with an open learning period, may lead local individuals to borrow new vocal models and change their repertoires.
Bird Species Identification Using Spectrogram Based on Multi-Channel Fusion of DCNNs
Deep convolutional neural networks (DCNNs) have achieved breakthrough performance on bird species identification using spectrograms of bird vocalizations. To address the imbalance of the bird vocalization dataset, a single feature identification model (SFIM) with residual blocks and a modified weighted cross-entropy loss was proposed. To further improve identification accuracy, two multi-channel fusion methods were built from three SFIMs: one fused the outputs of the feature extraction parts of the three SFIMs (feature fusion mode), while the other fused the outputs of their classifiers (result fusion mode). The SFIMs were trained on three different kinds of spectrograms, computed with the short-time Fourier transform, the mel-frequency cepstrum transform, and the chirplet transform, respectively. To cope with the huge number of trainable model parameters, transfer learning was used in the multi-channel models. On our own vocalization dataset, the result fusion mode model outperforms the other proposed models, with a best mean average precision (MAP) of 0.914. Comparing three spectrogram durations (100 ms, 300 ms, and 500 ms), the results reveal that 300 ms is best for our dataset; we suggest choosing the duration based on the duration distribution of bird syllables. On the BirdCLEF2019 training dataset, the highest classification mean average precision (cmAP) reached 0.135, indicating that the proposed model has some generalization ability.
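The result fusion mode described above (combining the classifier outputs of the three spectrogram channels) can be sketched as a weighted average of per-channel class probabilities. A minimal NumPy illustration, assuming softmax classifier outputs; the channel names and logit values are invented for the example and are not from the paper:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def result_fusion(logits_per_channel, weights=None):
    """Average the class-probability outputs of several single-feature
    models (one per spectrogram type) into one fused prediction."""
    probs = np.stack([softmax(l) for l in logits_per_channel])
    if weights is None:
        weights = np.full(len(logits_per_channel), 1.0 / len(logits_per_channel))
    return np.tensordot(weights, probs, axes=1)

# Toy fusion: STFT, mel-cepstrum, and chirplet channels each score 3 species.
stft_logits  = np.array([2.0, 0.1, 0.0])
mel_logits   = np.array([1.5, 0.5, 0.2])
chirp_logits = np.array([0.2, 2.5, 0.1])
fused = result_fusion([stft_logits, mel_logits, chirp_logits])
```

Fusing probabilities rather than intermediate features keeps each channel's classifier independent, which is one plausible reason the result fusion mode is easier to train with transfer learning.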
Female singing: an overlooked component of incubation behaviour in a temperate migratory passerine
Recent studies have shown that birdsong is not exclusively a male trait. However, despite increasing research intensity, female singing is still rarely reported in temperate migratory species. Here, we report the observation and description of female vocalization in the great reed warbler, Acrocephalus arundinaceus. We analysed the vocal expression of individually marked great reed warbler females in two central European populations, in Slovakia and the Czech Republic, and show that these vocalizations meet the criteria for song. We found that 39.5% of nesting females sang from the nest during early incubation within two hours of video recording. Female mating status, locality, day of the season, and male singing activity did not predict song use in this species, but song rates decreased over the breeding period. Based on current and previous observations, we hypothesize that female great reed warblers use song to signal their territorial presence and reproductive status, potentially deterring conspecific female competitors. However, given that this study covered only one context and one moment in the breeding cycle (early incubation), we encourage further investigation of the functions of female song in this and other temperate migratory species in which female song has been overlooked in the past.
A Bird Vocalization Classification Method Based on Bidirectional FBank with Enhanced Robustness
Recent advances in audio signal processing and pattern recognition have made the classification of bird vocalization a focus of bioacoustic research. However, the accurate classification of birdsongs is challenged by environmental noise and the limitations of traditional feature extraction methods. This study proposes the iWAVE-BiFBank method, an innovative approach combining improved wavelet adaptive denoising (iWAVE) and a bidirectional Mel-filter bank (BiFBank) for effective birdsong classification with enhanced robustness. The iWAVE method achieves adaptive optimization using the autocorrelation coefficient and peak-sum-ratio (PSR), overcoming the manual adjustment requirements and incompleteness of traditional methods. BiFBank combines FBank and inverse FBank (iFBank) to enhance feature representation; this fusion addresses the shortcomings of FBank and introduces novel transformation methods and filter designs to iFBank, with a focus on high-frequency components. The iWAVE-BiFBank method creates a robust feature set that effectively reduces the noise of audio signals and captures both low- and high-frequency information. Experiments were conducted on a dataset of 16 bird species, and the proposed method was verified with a random forest (RF) classifier. The results show that iWAVE-BiFBank achieves an accuracy of 94.00%, with other indicators, including the F1 score, exceeding 93.00%, outperforming all other tested methods. Overall, the proposed method effectively reduces audio noise, comprehensively captures the characteristics of bird vocalization, and provides improved classification performance.
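The FBank features that BiFBank builds on come from a triangular mel filter bank applied to a power spectrum. A minimal NumPy sketch of the standard (forward) filter bank, with typical parameter values assumed; this is generic FBank, not the paper's bidirectional variant:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr, fmin=0.0, fmax=None):
    """Triangular filters spaced evenly on the mel scale; multiplying a
    power spectrum by this matrix (then taking logs) yields FBank features."""
    fmax = fmax or sr / 2.0
    mel_pts = np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        lo, mid, hi = bins[i], bins[i + 1], bins[i + 2]
        for b in range(lo, mid):          # rising edge of the triangle
            fb[i, b] = (b - lo) / max(mid - lo, 1)
        for b in range(mid, hi):          # falling edge of the triangle
            fb[i, b] = (hi - b) / max(hi - mid, 1)
    return fb

fb = mel_filterbank(n_filters=26, n_fft=512, sr=22050)
```

Because mel spacing packs filters densely at low frequencies, plain FBank under-resolves high frequencies; that is the gap the paper's iFBank component is designed to address.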
Decoding nature’s melody: significance and challenges of machine learning in assessing bird diversity via soundscape analysis
The broad application of passive acoustic monitoring provides a critical data foundation for studying soundscape ecology, necessitating automated analysis methods to accurately extract ecological information from vast soundscape data. This review comprehensively and cohesively examines two predominant approaches in soundscape analysis: soundscape component recognition and acoustic indices methods. Focusing on machine learning (ML)-based analysis methods for bird diversity assessment over the past five years, this review surveys representative research within each category, outlining their respective strengths and limitations. This not only addresses the growing interest in this field but also identifies research gaps and poses key questions for future studies. The insights from this review are anticipated to significantly enhance the understanding of ML applications in soundscape analysis, guiding subsequent investigative efforts in this rapidly evolving discipline, and thereby better supporting long-term biodiversity monitoring and conservation initiatives.
Human presence is positively related to the number of bird calls and songs: Assessment in a national park
Human disturbance has been shown to provoke physiological and behavioral responses in birds, so nature-based tourism might reduce bird abundance and diversity. The negative consequences of human disturbance would be expected to peak during occasional mass events in highly protected areas such as national parks. In this study, we analyzed the consequences for the soundscape of the presence and disturbance of thousands of visitors during an ornithological fair (a mass event) for the bird community of Monfragüe National Park (Spain). We found that the number and diversity of bird vocalizations did not decrease during the mass event. On the contrary, the presence of people in Monfragüe National Park was associated with an increase in the number and diversity of vocalizations. The effect of human presence on the numbers of calls and songs differed: the number of calls increased mainly when people were present during the mass event, while the number of songs increased when people were present, particularly during the measurement campaign without the mass event. The human shield hypothesis, along with other behavioral and environmental factors, might explain these results.
Passive acoustic surveys reveal interactions between frugivorous birds and fruiting trees on a large forest dynamics plot
Long-term vegetation plots represent one of the largest types of research investment in ecology, but efforts to interrelate data on plants with data on animals are constrained by the disturbance produced by human observers. Recent advances in the automated identification of animal sounds in large datasets of autonomously collected audio recordings hold the potential to describe plant–animal interactions, such as those between frugivorous birds and fruiting trees, without such disturbance. We deployed an array of nine autonomous recording units (ARUs) on the 400 × 500 m Bubeng Forest Dynamics Plot in Xishuangbanna, southwest China, and collected a total of 1965 h of recordings across two seasons. Animal Sound Identifier (ASI) software was used to detect the vocalizations of five frugivorous bird species, and the probability of detection was related to the number of mature fruiting trees within a 50 m radius of the ARUs. There were more significant positive relationships than would be expected by chance, both in analyses of bird/tree interactions across three months in the wet and dry seasons and in short-term analyses within the dry-season months of October and November. The analysis identified 54 interactions between bird and tree species with significant positive relationships, and follow-up observations of birds on the plot validated that such interactions were more likely to be observed than others. We demonstrate that ARUs and automated voice identification can map the distribution and/or movement of vocal animals across large vegetation plots, allowing animal data to be interrelated with plant data. We suggest that ARUs be added to the standardized protocols of the plot network, leveraging the plots' vast amount of vegetation information to describe plant–animal interactions now and to monitor changes in the future.
We combine tree data (the number of mature fruiting trees near the locations of autonomous recording units) with bird data (the number of detections of frugivorous birds by the vocal identification software Animal Sound Identifier). Short-term (two days per month) and long-term (every day for three months) analyses showed non-random results for some months and seasons and identified tree species the birds were potentially interacting with. Follow-up observations verified that the birds were interacting with the species they were associated with, demonstrating that this non-invasive method can interrelate animal data with the vast amount of vegetation data that forest dynamics plots gather.
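The core statistical step in the study above (relating a binary detection outcome to the count of mature fruiting trees within 50 m) resembles a logistic regression. A toy NumPy sketch under that assumption; the `fit_logistic` helper and the simulated data are illustrative only, not the authors' analysis pipeline:

```python
import numpy as np

def fit_logistic(x, y, lr=0.1, steps=2000):
    """Fit detection ~ tree count by gradient ascent on the logistic
    log-likelihood; returns [intercept, slope]. A positive slope means
    more nearby fruiting trees imply a higher detection probability."""
    X = np.column_stack([np.ones(len(x)), x.astype(float)])
    w = np.zeros(2)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))     # predicted detection probability
        w += lr * X.T @ (y - p) / len(y)     # mean log-likelihood gradient
    return w

# Simulated data: detections become more likely as nearby fruiting trees
# increase (true intercept -2.0, true slope +0.5).
rng = np.random.default_rng(0)
trees = rng.integers(0, 10, size=200)
prob = 1.0 / (1.0 + np.exp(-(-2.0 + 0.5 * trees)))
detected = (rng.random(200) < prob).astype(float)
w = fit_logistic(trees, detected)
```

A per-species, per-tree version of this slope test, compared against a chance baseline, is the kind of screen that could surface the 54 significant bird/tree pairs the abstract reports.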