1,130 result(s) for "few-shot learning"
A survey of few-shot learning in smart agriculture: developments, applications, and challenges
With the rise of artificial intelligence, deep learning is gradually being applied to the fields of agriculture and plant science. However, the excellent performance of deep learning depends on massive numbers of samples, and in plant science and biology it is not easy to obtain large amounts of labeled data. Few-shot learning addresses this problem: it imitates humans' ability to learn rapidly and can learn a new task from only a small number of labeled samples, greatly reducing time and financial costs. Current advanced few-shot learning methods fall into four main categories, based on data augmentation, metric learning, external memory, and parameter optimization, each tackling the over-fitting problem from a different viewpoint. This review comprehensively covers few-shot learning in smart agriculture: it introduces the definition of few-shot learning, the four kinds of learning methods, publicly available few-shot learning datasets, various applications in smart agriculture, and the challenges for future development.
Few-shot cotton leaf spots disease classification based on metric learning
Background Cotton diseases seriously affect the yield and quality of cotton, and the type of pest or disease can be determined from the disease spots on the cotton leaves. This paper presents a few-shot learning framework for the cotton leaf disease spot classification task, which can help prevent and control cotton diseases in a timely manner. First, disease spots in cotton leaf disease images are segmented by two methods, support vector machine (SVM) segmentation and threshold segmentation, which are compared to determine the more suitable one. Then, with the segmented disease spot images as input, a disease spot dataset is established, and the cotton leaf disease spots are classified using a classical convolutional neural network classifier whose structure and framework were designed for this task. Finally, the features of two different images are extracted by a parallel two-way convolutional neural network with weight sharing, and the network uses a loss function to learn a metric space in which similar leaf samples are close to each other and different leaf samples are far apart. In summary, this work can be regarded as a significant reference and benchmark comparison for follow-up few-shot learning studies in the agricultural field. Results To classify cotton leaf spots from small samples, a metric-based learning method was developed to extract cotton leaf spot features and classify the diseased leaves. Threshold segmentation and SVM were compared for leaf spot extraction. The results showed that both methods extract the leaf spots well; SVM took more time, but the leaf spots it extracted were much more suitable for classification, since the SVM method retains more information about the leaf spot, such as color, shape, and texture, which helps in classifying the spots.
In the leaf spot classification process, the two-way parallel convolutional neural network was used to build the leaf spot feature extractor, and a feature classifier was constructed. After establishing the metric space, KNN was used as the spot classifier. For the construction of the convolutional neural networks, commonly used models were selected for comparison, including VGG, DenseNet, and ResNet, and a spatial structure optimizer (SSO) was introduced for local optimization of the model. Experiments demonstrate that the classification accuracy of DenseNet is the highest of the three networks, and that the classification accuracy of S-DenseNet is 7.7% higher than DenseNet on average across different numbers of steps. Conclusions As the number of steps increases, the accuracies of DenseNet and ResNet both improve, and with SSO each of these networks achieves better performance, although the extent of the improvement varies; DenseNet benefits from SSO most obviously.
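As a toy sketch of the metric-space classification step this abstract describes (KNN voting over learned embeddings), the following uses invented 2-D embeddings and hypothetical disease-class names; it is not the paper's actual network or data:

```python
import math
from collections import Counter

def euclidean(a, b):
    # Distance in the learned embedding (metric) space.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(query, support, k=3):
    """Classify a query embedding by majority vote among its
    k nearest labeled support embeddings."""
    neighbors = sorted(support, key=lambda s: euclidean(query, s[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Invented 2-D embeddings of segmented leaf-spot images (hypothetical classes).
support = [([0.1, 0.2], "bacterial_blight"), ([0.2, 0.1], "bacterial_blight"),
           ([0.9, 0.8], "fusarium_wilt"), ([0.8, 0.9], "fusarium_wilt")]
print(knn_classify([0.15, 0.15], support))  # → bacterial_blight
```

In the paper, the embeddings would come from the shared-weight two-way CNN; here they are hard-coded purely to show the classification rule.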
Learning Adaptive Classifiers Synthesis for Generalized Few-Shot Learning
Object recognition in the real world requires handling long-tailed or even open-ended data. An ideal visual system needs to recognize the populated head visual concepts reliably and meanwhile efficiently learn about emerging new tail categories with a few training instances. Class-balanced many-shot learning and few-shot learning each tackle one side of this problem, by either learning strong classifiers for the head or learning to learn few-shot classifiers for the tail. In this paper, we investigate the problem of generalized few-shot learning (GFSL): a model during deployment is required to learn about tail categories with few shots and simultaneously classify the head classes. We propose ClAssifier SynThesis LEarning (Castle), a learning framework that learns how to synthesize calibrated few-shot classifiers in addition to the multi-class classifiers of head classes with a shared neural dictionary, shedding light upon inductive GFSL. Furthermore, we propose an adaptive version of Castle (aCastle) that adapts the head classifiers conditioned on the incoming tail training examples, yielding a framework that allows effective backward knowledge transfer. As a consequence, aCastle can handle GFSL with classes from heterogeneous domains effectively. Castle and aCastle demonstrate superior performance compared to existing GFSL algorithms and strong baselines on the MiniImageNet and TieredImageNet datasets. More interestingly, they outperform previous state-of-the-art methods when evaluated with standard few-shot learning criteria.
Wavelet-Prototypical Network Based on Fusion of Time and Frequency Domain for Fault Diagnosis
Neural networks for fault diagnosis need enough samples for training, but in practical applications samples are often insufficient. To solve this problem, we propose a wavelet-prototypical network based on fusion of the time and frequency domains (WPNF). The time-domain and frequency-domain information of the vibration signal is fed to the model simultaneously to expand the characteristics of the data, and a parallel two-channel convolutional structure is proposed to process the signal information. A wavelet layer is then designed to further extract features, and finally a prototypical layer is applied to train the network. Experimental results show that the proposed method can accurately identify new classes never seen during the training phase even when the number of samples per class is very small, and that it far outperforms traditional machine learning models in few-shot scenarios.
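A prototypical layer, as mentioned in this abstract, classifies a query by its distance to each class prototype, the mean of that class's support embeddings. A minimal sketch, with invented embeddings and fault-class names standing in for the paper's fused time/frequency features:

```python
import math

def prototype(embeddings):
    # Class prototype: the mean of the support embeddings for that class.
    dim = len(embeddings[0])
    return [sum(e[i] for e in embeddings) / len(embeddings) for i in range(dim)]

def classify(query, support_by_class):
    """Assign the query to the class with the nearest prototype
    (Euclidean distance), as in a prototypical layer."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    protos = {c: prototype(embs) for c, embs in support_by_class.items()}
    return min(protos, key=lambda c: dist(query, protos[c]))

# Invented 2-D embeddings per hypothetical fault class.
support = {"bearing_fault": [[1.0, 0.1], [0.8, 0.2]],
           "normal":        [[0.1, 1.0], [0.2, 0.9]]}
print(classify([0.9, 0.2], support))  # → bearing_fault
```

Because prototypes are just means, classes never seen in training can be recognized at test time from a handful of support samples, which is the few-shot property the abstract highlights.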
Radar target recognition based on few-shot learning
With the continuous development of target recognition technology, increasing attention is being paid to the cost of sample generation, labeling, and network training. Active learning selects as few samples as possible while still achieving good recognition performance. In this paper, a small number of simulation-generated radar cross-section time series are selected as training data and, combining least-confidence and margin sampling, a sample selection method based on few-shot learning is proposed. The effectiveness of the method is verified by target type recognition tests on multi-time radar cross-section time series. Using the proposed algorithm, 10 kinds of trajectory data are selected from all 19 kinds, and the model trained on them achieves results similar to a model trained on all 19 kinds of trajectory data. Compared with random selection, the accuracy is improved by 4–10% across different time lengths.
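The least-confidence and margin-sampling criteria this abstract combines can be sketched as follows; the softmax outputs and trajectory names are invented for illustration and do not come from the paper:

```python
def least_confidence(probs):
    # Uncertainty = 1 - probability of the most likely class.
    return 1.0 - max(probs)

def margin(probs):
    # Margin = gap between the top two class probabilities (small = uncertain).
    top2 = sorted(probs, reverse=True)[:2]
    return top2[0] - top2[1]

def select_samples(pool, n):
    """Pick the n most informative unlabeled samples: highest
    least-confidence uncertainty, ties broken by smallest margin."""
    ranked = sorted(pool.items(),
                    key=lambda kv: (-least_confidence(kv[1]), margin(kv[1])))
    return [name for name, _ in ranked[:n]]

# Invented classifier outputs for candidate RCS time series.
pool = {"traj_a": [0.9, 0.05, 0.05],   # confident: low priority
        "traj_b": [0.4, 0.35, 0.25],   # uncertain: high priority
        "traj_c": [0.5, 0.45, 0.05]}   # small margin: high priority
print(select_samples(pool, 2))  # → ['traj_b', 'traj_c']
```

Only the selected trajectories would then be labeled and used for training, which is how active learning keeps the labeling cost low.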
HybridPrompt: Domain-Aware Prompting for Cross-Domain Few-Shot Learning
Cross-Domain Few-Shot Learning (CD-FSL) aims at recognizing unseen classes from target domains that vastly differ from training classes from source domains, utilizing only a few labeled samples. However, the substantial domain disparities between target and source domains pose huge challenges to few-shot generalization. To resolve domain disparities, we propose HybridPrompt, a novel architecture for Domain-Aware Prompting that integrates a variety of cross-domain learned prompts as knowledge experts for CD-FSL. The proposed method enjoys several merits. First, to encode knowledge from diverse source domains, several Domain Prompts are introduced to capture domain-specific knowledge. Subsequently, to facilitate the cross-domain transfer of valuable knowledge, a Transferred Prompt is specifically tailored for each target task by retrieving highly relevant Domain Prompts based on domain properties. Finally, to complement insufficient transferred information, an Adaptive Prompt is learned to incorporate additional target characteristics for model adaptation. Consequently, the collaboration of these three types of prompts contributes to a hybridly prompted model that achieves domain-aware encoding, transfer, and adaptation, thereby enhancing adaptability on unseen domains. Extensive experimental results on the Meta-Dataset benchmark demonstrate that our method achieves superior performance against state-of-the-art methods. The source code is available at https://github.com/Jamine-W/HybridPrompt.
An open‐source general purpose machine learning framework for individual animal re‐identification using few‐shot learning
Animal re‐identification remains a challenging problem due to the cost of tagging systems and the difficulty of permanently attaching a physical marker to some animals, such as sea stars. Given these challenges, photo identification is a good fit for this problem, whether performed by humans or through machine learning. Accurate machine learning methods improve on manual identification because they can evaluate a large number of images automatically, and recent advances have reduced the need for large training datasets. This study aimed to create an accurate, robust, general-purpose machine learning framework for individual animal re‐identification using images from publicly available data as well as from two groups of sea stars of different species under human care. Open-source code is provided to accelerate work in this space. Images of two species of sea star (Asterias rubens and Anthenea australiae) were taken with a consumer-grade smartphone camera and used as original datasets to train a machine learning model to re‐identify an individual animal from few examples. The model's performance was evaluated on these original sea star datasets, which contained 39–54 individuals and 983–1204 images, as well as on six publicly available re‐identification datasets for tigers, beef cattle noses, chimpanzee faces, zebras, giraffes, and ringed seals, ranging from 45 to 2056 individuals and 829 to 6770 images. Using time-aware splits, a data splitting technique that ensures the model only sees an individual's images from a previous collection event during training to avoid information leakage, the model achieved high (>99%) individual re‐identification mean average precision for the top prediction (mAP@1) for the two species of sea stars. The re‐identification mAP@1 for the mammalian datasets was more variable, ranging from 83% to >99%.
Nevertheless, the model outperformed published state‐of‐the‐art re‐identification results on the publicly available datasets. The reported approach to animal re‐identification is generalizable: the same machine learning framework achieved good performance on two distinct species of sea stars with different physical attributes, as well as on seven different mammalian species. This demonstrates that the methodology can be applied to nearly any species where individual re‐identification is required. This study presents a precise, practical, non‐invasive approach to animal re‐identification using only basic image collection methods.
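For a single top prediction, the mAP@1 metric reported above reduces to the fraction of query images whose nearest gallery embedding belongs to the same individual. A minimal sketch with invented embeddings and IDs (not the study's model or data):

```python
import math

def map_at_1(queries, gallery):
    """mAP@1: fraction of queries whose single nearest gallery
    image belongs to the same individual."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    hits = 0
    for q_emb, q_id in queries:
        nearest = min(gallery, key=lambda g: dist(q_emb, g[0]))
        hits += nearest[1] == q_id
    return hits / len(queries)

# Invented (embedding, individual-id) pairs.
gallery = [([0.0, 0.0], "star_1"), ([1.0, 1.0], "star_2")]
queries = [([0.1, 0.0], "star_1"), ([0.9, 1.1], "star_2"),
           ([0.6, 0.6], "star_1")]
print(map_at_1(queries, gallery))  # two of three queries match correctly
```

With time-aware splits, the gallery would contain only images from earlier collection events than the queries, which is what prevents the information leakage described above.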
Meta-FSEO: A Meta-Learning Fast Adaptation with Self-Supervised Embedding Optimization for Few-Shot Remote Sensing Scene Classification
The performance of deep learning is heavily influenced by the size of the learning samples, whose labeling process is time-consuming and laborious. Deep learning algorithms typically assume that the training and prediction data are independent and uniformly distributed, which is rarely the case given the attributes and properties of different data sources. In remote sensing images, representations of urban land surfaces can vary across regions and by season, demanding rapid generalization over these surfaces in remote sensing data. In this study, we propose Meta-FSEO, a novel model for improving the performance of few-shot remote sensing scene classification in varying urban scenes. The proposed Meta-FSEO model deploys self-supervised embedding optimization for adaptive generalization in new tasks, such as classifying features in new urban regions never encountered during the training phase, thus balancing the requirements of feature classification tasks across multiple images collected at different times and places. We also created a loss function by weighting the contrastive and cross-entropy losses. The proposed Meta-FSEO demonstrates a great generalization capability in remote sensing scene classification among different cities. In a five-way one-shot classification experiment with the Sentinel-1/2 Multi-Spectral (SEN12MS) dataset, the accuracy reached 63.08%. In a five-way five-shot experiment on the same dataset, the accuracy reached 74.29%. These results indicate that the proposed Meta-FSEO model outperforms both the transfer learning-based algorithm and two popular meta-learning-based methods, i.e., MAML and Meta-SGD.
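A weighted combination of contrastive and cross-entropy losses, as mentioned above, might look as follows in its simplest hypothetical form; the weights, margin, and inputs are invented here and the paper's exact formulation is not reproduced:

```python
import math

def cross_entropy(probs, label):
    # Standard cross-entropy for one sample's predicted distribution.
    return -math.log(probs[label])

def contrastive(dist, same, margin=1.0):
    # Contrastive term: pull same-class pairs together, push
    # different-class pairs apart beyond the margin.
    return dist ** 2 if same else max(0.0, margin - dist) ** 2

def combined_loss(probs, label, dist, same, w_con=0.5, w_ce=0.5):
    """Weighted sum of the contrastive and cross-entropy terms
    (hypothetical equal weights)."""
    return w_con * contrastive(dist, same) + w_ce * cross_entropy(probs, label)

# Invented prediction, label, and pair distance.
loss = combined_loss([0.7, 0.2, 0.1], 0, dist=0.4, same=True)
print(round(loss, 4))
```

The contrastive term shapes the embedding space while the cross-entropy term drives classification; weighting lets one trade the two objectives off.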
Semi-supervised few-shot learning approach for plant diseases recognition
Background Learning from a few samples to automatically recognize plant leaf diseases is an attractive and promising approach to protecting agricultural yield and quality. Existing few-shot classification studies in agriculture are mainly based on supervised learning schemes, ignoring the helpful information in unlabeled data. Methods In this paper, we propose a semi-supervised few-shot learning approach for plant leaf disease recognition. Specifically, the public PlantVillage dataset is used and split into a source domain and a target domain. Extensive comparison experiments covering the domain split and the few-shot parameters (N-way, k-shot) were carried out to validate the correctness and generalization of the proposed semi-supervised few-shot methods. When selecting pseudo-labeled samples in the semi-supervised process, we adopted a confidence interval to adaptively determine the number of unlabeled samples used for pseudo-labelling. Results The average improvement from the single semi-supervised method is 2.8%, and that from the iterative semi-supervised method is 4.6%. Conclusions The proposed methods outperform other related works with fewer labeled training data.
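Confidence-based selection of pseudo-labeled samples, as described in the Methods above, can be sketched as a simple thresholding step; the class probabilities, sample names, and threshold below are invented for illustration:

```python
def select_pseudo_labels(unlabeled, threshold=0.9):
    """Pseudo-label only those unlabeled samples whose top predicted
    class probability meets the confidence threshold."""
    pseudo = []
    for name, probs in unlabeled.items():
        best = max(range(len(probs)), key=lambda i: probs[i])
        if probs[best] >= threshold:
            pseudo.append((name, best))
    return pseudo

# Invented classifier outputs over three hypothetical disease classes.
unlabeled = {"leaf_01": [0.97, 0.02, 0.01],
             "leaf_02": [0.55, 0.30, 0.15],   # too uncertain: skipped
             "leaf_03": [0.05, 0.93, 0.02]}
print(select_pseudo_labels(unlabeled))  # → [('leaf_01', 0), ('leaf_03', 1)]
```

The selected pairs are added to the labeled set and the model is retrained; repeating this loop gives the iterative variant reported in the Results.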
Detecting Abnormality of Battery Lifetime from First‐Cycle Data Using Few‐Shot Learning
The service life of large battery packs can be significantly influenced by only one or two abnormal cells with faster aging rates. However, the early‐stage identification of lifetime abnormality is challenging due to the low abnormal rate and imperceptible initial performance deviations. This work proposes a lifetime abnormality detection method for batteries based on few‐shot learning and using only the first‐cycle aging data. Verified with the largest known dataset with 215 commercial lithium‐ion batteries, the method can identify all abnormal batteries, with a false alarm rate of only 3.8%. It is also found that any capacity and resistance‐based approach can easily fail to screen out a large proportion of the abnormal batteries, which should be given enough attention. This work highlights the opportunities to diagnose lifetime abnormalities via “big data” analysis, without requiring additional experimental effort or battery sensors, thereby leading to extended battery life, increased cost‐benefit, and improved environmental friendliness. The lifetime of large battery packs can be influenced by only one or two abnormal cells with faster aging rates in it. This work proposes a method to predict battery lifetime abnormality using only the first‐cycle battery aging data and achieves a typical accuracy >90%. It can be used to screen out abnormal batteries before grouping.