Search Results

10,365 result(s) for "Shot"
New frontiers in artificial intelligence for biodiversity research and conservation with multimodal language models
The integration of artificial intelligence (AI) into biodiversity research and conservation is growing rapidly, demonstrating great potential in reducing the intensive human labour required for data preprocessing, thereby facilitating larger data collections that offer ecological insights at unprecedented scales. However, most of these AI applications for biodiversity are still in the early stages of development, hindered by challenges inherent in real-world datasets and the limited accessibility of these technologies to practitioners without extensive programming knowledge. The recent advent of multimodal language models, which can process and generate multiple data modalities, has significantly expanded the realm of possible AI applications in biodiversity research. These models have demonstrated the ability to classify species and recognize more complex concepts, such as animal postures and orientations, without prior exposure during training. Multimodal language models can also provide explanations for their predictions and interact with humans in natural language, making them more transparent, intuitive, and accessible to non-specialists. Despite these advancements, the use of multimodal language models for biodiversity still needs to overcome unique barriers to application, including high computational and financial demands, reliance on prompt engineering for consistent model performance on large datasets, and insufficient open-source sharing of state-of-the-art methods. This paper explores the transformative potential of multimodal language models for biodiversity research, discusses several possible applications, examines the challenges of implementing these models in real-world conservation scenarios, and proposes directions for future research to overcome these hurdles.
Our goal is to encourage robust discussions and research into the integration of multimodal language models to advance AI for biodiversity research and conservation.
Influence of Semi-Random and Regular Shot Peening on Selected Surface Layer Properties of Aluminum Alloy
This paper attempts to compare regular shot peening (RSP) and semi-random shot peening (SRSP). A characteristic of the first method is that the peening elements hit the treated surface in sequence, with a regular distance maintained between the dimples. The other method (SRSP) is a controlled modification of the shot-peening process, which is random by nature. The shot-peening method used in this study differs from conventional shot peening (shot blasting and vibratory shot peening) in that it allows controlled and repeatable determination of the configuration and distribution of impacts exerted by the peening element on the workpiece surface, which makes the process more repeatable and easier to model. Specimens of EN-AW 7075 aluminum alloy were used for testing. The following variables were used in the experiments: ball diameter, impact energy, and distance between the dimples. Microhardness distribution in the surface layer, 2D surface roughness, and surface topography were analyzed. FEM simulations of the residual stress distribution in the surface layer were performed. It has been found that regular shot peening results in reduced surface roughness, while semi-random shot peening leads to higher surface layer hardening.
Few-Shot Segmentation via Divide-and-Conquer Proxies
Few-shot segmentation (FSS) is a marginally explored but challenging task that aims to identify unseen classes of objects with only a handful of densely annotated samples. By and large, current FSS approaches perform meta-inference based on the prototype learning paradigm, which fails to fully exploit the underlying information from support image-mask pairs, resulting in multiple segmentation failures, such as incomplete objects, ambiguous boundaries, and distractor activation. For this purpose, a flexible and generic framework is developed in the spirit of divide-and-conquer. We first implement a novel self-reasoning scheme on the labeled support image, and then divide the coarse segmentation mask into several regions with different properties. By employing effective masked average pooling techniques, a series of support-induced proxies are generated on the fly, each performing a specific role in conquering the above challenges. Furthermore, we meticulously devise a parallel decoder structure and semantic consistency regularization to eliminate confusion and enhance discrimination. In stark contrast to conventional prototype-based approaches, our proposed divide-and-conquer proxies (DCP) can provide "episode"-level guidelines that go well beyond the object cues themselves. Extensive experiments are conducted on FSS benchmarks, including standard as well as cross-domain settings, to verify the effectiveness. In particular, we propose a temporal DCP and successfully extend it to video object segmentation via a memory repository and progressive propagation, illustrating the high scalability. The source codes are available at https://github.com/chunbolang/DCP.
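The masked average pooling step mentioned in this abstract is a standard FSS building block and can be sketched as follows. This is a minimal NumPy illustration with hypothetical shapes, not the paper's implementation:

```python
import numpy as np

def masked_average_pooling(features, mask):
    """Average the feature vectors that fall under a binary support mask,
    yielding a single class prototype vector.

    features: (C, H, W) feature map from the support image
    mask:     (H, W) binary mask marking the support object
    returns:  (C,) prototype vector
    """
    masked = features * mask[None, :, :]               # zero out background
    denom = mask.sum() + 1e-8                          # avoid divide-by-zero
    return masked.reshape(features.shape[0], -1).sum(axis=1) / denom

# Toy example: a 2-channel 2x2 feature map; the mask covers one pixel,
# so the prototype is simply that pixel's feature vector.
feats = np.arange(8, dtype=float).reshape(2, 2, 2)
mask = np.array([[1.0, 0.0], [0.0, 0.0]])
proto = masked_average_pooling(feats, mask)
```

In DCP-style methods, this pooling is applied separately to each region of the divided support mask, producing one proxy per region rather than a single global prototype.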
Remote Sensing Object Detection in the Deep Learning Era—A Review
Given the large volume of remote sensing images collected daily, automatic object detection and segmentation have been a consistent need in Earth observation (EO). However, objects of interest vary in shape, size, appearance, and reflecting properties. This is reflected not only by the fact that these objects exhibit differences due to their geographical diversity, but also by the fact that they appear differently in images collected from different sensors (optical and radar) and platforms (satellite, aerial, and unmanned aerial vehicles (UAV)). Although a plethora of object detection methods exists in the area of remote sensing, the very fast development of deep learning means that recent surveys of these methods are still lacking. In this paper, we aim to provide an update that informs researchers about the recent development of object detection methods and their close sibling in the deep learning era, instance segmentation. The integration of these methods will cover approaches to data at different scales and modalities, such as optical and synthetic aperture radar (SAR) images and digital surface models (DSM). Specific emphasis will be placed on approaches addressing data and label limitations in this deep learning era. Further, we survey examples of remote sensing applications that benefited from automatic object detection and discuss future trends of automatic object detection in EO.
Online absolute calibration of fast FEL pulse energy measurements
One of the challenges facing modern free-electron laser (FEL) facilities is the accurate pulse-to-pulse online measurement of the absolute flux of the X-ray pulses, for use both by machine operators for optimization and by users of the photon beam to better understand their data. This manuscript presents a methodology that combines the slow measurement methods currently used in gas detectors worldwide with fast, uncalibrated signals from multipliers intended for relative pulse-to-pulse flux measurements, creating a shot-to-shot absolute flux measurement through sensor-based conditional triggers and algorithms at SwissFEL. The result is a new method for online, pulse-resolved, absolute pulse-energy measurements at free-electron lasers.
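The core idea of combining a calibrated slow average with uncalibrated fast readings can be illustrated with a simple rescaling. This is only an illustrative sketch of the principle (function name, units, and the single-scale-factor assumption are hypothetical), not the SwissFEL algorithm:

```python
import numpy as np

def calibrate_fast_signal(fast_counts, slow_mean_energy_uJ):
    """Rescale uncalibrated per-pulse multiplier readings so that their
    mean matches the absolutely calibrated slow gas-detector average
    taken over the same time window.

    fast_counts:         per-pulse relative readings (arbitrary units)
    slow_mean_energy_uJ: absolute mean pulse energy over the window
    returns:             per-pulse absolute energies in microjoules
    """
    fast = np.asarray(fast_counts, dtype=float)
    scale = slow_mean_energy_uJ / fast.mean()   # one global scale factor
    return fast * scale

# Toy example: relative readings with mean 2.0, slow average 200 uJ,
# so each reading is scaled by 100.
energies = calibrate_fast_signal([1.0, 2.0, 3.0], 200.0)
```

A real system would recompute the scale factor continuously and gate it on detector health, which is where the conditional triggers mentioned in the abstract come in.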
A survey of few-shot learning in smart agriculture: developments, applications, and challenges
With the rise of artificial intelligence, deep learning is gradually being applied to the fields of agriculture and plant science. However, the excellent performance of deep learning depends on massive numbers of samples, and in plant science and biology it is not easy to obtain a large amount of labeled data. The emergence of few-shot learning solves this problem: it imitates humans' ability to learn rapidly and can learn a new task from only a small number of labeled samples, greatly reducing time and financial costs. At present, advanced few-shot learning methods fall mainly into four categories, based on data augmentation, metric learning, external memory, and parameter optimization, each addressing the over-fitting problem from a different viewpoint. This review comprehensively expounds few-shot learning in smart agriculture, introducing the definition of few-shot learning, the four kinds of learning methods, publicly available datasets for few-shot learning, various applications in smart agriculture, and the challenges for future development.
Entanglement-enhanced optical gyroscope
Fiber optic gyroscopes (FOG) based on the Sagnac effect are a valuable tool in sensing and navigation and enable accurate measurements in applications ranging from spacecraft and aircraft to self-driving vehicles such as autonomous cars. As with any classical optical sensor, the ultimate performance of these devices is bounded by the shot-noise limit (SNL). Quantum-enhanced interferometry allows us to overcome this limit using non-classical states of light. Here, we report on an entangled-photon gyroscope that uses path-entangled NOON states (N = 2) to provide super-resolution and phase sensitivity beyond the shot-noise limit.
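For reference, the two sensitivity limits contrasted in this abstract are standard interferometry results (textbook scaling, not taken from the paper):

```latex
% Shot-noise (standard quantum) limit for a classical interferometer
% with n photons per measurement:
\Delta\phi_{\mathrm{SNL}} = \frac{1}{\sqrt{n}}
% Phase sensitivity of an N-photon NOON state (Heisenberg scaling):
\Delta\phi_{\mathrm{NOON}} = \frac{1}{N}
% For the N = 2 NOON states used here, this is a factor of \sqrt{2}
% improvement over the shot-noise limit at equal photon number.
```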
M-RRFS: A Memory-Based Robust Region Feature Synthesizer for Zero-Shot Object Detection
With the goal of detecting both the object categories appearing in the training phase and those never observed before testing, zero-shot object detection (ZSD) has become a challenging yet anticipated task in the community. Current approaches tackle this problem by drawing on the feature synthesis techniques used in the zero-shot image classification (ZSC) task without delving into the inherent problems of ZSD. In this paper, we analyze the outstanding challenges that ZSD presents compared with ZSC (severe intra-class variation, complex category co-occurrence, and an open test scenario) and reveal their interference with the region feature synthesis process. In view of this, we propose a novel memory-based robust region feature synthesizer (M-RRFS) for ZSD, which is equipped with Intra-class Semantic Diverging (IntraSD), Inter-class Structure Preserving (InterSP), and Cross-Domain Contrast Enhancing (CrossCE) mechanisms to overcome the problems of inadequate intra-class diversity, insufficient inter-class separability, and weak inter-domain contrast. Moreover, when designing the whole learning framework, we develop an asynchronous memory container (AMC) to explore the cross-domain relationship between the seen and unseen class domains and reduce the overlap between their distributions. Based on AMC, a memory-assisted ZSD inference process is also proposed to further boost prediction accuracy. To evaluate the proposed approach, comprehensive experiments are conducted on the MS-COCO, PASCAL VOC, ILSVRC, and DIOR datasets, and superior performance is achieved. Notably, we achieve new state-of-the-art results on the MS-COCO dataset, i.e., 64.0%, 60.9%, and 55.5% Recall@100 with IoU = 0.4, 0.5, and 0.6, respectively, and 15.1% mAP with IoU = 0.5, under the 48/17 category split setting. Meanwhile, the experiments on the DIOR dataset establish the earliest benchmark for evaluating zero-shot object detection performance on remote sensing images.
Source code: https://github.com/HPL123/M-RRFS
Attribute Prototype Network for Any-Shot Learning
Any-shot image classification aims to recognize novel classes with only a few or even zero samples. For the task of zero-shot learning, visual attributes have been shown to play an important role, while in the few-shot regime the effect of attributes is under-explored. To better transfer attribute-based knowledge from seen to unseen classes, we argue that an image representation with integrated attribute-localization ability would be beneficial for any-shot, i.e. zero-shot and few-shot, image classification tasks. To this end, we propose a novel representation learning framework that jointly learns discriminative global and local features using only class-level attributes. While a visual-semantic embedding layer learns global features, local features are learned through an attribute prototype network that simultaneously regresses and decorrelates attributes from intermediate features. Furthermore, we introduce a zoom-in module that localizes and crops the informative regions to encourage the network to learn informative features explicitly. We show that our locality-augmented image representations achieve a new state of the art on challenging benchmarks, i.e. CUB, AWA2, and SUN. As an additional benefit, our model points to the visual evidence of the attributes in an image, confirming the improved attribute-localization ability of our image representation. The attribute localization is evaluated quantitatively with ground-truth part annotations, qualitatively with visualizations, and through well-designed user studies.