47 result(s) for "key feature point"
Region-wise landmarks-based feature extraction employing SIFT, SURF, and ORB feature descriptors to recognize Monozygotic twins from 2D/3D Facial Images
Background In computer vision and image processing, face recognition is an increasingly popular field of research that identifies similar faces in a picture and assigns a suitable label. It is one of the detection techniques employed in forensics for criminal identification. Methods This study explores a face recognition system for monozygotic twins that utilizes three widely recognized feature descriptor algorithms, Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), and Oriented FAST and Rotated BRIEF (ORB), combined with region-specific facial landmarks. These landmarks were selected from the 468 points detected by the MediaPipe framework, which also enables simultaneous recognition of multiple faces. Quantitative similarity metrics served as inputs for four classification methods: Support Vector Machine (SVM), eXtreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LGBM), and Nearest Centroid (NC). The algorithms were tested and validated on the challenging ND Twins and 3D TEC datasets from Notre Dame University, among the most difficult datasets for 2D and 3D face recognition research. Results Testing on these datasets revealed significant performance differences: 2D facial images achieved notably higher recognition accuracy than 3D images. The 2D images produced accuracies of 88% (SVM), 83% (LGBM), 83% (XGBoost), and 79% (NC), whereas the 3D TEC dataset yielded lower accuracies of 74%, 72%, 72%, and 70% with the same classifiers. Conclusion The hybrid feature extraction approach proved most effective, with maximum accuracy rates reaching 88% for 2D facial images and 74% for 3D facial images. This work contributes to forensic science by enhancing the reliability of facial recognition systems when confronted with the nearly indistinguishable facial characteristics of monozygotic twins.
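As a rough illustration of the region-wise descriptor idea, the sketch below computes ORB descriptors at MediaPipe face-mesh landmark locations and reduces the matches between two faces to a single similarity score. The image paths, patch size, and the particular similarity definition are assumptions for the example, not the paper's exact pipeline.

```python
# Sketch: ORB descriptors at MediaPipe face-mesh landmarks, reduced to one
# match-based similarity score between two face images (illustrative only).
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh

def landmark_keypoints(image_bgr, patch_size=15):
    """Detect the 468 face-mesh landmarks and wrap them as cv2.KeyPoint objects."""
    h, w = image_bgr.shape[:2]
    with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as mesh:
        result = mesh.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return []
    lm = result.multi_face_landmarks[0].landmark
    return [cv2.KeyPoint(float(p.x * w), float(p.y * h), patch_size) for p in lm]

def similarity(img_a, img_b):
    """Match ORB descriptors computed at the landmark locations of both images."""
    orb = cv2.ORB_create()
    _, desc_a = orb.compute(cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY), landmark_keypoints(img_a))
    _, desc_b = orb.compute(cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY), landmark_keypoints(img_b))
    if desc_a is None or desc_b is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_a, desc_b)
    # Fraction of landmarks with a cross-checked match: one of many possible
    # scalar similarity metrics that could feed a downstream classifier.
    return len(matches) / max(len(desc_a), len(desc_b))

if __name__ == "__main__":
    a = cv2.imread("twin_a.jpg")   # hypothetical file names
    b = cv2.imread("twin_b.jpg")
    print("landmark-ORB similarity:", similarity(a, b))
```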
Ship Spatiotemporal Key Feature Point Online Extraction Based on AIS Multi-Sensor Data Using an Improved Sliding Window Algorithm
Large volumes of automatic identification system (AIS) data provide new ideas and methods for ship data mining and navigation behavior pattern analysis. However, such big data has a low value density, demanding large-scale computing, storage, and display, and making learning from the raw trajectories inefficient and unfocused. Therefore, key feature point (KFP) extraction from ship trajectories plays an important role in fields such as ship navigation behavior analysis and big data mining. In this paper, we propose a ship spatiotemporal KFP online extraction algorithm for AIS trajectory data. The sliding window algorithm is modified to account for the ship's navigation angle deviation, position deviation, and the spatiotemporal characteristics of AIS data. Next, to facilitate subsequent use of the algorithm, a recommended threshold range for the two corresponding parameters is discussed. Finally, the performance of the proposed method is compared with that of the Douglas–Peucker (DP) algorithm to assess its feature extraction accuracy and operational efficiency. The results show that the proposed improved sliding window algorithm can rapidly and easily extract KFPs from AIS trajectory data, which provides significant benefits for learning ship traffic flow and navigational behavior.
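A minimal sketch of the threshold-driven sliding-window idea is shown below; the deviation measures and threshold values are illustrative stand-ins for the paper's angle and position criteria, not its exact formulation.

```python
# Sketch: sliding-window extraction of key feature points (KFPs) from a
# trajectory, closing the window when position or heading deviation exceeds
# a threshold. Thresholds and toy data are illustrative assumptions.
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to segment a-b (planar approximation)."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == dy == 0:
        return math.hypot(x - x1, y - y1)
    t = max(0.0, min(1.0, ((x - x1) * dx + (y - y1) * dy) / (dx * dx + dy * dy)))
    return math.hypot(x - (x1 + t * dx), y - (y1 + t * dy))

def heading(a, b):
    return math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))

def extract_kfps(track, dist_thresh=0.01, angle_thresh=15.0):
    """Slide a window from the last kept anchor; when any intermediate point
    deviates too far in position, or the heading change is too large, keep
    the last in-window point as a KFP and restart the window there."""
    if len(track) < 3:
        return list(track)
    kfps, anchor = [track[0]], 0
    for i in range(2, len(track)):
        pos_dev = max(point_line_distance(track[j], track[anchor], track[i])
                      for j in range(anchor + 1, i))
        ang_dev = abs(heading(track[anchor], track[anchor + 1]) -
                      heading(track[i - 1], track[i]))
        ang_dev = min(ang_dev, 360.0 - ang_dev)
        if pos_dev > dist_thresh or ang_dev > angle_thresh:
            kfps.append(track[i - 1])
            anchor = i - 1
    kfps.append(track[-1])
    return kfps

# Toy (lon, lat) track: interior points of straight runs are dropped.
toy = [(0, 0), (0.1, 0.0), (0.2, 0.0), (0.3, 0.1), (0.4, 0.2), (0.5, 0.2)]
print(extract_kfps(toy))
```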
An Autonomous Mobile Measurement Method for Key Feature Points in Complex Aircraft Assembly Scenes
Large-scale measurement of key feature points (KFPs) on an aircraft is essential for the coordinated movement of locators, which is critical to final assembly accuracy. Because of the large number and wide distribution of KFPs, as well as line-of-sight occlusion, network measurement with laser trackers (LTs) is required. Existing approaches rely on operator experience to configure LT stations, measurement sequences, and station transitions, which compromises both efficiency and automation. To tackle this challenge, this article presents an autonomous mobile measurement method for KFPs in complex aircraft assembly scenes, incorporating path self-planning and self-positioning capabilities and thereby substantially reducing the time required. Firstly, a simultaneous self-planning method for measurement stations and tasks is proposed to determine the minimum number of stations, their optimal locations, and the specific KFPs measured at each station. Secondly, considering obstacles and turning time, a path planning model for mobile LTs combining coarse and fine localization is established to realize automatic station transitions. Finally, an optimal measurement sequence over the widely distributed KFPs is generated to minimize the total travel distance. Aircraft component assembly experiments validated the method, cutting measurement error by 37% and total measurement time by over 50%.
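The sequencing step can be illustrated with a generic nearest-neighbour ordering over KFP coordinates, sketched below; this is a simple stand-in for a distance-minimizing visiting order, not the paper's actual planning model, and the coordinates are made up.

```python
# Sketch: greedy nearest-neighbour ordering of 3D KFP coordinates so that a
# mobile measurement device visits them along a short total path (illustrative).
import math

def nearest_neighbour_order(points, start=0):
    """Return a visiting order over the points and its total path length."""
    remaining = set(range(len(points)))
    order = [start]
    remaining.discard(start)
    total = 0.0
    while remaining:
        last = points[order[-1]]
        nxt = min(remaining, key=lambda i: math.dist(last, points[i]))
        total += math.dist(last, points[nxt])
        order.append(nxt)
        remaining.discard(nxt)
    return order, total

kfps = [(0.0, 0.0, 0.0), (2.0, 0.5, 0.1), (0.5, 1.8, 0.0), (2.2, 2.1, 0.2)]
order, length = nearest_neighbour_order(kfps)
print(order, round(length, 3))
```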
Visual concept learning system based on lexical elements and feature key points conjunction
Subject of Research. The paper deals with the process of building visual concepts from two unlabeled sources of information (visual and textual). Method. Visual concept learning is carried out by simultaneously conjoining image patterns and lexical elements. The learning consists of two basic stages: early learning acquisition (primary learning) and lexical-semantic learning (secondary learning). In the early acquisition stage, a visual concept dictionary is created, providing the background for the next stage. The lexical-semantic stage analyzes the timelines of the two sources and extracts features from both information channels; feature vectors are formed by extracting separate information units in each channel. Mutual information between the two sources serves as the criterion for building visual concepts. Main Results. A visual concept learning system has been developed that uses video data with subtitles. The results of the research demonstrate that the system is in principle able to build visual concepts. Practical Relevance. Recommended application areas for the described system are object detection, image retrieval, and automatic building of visual concept-based datasets.
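The mutual-information criterion can be illustrated on toy aligned binary streams, as in the sketch below; treating each time window as "visual pattern present / lexical element present" is an assumption made for the example, not the paper's exact encoding.

```python
# Sketch: mutual information between two aligned discrete sequences, the kind
# of co-occurrence criterion used to decide whether an image pattern and a
# lexical element form a visual concept. Toy data only.
import math
from collections import Counter

def mutual_information(xs, ys):
    """MI (in bits) between two aligned discrete sequences of equal length."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# 1 = pattern/word present in a given time window, 0 = absent.
visual  = [1, 1, 0, 1, 0, 0, 1, 0]
lexical = [1, 1, 0, 1, 0, 1, 1, 0]
print("MI(visual, lexical) =", round(mutual_information(visual, lexical), 3))
```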
Optimal Extraction Method of Feature Points in Key Frame Image of Mobile Network Animation
In order to effectively extract the feature points of mobile network animation images and accurately reflect the main content of the video, an optimized method for extracting feature points from key frame images of mobile network animation is proposed. First, key frames are selected according to the degree of content change in the animation video. The scale-invariant feature transform (SIFT) algorithm is used to describe the feature points of the key frame images, and the local feature points of each image are estimated by constrained optimization to realize optimized extraction of the key frame feature points. The efficiency of feature point extraction is analyzed in terms of the number and effectiveness of extracted feature points, time consumption, and similarity invariance. The experimental results show that the proposed method adapts well and can effectively extract feature points from mobile network animation images.
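A compact sketch of the general flow, selecting key frames by a simple frame-difference change measure and describing them with SIFT via OpenCV, is given below; the change threshold and video path are illustrative assumptions, and the paper's constrained optimization step is not reproduced.

```python
# Sketch: key-frame selection by mean frame difference, then SIFT description
# of the selected frames with OpenCV (illustrative parameters).
import cv2
import numpy as np

def select_key_frames(video_path, change_thresh=30.0):
    """Keep frames whose mean absolute difference from the previous frame
    exceeds the content-change threshold."""
    cap = cv2.VideoCapture(video_path)
    key_frames, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is None or np.mean(cv2.absdiff(gray, prev)) > change_thresh:
            key_frames.append(frame)
        prev = gray
    cap.release()
    return key_frames

def sift_features(frame):
    """Detect and describe SIFT feature points on one key frame."""
    sift = cv2.SIFT_create()
    return sift.detectAndCompute(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), None)

if __name__ == "__main__":
    for kf in select_key_frames("animation.mp4"):   # hypothetical file name
        keypoints, _ = sift_features(kf)
        print(len(keypoints), "SIFT feature points")
```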
Human–nature connectedness and other relational values are negatively affected by landscape simplification: insights from Lower Saxony, Germany
Landscape simplification is a worldwide phenomenon that impacts biodiversity in agricultural landscapes. Humans benefit greatly from nature’s contributions to people in both material and immaterial ways, yet landscape simplification can undermine these contributions. Landscape simplification can have negative consequences, for example, for human–nature connectedness and other relational values. Major and rapid land-use change, together with a declining appreciation of nature by individuals and societies, in turn, could cause a downward spiral of disconnections. Our empirical research combined a comprehensive assessment of five dimensions of human–nature connectedness with the lens of relational values to assess how these are influenced by landscape simplification. Focusing on two rural landscapes with differing agricultural development in Lower Saxony (Germany), we conducted 34 problem-centred interviews. We found that landscape simplification, especially if rapid, negatively influenced human–nature connectedness and particular relational values such as social relations, social cohesion or cultural identity. We postulate that human–nature connectedness might have a balancing influence on preserving relational values, buffering negative impacts of landscape simplification. Losing connections to nature could potentially foster conflicts among actors with different values. We conclude that combining the notions of human–nature connectedness and relational values can generate valuable insights and may help to uncover new ways to foster sustainability.
Matching Algorithm for 3D Point Cloud Recognition and Registration Based on Multi-Statistics Histogram Descriptors
Establishing an effective local feature descriptor and using an accurate key point matching algorithm are two crucial tasks in recognition and registration on 3D point clouds. The descriptor needs to retain enough descriptive ability despite noise, occlusion, and incomplete regions in the point cloud, and a suitable key point matching algorithm can yield more precise matched pairs. To obtain an effective descriptor, this paper proposes a Multi-Statistics Histogram Descriptor (MSHD) that combines spatial distribution and geometric attribute features. Furthermore, based on deep learning, we developed a new key point matching algorithm that identifies more corresponding point pairs than existing methods. Our method is evaluated on the Stanford 3D dataset and four real component point cloud datasets collected from the bottom of a train. The experimental results demonstrate the superiority of MSHD: its descriptive ability and robustness to noise and mesh resolution are greater than those of carefully selected baselines (e.g., the FPFH, SHOT, RoPS, and SpinImage descriptors). Importantly, the error of the rotation and translation matrix is much smaller with our key point matching algorithm, and precise corresponding point pairs can be captured, resulting in enhanced recognition and registration for three-dimensional surface matching.
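For context, the sketch below shows the standard FPFH-descriptor plus RANSAC registration pipeline in Open3D, the kind of baseline MSHD is compared against; it is not the paper's own descriptor or matching algorithm, and the file names, voxel size, and distance threshold are assumptions.

```python
# Sketch: baseline point-cloud registration with FPFH descriptors and
# feature-matching RANSAC in Open3D (illustrative parameters only).
import open3d as o3d

def preprocess(pcd, voxel=0.05):
    """Downsample, estimate normals, and compute FPFH descriptors."""
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    return down, fpfh

source = o3d.io.read_point_cloud("source.pcd")   # hypothetical file names
target = o3d.io.read_point_cloud("target.pcd")
src_down, src_fpfh = preprocess(source)
tgt_down, tgt_fpfh = preprocess(target)

result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    src_down, tgt_down, src_fpfh, tgt_fpfh,
    mutual_filter=True,
    max_correspondence_distance=0.075,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
    ransac_n=3,
    checkers=[o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(0.075)],
    criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
print(result.transformation)   # estimated 4x4 rotation-and-translation matrix
```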
An improved YOLOv8n-IRP model for natural rubber tree tapping surface detection and tapping key point positioning
To address the difficulty that lightweight models have in accurately detecting and locating tapping surfaces and tapping key points in complex rubber forest environments, this paper proposes an improved YOLOv8n-IRP model based on YOLOv8n-Pose. First, a receptive field attention mechanism is introduced into the backbone network to enhance feature extraction from the tapping surface. Secondly, the AFPN structure is used to reduce the loss and degradation of low-level and high-level feature information. Finally, a dual-branch key point detection head is designed to improve the screening of key point features on the tapping surface. In the detection performance comparison, YOLOv8n-IRP improves D_mAP50 and P_mAP50 by 1.4% and 2.3%, respectively, over the original model while achieving an average detection success rate of 87% in a variable-illumination test, demonstrating enhanced robustness. In the positioning performance comparison, YOLOv8n-IRP achieves overall better localization than YOLOv8n-Pose and YOLOv5n-Pose, with an average Euclidean distance error of less than 40 pixels. In summary, YOLOv8n-IRP shows excellent detection and positioning performance, providing both a new method for key point localization on rubber-tapping robots and technical support for unmanned operation of intelligent rubber-tapping robots.
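For orientation, the sketch below runs the stock YOLOv8n-Pose model through the ultralytics package and reads out detected key points; the modified YOLOv8n-IRP weights and the tapping-surface dataset are not assumed here, and the image path is illustrative.

```python
# Sketch: key-point detection with the base YOLOv8n-Pose model via ultralytics,
# as a stand-in for the paper's modified YOLOv8n-IRP variant.
from ultralytics import YOLO

model = YOLO("yolov8n-pose.pt")            # stock pose weights, not the IRP variant
results = model("tapping_surface.jpg")     # hypothetical image path

for r in results:
    # r.boxes holds the detected objects, r.keypoints the per-object key points.
    if r.keypoints is not None:
        print(r.keypoints.xy)              # pixel coordinates of detected key points
```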
Deep atrous context convolution generative adversarial network with corner key point extracted feature for nuts classification
Deep learning-based nut classification has emerged as a viable way to automate the detection and categorization of different nut varieties in the food processing and agriculture sectors. Conventional techniques for classifying nuts mostly rely on manually created characteristics like texture, color, shape, or edges. These characteristics frequently fall short of capturing an image's complete complexity, particularly when nuts show tiny visual variances. This research proposes a Deep Atrous Context Convolution Generative Adversarial Network (DAC-GAN) model that categorizes eight classes of nuts: Brazil nut, cashew, peanut, pecan, pistachio, chestnut, macadamia, and walnut. It uses the Common Nut Kaggle dataset with 4,000 nut images across the eight classes. The DAC-GAN approach overcomes the scarcity of labelled data for nut classification by employing a DCGAN to produce high-quality synthetic nut images that supplement the dataset. The DCGAN comprises a discriminator and a generator block: the discriminator learns to differentiate between synthetic and real images, while the generator produces realistic nut images from random noise. The real images, together with the DCGAN-generated images, are processed with feature filtering methods to extract Corner Key Points Featured (CKPF) nut images. To further enhance feature selection, CKPF edges are extracted from each image, providing unique, geometrically distinctive corners for representation learning. For effective feature extraction and model learning, the CKPF nut images are processed with atrous convolution, which captures intricate details by expanding the receptive field without losing resolution. The novelty of this work lies in combining the filtering step with atrous convolution to acquire spatial features from the nut images at various resolutions; the atrous convolution is further refined by appending pre-context and post-context blocks that add image-level information to the features. The effectiveness of the DAC-GAN model was validated against a traditionally augmented dataset with the existing filtering methods and CNN models. The implementation results show that DAC-GAN achieves a high accuracy of 99.83% for nut type classification. The superiority of DAC-GAN over traditional approaches is demonstrated by extensive experiments on augmented and DCGAN-generated datasets, achieving higher classification accuracy and better generalization across nut categories. The outcome demonstrates that a DCGAN together with atrous convolution can be an effective tool for automating nut sorting in the food industry.
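The atrous convolution idea can be sketched as a small PyTorch block of parallel dilated convolutions that widen the receptive field while preserving spatial resolution; the channel sizes and dilation rates below are illustrative assumptions, not the DAC-GAN configuration.

```python
# Sketch: parallel atrous (dilated) convolutions fused by a 1x1 convolution,
# showing how the receptive field grows without downsampling (illustrative sizes).
import torch
import torch.nn as nn

class AtrousContextBlock(nn.Module):
    """Parallel dilated 3x3 convolutions whose outputs are fused by a 1x1 conv."""
    def __init__(self, in_ch=3, out_ch=32, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates)
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = torch.cat([self.act(b(x)) for b in self.branches], dim=1)
        return self.act(self.fuse(feats))

# A 3x3 kernel with dilation r and padding r keeps the 224x224 feature map
# while each branch covers a (2r+1) x (2r+1) neighbourhood.
block = AtrousContextBlock()
print(block(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 32, 224, 224])
```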
FPIRST: Fatigue Driving Recognition Method Based on Feature Parameter Images and a Residual Swin Transformer
Fatigue driving is a serious threat to road safety, so accurately identifying fatigued driving behavior and warning drivers in time is of great significance for improving traffic safety. However, accurately recognizing fatigue driving remains challenging due to large intra-class variations in facial expression, the continuity of behaviors, and illumination conditions. This paper proposes a fatigue driving recognition method based on feature parameter images and a residual Swin Transformer. First, the face region is detected through spatial pyramid pooling and a multi-scale feature output module. Then, a multi-scale facial landmark detector is used to locate 23 key points on the face. The aspect ratios of the eyes and mouth are calculated from the coordinates of these key points, yielding a feature parameter matrix for fatigue recognition. Finally, the feature parameter matrix is converted into an image, and a residual Swin Transformer network is used to recognize fatigue driving. Experimental results on the HNUFD dataset show that the proposed method achieves an accuracy of 96.512%, outperforming state-of-the-art methods.
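The eye/mouth aspect-ratio computation that such fatigue detectors build on can be sketched as below, following the common six-point eye aspect ratio (EAR) convention; the specific key-point indexing used by the paper's 23-point detector is not assumed.

```python
# Sketch: eye/mouth aspect ratio from six contour key points, the scalar that
# drops as an eye closes or rises as a mouth yawns (toy coordinates only).
import math

def aspect_ratio(pts):
    """Aspect ratio over six (x, y) points ordered corner, top-1, top-2,
    corner, bottom-2, bottom-1 around the eye or mouth contour:
    (||p2-p6|| + ||p3-p5||) / (2 * ||p1-p4||)."""
    p1, p2, p3, p4, p5, p6 = pts
    vertical = math.dist(p2, p6) + math.dist(p3, p5)
    horizontal = 2.0 * math.dist(p1, p4)
    return vertical / horizontal

# Toy open-eye vs. nearly closed-eye contours: the ratio drops as the eye closes.
open_eye   = [(0, 0), (2, 2), (4, 2), (6, 0), (4, -2), (2, -2)]
closed_eye = [(0, 0), (2, 0.3), (4, 0.3), (6, 0), (4, -0.3), (2, -0.3)]
print(round(aspect_ratio(open_eye), 3), round(aspect_ratio(closed_eye), 3))
```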