173 results for "skeleton extraction"
Fall Detection Based on Key Points of Human-Skeleton Using OpenPose
According to statistics, falls are the primary cause of injury or death among the elderly over 65 years old, and about 30% of this population falls every year. With elderly fall accidents increasing each year, it is urgent to find a fast and effective fall detection method to help the elderly who fall. Falls occur when the body's center of gravity becomes unstable or its symmetry breaks, so that the body cannot keep its balance. To address this problem, we propose an approach for the recognition of accidental falls based on the symmetry principle. We extract the skeleton information of the human body with OpenPose and identify falls through three critical parameters: the descent speed of the center of the hip joint, the angle between the body centerline and the ground, and the width-to-height ratio of the body's external (bounding) rectangle. Unlike previous studies that only investigated falling behavior, we also consider people standing up after falls. The method recognizes fall-down behavior with a 97% success rate.
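The three cues named in the abstract can be computed directly from 2D keypoints. The sketch below is illustrative only: the threshold values and the choice of neck/hip joints are assumptions, since the abstract does not publish them.

```python
import math

def fall_indicators(neck, hip_prev, hip, dt, keypoints):
    """Compute the three fall cues from 2D keypoints (image y grows downward).
    Joint choices are illustrative, not the paper's exact definitions."""
    # 1) Descent speed of the hip-joint center (pixels/s; positive = moving down).
    speed = (hip[1] - hip_prev[1]) / dt
    # 2) Angle between the body centerline (hip -> neck) and the ground.
    dx, dy = neck[0] - hip[0], neck[1] - hip[1]
    angle = math.degrees(math.atan2(abs(dy), abs(dx)))  # ~90 deg when upright
    # 3) Width-to-height ratio of the external (bounding) rectangle.
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    ratio = (max(xs) - min(xs)) / max(max(ys) - min(ys), 1e-6)
    return speed, angle, ratio

def is_fall(speed, angle, ratio, speed_th=120.0, angle_th=45.0, ratio_th=1.0):
    # Hypothetical thresholds: fast descent + near-horizontal body + wide bbox.
    return speed > speed_th and angle < angle_th and ratio > ratio_th
```

An upright, slowly moving skeleton yields a near-90° centerline angle and a tall bounding box, so none of the three conditions fire; a rapid downward hip motion into a horizontal pose trips all three.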
A Review: Point Cloud-Based 3D Human Joints Estimation
Joint estimation of the human body is useful in many fields such as human–computer interaction, autonomous driving, video analysis, and virtual reality. Although many depth-based studies have been classified and summarized in previous review or survey papers, point cloud-based pose estimation of the human body remains difficult due to the disorder and rotation invariance of the point cloud. In this review, we summarize recent developments in point cloud-based pose estimation of the human body. Existing works are divided into three categories based on their working principles: template-based, feature-based, and machine learning-based methods. The most significant works are highlighted with a detailed introduction to analyze their characteristics and limitations. The widely used datasets in the field are summarized, and quantitative comparisons are provided for representative methods. Moreover, this review helps further understand pertinent applications in many frontier research directions. Finally, we conclude with the challenges involved and the problems to be solved in future research.
An Accurate Skeleton Extraction Approach From 3D Point Clouds of Maize Plants
Accurate and high-throughput determination of plant morphological traits is essential for phenotyping studies. Many approaches now exist to acquire high-quality three-dimensional (3D) point clouds of plants. However, it is difficult to estimate phenotyping parameters of maize plants accurately across whole growth stages from these 3D point clouds. In this paper, an accurate skeleton extraction approach is proposed to bridge the gap between 3D point clouds and phenotyping trait estimation of maize plants. The algorithm first uses point cloud clustering and color-difference denoising to reduce the noise of the input point clouds. Next, the Laplacian contraction algorithm is applied to shrink the points. Key points representing the skeleton of the plant are then selected through adaptive sampling, and neighboring points are connected to form a plant skeleton composed of semantic organs. Finally, skeleton points that deviate from the input point cloud are calibrated by building a step-forward local coordinate frame along the tangent direction of the original points. The proposed approach generates an accurately extracted skeleton from the 3D point cloud and helps estimate phenotyping parameters of maize plants with high precision. Experimental verification of the skeleton extraction process, tested on three cultivars of maize at different growth stages, demonstrates that the extracted skeleton matches the input point cloud well. Compared with morphological parameters derived from 3D digitizing data, the NRMSEs of leaf length, leaf inclination angle, leaf top length, leaf azimuthal angle, leaf growth height, and plant height estimated using the extracted plant skeleton are 5.27%, 8.37%, 5.12%, 4.42%, 1.53%, and 0.83%, respectively, which meets the needs of phenotyping analysis. The time required to process a single maize plant is below 100 s. The proposed approach may play an important role in further maize research and applications, such as genotype-to-phenotype studies, geometric reconstruction, functional-structural maize modeling, and dynamic growth animation.
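The contract-then-sample steps can be sketched in a toy form. The snippet below substitutes a simple k-nearest-neighbour centroid shrink for the full Laplacian contraction solve, and a greedy distance-based sampler for the paper's adaptive sampling, so it illustrates the idea rather than the published algorithm.

```python
import numpy as np

def contract_points(pts, k=8, lam=0.5, iters=10):
    """Toy Laplacian-style contraction: each iteration moves every point partway
    toward the centroid of its k nearest neighbours. The paper's actual
    Laplacian contraction solves a weighted linear system; this is a sketch."""
    pts = np.asarray(pts, dtype=float).copy()
    for _ in range(iters):
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        nn = np.argsort(d, axis=1)[:, 1:k + 1]      # k nearest neighbours (skip self)
        centroids = pts[nn].mean(axis=1)
        pts = (1 - lam) * pts + lam * centroids     # blend toward local centroid
    return pts

def sample_skeleton(pts, radius):
    """Greedy stand-in for adaptive sampling: keep points at least `radius` apart."""
    keep = []
    for p in pts:
        if all(np.linalg.norm(p - q) >= radius for q in keep):
            keep.append(p)
    return np.asarray(keep)
```

Running the contraction on a noisy tube of points collapses the transverse noise toward a curve, which the sampler then thins into candidate skeleton nodes.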
A Lightweight Subgraph-Based Deep Learning Approach for Fall Recognition
Falls pose a great danger, especially to the elderly population. When a fall occurs, the body’s center of gravity moves from a high position to a low position, and the magnitude of the change varies among body parts. Most existing deep learning-based fall recognition methods have not yet considered these differences in movement and amplitude of change between body parts, and many suffer from complicated designs, slow detection speed, and a lack of timeliness. To alleviate these problems, this paper proposes a lightweight subgraph-based deep learning method that utilizes skeleton information for fall recognition. The skeleton information of the human body is extracted by OpenPose, and an end-to-end lightweight subgraph-based network is designed. Subgraph division and subgraph attention modules are introduced to enlarge the receptive field while keeping the network lightweight. A multi-scale temporal convolution module is also designed to extract and fuse multi-scale temporal features, which enriches the feature representation. The proposed method is evaluated on a partial fall dataset collected at NTU and on two public datasets, and it outperforms existing methods. This indicates that the proposed method is accurate and lightweight, making it suitable for real-time detection and rapid response to falls.
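The subgraph division step can be sketched as masking the full skeleton adjacency matrix down to per-body-part subgraphs. The joint grouping below is a hypothetical OpenPose-style partition for illustration, not the paper's exact division.

```python
import numpy as np

# Hypothetical 5-part division of an OpenPose-style skeleton (joint indices
# are illustrative, not taken from the paper): torso, two arms, two legs.
PARTS = {
    "torso": [0, 1, 2, 5, 8, 11],
    "r_arm": [2, 3, 4],
    "l_arm": [5, 6, 7],
    "r_leg": [8, 9, 10],
    "l_leg": [11, 12, 13],
}

def subgraph_adjacency(A, part):
    """Mask a full skeleton adjacency matrix down to one body-part subgraph:
    only edges whose both endpoints belong to `part` survive."""
    mask = np.zeros_like(A)
    idx = np.ix_(part, part)
    mask[idx] = A[idx]
    return mask
```

Graph convolutions applied per masked adjacency then see only one body part, which is what lets the network weight parts with different motion amplitudes differently.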
Fall Detection Method for Infrared Videos Based on Spatial-Temporal Graph Convolutional Network
The timely detection of falls and alerting of medical aid are critical for health monitoring of elderly individuals living alone. This paper focuses on issues such as poor adaptability, privacy infringement, and low recognition accuracy associated with traditional visual sensor-based fall detection. We propose an infrared video-based fall detection method utilizing spatial-temporal graph convolutional networks (ST-GCNs) to address these challenges. Our method uses fine-tuned AlphaPose to extract 2D human skeleton sequences from infrared videos. The skeleton data are then represented in Cartesian and polar coordinates and processed through a two-stream ST-GCN to recognize fall behaviors promptly. To enhance the network’s ability to recognize fall actions, we improved the adjacency matrix of the graph convolutional units and introduced multi-scale temporal graph convolution units. To facilitate practical deployment, we optimized the time window and network depth of the ST-GCN, striking a balance between model accuracy and speed. Experimental results on a proprietary infrared human action recognition dataset demonstrate that the proposed algorithm identifies fall behaviors with an accuracy of up to 96%. Moreover, the algorithm performs robustly, identifying falls in both near-infrared and thermal-infrared videos.
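The two-stream input representation (Cartesian plus polar) amounts to a coordinate conversion over the joint sequence. The choice of reference joint below is an assumption, since the abstract does not name the pole of the polar frame.

```python
import numpy as np

def to_polar(joints, center_idx=0):
    """Convert a (T, V, 2) Cartesian joint sequence into polar coordinates
    (radius, angle) about joint `center_idx` (an assumed pole; the abstract
    does not specify which joint anchors the polar frame)."""
    joints = np.asarray(joints, dtype=float)
    rel = joints - joints[:, center_idx:center_idx + 1, :]  # translate to pole
    r = np.linalg.norm(rel, axis=-1)
    theta = np.arctan2(rel[..., 1], rel[..., 0])
    return np.stack([r, theta], axis=-1)                    # (T, V, 2)
```

The Cartesian stream feeds the raw (x, y) sequence to one ST-GCN branch and this (r, θ) tensor feeds the other, giving the network a translation-normalized view of the same motion.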
Constraint-Based Optimized Human Skeleton Extraction from Single-Depth Camera
Human skeleton extraction from a single depth camera, a cutting-edge research topic in computer vision and graphics for decades, remains challenging due to occlusions of different body parts, large appearance variations, and sensor noise. In this paper, we propose incorporating human skeleton length-conservation and symmetry priors, as well as temporal constraints, to enhance the consistency and continuity of the estimated skeleton of a moving human body. Given an initial per-frame estimate of the skeleton joint positions provided by the Kinect SDK or Nuitrack SDK, which does not follow the aforementioned priors and can be prone to errors, our framework improves the accuracy of these pose estimates based on the length and symmetry constraints. In addition, our method is device-independent and can be integrated into skeleton extraction SDKs for refinement, allowing the detection of outliers within the initial joint location estimates and the prediction of new joint location estimates from the temporal observations. The experimental results demonstrate the effectiveness and robustness of our approach in several cases.
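One cheap pass of the length-conservation and symmetry priors can be sketched as rescaling each bone to a shared reference length. The paper's full constrained optimization is more involved, so treat this as an illustration of the priors, not the method itself.

```python
import numpy as np

def enforce_bone_lengths(joints, bones, lengths):
    """Project each child joint onto a sphere of the reference bone length
    around its parent -- one cheap pass of the length-conservation prior.
    `bones` is a list of (parent, child) index pairs ordered root-first;
    `lengths` holds the per-bone reference lengths, aligned with `bones`."""
    joints = np.asarray(joints, dtype=float).copy()
    for (p, c), L in zip(bones, lengths):
        v = joints[c] - joints[p]
        n = np.linalg.norm(v)
        if n > 1e-9:
            joints[c] = joints[p] + v * (L / n)  # keep direction, fix length
    return joints

def symmetric_lengths(left, right):
    """Symmetry prior: share one reference length per left/right bone pair."""
    return [(l + r) / 2.0 for l, r in zip(left, right)]
```

Running such a pass per frame keeps bone lengths constant over time, which is exactly the consistency that raw per-frame SDK estimates lack.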
An Automatic Tree Skeleton Extraction Approach Based on Multi-View Slicing Using Terrestrial LiDAR Scans Data
Effective 3D tree reconstruction from point clouds obtained by terrestrial Light Detection and Ranging (LiDAR) scanning (TLS) has been widely recognized as a critical technology in forestry and ecological modeling. The major advantages of TLS lie in its rapid, automatic capture of tree information at the millimeter level, providing massive high-density data. In addition, TLS-based 3D tree reconstruction allows occluded and complex structures of trees to be recovered from the derived point cloud. In this paper, an automatic tree skeleton extraction approach based on multi-view slicing is proposed to improve TLS 3D tree reconstruction, borrowing an idea from the medical imaging technique of X-ray computed tomography. First, we extract the precise trunk center and cut the point cloud of the tree into slices. Next, the skeleton of each slice is generated using the kernel mean shift and principal component analysis algorithms. These isolated skeletons are then smoothed and morphologically synthesized. Finally, validation on point clouds of two trees acquired from multi-view TLS further demonstrates the potential of the proposed framework to deal efficiently with TLS point cloud data.
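The slicing step can be sketched by cutting the cloud along height and taking per-slice centroids as raw skeleton nodes. The paper refines each slice with kernel mean shift and PCA; the plain centroid below stands in for that refinement.

```python
import numpy as np

def slice_skeleton(points, n_slices=20):
    """Cut a tree point cloud into horizontal slices along z and take each
    slice's centroid as a raw skeleton node (a stand-in for the paper's
    kernel mean shift + PCA refinement of every slice)."""
    points = np.asarray(points, dtype=float)
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_slices + 1)
    nodes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = points[(z >= lo) & (z <= hi)]
        if len(sel):
            nodes.append(sel.mean(axis=0))  # slice centroid = skeleton node
    return np.asarray(nodes)
```

Connecting consecutive nodes bottom-up then yields the trunk polyline that the smoothing and synthesis stages operate on.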
A Self-Adaptive Strip Pooling Network for Segmenting the Kidney Glomerular Basement Membrane
Accurate semantic segmentation and automatic thickness measurement of the glomerular basement membrane (GBM) can aid pathologists in subsequent pathological diagnosis. The GBM has a complex ultrastructure and an irregular shape, which make it difficult to segment accurately. Observing that the shape of the GBM is stripe-like, we propose an RSP module to extract both the strip and square features of the GBM. Additionally, grayscale images of the GBM are similar to those of the surrounding tissues and the contrast is low, so we add an edge attention mechanism to further improve segmentation quality. Moreover, we revise the pixel-level loss function to take the tissues around the GBM into account and locate the GBM as a doctor would, i.e., by using those tissues as reference objects. Ablation experiments on each module show that SSPNet segments the GBM better. The proposed method is also compared with existing medical semantic segmentation models. The experimental results show that the proposed method obtains high-precision segmentation results for the GBM and segments the target completely. Finally, the thickness of the GBM is calculated using a skeleton extraction method to provide quantitative data for expert diagnosis.
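A common way to turn a membrane skeleton into thickness values, which may or may not match the paper's exact procedure, is to read the Euclidean distance transform of the segmentation mask at the skeleton pixels.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def membrane_thickness(mask, skeleton):
    """Estimate local thickness of a segmented membrane: the Euclidean distance
    transform of the mask gives, at each skeleton pixel, the distance to the
    nearest background pixel, so twice that value approximates the full local
    thickness. (A standard skeleton-based measurement, shown as a sketch.)"""
    dist = distance_transform_edt(mask)
    return 2.0 * dist[skeleton.astype(bool)]  # one thickness per skeleton pixel
```

Multiplying the returned pixel thicknesses by the image's physical pixel size converts them into the quantitative values handed to the pathologist.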
A Method for Tomato Plant Stem and Leaf Segmentation and Phenotypic Extraction Based on Skeleton Extraction and Supervoxel Clustering
To address the difficulty of extracting phenotypic parameters of tomato plants non-destructively and accurately, we propose a method for stem and leaf segmentation and phenotypic extraction of tomato plants based on skeleton extraction and supervoxel clustering. We carried out growth and cultivation experiments on tomato plants in a solar greenhouse and obtained multi-view image sequences of the plants to construct their three-dimensional models. After removing noise points with a multi-filtering algorithm, we used the Laplacian skeleton extraction algorithm to extract the skeleton of the point cloud and, based on the plant skeleton, separated the stem from the leaves using a highest-point path search with height and radius constraints. At the same time, a supervoxel segmentation method based on Euclidean distance was used to segment each leaf. From the segmented organs we extracted six phenotypic parameters important for the phenotype: plant height, stem diameter, leaf angle, leaf length, leaf width, and leaf area. The results showed that the average precision, average recall, and average F1 score of the stem and leaf segmentation were 0.88, 0.80, and 0.84, better than those of four other segmentation algorithms; the coefficients of determination between the measured and true values of the phenotypic parameters were 0.97, 0.84, 0.88, 0.94, 0.92, and 0.93; and the root-mean-square errors were 2.17 cm, 0.346 cm, 5.65°, 3.18 cm, 2.99 cm, and 8.79 cm². The measurements of the proposed method correlate strongly with the actual values, which satisfies the requirements of daily production and provides technical support for high-throughput extraction of phenotypic parameters of tomato plants in solar greenhouses.
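The Euclidean-distance clustering used for leaf separation can be sketched as greedy region growing over the point cloud. This is a simplified stand-in for supervoxel clustering, which additionally respects geometric and color homogeneity.

```python
import numpy as np
from collections import deque

def euclidean_clusters(points, radius):
    """Greedy Euclidean clustering: grow a cluster by repeatedly absorbing any
    point within `radius` of a point already in the cluster. A simplified
    stand-in for the supervoxel segmentation described in the abstract."""
    points = np.asarray(points, dtype=float)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, queue = [seed], deque([seed])
        while queue:
            i = queue.popleft()
            near = [j for j in unvisited
                    if np.linalg.norm(points[j] - points[i]) <= radius]
            for j in near:
                unvisited.discard(j)
            cluster.extend(near)
            queue.extend(near)
        clusters.append(sorted(cluster))
    return clusters
```

Points belonging to different leaves sit farther apart than `radius`, so each leaf ends up in its own cluster, ready for per-leaf parameter measurement.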
A crack detection and quantification method using matched filter and photograph reconstruction
Crack detection is a critical task in bridge maintenance and management. While popular deep learning algorithms have shown promise, their reliance on large, high-quality training datasets, which are often unavailable in engineering practice, limits their applicability. By contrast, traditional digital image processing methods offer low computational cost and strong interpretability, making continued research in this area highly valuable. This study proposes an automatic crack detection and quantification approach based on digital image processing combined with unmanned aerial vehicle (UAV) flight parameters. First, the characteristics of bridge images collected by UAVs were thoroughly analyzed, and an enhanced matched-filter algorithm was designed to achieve crack segmentation. Morphological methods were then employed to extract the skeletons of the segmented cracks, enabling calculation of actual crack lengths. Finally, a 3D model was constructed by integrating the detection results with the image-shooting parameters. This 3D model, annotated with the detected cracks, provides an intuitive and comprehensive representation of bridge damage, facilitating informed decision making in maintenance planning and resource allocation. To verify the accuracy of the enhanced matched-filter algorithm, it was compared with other digital image processing methods on public datasets, achieving average results of 97.9% Pixel Accuracy (PA), 72.5% F1-score, and 58.1% Intersection over Union (IoU) across three typical sub-datasets. Moreover, the proposed methodology was successfully applied to an arch bridge with an error of only 2%, demonstrating its applicability to real-world scenarios.
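Converting a morphological crack skeleton into a physical length typically combines per-pixel step lengths with the ground sample distance (mm per pixel) derived from the UAV flight parameters. The formula below is a standard sketch, not the paper's exact quantification procedure.

```python
import numpy as np

def crack_length_mm(skeleton, gsd_mm):
    """Convert a boolean crack-skeleton image into physical length: each pair
    of 8-connected skeleton pixels contributes one step, axis-aligned steps
    counting 1 pixel and diagonal steps sqrt(2) pixels, scaled by the ground
    sample distance `gsd_mm` (mm/pixel) from the UAV flight parameters."""
    ys, xs = np.nonzero(skeleton)
    pix = set(zip(ys.tolist(), xs.tolist()))
    straight = diagonal = 0
    for y, x in pix:
        # Count each neighbour pair once by only looking "forward".
        if (y, x + 1) in pix: straight += 1
        if (y + 1, x) in pix: straight += 1
        if (y + 1, x + 1) in pix: diagonal += 1
        if (y + 1, x - 1) in pix: diagonal += 1
    return (straight + np.sqrt(2) * diagonal) * gsd_mm
```

The ground sample distance itself is commonly derived as (sensor pixel pitch × flight height ÷ focal length), which is presumably where the UAV flight parameters enter the paper's pipeline.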