81 result(s) for "Qingyi Gu"
A Novel Dynamic Light-Section 3D Reconstruction Method for Wide-Range Sensing
Existing galvanometer-based laser-scanning systems are challenging to apply in multi-scale 3D reconstruction because of the difficulty of balancing high reconstruction accuracy against a wide reconstruction range. This paper presents a novel method that synchronizes laser scanning with switching of the camera's field of view (FOV) using multiple galvanometers. Beyond the advanced hardware setup, we establish a comprehensive geometric model of the system by modeling the dynamic camera, the dynamic laser, and their combined interaction. Furthermore, since existing calibration methods mainly focus on either dynamic lasers or dynamic cameras and have certain limitations, we propose a novel high-precision and flexible calibration method that constructs an error model and minimizes its objective function. The performance of the proposed method was evaluated by scanning standard components. The results show that the proposed 3D reconstruction system achieves an accuracy of 0.3 mm when the measurement range is extended to 1100 mm × 1300 mm × 650 mm. This demonstrates that sub-millimeter measurement accuracy is achieved for meter-scale reconstruction ranges, indicating that the proposed method realizes multi-scale 3D reconstruction and simultaneously allows high-precision, wide-range 3D reconstruction in industrial applications.
An FPGA-Based Ultra-High-Speed Object Detection Algorithm with Multi-Frame Information Fusion
An ultra-high-speed object detection algorithm based on the Histogram of Oriented Gradients (HOG) and a Support Vector Machine (SVM), designed for hardware implementation at 10,000 frames per second (FPS) under complex backgrounds, is proposed. The algorithm is implemented on the field-programmable gate array (FPGA) of a high-speed-vision platform in which 64 pixels are input per clock cycle. The high pixel parallelism of the vision platform limits its performance, as it is difficult to reduce the stride between detection windows below 16 pixels, which introduces a non-negligible deviation in object detection. In addition, limited by the transmission bandwidth, only one frame in every four can be transmitted to the PC for post-processing; that is, 75% of the image information is wasted. To overcome these problems, a multi-frame information fusion model is proposed in this paper. Image data and synchronization signals are first regenerated according to the image frame numbers. The maximum HOG feature value and the corresponding coordinates of each frame are stored at the bottom of the image together with those of adjacent frames. Compensated detections are then obtained through information fusion weighted by the confidence of consecutive frames. Several experiments are conducted to demonstrate the performance of the proposed algorithm. The evaluation results show that the deviation is reduced with our proposed method compared with the existing one.
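The fusion idea in the abstract above, keeping each frame's peak HOG response and coordinates and combining them across adjacent frames, can be sketched as a confidence-weighted average. The function name and the exact weighting rule are illustrative assumptions; the paper does not specify its FPGA fusion rule at this level.

```python
def fuse_detections(frames):
    """Confidence-weighted fusion of per-frame detections.

    `frames` is a list of (confidence, x, y) tuples, one per adjacent
    frame; returns the fused (x, y) position. This is a hypothetical
    sketch of the idea, not the paper's exact fusion rule.
    """
    total = sum(c for c, _, _ in frames)
    fused_x = sum(c * x for c, x, _ in frames) / total
    fused_y = sum(c * y for c, _, y in frames) / total
    return fused_x, fused_y
```

With equal confidences this reduces to a plain average; a frame with a stronger HOG response pulls the fused coordinate toward its detection.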
CellNet: A Lightweight Model towards Accurate LOC-Based High-Speed Cell Detection
Label-free cell separation and sorting in a microfluidic system, an essential technique for modern cancer diagnosis, has made high-throughput single-cell analysis a reality. However, designing an efficient cell detection model is challenging. Traditional cell detection methods are subject to occlusion boundaries and weak textures, resulting in poor performance. Modern detection models based on convolutional neural networks (CNNs) have achieved promising results at the cost of a large number of parameters and floating-point operations (FLOPs). In this work, we present a lightweight yet powerful cell detection model named CellNet, which includes two efficient modules: CellConv blocks and the h-swish nonlinearity function. CellConv is proposed as an effective feature extractor and a substitute for computationally expensive convolutional layers, whereas the h-swish function is introduced to increase the nonlinearity of the compact model. To boost the prediction and localization ability of the detection model, we redesigned the model's multi-task loss function. In comparison with other efficient object detection methods, our approach achieved a state-of-the-art 98.70% mean average precision (mAP) on our custom sea urchin embryo dataset with only 0.08 M parameters and 0.10 B FLOPs, reducing the model size by 39.5× and the computational cost by 4.6×. We deployed CellNet on different platforms to verify its efficiency. The inference speed on a graphics processing unit (GPU) was 500.0 fps, compared with 87.7 fps on a CPU. Additionally, CellNet is 769.5 times smaller and 420 fps faster than YOLOv3. Extensive experimental results demonstrate that CellNet achieves an excellent efficiency/accuracy trade-off on resource-constrained platforms.
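The h-swish nonlinearity mentioned in the abstract above is commonly defined (e.g., in MobileNetV3) as x·ReLU6(x+3)/6; whether CellNet uses exactly this formulation is an assumption here. A minimal sketch:

```python
def h_swish(x: float) -> float:
    """Hard-swish: x * ReLU6(x + 3) / 6, a piecewise-linear
    approximation of swish that avoids computing a sigmoid."""
    return x * min(max(x + 3.0, 0.0), 6.0) / 6.0
```

Because it needs only a clamp and a multiply, h-swish is cheap on embedded and quantized hardware, which fits the compact-model goal described in the abstract.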
The complete chloroplast genome of Vaccinium oxycoccos (Ericaceae)
Vaccinium species have great significance as fruit crops due to their economic and nutritional value. Here we report the chloroplast genome of V. oxycoccos. The chloroplast genome of V. oxycoccos was 177,088 bp in length with a GC content of 36.74%. The LSC, SSC, and IR regions were 104,139 bp, 3031 bp, and 34,959 bp in length, respectively. The chloroplast genome contained 105 different genes, including 73 protein-coding genes, 4 rRNA genes, and 28 tRNA genes. The phylogenetic analysis indicated that V. oxycoccos was closely related to V. microcarpum within the family Ericaceae. This chloroplast genome not only enriches the genomic information of Vaccinium but will also be useful in evolutionary studies of the family Ericaceae.
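The region lengths reported above are consistent with the typical quadripartite plastome structure, in which the total length equals the LSC plus the SSC plus two copies of the inverted repeat. A quick sanity check of the arithmetic:

```python
# Quadripartite chloroplast genome: total = LSC + SSC + 2 * IR
LSC, SSC, IR = 104_139, 3_031, 34_959
total = LSC + SSC + 2 * IR  # matches the reported 177,088 bp genome size
```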
Genome-wide epigenetic dynamics during postnatal skeletal muscle growth in Hu sheep
Hypertrophy and fiber transformation are two prominent features of postnatal skeletal muscle development. However, the role of epigenetic modifications is less understood. ATAC-seq, whole-genome bisulfite sequencing, and RNA-seq were applied to investigate the epigenetic dynamics of muscle in Hu sheep at 3 days, 3 months, 6 months, and 12 months after birth. All 6865 differentially expressed genes were assigned to three distinct trends, highlighting the balanced protein synthesis, accumulated immune activities, and restrained cell division in postnatal development. We identified 3742 differentially accessible regions and 11799 differentially methylated regions that were associated with muscle-development-related pathways at certain stages, like D3-M6. Transcription factor network analysis, based on genomic loci with high chromatin accessibility and low methylation, showed that ARID5B, MYOG, and ENO1 were associated with muscle hypertrophy, while NR1D1, FADS1, ZFP36L2, and SLC25A1 were associated with muscle fiber transformation. Taken together, these results suggest that DNA methylation and chromatin accessibility contribute to regulating the growth and fiber transformation of postnatal skeletal muscle in Hu sheep. Multi-omic profiling of postnatal muscle development in Hu sheep from 3 days to 12 months of age highlights the epigenetic factors involved in regulating skeletal muscle growth.
Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror
This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera's frame and shutter timings are controlled in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz, so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting of fast-moving objects while extracting the maximum performance of the actuator. We develop a prototype motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time.
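The figures quoted above imply the per-frame timing budget: at 750 fps each frame period is about 1.33 ms, of which 0.33 ms is exposure, leaving roughly 1 ms for the mirror to reposition between exposures. A quick check of that arithmetic:

```python
FPS = 750
EXPOSURE_MS = 0.33

frame_period_ms = 1000.0 / FPS              # ≈ 1.333 ms per frame at 750 fps
settle_ms = frame_period_ms - EXPOSURE_MS   # time left between exposures
```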
500-fps face tracking system
In this paper, we propose a high-speed vision system that can be applied to real-time face tracking at 500 fps using GPU acceleration of a boosting-based face tracking algorithm. By assuming a small image displacement between frames, which is a property of high-frame-rate vision, we develop an improved boosting-based face tracking algorithm for fast face tracking by enhancing the Viola–Jones face detector. In the improved algorithm, face detection can be efficiently accelerated by reducing the number of window searches for Haar-like features, and the tracked face pattern can be localized pixel-wise even when the window is sparsely scanned for a larger face pattern by introducing skin color extraction in the boosting-based face detector. The improved boosting-based face tracking algorithm is implemented on a GPU-based high-speed vision platform, and face tracking can be executed in real time at 500 fps for an 8-bit color image of 512 × 512 pixels. In order to verify the effectiveness of the developed face tracking system, we install it on a two-axis mechanical active vision system and perform several experiments for tracking face patterns.
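The speed-up from assuming small inter-frame displacement can be illustrated by counting sliding-window positions: restricting the search to a small region around the previous face position eliminates most candidate windows. The window size, stride, and ROI size below are hypothetical values for illustration, not the paper's parameters:

```python
def window_count(width: int, height: int, win: int, stride: int) -> int:
    """Number of sliding-window positions in a width x height search area."""
    return ((width - win) // stride + 1) * ((height - win) // stride + 1)

full_frame = window_count(512, 512, 24, 4)  # exhaustive scan of the whole frame
local_roi = window_count(64, 64, 24, 4)     # scan only near the previous detection
```

Even with these modest assumptions the restricted scan evaluates over a hundred times fewer windows per frame, which is what makes 500 fps feasible.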
Modeling and Calibration of Active Thermal-Infrared Visual System for Industrial HMI
In industrial applications of the human-machine interface (HMI), thermal-infrared cameras can detect objects that visible-spectrum cameras cannot. A thermal-infrared camera receives the energy radiated by a target through an infrared detector and obtains a thermal image corresponding to the heat distribution field on the target surface. Because of this special imaging principle, a thermal-infrared camera is not affected by the light source when imaging. Compared to visible-spectrum cameras, thermal imaging cameras can better detect defects with poor contrast but temperature differences, as well as internal defects in products. Therefore, they can be used in many specific industrial inspection applications. However, thermal-infrared imaging suffers from thermal diffusion, which leads to noisy thermal-infrared images and limits its applications in high-precision industrial environments. In this paper, we propose a high-precision measurement system for industrial HMI based on thermal-infrared vision. An accurate measurement model of the system was established to deal with the problems caused by imaging noise. The experiments conducted suggest that the proposed model and calibration method are valid for the active thermal-infrared visual system and achieve high-precision measurements.
Fast object detection based on binary deep convolution neural networks
In this study, a fast object detection algorithm based on binary deep convolutional neural networks (CNNs) is proposed. Convolution kernels of different sizes are used to predict the classes and bounding boxes of multi-scale objects directly in the last feature map of a deep CNN. In this way, rapid object detection with an acceptable precision loss is achieved. In addition, binary quantisation of the weight values and the input data of each layer is used to squeeze the networks for faster object detection. Compared to full-precision convolution, the proposed binary deep CNNs for object detection yield, in theory, 62 times faster convolutional operations and 32 times memory savings. Moreover, the proposed method is easy to implement in embedded computing systems because of the binary convolution operations and the low memory requirement. Experimental results on Pascal VOC2007 validate the effectiveness of the authors' proposed method.
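The theoretical speed-up of binary networks comes from replacing multiply-accumulate with XNOR and popcount on bit-packed {-1, +1} vectors. A minimal sketch of that binary dot product (the bit-packing convention here is an assumption for illustration):

```python
def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two n-element {-1, +1} vectors packed as bitmasks
    (bit = 1 encodes +1). Matching bits contribute +1 and differing bits
    -1, so dot = n - 2 * popcount(a XOR b)."""
    mismatches = bin((a_bits ^ b_bits) & ((1 << n) - 1)).count("1")
    return n - 2 * mismatches
```

On hardware, the XOR and popcount operate on whole machine words at once, which is the source of the large theoretical speed-up over full-precision multiply-accumulates.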
Blink-Spot Projection Method for Fast Three-Dimensional Shape Measurement
We present a blink-spot projection method for observing moving three-dimensional (3D) scenes. The proposed method can reduce the synchronization errors of the sequential structured light illumination, which are caused by multiple light patterns projected with different timings when fast-moving objects are observed. In our method, a series of spot array patterns, whose spot sizes change at different timings corresponding to their identification (ID) number, is projected onto scenes to be measured by a high-speed projector. Based on simultaneous and robust frame-to-frame tracking of the projected spots using their ID numbers, the 3D shape of the measuring scene can be obtained without misalignments, even when there are fast movements in the camera view. We implemented our method with a high-frame-rate projector-camera system that can process 512 × 512 pixel images in real-time at 500 fps to track and recognize 16 × 16 spots in the images. Its effectiveness was demonstrated through several 3D shape measurements when the 3D module was mounted on a fast-moving six-degrees-of-freedom manipulator.