Catalogue Search | MBRL
86 result(s) for "Shi, Boxin"
Active Printed Materials for Complex Self-Evolving Deformations
2014
We propose a new design for complex self-evolving structures that vary over time through environmental interaction. In conventional 3D printing systems, materials are meant to be stable rather than active, and fabricated models are designed and printed as static objects. Here, we introduce a novel approach for simulating and fabricating self-evolving structures that transform into a predetermined shape, changing their properties and function after fabrication. The new locally coordinated bending primitives combine into a single system, allowing for a global deformation that can stretch, fold, and bend given an environmental stimulus.
Journal Article
Face Image Reflection Removal
by Duan, Ling-Yu; Wan, Renjie; Shi, Boxin
in Face recognition; Image transmission; Object recognition
2021
Face images captured through glass are usually contaminated by reflections. The low-transmitted reflections make reflection removal more challenging than for general scenes because important facial features can be completely occluded. In this paper, we propose and solve the face image reflection removal problem. We recover the important facial structures by incorporating inpainting ideas into a guided reflection removal framework, which takes two images as input and considers various face-specific priors. We use a newly collected face reflection image dataset to train our model and compare it with state-of-the-art methods. The proposed method shows advantages in estimating reflection-free face images for improving face recognition.
Journal Article
Multispectral Photometric Stereo for Spatially-Varying Spectral Reflectances
2022
Multispectral photometric stereo (MPS) aims at recovering the surface normal of a scene measured under multiple light sources with different wavelengths. While it enables single-shot measurement of surface normals, the problem is known to be ill-posed. To make the problem well-posed, existing MPS methods rely on restrictive assumptions, such as a shape prior or surfaces being monochromatic with uniform albedo. This paper alleviates these restrictive assumptions. We show that, based on our new formulation, the problem becomes well-posed for surfaces with uniform chromaticity but spatially-varying albedos. Specifically, if at least three (or two) scene points share the same chromaticity, the proposed method uniquely recovers their surface normals in closed form given no fewer than four (or five) spectral lights. In addition, we show that the more general setting of both spatially-varying chromaticities and albedos can become well-posed if the light spectra and camera spectral sensitivity are calibrated. For this general setting, we derive a unique, closed-form solution for MPS using linear bases extracted from a spectral reflectance database. Experiments on both synthetic and real captured data with spatially-varying reflectance demonstrate the effectiveness of our method and show its potential applicability for multispectral heritage preservation.
Journal Article
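A minimal sketch of the image formation that MPS problems of this kind build on (symbols here are generic, not taken from the paper): under a Lambertian model, the intensity of a scene point under the $j$-th spectral light can be written as

\[
I_j = a \, c_j \, \big(\mathbf{l}_j^{\top} \mathbf{n}\big),
\]

where $\mathbf{n}$ is the unit surface normal, $\mathbf{l}_j$ the $j$-th light direction, $a$ the albedo, and $c_j$ a scalar coupling the surface chromaticity, the light spectrum, and the camera spectral sensitivity. The well-posedness conditions above concern when $a$ and the $c_j$ can be disentangled from $\mathbf{n}$ across scene points that share the same chromaticity.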
Recent Advances in Time-Sensitive Network Configuration Management: A Literature Review
by Shi, Boxin; Tu, Xiaodong; Wu, Bin
in Actuators; Computer architecture; Configuration management
2023
At present, many network applications are seeking to implement Time-Sensitive Network (TSN) technology, which not only furnishes communication transmission services that are deterministic, low-latency, highly dependable, and have ample bandwidth, but also enables unified configuration management, permitting different network types to function under a single management system. These characteristics enable it to be widely used in many fields such as industrial sensor and actuator networks, in-vehicle networks, data center networks, and edge computing. Nonetheless, TSN’s configuration management faces numerous difficulties and challenges related to network deployment, automated operation, and maintenance, as well as real-time and safety assurance, rendering it exceedingly intricate. In recent years, some studies have been conducted on TSN configuration management, encompassing various aspects such as system design, key technologies for configuration management, protocol enhancement, and application development. Nevertheless, there is a dearth of systematic summaries of these studies. Hence, this article aims to provide a comprehensive overview of TSN configuration management. Drawing upon more than 70 relevant publications and the pertinent standards established by the IEEE 802.1 TSN working group, we first introduce the system architecture of TSN configuration management from a macro perspective and then explore specific technical details. Additionally, we demonstrate its application scenarios through practical cases and finally highlight the challenges and future research directions. We aspire to provide a comprehensive reference for peers and new researchers interested in TSN configuration management.
Journal Article
GelLight: Illumination Design, Modeling, and Optimization for Camera-Based Tactile Sensor
2025
Camera-based tactile sensors have attracted attention in the robotics community for their high-density tactile perception, in which image quality and reconstruction accuracy are largely determined by the illumination design. However, the influence of illumination has not yet been systematically analyzed, and most existing sensors adopt empirical designs and subjective evaluation to determine the light configuration. Herein, a photometric stereo-based modeling, optimization, and evaluation system is proposed to explore the best illumination for typical camera-based tactile sensors. First, this article constructs a tactile benchmark dataset by simulating the contact deformation of the elastomer surface and rendering the tactile imaging under various illuminations, and builds a metrics system to evaluate performance. Then, the relationship between reconstruction accuracy and the distribution of illumination directions is characterized on the benchmark, and the best illumination is optimized. The optimized sensor is fabricated and evaluated through standard metrology experiments, exhibiting high reconstruction accuracy and convincingly demonstrating the effectiveness of the proposed design and optimization approach. Furthermore, intensive experiments on diverse objects indicate the generality and adaptability of the designed sensor. Beyond simplifying and improving the performance of camera-based tactile sensors, this work also provides simulation tools, a benchmark dataset, and evaluation metrics and methods for camera-based tactile sensors.
Journal Article
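For context on the photometric-stereo machinery that GelLight's modeling builds on, here is a minimal, self-contained sketch of classic Lambertian photometric stereo with calibrated lights (the light directions, normal, and albedo below are made-up illustration values, not from the article):

```python
import numpy as np

# Hypothetical calibrated setup: four distant light directions (approximately unit rows).
L = np.array([
    [0.0,  0.0, 1.0],
    [0.5,  0.0, 0.866],
    [0.0,  0.5, 0.866],
    [-0.5, 0.0, 0.866],
])

# Ground-truth unit normal and albedo for one Lambertian pixel.
n_true = np.array([0.2, -0.1, 0.97])
n_true /= np.linalg.norm(n_true)
albedo = 0.8

# Lambertian shading: one intensity per light (no shadows or specularities assumed).
I = albedo * (L @ n_true)

# Least-squares recovery: the pseudo-inverse yields the albedo-scaled normal.
g = np.linalg.pinv(L) @ I
albedo_est = np.linalg.norm(g)
n_est = g / albedo_est
```

With at least three non-coplanar lights the system is overdetermined, which is why illumination-direction distribution can be treated as a design variable to optimize, as the abstract describes.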
Spectral Representation via Data-Guided Sparsity for Hyperspectral Image Super-Resolution
2019
Hyperspectral imaging is capable of acquiring the rich spectral information of scenes and has great potential for understanding the characteristics of different materials in many applications, ranging from remote sensing to medical imaging. However, due to hardware limitations, existing hyper-/multi-spectral imaging devices usually cannot achieve high spatial resolution. This study aims to generate a high-resolution hyperspectral image from the available low-resolution hyperspectral and high-resolution RGB images. We propose a novel hyperspectral image super-resolution method via non-negative sparse representation of reflectance spectra with a data-guided sparsity constraint. The proposed method first learns the hyperspectral dictionary from the low-resolution hyperspectral image and then transforms it into an RGB dictionary with the camera response function, which is determined by the physical properties of the RGB camera. Given the RGB vector and the RGB dictionary, the sparse representation of each pixel in the high-resolution image is calculated under the guidance of a sparsity map, which measures pixel material purity. The sparsity map is generated by analyzing the local content similarity of a given pixel in the available high-resolution RGB image and quantifying the degree of spectral mixing, motivated by the fact that the spectrum of a pure material should have a sparse representation in the spectral dictionary. Since the proposed method adaptively adjusts the sparsity of the spectral representation based on the local content of the available high-resolution RGB image, it produces a more robust spectral representation for recovering the target high-resolution hyperspectral image. Comprehensive experiments on two public hyperspectral datasets and three real remote sensing images validate that the proposed method achieves promising performance compared to existing state-of-the-art methods.
Journal Article
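A toy sketch of the core sparse-representation step described above, with synthetic data (the dictionary sizes, random dictionaries, and mixing weights are illustrative assumptions, and the paper's data-guided sparsity map is not modeled here):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

bands, atoms = 31, 8                # hypothetical band count / dictionary size
D_hs = rng.random((bands, atoms))   # hyperspectral dictionary (stand-in for the learned one)
R = rng.random((3, bands))          # camera response function (RGB x bands)
D_rgb = R @ D_hs                    # dictionary projected into RGB space

# One high-resolution RGB pixel: a sparse non-negative mixture of two atoms.
alpha_true = np.zeros(atoms)
alpha_true[[1, 5]] = [0.7, 0.3]
rgb_pixel = D_rgb @ alpha_true

# Non-negative sparse code of the pixel w.r.t. the RGB dictionary.
alpha, _ = nnls(D_rgb, rgb_pixel)

# Recover the high-resolution spectrum via the hyperspectral dictionary.
spectrum = D_hs @ alpha
```

In the paper, the per-pixel sparsity level is additionally steered by the sparsity map computed from local content in the high-resolution RGB image; plain NNLS here only enforces non-negativity.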
Depth Sensing Using Geometrically Constrained Polarization Normals
by Taamazyan, Vage; Kadambi, Achuta; Shi, Boxin
in 3-D technology; Artificial Intelligence; Computer Imaging
2017
Analyzing the polarimetric properties of reflected light is a potential source of shape information. However, it is well-known that polarimetric information contains fundamental shape ambiguities, leading to an underconstrained problem of recovering 3D geometry. To address this problem, we use additional geometric information, from coarse depth maps, to constrain the shape information from polarization cues. Our main contribution is a framework that combines surface normals from polarization (hereafter polarization normals) with an aligned depth map. The additional geometric constraints are used to mitigate physics-based artifacts, such as azimuthal ambiguity, refractive distortion and fronto-parallel signal degradation. We believe our work may have practical implications for optical engineering, demonstrating a new option for state-of-the-art 3D reconstruction.
Journal Article
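For reference, the shape-from-polarization cue this abstract builds on can be sketched in its standard form (notation is generic rather than the paper's own): the intensity observed through a linear polarizer at angle $\phi_{\mathrm{pol}}$ follows the sinusoid

\[
I(\phi_{\mathrm{pol}}) = \frac{I_{\max}+I_{\min}}{2} + \frac{I_{\max}-I_{\min}}{2}\cos\!\big(2(\phi_{\mathrm{pol}}-\varphi)\big),
\]

where $\varphi$ encodes the azimuth of the surface normal and the degree of polarization $\rho = (I_{\max}-I_{\min})/(I_{\max}+I_{\min})$ constrains the zenith angle through the refractive index. Because the sinusoid has period $\pi$ in $\varphi$, the azimuth is recovered only up to a $\pi$ ambiguity, which is the azimuthal ambiguity that the coarse depth map helps resolve.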
NormAttention-PSN: A High-frequency Region Enhanced Photometric Stereo Network with Normalized Attention
2022
Photometric stereo aims to recover the surface normals of a 3D object from various shading cues, establishing the relationship between two-dimensional images and the object's geometry. Traditional methods usually adopt simplified reflectance models to approximate non-Lambertian surface properties, while recently, photometric stereo based on deep learning has been widely used to deal with non-Lambertian surfaces. However, previous studies are limited in dealing with high-frequency surface regions, i.e., regions with rapid shape variations such as crinkles and edges, resulting in blurry reconstructions. To alleviate this problem, we present a normalized attention-weighted photometric stereo network, namely NormAttention-PSN, to improve surface orientation prediction, especially for complicated structures. To address these challenges, we (1) present an attention-weighted loss to produce better surface reconstructions, which applies a higher weight to the detail-preserving gradient loss in high-frequency areas, (2) adopt a double-gate normalization method for non-Lambertian surfaces, to explicitly distinguish whether the high-frequency representation is stimulated by surface structure or spatially varying reflectance, and (3) adopt a parallel high-resolution structure to generate deep features that can maintain the high-resolution details of surface normals. Extensive experiments on public benchmark datasets show that the proposed NormAttention-PSN significantly outperforms traditional calibrated photometric stereo algorithms and state-of-the-art deep learning-based methods.
Journal Article
Deblurring Low-Light Images with Events
2023
Modern image-based deblurring methods usually degrade in low-light conditions, since the images are often dominated by poorly visible dark regions with a few saturated bright regions, limiting the effective features that can be extracted for deblurring. In contrast, event cameras can trigger events with a very high dynamic range and low latency; they hardly suffer from saturation and naturally encode dense temporal information about motion. However, in low-light conditions existing event-based deblurring methods become less robust, since the events triggered in dark regions are often severely contaminated by noise, leading to inaccurate reconstruction of the corresponding intensity values. Besides, since they directly adopt the event-based double integral model to perform pixel-wise reconstruction, they can only handle the low-resolution grayscale active pixel sensor images provided by the DAVIS camera, which cannot meet the requirements of daily photography. In this paper, to apply events to deblurring low-light images robustly, we propose a unified two-stage framework along with a motion-aware neural network tailored to it, reconstructing the sharp image under the guidance of high-fidelity motion clues extracted from events. We also build an RGB-DAVIS hybrid camera system to demonstrate that our method can deblur high-resolution RGB images, thanks to the natural advantages of the two-stage framework. Experimental results show our method achieves state-of-the-art performance on both synthetic and real-world images.
Journal Article
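The event-based double integral (EDI) model the abstract refers to relates a blurred frame to a latent sharp image; in its commonly stated form (symbols generic):

\[
L(t) = L(f)\,\exp\!\Big(c \int_{f}^{t} e(s)\,ds\Big), \qquad
B = \frac{1}{T}\int_{f-T/2}^{f+T/2} L(t)\,dt,
\]

where $B$ is the blurred image, $L(f)$ the latent sharp image at reference time $f$, $e(s)$ the per-pixel signed event stream, $c$ the event contrast threshold, and $T$ the exposure time. Substituting the first equation into the second and solving for $L(f)$ yields the double integral over events, which is why this pixel-wise reconstruction is tied to the DAVIS sensor's co-located events and frames.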
Light Flickering Guided Reflection Removal
2024
When photographing through a piece of glass, reflections usually degrade the quality of captured images or videos. In this paper, by exploiting periodically varying light flickering, we investigate the problem of removing strong reflections from contaminated image sequences or videos with a unified capturing setup. We propose a learning-based method that utilizes short-term and long-term observations of mixture videos to exploit one-side contextual clues in fluctuant components and brightness-consistent clues in consistent components for achieving layer separation and flickering removal, respectively. A dataset containing synthetic and real mixture videos with light flickering is built for network training and testing. The effectiveness of the proposed method is demonstrated by the comprehensive evaluation on synthetic and real data, the application for video flickering removal, and the exploratory experiment on high-speed scenes.
Journal Article