Search Results
5,165 results for "grasping"
Vision-based robotic grasping from object localization, object pose estimation to grasp estimation for parallel grippers: a review
This paper presents a comprehensive survey of vision-based robotic grasping. We identify three key tasks in vision-based robotic grasping: object localization, object pose estimation, and grasp estimation. The object localization task comprises object localization without classification, object detection, and object instance segmentation, and provides the regions of the target object in the input data. The object pose estimation task mainly refers to estimating the 6D object pose and includes correspondence-based, template-based, and voting-based methods, which support the generation of grasp poses for known objects. The grasp estimation task includes 2D planar grasp methods, which are constrained to grasping from one direction, and 6DoF grasp methods. Different combinations of these three tasks can accomplish robotic grasping: many object pose estimation methods do not require separate object localization and perform localization and pose estimation jointly, and many grasp estimation methods require neither localization nor pose estimation, estimating grasps in an end-to-end manner. Both traditional methods and recent deep learning-based methods operating on RGB-D inputs are reviewed in detail. Related datasets and comparisons between state-of-the-art methods are summarized as well. In addition, challenges in vision-based robotic grasping and future directions for addressing them are pointed out.
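To make the three-task decomposition concrete, here is a minimal, hedged sketch of how the stages compose; every function and class below is a hypothetical placeholder for the families of methods the survey reviews, not code from the paper.

```python
# Illustrative composition of the survey's three tasks. All names are
# hypothetical placeholders; real systems may fuse or skip stages.
from dataclasses import dataclass
import numpy as np

@dataclass
class Grasp:
    position: np.ndarray      # 3D grasp point
    orientation: np.ndarray   # rotation (e.g., quaternion) for 6DoF grasps
    width: float              # gripper opening

def localize(rgbd: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Stage 1: object localization; here a dummy single full-image box."""
    h, w = rgbd.shape[:2]
    return [(0, 0, w, h)]

def estimate_pose(rgbd, box) -> np.ndarray:
    """Stage 2: 6D pose estimation for known objects (identity pose as a stub)."""
    return np.eye(4)

def estimate_grasp(rgbd, box, pose) -> Grasp:
    """Stage 3: grasp estimation; 2D planar methods fix the approach direction."""
    x0, y0, x1, y1 = box
    center = np.array([(x0 + x1) / 2, (y0 + y1) / 2, 0.0])
    return Grasp(center, np.array([0.0, 0.0, 0.0, 1.0]), width=0.05)

rgbd = np.zeros((480, 640, 4))          # dummy RGB-D frame
for box in localize(rgbd):              # some methods skip this stage ...
    pose = estimate_pose(rgbd, box)     # ... or this one, going end-to-end
    print(estimate_grasp(rgbd, box, pose))
```

End-to-end grasp estimators collapse the loop body into a single learned function over the raw RGB-D input.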
Integrated linkage-driven dexterous anthropomorphic robotic hand
Robotic hands can perform many functions similar to those of human hands, offering high flexibility in the tasks they perform. However, developing integrated hands without additional actuation parts while maintaining important functions such as human-level dexterity and grasping force is challenging; the actuation parts make it difficult to integrate these hands into existing robotic arms, limiting their applicability. Based on a linkage-driven mechanism, an integrated linkage-driven dexterous anthropomorphic robotic hand, the ILDA hand, is developed; it integrates all the components required for actuation and sensing and possesses high dexterity. It has the following features: 15 degrees of freedom (20 joints), a fingertip force of 34 N, a compact size (maximum length: 218 mm) without additional parts, a low weight of 1.1 kg, and tactile sensing capabilities. Actual manipulation tasks involving tools used in everyday life are performed with the hand mounted on a commercial robot arm. Though robotic hands capable of adaptive grasping have been developed, realizing integrated hands with higher-degree-of-freedom (DOF) movement and technology compatibility remains a challenge. Here, the authors report an integrated linkage-driven robotic hand with improved design and performance.
Real-time grasping strategies using event camera
Robotic vision plays a key role in perceiving the environment in grasping applications. However, conventional frame-based robotic vision, which suffers from motion blur and a low sampling rate, may not meet the automation needs of evolving industrial requirements. This paper, for the first time, proposes an event-based robotic grasping framework for multiple known and unknown objects in a cluttered scene. Exploiting the microsecond-level sampling rate and absence of motion blur of an event camera, model-based and model-free approaches are developed for grasping known and unknown objects, respectively. In the model-based approach, an event-based multi-view method is used to localize the objects in the scene, and point cloud processing is then used to cluster and register the objects. The model-free approach, on the other hand, uses the developed event-based object segmentation, visual servoing and grasp planning to localize, align to, and grasp the target object. Using a UR10 robot with an eye-in-hand neuromorphic camera and a Barrett Hand gripper, the proposed approaches are experimentally validated with objects of different sizes. The framework also demonstrates robustness and a significant advantage over grasping with a traditional frame-based camera in low-light conditions.
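As a rough illustration of the model-free branch, the sketch below closes a proportional image-based visual-servoing loop around a segmented target centroid; the event processing, gains, and sensor resolution are all assumptions made for illustration, not the paper's implementation.

```python
# Toy visual-servoing loop: servo until the segmented target is centred,
# then the grasp would be triggered. `segment_events` is a placeholder.
import numpy as np

def segment_events(events) -> np.ndarray:
    """Placeholder for event-based segmentation: returns target centroid (u, v)."""
    return np.mean(events, axis=0)         # dummy: mean event coordinate

def visual_servo_step(centroid, image_center, gain=0.05):
    """Proportional image-based servoing: pixel error -> camera-plane velocity."""
    return -gain * (centroid - image_center)

image_center = np.array([320.0, 240.0])    # assumed 640x480 sensor
events = np.array([[400.0, 300.0], [410.0, 310.0]])  # dummy event coordinates
for _ in range(100):
    c = segment_events(events)
    if np.linalg.norm(c - image_center) < 2.0:   # aligned: close the gripper
        break
    v = visual_servo_step(c, image_center)
    events = events + v                    # dummy plant: events shift with camera
print("aligned near", c)
```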
Efficient leather spreading operations by dual-arm robotic systems
To achieve precise robotic grasping and spreading of irregular sheet-like soft objects such as leather, this study addresses several challenges, including the irregularity of leather edges and the ambiguity of feature recognition points. To tackle these issues, this paper proposes a method that alternately grasps the lowest point twice and then uses planar techniques to spread the leather effectively. We improved the YOLOv8 algorithm by incorporating the BiFPN network structure and the WIoU loss function, and trained it on a dedicated dataset of lowest grasping points and planar grasping points, achieving high-precision recognition. Additionally, we determined the optimal posture for grasping the lowest point and constructed an experimental platform, successfully conducting multiple rounds of leather grasping and spreading experiments with a success rate of 72%. Through an in-depth analysis of the failed experiments, this study reveals the limitations of the current method and provides valuable guidance for future research.
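For orientation, this is what a stock YOLOv8 training and inference call looks like with the Ultralytics package; the paper's BiFPN neck and WIoU loss are custom modifications to the model definition and are not part of this call, and the dataset config and image names below are invented for the example.

```python
# Baseline YOLOv8 fine-tuning on a hypothetical grasp-point dataset.
# "leather_points.yaml" and "leather_sample.jpg" are assumed names.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                 # pretrained nano checkpoint
model.train(data="leather_points.yaml",    # classes: lowest point, planar point
            epochs=100, imgsz=640)
results = model("leather_sample.jpg")      # detect grasp points on a new image
print(results[0].boxes.xyxy)               # predicted grasp-point boxes
```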
Active entanglement enables stochastic, topological grasping
Grasping, in both biological and engineered mechanisms, can be highly sensitive to the gripper and object morphology, as well as to perception and motion planning. Here, we circumvent the need for feedback or precise planning by using an array of fluidically actuated slender hollow elastomeric filaments to actively entangle with objects that vary in geometric and topological complexity. The resulting stochastic interactions enable a unique soft and conformable grasping strategy across a range of target objects that vary in size, weight, and shape. We experimentally evaluate the grasping performance of our strategy and use a computational framework for the collective mechanics of flexible filaments in contact with complex objects to explain our findings. Overall, our study highlights how active collective entanglement of a filament array via an uncontrolled, spatially distributed scheme provides options for soft, adaptable grasping.
Optimizing Contact Force on an Apple Picking Robot End-Effector
The quality of apple picking affects apple sales, and the grasping force of the end-effector of an apple-picking robot is critical to picking quality: excessive contact force easily damages the apple, while insufficient contact force fails to grasp it. However, current research lacks an analysis of the minimum stable grasping force for apples. Therefore, to realize stable grasping of apples by the end-effector of a picking robot while reducing fruit damage, this study first analyzes the grasping stability of the end-effector based on force-closure theory, comprehensively considering force-closure constraints and nonlinear friction-cone constraints and introducing torque constraints. Next, the constraints are handled with a barrier function, and a penalty factor is introduced to construct an optimization model of the end-effector's contact force distribution, which is then solved with an improved Newton method. With the penalty factor selected, the optimal contact force for grasping an apple is determined through numerical simulation, and the validity of the solution is verified. To confirm the reliability of the contact force distribution optimization model, the method is further validated in an actual grasping experiment, which shows that it can provide the end-effector with the minimum stable grasping force needed for stable grasping.
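The abstract's ingredients suggest a formulation along the following lines (a hedged sketch; the paper's exact objective, constraint set, and constants are not reproduced here): minimize the total contact force subject to equilibrium and friction-cone constraints, fold the inequalities into a log-barrier weighted by a penalty factor, and take Newton steps.

```latex
% Illustrative only: f_i is the contact force at contact i, f_i^n and f_i^t
% its normal and tangential components, mu the friction coefficient, G_i the
% grasp map of contact i, and w_ext the external wrench on the apple.
\min_{f_1,\dots,f_m} \sum_{i=1}^{m} \lVert f_i \rVert^2
\quad \text{s.t.} \quad
\sum_{i=1}^{m} G_i f_i + w_{\mathrm{ext}} = 0,
\qquad \lVert f_i^{t} \rVert \le \mu f_i^{n}, \quad f_i^{n} \ge 0.

% Inequalities g_j(f) <= 0 folded into a log-barrier with penalty factor rho,
% minimized by Newton iterations:
\Phi_{\rho}(f) = \sum_{i=1}^{m} \lVert f_i \rVert^{2}
  - \frac{1}{\rho} \sum_{j} \log\bigl(-g_j(f)\bigr),
\qquad
f^{k+1} = f^{k} - \bigl(\nabla^{2}\Phi_{\rho}(f^{k})\bigr)^{-1} \nabla\Phi_{\rho}(f^{k}).
```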
Learning the signatures of the human grasp using a scalable tactile glove
Humans can feel, weigh and grasp diverse objects, and simultaneously infer their material properties while applying the right amount of force: a challenging set of tasks for a modern robot [1]. Mechanoreceptor networks that provide sensory feedback and enable the dexterity of the human grasp [2] remain difficult to replicate in robots. Whereas computer-vision-based robot grasping strategies [3-5] have progressed substantially with the abundance of visual data and emerging machine-learning tools, there are as yet no equivalent sensing platforms and large-scale datasets with which to probe the use of the tactile information that humans rely on when grasping objects. Studying the mechanics of how humans grasp objects will complement vision-based robotic object handling. Importantly, the inability to record and analyse tactile signals currently limits our understanding of the role of tactile information in the human grasp itself; for example, how tactile maps are used to identify objects and infer their properties is unknown [6]. Here we use a scalable tactile glove and deep convolutional neural networks to show that sensors uniformly distributed over the hand can be used to identify individual objects, estimate their weight and explore the typical tactile patterns that emerge while grasping objects. The sensor array (548 sensors) is assembled on a knitted glove, and consists of a piezoresistive film connected by a network of conductive thread electrodes that are passively probed. Using a low-cost (about US$10) scalable tactile glove sensor array, we record a large-scale tactile dataset with 135,000 frames, each covering the full hand, while interacting with 26 different objects. This set of interactions with different objects reveals the key correspondences between different regions of a human hand while it is manipulating objects. Insights from the tactile signatures of the human grasp, through the lens of an artificial analogue of the natural mechanoreceptor network, can thus aid the future design of prosthetics [7], robot grasping tools and human–robot interactions [1,8-10]. Tactile patterns obtained from a scalable sensor-embedded glove and deep convolutional neural networks help to explain how the human hand can identify and grasp individual objects and estimate their weights.
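A minimal sketch of the kind of convolutional classifier the abstract describes, assuming each tactile frame is rasterized onto a 32x32 single-channel pressure map and labeled with one of the 26 objects; the architecture below is illustrative, not the paper's released model.

```python
# Tiny CNN over tactile pressure maps, in PyTorch (assumed framework).
import torch
import torch.nn as nn

class TactileNet(nn.Module):
    def __init__(self, n_classes: int = 26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.head = nn.Linear(64 * 8 * 8, n_classes)

    def forward(self, x):                   # x: (batch, 1, 32, 32) pressure maps
        return self.head(self.features(x).flatten(1))

frames = torch.rand(4, 1, 32, 32)           # dummy batch of tactile frames
logits = TactileNet()(frames)
print(logits.argmax(dim=1))                 # predicted object class per frame
```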
DM-VLP-Grasp: Diffusion Model-Based Grasp Planning with Visual-Language Pretraining for Unknown Object Manipulation
This paper proposes an unknown-object grasping algorithm (DM-VLP-Grasp) based on a diffusion model and visual-language pre-training, aiming to improve the grasping performance of robots in complex environments. By improving the visual-language pre-training model, image and text information are integrated to accurately extract object grasping features; the diffusion model is then used to generate a reliable grasping strategy, and efficient grasping is achieved through iterative optimization. On a self-built dataset of 8,000 samples, the grasping success rate of DM-VLP-Grasp reaches 93.6%, with a single strategy generation time of 0.78 seconds, showing high stability and computational efficiency. Grasping stability is measured by the root mean square (RMS) of the object's shaking amplitude and by the fluctuation range of the grasping force, both of which show excellent performance. The experimental results verify the effectiveness and novelty of the algorithm on the unknown-object grasping task and provide a new solution for automated robotic grasping.
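Since the abstract gives no architectural details, the sketch below only illustrates the general shape of diffusion-based grasp generation: a grasp pose is sampled from noise and iteratively denoised conditioned on a fused vision-language embedding. The schedule, denoiser stub, and 7D pose parameterization are generic DDPM-style assumptions, not the paper's model.

```python
# Schematic reverse-diffusion loop for grasp generation (generic DDPM form).
import torch

T = 50
betas = torch.linspace(1e-4, 0.02, T)       # assumed noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def denoiser(g_t, t, cond):
    """Placeholder noise predictor eps_theta(g_t, t | cond)."""
    return torch.zeros_like(g_t)            # dummy: predicts zero noise

cond = torch.rand(512)                      # assumed fused vision-language embedding
g = torch.randn(7)                          # grasp pose: 3D position + quaternion
for t in reversed(range(T)):                # iterative refinement of the grasp
    eps = denoiser(g, t, cond)
    g = (g - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
    if t > 0:
        g = g + torch.sqrt(betas[t]) * torch.randn_like(g)
print(g)                                    # final grasp pose sample
```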
Multimodal tactile sensing fused with vision for dexterous robotic housekeeping
As robots increasingly participate in our daily lives, the quest to mimic human abilities has driven advances in robotic multimodal sensing. However, current perceptual technologies still fail to satisfy robotic needs for home tasks and environments, facing great challenges in multisensory integration and fusion, rapid response, and highly sensitive perception. Here, we report a flexible tactile sensor utilizing thin-film thermistors to implement multimodal perception of pressure, temperature, material thermal properties, texture, and slippage. Notably, the tactile sensor is endowed with ultrasensitive (0.05 mm/s) and ultrafast (4 ms) slip sensing, which is indispensable for dexterous and reliable grasping control that avoids crushing fragile objects or dropping slippery ones. We further propose and develop a robotic tactile-visual fusion architecture that seamlessly spans multimodal sensing at the bottom level to robotic decision-making at the top level. A series of intelligent grasping strategies with rapid slip feedback control and a tactile-visual fusion recognition strategy ensure dexterous robotic grasping and accurate recognition of daily objects across various challenging tasks, for instance grabbing a paper cup containing liquid. Furthermore, we showcase a robotic desktop-cleaning task in which the robot autonomously accomplishes multi-item sorting and desktop cleaning, demonstrating promising potential for smart housekeeping. The authors report a multimodal tactile sensor with perception of pressure, temperature, material thermal properties, texture, and slippage, and a tactile-visual fusion architecture for robotic decision-making that enables robots to perform dexterous housekeeping.
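The slip-feedback idea can be pictured with a toy controller: tighten the grip in proportion to the sensed slip speed, under a safety cap. The threshold reuses the reported 0.05 mm/s sensitivity, but the gain, cap, and control period are invented for illustration.

```python
# Toy slip-feedback grip controller in the spirit of the described strategy.
def grip_controller(force: float, slip_speed_mm_s: float,
                    slip_threshold: float = 0.05, gain: float = 0.5,
                    force_cap: float = 10.0) -> float:
    """Return the updated grip force (N) after one control tick (~4 ms)."""
    if slip_speed_mm_s > slip_threshold:    # slip detected: tighten the grip
        force = min(force + gain * slip_speed_mm_s, force_cap)
    return force

force = 1.0
for slip in [0.0, 0.2, 1.5, 0.3, 0.0]:      # dummy slip readings (mm/s)
    force = grip_controller(force, slip)
    print(f"slip={slip:.2f} mm/s -> grip force {force:.2f} N")
```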
A two-stage grasp detection method for sequential robotic grasping in stacking scenarios
Dexterous grasping is essential for the fine manipulation tasks of intelligent robots; however, its application in stacking scenarios remains a challenge. In this study, we propose a two-stage grasp detection approach for sequential robotic grasping in stacking scenarios. In the first stage, a rotated-YOLOv3 (R-YOLOv3) model is designed to efficiently detect the category and position of the top-layer object, facilitating the detection of stacked objects. A stacked-scenario dataset with only the top-level objects annotated was built for training and testing the R-YOLOv3 network. In the second stage, a G-ResNet50 model is developed to improve grasping accuracy by finding the most suitable pose for grasping the uppermost object in various stacking scenarios. Finally, a robot is directed to sequentially grasp the stacked objects. The proposed method achieved an average grasp prediction success rate of 96.60% on the Cornell grasping dataset. In 280 real-world grasping experiments conducted in stacked scenarios, the robot achieved a maximum grasping success rate of 95.00% and an average grasping success rate of 83.93%. These findings demonstrate the efficacy and competitiveness of the proposed approach in complex multi-object stacked environments.
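The sequential execution loop implied by the two stages might look like the following sketch, with both stage functions as placeholders standing in for R-YOLOv3 and G-ResNet50; nothing here is from the paper's code.

```python
# Sketch of sequential grasping in a stack: detect the top-layer object
# (stage 1), estimate its grasp pose (stage 2), execute, repeat until empty.
def detect_top_object(scene):
    """Stage 1 placeholder (R-YOLOv3): category of the top-layer object."""
    return scene[-1] if scene else None

def estimate_grasp_pose(obj):
    """Stage 2 placeholder (G-ResNet50): best grasp pose for that object."""
    return {"object": obj, "angle_deg": 0.0}

scene = ["block", "mug", "screwdriver"]     # dummy stack, top of stack last
while (obj := detect_top_object(scene)) is not None:
    pose = estimate_grasp_pose(obj)
    print("grasping", pose)
    scene.pop()                             # object removed; next layer exposed
```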