3 result(s) for "RGB-D visual grasping"
Design, Modeling, Self-Calibration and Grasping Method for Modular Cable-Driven Parallel Robots
Cable-driven parallel robots (CDPRs) are attractive for large-space manipulation because of their lightweight structure, large workspace, and reconfigurability. However, existing systems still face three practical challenges: limited modularity of the mechanical architecture, repeated calibration after reconfiguration, and insufficient integration between visual perception and grasp execution. To address these issues, this paper presents a modular cable-driven parallel robot (MCDPR), together with its kinematic modeling, vision-based self-calibration, and visual grasping methods. First, a modular mechanical architecture is developed in which the drive, sensing, and cable-guiding functions are integrated to support rapid assembly/disassembly, convenient debugging, and anti-slack cable operation. Second, a pulley-considered multilayer kinematic model is established, and a vision-based self-calibration method is proposed that identifies the structural parameters after assembly using onboard sensing and AprilTag observations, thereby reducing the number of recalibrations required after reconfiguration. Third, a vision-guided bin-picking method is developed by combining RGB-D perception, coordinate transformation, and the calibrated robot model. A combined software/hardware validation framework is established, in which a CoppeliaSim-based simulation and a hardware prototype are used together to verify the proposed design and methods. In simulation, self-calibration reduces the Euclidean grasping position error from 0.371 mm to 0.048 mm and the orientation error from 0.071° to 0.004°. In experiments, the relative position error is reduced by 58.33% after self-calibration.
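As a rough illustration of the coordinate-transformation step mentioned in this abstract, the Python sketch below deprojects an RGB-D pixel into the camera frame with a pinhole model and maps it into the robot base frame through a calibrated camera-to-base transform. The function and matrix names (pixel_to_base, K, T_base_cam) are illustrative assumptions, not taken from the paper.

    import numpy as np

    def pixel_to_base(u, v, depth_m, K, T_base_cam):
        """Deproject an RGB-D pixel (u, v) with depth into the robot base frame.

        K          : 3x3 pinhole camera intrinsic matrix
        T_base_cam : 4x4 homogeneous camera-to-base transform, e.g. from the
                     (self-)calibrated robot model
        """
        fx, fy = K[0, 0], K[1, 1]
        cx, cy = K[0, 2], K[1, 2]
        # Back-project the pixel to a 3D point in the camera frame.
        x = (u - cx) * depth_m / fx
        y = (v - cy) * depth_m / fy
        p_cam = np.array([x, y, depth_m, 1.0])
        # Express the point in the robot base frame for grasp planning.
        return (T_base_cam @ p_cam)[:3]

The resulting base-frame point would then be handed to the inverse kinematics of the calibrated robot model to command the grasp.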
Collaborative Viewpoint Adjusting and Grasping via Deep Reinforcement Learning in Clutter Scenes
For robotic grasping of randomly stacked objects in cluttered environments, active multi-viewpoint methods can improve grasping performance by enhancing environmental perception. However, in many scenes it is redundant to use multiple viewpoints for every grasp detection, which reduces the robot's grasping efficiency. To improve grasping performance, we present a Viewpoint Adjusting and Grasping Synergy (VAGS) strategy based on deep reinforcement learning that directly coordinates viewpoint adjustment and grasping. To improve the training efficiency of VAGS, we propose a Dynamic Action Exploration Space (DAES) method based on ε-greedy exploration that reduces training time. To address the sparse reward problem in reinforcement learning, a reward function is designed to evaluate the impact of adjusting the camera pose on grasping performance. Experimental findings in simulation and the real world show that VAGS improves the grasping success rate and scene clearing rate. Compared with direct grasping alone, the proposed strategy increases the grasping success rate and the scene clearing rate by 10.49% and 11%, respectively.
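The DAES idea of restricting ε-greedy exploration to a reduced candidate set can be sketched as below; the names and the flat action representation are assumptions for illustration, since the paper's actual action space couples viewpoint adjustments with grasp actions.

    import random

    def select_action(q_values, candidate_actions, epsilon):
        """Epsilon-greedy selection restricted to a dynamic candidate set.

        q_values          : mapping from action id to estimated value
        candidate_actions : currently allowed subset of actions (shrunk or grown
                            over training to speed up exploration)
        epsilon           : exploration probability, typically annealed
        """
        if random.random() < epsilon:
            # Explore, but only within the reduced action space.
            return random.choice(list(candidate_actions))
        # Exploit: pick the best-valued action among the candidates.
        return max(candidate_actions, key=lambda a: q_values[a])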
Vision-Based Reinforcement Learning for Robotic Grasping of Moving Objects on a Conveyor
This study introduces an autonomous framework for grasping moving objects on a conveyor belt, enabling unsupervised detection, grasping, and categorization. The work focuses on two common object shapes, cylindrical cans and rectangular cartons, transported at constant speeds of 3–7 cm/s on the conveyor, emulating typical handling scenarios. The proposed framework combines a vision-based neural network for object detection, a target localization algorithm, and a deep reinforcement learning model for robotic control. Specifically, a YOLO-based neural network detects the 2D position of target objects; these positions are then converted to 3D coordinates, followed by pose estimation and error correction. A Proximal Policy Optimization (PPO) algorithm then provides continuous control decisions for the robotic arm. A tailored reinforcement learning environment was developed using the Gymnasium interface, and training and validation were conducted on a 7-degree-of-freedom (7-DOF) robotic arm model in the PyBullet physics simulation engine. By leveraging transfer learning and curriculum learning strategies, the robotic agent effectively learned to grasp multiple categories of moving objects. Simulation experiments and randomized trials show that the proposed method enables the 7-DOF arm to consistently grasp objects from the conveyor, achieving an approximately 80% success rate at conveyor speeds of 0.03–0.07 m/s. These results demonstrate the framework's potential for deployment in automated handling applications.
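The pipeline described above (Gymnasium environment, continuous arm control, PPO training) can be outlined with the minimal environment skeleton below; the observation/action layout, reward shaping, and the use of stable-baselines3 as the PPO trainer are assumptions for illustration and do not reproduce the paper's implementation.

    import numpy as np
    import gymnasium as gym
    from gymnasium import spaces

    class ConveyorGraspEnv(gym.Env):
        """Toy conveyor-grasping environment: the observation is the gripper and
        target positions plus the belt speed; the action is a velocity command."""

        def __init__(self):
            self.observation_space = spaces.Box(-2.0, 2.0, shape=(7,), dtype=np.float32)
            self.action_space = spaces.Box(-1.0, 1.0, shape=(3,), dtype=np.float32)

        def reset(self, *, seed=None, options=None):
            super().reset(seed=seed)
            self._gripper = np.zeros(3, dtype=np.float32)
            self._target = self.np_random.uniform(-0.5, 0.5, size=3).astype(np.float32)
            self._speed = float(self.np_random.uniform(0.03, 0.07))  # 3-7 cm/s belt speed
            return self._obs(), {}

        def _obs(self):
            return np.concatenate([self._gripper, self._target, [self._speed]]).astype(np.float32)

        def step(self, action):
            dt = 0.05                                        # 50 ms control step
            self._gripper += dt * np.asarray(action, dtype=np.float32)
            self._target[0] += dt * self._speed              # object advances along the belt
            dist = float(np.linalg.norm(self._gripper - self._target))
            terminated = dist < 0.02                         # close enough to attempt the grasp
            return self._obs(), -dist, terminated, False, {}

    # Training sketch with stable-baselines3 (any PPO implementation would do):
    # from stable_baselines3 import PPO
    # PPO("MlpPolicy", ConveyorGraspEnv(), verbose=0).learn(total_timesteps=100_000)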