2,918 results for "robotic hand"
Layer jamming-based soft robotic hand with variable stiffness for compliant and effective grasping
A novel variable-stiffness soft robotic hand (SRH) consisting of three layer jamming structures (LJSs) is proposed. The mechanism is driven by a motor-based tendon running along the surface of the segments, each of which connects to an individual gas channel. Each LJS is optimised by adhering a thin layer of hot-melt adhesive and overlapping spring steel sheets as the inner layer material, and each can be switched between rigid and compliant states independently. The combination of variable stiffness and tendon-driven actuation enables various deformation poses. The control system of the SRH and the performance analysis of the LJS are then introduced. Finally, experiments are carried out to demonstrate the advantages of the proposed LJS, and demonstrations show that the designed robotic hand can take on multiple configurations to successfully grasp various objects.
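The abstract gives only a high-level description of the mechanism, so the following is a minimal Python sketch, under assumed names and interfaces, of how per-segment stiffness switching and a shared tendon command for a three-LJS hand might be coordinated; it is an illustration, not the authors' control code.

```python
from dataclasses import dataclass
from enum import Enum


class StiffnessMode(Enum):
    COMPLIANT = "compliant"   # gas channel vented, layers slide freely
    RIGID = "rigid"           # gas channel evacuated, layers jam together


@dataclass
class LayerJammingSegment:
    """One layer jamming structure (LJS) with its own gas channel (assumed interface)."""
    channel_id: int
    mode: StiffnessMode = StiffnessMode.COMPLIANT

    def set_mode(self, mode: StiffnessMode) -> None:
        # In hardware this would switch a vacuum valve on the segment's channel.
        self.mode = mode


@dataclass
class SoftRoboticHand:
    """Three independently switchable LJS segments driven by one tendon motor."""
    segments: list
    tendon_tension: float = 0.0  # normalized 0..1 motor command

    def grasp(self, rigid_ids, tension: float) -> None:
        # Jam only the selected segments, leave the rest compliant,
        # then pull the tendon to close the hand around the object.
        for seg in self.segments:
            seg.set_mode(StiffnessMode.RIGID if seg.channel_id in rigid_ids
                         else StiffnessMode.COMPLIANT)
        self.tendon_tension = max(0.0, min(1.0, tension))


hand = SoftRoboticHand(segments=[LayerJammingSegment(i) for i in range(3)])
hand.grasp(rigid_ids={0, 2}, tension=0.6)  # one possible deformation pose
```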
Page Turning Using Assistive Robot with Low-Degree-of-Freedom Hand
This paper proposes a page-turning strategy using an assistive robot that has a low-degree-of-freedom robotic hand. The robotic hand is based on human object-handling characteristics, which significantly reduces the number of fingers and joints required to handle various objects. The hand has right and left planar fingers that can transform their shape to handle various objects. To turn a page, the robot uses the planar fingers to push the surface of the page and then rotates the fingers. The design concept, mechanism, sensor system, page-turning strategy, and control system of the robotic hand are presented. The experimental results show that the robot can turn pages using the proposed method; however, it sometimes failed when the hand was positioned too low and too close to the book, because the rotation of the fingers was blocked by the book. When the hand detects excessive force during page turning, the control system changes the shape of the fingers and releases the force applied to the book. The experimental results confirm the effectiveness of this control system.
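The force-release behaviour described in the abstract can be illustrated with a small Python sketch; the force threshold, the stub classes, and the method names are assumptions for illustration, not the paper's implementation.

```python
class FingerStub:
    """Stand-in for the planar finger actuator (hypothetical interface)."""
    def rotate(self, step_deg): print(f"rotate {step_deg} deg")
    def flatten(self): print("flatten finger")
    def retract(self): print("retract finger")


class ForceSensorStub:
    """Stand-in for the fingertip force sensor."""
    def __init__(self, value_n): self.value_n = value_n
    def read(self): return self.value_n


FORCE_LIMIT_N = 2.0  # assumed threshold; the abstract does not state one


def page_turn_step(force_sensor, finger):
    """One control step of the strategy in the abstract: keep rotating the
    planar fingers against the page, but change their shape and release the
    force if the measured contact force becomes excessive."""
    if force_sensor.read() > FORCE_LIMIT_N:
        finger.flatten()
        finger.retract()
        return "released"
    finger.rotate(step_deg=5.0)
    return "turning"


print(page_turn_step(ForceSensorStub(0.8), FingerStub()))  # -> "turning"
print(page_turn_step(ForceSensorStub(3.5), FingerStub()))  # -> "released"
```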
Learning synergies based in-hand manipulation with reward shaping
In-hand manipulation is a fundamental ability for multi-fingered robotic hands that interact with their environments. Owing to the high dimensionality of robotic hands and intermittent contact dynamics, effectively programming a robotic hand for in-hand manipulation is still a challenging problem. To address this challenge, this work employs a deep reinforcement learning (DRL) algorithm to learn in-hand manipulation for multi-fingered robotic hands. A reward-shaping method is proposed to assist the learning of in-hand manipulation. The synergies of robotic hand postures are analysed to build a low-dimensional hand posture space. Two additional rewards are designed based on the analysis of hand synergies and the agent's learning history. These two additional rewards, together with an extrinsic reward, are used to assist in-hand manipulation learning. Three value functions are trained jointly with respect to their reward functions and then cooperate to optimise a control policy for in-hand manipulation. The reward shaping not only improves the exploration efficiency of the DRL algorithm but also provides a way to incorporate domain knowledge. The performance of the proposed learning method is evaluated with object rotation tasks. Experimental results demonstrate that the proposed method enables multi-fingered robotic hands to learn in-hand manipulation effectively.
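As a rough sketch of the reward-shaping idea (an extrinsic task reward combined with a synergy-space reward and a learning-history reward), the Python example below uses an invented random synergy basis, reward terms, and weights; these are assumptions and do not reproduce the paper's definitions.

```python
import numpy as np

# Hypothetical synergy basis: in practice this would come from a PCA of recorded
# hand postures, giving a low-dimensional posture space (k << num_joints).
SYNERGY_BASIS = np.linalg.qr(np.random.randn(24, 3))[0]  # 24 joints -> 3 synergies


def synergy_reward(joint_angles: np.ndarray) -> float:
    """Reward postures that are well explained by the synergy subspace."""
    coords = SYNERGY_BASIS.T @ joint_angles
    reconstruction = SYNERGY_BASIS @ coords
    return -float(np.linalg.norm(joint_angles - reconstruction))


def history_reward(current_return: float, best_return_so_far: float) -> float:
    """Reward improvement over the agent's own learning history."""
    return max(0.0, current_return - best_return_so_far)


def shaped_reward(extrinsic: float, joint_angles: np.ndarray,
                  current_return: float, best_return: float,
                  w_syn: float = 0.1, w_hist: float = 0.1) -> float:
    """Combine the extrinsic task reward with the two auxiliary rewards."""
    return (extrinsic
            + w_syn * synergy_reward(joint_angles)
            + w_hist * history_reward(current_return, best_return))


r = shaped_reward(extrinsic=1.0, joint_angles=np.random.randn(24),
                  current_return=12.0, best_return=10.0)
```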
Fuzzy logic expert system for selecting robotic hands using kinematic parameters
Industry 4.0 is the current industrial revolution, and robotics is an important factor for carrying out high-dexterity manipulations. However, mechatronic systems are still far from human capabilities, and sophisticated robotic hands are expensive. This paper describes a Fuzzy Logic Expert System (FLES) that maps kinematic parameters derived from robotic hand features to a level of dexterity. The final goal is to select an adequate robotic hand that can perform a range of specific tasks according to the level of dexterity required. The FLES uses important kinematic parameters of the human or robotic hand: the number of fingers, the number of degrees of freedom (DoF), and the number of contacts involved in grasping. As a result, several robotic hands are evaluated with the FLES to determine the type of dexterity task that corresponds to each hand.
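A toy version of the underlying idea, mapping (number of fingers, DoF, contacts) to a dexterity level with fuzzy rules, is sketched below; the membership functions, rules, and output values are invented for illustration and do not reproduce the paper's FLES.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)


def dexterity_level(fingers: int, dof: int, contacts: int) -> float:
    """Crude Mamdani-style inference: fuzzify the three kinematic parameters,
    fire two rules, and defuzzify to a dexterity score in [0, 1]."""
    many_fingers = tri(fingers, 2, 5, 8)
    high_dof = tri(dof, 6, 20, 30)
    many_contacts = tri(contacts, 2, 6, 12)

    # Rule 1: many fingers AND high DoF AND many contacts -> high dexterity
    high = min(many_fingers, high_dof, many_contacts)
    # Rule 2: few of everything -> low dexterity
    low = min(1 - many_fingers, 1 - high_dof, 1 - many_contacts)

    # Weighted-average defuzzification with assumed outputs 0.9 (high) and 0.2 (low).
    if high + low == 0:
        return 0.5
    return (0.9 * high + 0.2 * low) / (high + low)


print(dexterity_level(fingers=5, dof=20, contacts=6))  # anthropomorphic hand
print(dexterity_level(fingers=2, dof=1, contacts=2))   # simple gripper
```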
Development of a Two-Finger Haptic Robotic Hand with Novel Stiffness Detection and Impedance Control
Haptic hands and grippers, designed to enable skillful object manipulation, are pivotal for high-precision interaction with environments. These technologies are particularly vital in fields such as minimally invasive surgery, where they enhance surgical accuracy and tactile feedback; in advanced prosthetic limbs, where they offer users improved functionality and a more natural sense of touch; and in industrial automation and manufacturing, where they contribute to more efficient, safe, and flexible production processes. This paper presents the development of a two-finger robotic hand that employs simple yet precise strategies to manipulate objects without damaging or dropping them. Our approach fuses force-sensitive resistor (FSR) sensors with the average current of the servomotors to enhance both the speed and accuracy of grasping. The aim is a grasping mechanism that is more dexterous than a gripper and less complex than a full robotic hand. To achieve this goal, we designed a two-finger robotic hand with two degrees of freedom per finger; an FSR was integrated into each fingertip to enable object categorization and the detection of initial contact. Subsequently, the servomotor currents were monitored continuously to implement impedance control and maintain the grasp of objects across a wide range of stiffnesses. The proposed hand categorizes an object's stiffness upon initial contact and exerts accurate force by fusing the FSR readings with the motor currents. An experimental test was conducted using a Yale–CMU–Berkeley (YCB) object set consisting of a foam ball, an empty soda can, an apple, a glass cup, a plastic cup, and a small milk packet. The robotic hand successfully picked up these objects from a table and set them down without inflicting any damage or dropping them midway. These results represent a significant step toward haptic robotic hands with advanced object perception and manipulation capabilities.
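A minimal Python sketch of the described fusion, detecting stiffness from the first-contact FSR and current readings and then closing a simple force loop on the fused estimate, is shown below; the thresholds, gains, and the 50/50 fusion rule are assumptions, not the authors' controller.

```python
def categorize_stiffness(fsr_force: float, motor_current: float) -> str:
    """Rough stiffness class from first-contact readings (assumed thresholds)."""
    ratio = motor_current / max(fsr_force, 1e-3)   # current needed per unit force
    if ratio < 0.5:
        return "soft"      # e.g. foam ball
    if ratio < 1.5:
        return "medium"    # e.g. plastic cup
    return "rigid"         # e.g. glass cup


def grip_command(fsr_force: float, motor_current: float,
                 target_force: float, k_p: float = 0.8) -> float:
    """One impedance-style control step: fuse the FSR reading with the
    current-based force estimate and close the loop on the grip force."""
    # Simple fusion: average the two force estimates (servo current ~ force).
    fused_force = 0.5 * fsr_force + 0.5 * motor_current
    error = target_force - fused_force
    return k_p * error          # incremental finger position command


cls = categorize_stiffness(fsr_force=1.2, motor_current=0.4)   # -> "soft"
cmd = grip_command(fsr_force=1.2, motor_current=0.4, target_force=1.0)
```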
Advanced biomimetic robotic hand with EMG lifelong learning and recognition
The design and implementation of a suitable robotic hand for a toddler-sized humanoid robot is a challenging task. The main purpose of this work is to optimize the design of an anthropomorphic robotic hand and to control it using surface electromyographic (sEMG) signals. Isolation forest backward particle swarm optimization is used to optimize the robotic hand. The fitness function is defined by thumb opposability and the ability to grasp objects based on grasp taxonomy. Learning without forgetting (LWF) is adopted to train on sEMG signal data sequentially, and the resulting learned models are used as an ensemble to control the optimized robotic hand. Webots is used to simulate object-grasping scenarios for optimizing the hand design. The optimized robotic hand is compared with two other robotic hands and achieves the highest fitness values in both the simulator and the real world. Three different sEMG inputs, namely raw data, bandpass-filtered data, and discrete-wavelet-transformed bandpass data, are compared in LWF, and the structure of the neural networks is also considered. The final LWF model is successfully applied to a real-world system to control the robotic hand via hand-gesture classification in real time.
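A minimal PyTorch sketch of a generic learning-without-forgetting loss (new-class cross-entropy plus a distillation term on the old classes) is given below as an illustration of the LWF idea; the temperature, weighting, and tensor shapes are generic choices, not the paper's settings.

```python
import torch
import torch.nn.functional as F


def lwf_loss(new_logits: torch.Tensor, new_labels: torch.Tensor,
             old_logits_current: torch.Tensor, old_logits_recorded: torch.Tensor,
             temperature: float = 2.0, lam: float = 1.0) -> torch.Tensor:
    """LWF-style loss: cross-entropy on the new sEMG gesture classes plus a
    distillation term that keeps the network's outputs on the old classes
    close to the responses recorded before training on the new data."""
    ce = F.cross_entropy(new_logits, new_labels)
    old_log_p = F.log_softmax(old_logits_current / temperature, dim=1)
    old_q = F.softmax(old_logits_recorded / temperature, dim=1)
    distill = F.kl_div(old_log_p, old_q, reduction="batchmean") * temperature ** 2
    return ce + lam * distill


# Example shapes: batch of 8 sEMG windows, 6 new gesture classes, 4 old classes.
loss = lwf_loss(torch.randn(8, 6), torch.randint(0, 6, (8,)),
                torch.randn(8, 4), torch.randn(8, 4))
```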
Grasp quality measures: review and performance
Correctly grasping an object is key to successfully fulfilling a given task. Obtaining a good grasp requires algorithms to automatically determine proper contact points on the object as well as proper hand configurations, especially when dexterous manipulation is desired, and quantifying how good a grasp is requires the definition of suitable grasp quality measures. This article reviews the quality measures proposed in the literature to evaluate grasp quality. The measures are classified into two groups according to the main aspect they evaluate: the location of the contact points on the object and the hand configuration. Approaches that combine measures from the two groups into a global quality measure are also reviewed, as well as some measures related to human hand studies and grasp performance. Several examples are presented to illustrate and compare the performance of the reviewed measures.
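As one concrete example of a contact-point-based quality measure of the kind such reviews cover, the sketch below computes a simplified largest-enclosed-ball (Ferrari-Canny-style) metric from a set of contact wrenches using SciPy; it is a simplified illustration, not a measure taken verbatim from the article.

```python
import numpy as np
from scipy.spatial import ConvexHull


def epsilon_quality(contact_wrenches: np.ndarray) -> float:
    """Radius of the largest origin-centred ball contained in the convex hull
    of the contact wrenches; returns 0 if the origin lies outside the hull
    (no force closure).  contact_wrenches has shape (n_wrenches, wrench_dim)."""
    hull = ConvexHull(contact_wrenches)
    # Each facet is stored as normal . x + offset <= 0 for interior points,
    # with unit normals, so -offset is the distance from the origin to the facet.
    offsets = hull.equations[:, -1]
    if np.any(offsets > 0):        # origin outside the hull
        return 0.0
    return float(np.min(-offsets))


# Planar example: unit forces along +/-x and +/-y span a square around the origin.
wrenches = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
print(epsilon_quality(wrenches))   # ~0.707 for this symmetric contact set
```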
A review of robotic grasp detection technology
In order to complete complex operations and attain more general-purpose utility, robots must master grasping. As the most common essential action of robots in factory and daily-life environments, autonomous robotic grasping has a wide range of application prospects and has received much attention from researchers over the past decade. However, accurately grasping arbitrary objects in unstructured environments is still a research challenge that has not yet been completely overcome. A complete robotic grasping system usually involves three aspects: grasp detection, grasp planning, and a control subsystem. As the first step, identifying the location of the object and generating the grasp pose is the prerequisite for a successful grasp, underpinning the planning of the subsequent grasp path and the execution of the entire grasp action. Therefore, this paper presents a literature review focused on grasp detection technology and identifies two major categories of methods: analytic and data-driven. Based on prior grasp experience with the target object, the data-driven methods are further divided into the grasping of known and unknown objects. The paper then describes in detail the typical grasp detection methods and their characteristics for each class of unknown-object grasping. Finally, the current research status and potential research directions in this field are discussed as a reference for related research.
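Data-driven grasp detectors of the kind surveyed in such reviews commonly output a planar grasp rectangle (position, orientation, opening width, jaw size); a minimal representation of that output is sketched below as an assumed example, not a definition taken from this paper.

```python
from dataclasses import dataclass
import math


@dataclass
class GraspRectangle:
    """Common 5-parameter planar grasp representation used by many
    data-driven grasp detectors: centre (x, y) in image coordinates,
    gripper angle theta, opening width, and jaw height."""
    x: float
    y: float
    theta: float   # radians, orientation of the gripper in the image plane
    width: float   # gripper opening, pixels
    height: float  # jaw size, pixels

    def corners(self):
        """Return the four corner points of the oriented rectangle."""
        c, s = math.cos(self.theta), math.sin(self.theta)
        dx, dy = self.width / 2, self.height / 2
        return [(self.x + c * px - s * py, self.y + s * px + c * py)
                for px, py in [(-dx, -dy), (dx, -dy), (dx, dy), (-dx, dy)]]


g = GraspRectangle(x=120.0, y=85.0, theta=math.pi / 6, width=40.0, height=15.0)
print(g.corners())
```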
An Embedded, Multi-Modal Sensor System for Scalable Robotic and Prosthetic Hand Fingers
Grasping and manipulation with anthropomorphic robotic and prosthetic hands present a scientific challenge regarding mechanical design, sensing, and control. Apart from the mechanical design of such hands, embedding the sensors needed for closed-loop control of grasping tasks remains a hard problem due to the limited space and the high level of integration required of the different components. In this paper we present a scalable design model of artificial fingers, which combines mechanical design and embedded electronics with a multi-modal sensor system for sensing normal and shear force, distance, acceleration, temperature, and joint angles. The design is fully parametric, allowing automated scaling of the fingers to arbitrary dimensions within the human hand spectrum. To this end, the electronic parts are composed of interchangeable modules that facilitate the mechanical scaling of the fingers and are fully enclosed by the mechanical parts of the finger. The resulting design model allows deriving freely scalable, multimodally sensorised fingers for robotic and prosthetic hands. Four physical demonstrators are assembled and tested to evaluate the approach.
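The modalities listed in the abstract (normal and shear force, distance, acceleration, temperature, joint angles) can be bundled into a single per-finger sample structure; the Python sketch below is an assumed data layout for illustration, not the authors' firmware interface.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class FingerSensorFrame:
    """One synchronized sample from a finger's embedded sensor modules."""
    timestamp_us: int
    normal_force_n: float                        # fingertip normal force, newtons
    shear_force_n: Tuple[float, float]           # shear components in the tip plane
    distance_mm: float                           # proximity sensor reading
    acceleration_g: Tuple[float, float, float]   # 3-axis accelerometer
    temperature_c: float
    joint_angles_deg: List[float] = field(default_factory=list)


frame = FingerSensorFrame(
    timestamp_us=1_000_000,
    normal_force_n=1.8,
    shear_force_n=(0.2, -0.1),
    distance_mm=12.5,
    acceleration_g=(0.0, 0.0, 1.0),
    temperature_c=31.4,
    joint_angles_deg=[12.0, 35.0, 20.0],
)
```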
Design and fabrication of a moving robotic glove system
This paper presents the research, design, and manufacture of a robotic hand whose movement is controlled with a glove. The glove-controlled robotic hand consists of two main parts: the hand mechanism and the control circuit. The control glove unit includes an Arduino microcontroller with an nRF24L01 radio module and five flex sensors, one per finger. These sensors collect data about the curvature of each finger; the Arduino microcontroller reads the data and transmits it via the nRF24L01 module. The hand's microcontroller processes that information and drives five servo motors so that the five fingers of the robotic hand move accordingly. The result of this research is a robotic hand that accurately reproduces the curvature of the user's fingers and closely mimics the motion of the glove. Moreover, the robotic hand can grip objects of different weights (from 0.1 to 1 kg) and shapes, helping users easily manipulate objects.
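The data path described (five flex sensors, a microcontroller, a radio link, five servos) can be illustrated with a small Python sketch of the receiving side's mapping from flex readings to servo angles; the ADC range, servo range, and function names are assumptions, not the authors' firmware.

```python
# Assumed ranges: the flex sensors are read as 10-bit ADC values (0-1023)
# and each servo accepts an angle between 0 and 180 degrees.
FLEX_MIN, FLEX_MAX = 200, 900      # straight finger .. fully bent (assumed)
SERVO_MIN, SERVO_MAX = 0, 180


def flex_to_servo(adc_value: int) -> int:
    """Map one flex-sensor ADC reading to a servo angle, clamped to range."""
    adc_value = max(FLEX_MIN, min(FLEX_MAX, adc_value))
    span = (adc_value - FLEX_MIN) / (FLEX_MAX - FLEX_MIN)
    return round(SERVO_MIN + span * (SERVO_MAX - SERVO_MIN))


def update_hand(flex_readings):
    """Convert the five received flex readings into five servo commands."""
    return [flex_to_servo(v) for v in flex_readings]


# A packet received over the radio link for five fingers:
print(update_hand([210, 450, 650, 880, 900]))   # -> servo angles in degrees
```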