Search Results

    Filters
    Reset
  • Discipline
      Discipline
      Clear All
      Discipline
  • Is Peer Reviewed
      Is Peer Reviewed
      Clear All
      Is Peer Reviewed
  • Item Type
      Item Type
      Clear All
      Item Type
  • Subject
      Subject
      Clear All
      Subject
  • Year
      Year
      Clear All
      From:
      -
      To:
  • More Filters
      More Filters
      Clear All
      More Filters
      Source
    • Language
20,467 result(s) for "Visual control"
Visual Servoing Using Sliding-Mode Control with Dynamic Compensation for UAVs’ Tracking of Moving Targets
An Image-Based Visual Servoing (IBVS) control structure for target tracking by Unmanned Aerial Vehicles (UAVs) is presented. The scheme contains two stages. The first is a sliding-mode controller (SMC) that allows a UAV to track a target; the control strategy is designed as a function of the image. Sliding-mode control is commonly used in systems with strong nonlinearities that are continually exposed to external disturbances; these disturbances can be caused by environmental conditions or induced by the estimation of the position and/or velocity of the target to be tracked. The second stage is a controller that compensates the UAV dynamics, correcting the velocity errors produced by the dynamic effects of the UAV. In addition, the corresponding stability analyses of the sliding-mode-based visual servo controller and of the sliding-mode dynamic compensation controller are presented. The proposed scheme exploits both the kinematics and the dynamics of the robot in a cascade control based on the same control strategy. To evaluate the proposed scheme for tracking moving targets, experimental tests were carried out in a semi-structured working environment with a hexarotor-type aerial robot. Detection and image processing use the OpenCV C++ library; the data are published on a ROS topic at a frequency of 50 Hz. The robot controller is implemented in MATLAB.
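As a rough illustration of the first stage, the sketch below implements one step of an image-based sliding-mode law, assuming a first-order sliding surface and a tanh approximation of the sign function to soften chattering; the feature vector s, interaction matrix L, and all gains are illustrative names, not the paper's notation.

    import numpy as np

    def smc_ibvs_step(s, s_star, e_prev, L, dt, lam=1.0, K=0.5, phi=0.05):
        # s, s_star: current and desired image-feature vectors
        # L: interaction (image Jacobian) matrix; lam, K, phi: tuning gains
        e = s - s_star                      # image-space error
        e_dot = (e - e_prev) / dt           # finite-difference error rate
        sigma = e_dot + lam * e             # first-order sliding surface
        # tanh(sigma/phi) stands in for sign(sigma) to reduce chattering
        s_dot_ref = -lam * e - K * np.tanh(sigma / phi)
        # map the desired feature velocity to a camera velocity command
        v_cam = np.linalg.pinv(L) @ s_dot_ref
        return v_cam, e                     # e is reused as e_prev next step

In a cascade like the one described, the returned camera velocity would then feed the second-stage dynamic compensator.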
Enhancing Visual Feedback Control through Early Fusion Deep Learning
A visual servoing system is a type of control system used in robotics that employs visual feedback to guide the movement of a robot or a camera to achieve a desired task. Here the problem is addressed with deep models that receive a visual representation of the current and desired scenes and compute the control input. The focus is on early fusion, which consists of integrating additional information into the neural input array. In this context, we discuss how ready-to-use information can be obtained directly from the current and desired scenes to facilitate the learning process. Inspired by some of the most effective traditional visual servoing techniques, we introduce early fusion based on image moments and provide an extensive analysis of approaches based on image moments, region-based segmentation, and feature points. These techniques are applied stand-alone or in combination, yielding maps with different levels of detail. The role of the extra maps is experimentally investigated for scenes with different layouts. The results show that early fusion enables a more accurate approximation of the linear and angular camera velocities used to drive a 6-degree-of-freedom robot from a current configuration to a desired one. The best results were obtained with extra maps providing low and medium levels of detail.
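A minimal sketch of the early-fusion idea: stack the current and desired scenes with extra channels derived from them. The choice of maps here (an Otsu segmentation mask and a centroid marker from image moments) is an assumption for illustration; the paper's exact map construction may differ.

    import numpy as np
    import cv2

    def build_input(current, desired):
        # current, desired: uint8 grayscale images of equal size
        maps = []
        for img in (current, desired):
            # region map: Otsu-thresholded binary segmentation
            _, mask = cv2.threshold(img, 0, 255,
                                    cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            # moment map: mark the centroid computed from image moments
            m = cv2.moments(mask, binaryImage=True)
            centroid = np.zeros(img.shape, dtype=np.float32)
            if m["m00"] > 0:
                cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
                cv2.circle(centroid, (cx, cy), 5, 1.0, -1)
            maps += [img.astype(np.float32) / 255.0,
                     mask.astype(np.float32) / 255.0,
                     centroid]
        return np.stack(maps, axis=-1)   # H x W x 6 array fed to the network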
Stereo Visual Servoing Control of a Soft Endoscope for Upper Gastrointestinal Endoscopic Submucosal Dissection
Quickly and accurately completing endoscopic submucosal dissection (ESD) within narrow lumens is challenging because of the environment's high flexibility, invisible collisions, and natural tissue motion. This paper proposes a novel stereo visual servoing control for a dual-segment robotic endoscope (DSRE) for ESD surgery. Departing from conventional monocular methods, the DSRE leverages stereoscopic imaging to rapidly extract precise depth data, enabling faster controller convergence and enhanced surgical accuracy. The system's dual-segment configuration enables agile maneuvering around lesions, while its compliant structure ensures adaptability within the surgical environment. The stereo visual servo controller uses image features for real-time feedback and dynamically updates its gain coefficients, facilitating rapid convergence to the target. In visual servoing experiments, the controller demonstrated strong performance across various tasks and maintained robust target tracking even when subjected to unknown external forces. The feasibility and effectiveness of the DSRE were further verified through ex vivo experiments. We posit that this system holds significant potential for clinical application in ESD surgeries.
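One common way to "dynamically update gain coefficients" for faster convergence is an error-dependent gain schedule inside the classic IBVS law. The exponential schedule below is a standard choice in visual servoing practice, not necessarily this paper's exact law; all names and constants are illustrative.

    import numpy as np

    def adaptive_gain(err_norm, lam_0=4.0, lam_inf=0.4, k=30.0):
        # gain rises toward lam_0 as the error shrinks, stays near
        # lam_inf while the error is large (avoids aggressive transients)
        return (lam_0 - lam_inf) * np.exp(-k * err_norm) + lam_inf

    def servo_step(s, s_star, L):
        e = s - s_star
        lam = adaptive_gain(np.linalg.norm(e))
        return -lam * np.linalg.pinv(L) @ e   # classic IBVS: v = -lam L^+ e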
Predictive visual control network for occlusion solution in human-following robot
Purpose: This paper proposes a new video-prediction-based method to solve the occlusion problem in manufacturing, which causes loss of input images and uncertain controller parameters in robot visual servo control. Design/methodology/approach: The proposed method simultaneously generates images and controller-parameter increments. The paper also introduces target segmentation and designs a new comprehensive loss, and it combines offline training to generate images with online training to generate controller-parameter increments. Findings: Dataset experiments show that this method outperforms four other methods and better restores occluded views of the human body in six manufacturing scenarios. A simulation experiment shows that generating image and controller-parameter variations simultaneously improves tracking position accuracy under occlusion in manufacturing settings. Originality/value: The proposed method effectively addresses the occlusion problem in visual servo control.
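The "comprehensive loss" is not specified in the abstract; the sketch below shows one plausible composite, assuming (hypothetically) an L1 image term, a segmentation-masked term that emphasizes the target region, and an MSE term on the controller-parameter increments.

    import torch
    import torch.nn as nn

    class PredictiveServoLoss(nn.Module):
        # hypothetical composite loss, not the paper's exact formulation
        def __init__(self, w_img=1.0, w_seg=0.5, w_ctrl=0.1):
            super().__init__()
            self.w_img, self.w_seg, self.w_ctrl = w_img, w_seg, w_ctrl
            self.l1, self.mse = nn.L1Loss(), nn.MSELoss()

        def forward(self, img_pred, img_true, seg_mask,
                    dctrl_pred, dctrl_true):
            img_loss = self.l1(img_pred, img_true)          # whole frame
            seg_loss = self.l1(img_pred * seg_mask,
                               img_true * seg_mask)         # target region
            ctrl_loss = self.mse(dctrl_pred, dctrl_true)    # ctrl increments
            return (self.w_img * img_loss + self.w_seg * seg_loss
                    + self.w_ctrl * ctrl_loss)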
Adaptive visual servoing control for an underwater soft robot
Purpose: Soft robotics, regarded as a new branch of robotics, has attracted increasing interest in this decade and has demonstrated advantages in addressing safety issues when cooperating with human beings. However, accurate closed-loop control is still lacking because feedback information is difficult to acquire and the system is difficult to model accurately, especially in interactive environments. This paper aims to improve the controllability of a soft robot working in a specific underwater environment. The system dynamics, which take complicated hydrodynamics into account, are derived using Kane's method, and a dynamics-based adaptive visual servoing controller is proposed to realize accurate sensorimotor control. Design/methodology/approach: The paper presents an image-based visual servoing control scheme for a cable-driven soft robot with a fixed camera observing the motion. The intrinsic and extrinsic parameters of the camera are adapted online, so tedious camera calibration can be eliminated. Kinematics-based control applies only to tasks in free space and limits how quickly robot arms can move; one must account for the non-negligible interaction effects of the environment and target objects when operating soft robots in interactive control tasks. To extend soft robots to underwater environments, the study models the system dynamics including complicated hydrodynamic effects; with this prior knowledge of the external effects, performance is further improved by adding a compensation term to the controller. Findings: The convergence of the image error and the adaptive estimation error, and the stability of the dynamical system, are proved theoretically via Lyapunov analysis. The controller is also validated in a positioning control task in an underwater environment, where it shows rapid convergence to, and accurate tracking of, a static image target in a physical experiment. Originality/value: To the best of the authors' knowledge, no previous research has developed a dynamics-based visual servoing controller that takes environment interactions into account. This work thus improves control accuracy and enhances the applicability of soft robotics in complicated environments.
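Online adaptation of camera parameters in uncalibrated visual servoing is typically a gradient-style update driven by the image error. The fragment below is a generic sketch of such a law under the standard assumption that the estimation error is linear in a regressor Y, with Gamma a positive-definite adaptation gain; the names are illustrative, not the paper's notation.

    import numpy as np

    def adaptive_update(theta_hat, Y, e, Gamma, dt):
        # theta_hat: current estimate of camera parameters
        # Y: regressor matrix (error is assumed linear in theta)
        # e: image error; Gamma: positive-definite adaptation gain
        # gradient-type law: d(theta)/dt = -Gamma @ Y.T @ e
        return theta_hat - dt * (Gamma @ Y.T @ e)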
Retinal optic flow during natural locomotion
We examine the structure of the visual motion projected on the retina during natural locomotion in real world environments. Bipedal gait generates a complex, rhythmic pattern of head translation and rotation in space, so without gaze stabilization mechanisms such as the vestibulo-ocular reflex (VOR) a walker's visually specified heading would vary dramatically throughout the gait cycle. The act of fixation on stable points in the environment nulls image motion at the fovea, resulting in stable patterns of outflow on the retinae centered on the point of fixation. These outflowing patterns retain a higher-order structure that is informative about the stabilized trajectory of the eye through space. We measure this structure by applying the curl and divergence operations to the retinal flow velocity vector fields, and we find features that may be valuable for the control of locomotion. In particular, the sign and magnitude of foveal curl in retinal flow specify the body's trajectory relative to the gaze point, while the point of maximum divergence in the retinal flow field specifies the walker's instantaneous overground velocity/momentum vector in retinotopic coordinates. Assuming that walkers can determine the body position relative to gaze direction, these time-varying retinotopic cues for the body's momentum could provide a visual control signal for locomotion over complex terrain. In contrast, the temporal variation of the eye-movement-free, head-centered flow fields is large enough to be problematic for use in steering towards a goal. Consideration of optic flow in the context of real-world locomotion therefore suggests a re-evaluation of the role of optic flow in the control of action during natural behavior.
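The curl and divergence measurements described above are straightforward to compute on a dense flow field; a minimal NumPy sketch, assuming a regular pixel grid with unit spacing:

    import numpy as np

    def flow_curl_div(u, v):
        # u, v: 2-D arrays of horizontal/vertical flow components
        du_dy, du_dx = np.gradient(u)    # axis 0 is y, axis 1 is x
        dv_dy, dv_dx = np.gradient(v)
        curl = dv_dx - du_dy             # scalar (out-of-plane) curl
        div = du_dx + dv_dy              # local expansion rate
        return curl, div

    # the retinotopic location of maximum divergence, i.e. the cue for the
    # walker's instantaneous momentum direction, is then
    # np.unravel_index(np.argmax(div), div.shape)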
The critical phase for visual control of human walking over complex terrain
To walk efficiently over complex terrain, humans must use vision to tailor their gait to the upcoming ground surface without interfering with the exploitation of passive mechanical forces. We propose that walkers use visual information to initialize the mechanical state of the body before the beginning of each step so the resulting ballistic trajectory of the walker’s center-of-mass will facilitate stepping on target footholds. Using a precision stepping task and synchronizing target visibility to the gait cycle, we empirically validated two predictions derived from this strategy: (1) Walkers must have information about upcoming footholds during the second half of the preceding step, and (2) foot placement is guided by information about the position of the target foothold relative to the preceding base of support. We conclude that active and passive modes of control work synergistically to allow walkers to negotiate complex terrain with efficiency, stability, and precision.
Event-Triggered Nonlinear Visual Predictive Control Strategy for Robots
This paper proposes an event-triggered nonlinear visual predictive control strategy for image-based visual servoing of robots. It involves developing a nonlinear model of the visual servoing system and designing a predictive control strategy that addresses safety, real-time performance, robustness, and smooth motion control. Field-of-view constraints ensure image feature visibility, physical constraints respect joint limits, and smooth motion constraints protect hardware from excessive stress. The event-triggered mechanism activates control laws only when necessary, reducing the computational burden of continuous control adjustments and enhancing responsiveness and efficiency. This strategy supports robustness, mitigates issues arising from local minima, and maintains system stability, providing a practical solution for real-time visual servoing tasks. Furthermore, we compare the performance of the proposed strategy against conventional and modified model predictive control strategies in various visual servoing tasks through simulations. Finally, experimental results demonstrate the effectiveness of the strategy.
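A minimal sketch of the event-triggering idea: the expensive nonlinear MPC problem is re-solved only when the measured image features drift from the last predicted trajectory by more than a threshold; otherwise the stored input sequence is replayed. The trigger rule, eps, and solve_nmpc are placeholders, not the paper's exact conditions.

    import numpy as np

    def event_triggered_control(s_meas, s_pred, u_seq, k, solve_nmpc,
                                eps=0.05):
        # s_meas, s_pred: measured vs. predicted image-feature vectors
        # u_seq: input sequence from the last NMPC solve; k: index into it
        if k >= len(u_seq) or np.linalg.norm(s_meas - s_pred) > eps:
            u_seq = solve_nmpc(s_meas)   # event: recompute over the horizon
            k = 0
        return u_seq[k], u_seq, k + 1    # otherwise reuse the stored plan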
System for determining state of continuous welded track
The article considers the main disadvantages of the existing method of inspecting continuous welded rail, in which workers walk the track sections and inspect them visually. A system for monitoring the temperature stresses of continuous welded rail strings laid in the track is proposed, which allows the longitudinal displacement of the track to be monitored throughout the entire life cycle: from the moment the rails are welded into long strings, through their laying, and throughout further operation. Based on sensor data, the system makes it possible to monitor compliance with the optimal fixing temperature of the rail strings during laying, welding, and fixing; to determine the actual fixing temperature in the strings of a continuous welded track; and to control the optimal fixing temperature of rail strings in areas where their integrity is being restored.
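For context, the temperature stress being monitored follows, to first order, the standard expression for fully restrained rail, sigma = E * alpha * (T - T_fix); the sketch below uses typical rail-steel constants for illustration, not values from the article.

    # typical constants for rail steel (illustrative, not from the article)
    E = 2.1e11        # Young's modulus, Pa
    ALPHA = 1.18e-5   # linear thermal expansion coefficient, 1/K

    def thermal_stress(rail_temp_c, fixing_temp_c):
        # first-order longitudinal stress in fully restrained rail, Pa;
        # rail hotter than its fixing temperature is in compression
        return E * ALPHA * (rail_temp_c - fixing_temp_c)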
Fixed-time trajectory tracking control for nonholonomic mobile robot based on visual servoing
This paper discusses the fixed-time tracking control problem for a nonholonomic wheeled mobile robot based on visual servoing. First, using the pinhole camera model, the robot system model with uncalibrated camera parameters is given. Second, the tracking-error system between the mobile robot and the desired trajectory is derived. Third, on the basis of fixed-time control theory and Lyapunov stability analysis, fixed-time tracking control laws are proposed so that the robot tracks the reference trajectory within a fixed time. It is well known that the convergence time of finite-time control systems usually depends on the initial state of the system; by contrast, the settling time obtained with fixed-time control is independent of the initial conditions and is determined only by the controller parameters, which is better suited to practical applications. Simulation results are given at the end.
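The fixed-time property can be illustrated with the standard scalar example (not this paper's specific control law). For the dynamics

    \dot{x} = -\alpha\,|x|^{a}\,\mathrm{sign}(x) - \beta\,|x|^{b}\,\mathrm{sign}(x),
    \qquad \alpha,\beta > 0,\quad 0 < a < 1,\quad b > 1,

the origin is reached in a time bounded by

    T(x_0) \le \frac{1}{\alpha(1-a)} + \frac{1}{\beta(b-1)}
    \quad \text{for every initial condition } x_0,

so the settling-time bound depends only on the gains, which is exactly the property the abstract contrasts with finite-time control, whose convergence time grows with the initial error.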