21 results for "Boyraz, Pinar"
An Overview of Novel Actuators for Soft Robotics
This systematic survey presents an overview of non-conventional actuators used particularly in soft robotics. The review applies well-defined performance criteria to identify exemplary and potential applications, and provides initial guidelines for comparing the performance and applicability of these novel actuators. The meta-analysis is restricted to five main types of actuator: shape memory alloys (SMAs), fluidic elastomer actuators (FEAs), shape morphing polymers (SMPs), dielectric electro-activated polymers (DEAPs), and magnetic/electromagnetic actuators (E/MAs). In exploring and comparing the capabilities of these actuators, the focus is on eight aspects: compliance, topology-geometry, scalability-complexity, energy efficiency, operation range, modality, controllability, and technological readiness level (TRL). The overview provides a state-of-the-art summary of the advancements and, through the suggested quantitative and qualitative criteria, can help researchers select the most suitable soft actuators.
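A comparison across weighted criteria of the kind the survey suggests can be sketched as follows. The criteria subset, scores, and weights below are purely illustrative placeholders, not values from the survey:

```python
# Hypothetical multi-criteria scoring sketch for comparing soft actuators.
# Per-criterion scores (1-5) and weights are invented for illustration.
CRITERIA = ["compliance", "scalability", "efficiency", "TRL"]
WEIGHTS = {"compliance": 0.3, "scalability": 0.2, "efficiency": 0.2, "TRL": 0.3}

actuators = {
    "SMA":  {"compliance": 3, "scalability": 4, "efficiency": 1, "TRL": 5},
    "FEA":  {"compliance": 5, "scalability": 3, "efficiency": 3, "TRL": 4},
    "DEAP": {"compliance": 4, "scalability": 4, "efficiency": 4, "TRL": 3},
}

def weighted_score(scores):
    """Weighted sum of per-criterion scores for one actuator type."""
    return sum(WEIGHTS[c] * scores[c] for c in CRITERIA)

# Rank actuator types from highest to lowest aggregate score.
ranking = sorted(actuators, key=lambda a: weighted_score(actuators[a]), reverse=True)
```

With real scores drawn from such a survey, the same ranking step would surface the most suitable actuator for a given application profile.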
Object manipulation with a variable-stiffness robotic mechanism using deep neural networks for visual semantics and load estimation
In recent years, computer vision applications in robotics have advanced toward human-like visual perception and scene/context understanding. Following this aspiration, in this study we explore whether object manipulation performance can be improved by connecting the visual recognition of objects to their physical attributes, such as weight and center of gravity (CoG). To develop and test this idea, an object manipulation platform was built comprising a robotic arm, a depth camera fixed at the top center of the workspace, encoders embedded in the robotic arm mechanism, and microcontrollers for position and force control. Since both the visual recognition and force estimation algorithms use deep learning principles, the test setup was named Deep-Table. The objects used in the manipulation tests are everyday items commonly found on modern office desktops. Visual object localization and recognition are performed in two distinct branches by deep convolutional neural network architectures. We present five of the possible cases, with different levels of information available on object weight and CoG in the experiments. The results confirm that, using our algorithm, the robotic arm can successfully move objects ranging from a few grams (empty bottle) to around 250 g (ceramic cup) without failure or tipping. The proposed method also shows that connecting object recognition with load estimation and contact point further improves performance, characterized by smoother motion.
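The core idea, fusing a visual class prediction with a load estimate to plan the grasp, can be sketched minimally. The class-to-weight prior, safety margin, and unit constants below are assumptions for illustration, not the paper's parameters:

```python
# Illustrative fusion of a visual class prediction with a load estimate to
# choose a grip force. Nominal weights and the safety margin are invented.
NOMINAL_WEIGHT_G = {"empty_bottle": 20.0, "ceramic_cup": 250.0, "stapler": 120.0}
SAFETY_MARGIN = 1.5   # grip force scaled above the estimated load
G = 9.81e-3           # newtons per gram

def grip_force(predicted_class, measured_load_g=None):
    """Prefer the measured load when available; fall back to the class prior."""
    load = measured_load_g if measured_load_g is not None \
        else NOMINAL_WEIGHT_G[predicted_class]
    return SAFETY_MARGIN * load * G  # required force in newtons
```

The design choice mirrors the paper's finding: when load estimation is available it refines the vision-only prior, which is what yields the smoother, more reliable motion.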
Design of a low-cost tactile robotic sleeve for autonomous endoscopes and catheters
Recent developments in medical robotics have been significant, supporting minimally invasive operation requirements such as smaller devices and richer feedback for surgeons. Nevertheless, tactile feedback from catheter- or endoscope-type robotic devices has mostly been restricted to the tip of the device and has not been aimed at supporting autonomous movement during operation. In this work, we design a robotic sheath/sleeve with a novel and more comprehensive approach that can serve whole-body or segment-based feedback control as well as diagnostic purposes. The robotic sleeve has several types of piezo-resistive pressure and extension sensors embedded at several latitudes and depths of the silicone substrate, taking human skin as the biological model for its structure. It provides better tactile sensation of the inner tissues in tortuous narrow channels, such as cardiovascular or endoluminal tracts in the human body, and can thus be used to diagnose abnormalities. In addition, using the stretch sensors distributed along its body, the robotic sheath/sleeve can perceive the ego-motion of the catheter's robotic backbone and act as a position feedback device. Because of the silicone substrate, the sleeve passively contributes to the safety of the medical device by providing a compliant interface. As an active safety measure, the robotic sheath can sense blood clots or sudden turns inside a channel and, by modifying the local trajectory, can prevent embolisms or tissue rupture. In the future, advanced manufacturing techniques will further increase the capabilities of the tactile robotic sleeve.
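A typical read-out path for one piezo-resistive element of such a sleeve is a voltage divider followed by a calibration curve. The supply voltage, reference resistor, and linear calibration constants below are assumed example values, not the paper's hardware:

```python
# Illustrative read-out for one piezo-resistive sensing element via a
# voltage divider. All constants are assumed, not the sleeve's actual values.
V_SUPPLY = 3.3     # divider supply voltage, volts
R_REF = 10_000.0   # fixed divider resistor, ohms

def sensor_resistance(v_out):
    """Recover the sensing element's resistance from the divider mid-point voltage."""
    return R_REF * (V_SUPPLY - v_out) / v_out

def pressure_kpa(v_out, r0=10_000.0, sensitivity=0.02):
    """Toy linear model: resistance drops by `sensitivity` fraction of r0 per kPa."""
    r = sensor_resistance(v_out)
    return max(0.0, (r0 - r) / (r0 * sensitivity))
```

At the divider mid-point (v_out = V_SUPPLY/2) the element reads its rest resistance r0 and the inferred pressure is zero; as contact pressure lowers the resistance, v_out rises and the estimate grows.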
A hybrid image dataset toward bridging the gap between real and simulation environments for robotics
The primary motivation of computer vision in robotics is to obtain a perception level as close as possible to the human visual system. Achieving this requires large datasets, sometimes involving less-frequent and seemingly irrelevant data to increase system robustness. To minimize the effort and time of building such extensive datasets from the real world, the preferred method is to use simulation environments that replicate real-world conditions as closely as possible. Following this solution path, machine vision problems in robotics (i.e., object detection, recognition, and manipulation) often employ synthetic images in datasets but do not mix them with real-world images. When systems are trained only on synthetic images and tested within the simulated world, tasks requiring object recognition can be accomplished. However, systems trained this way cannot be used directly in real-world experiments or end-user products due to the inconsistencies between real and simulation environments. Therefore, we propose a hybrid image dataset of annotated desktop objects from real and synthetic worlds (ADORESet). This hybrid dataset provides purposeful object categories with a sufficient number of real and synthetic images. ADORESet is composed of colored 300×300-pixel images in 30 categories. Each class has 2500 real-world images acquired from the web and 750 synthetic images generated within the Gazebo simulation environment. The hybrid dataset enables researchers to test their algorithms under both real-world and simulation conditions. ADORESet images are fully annotated: object boundaries are manually specified, and bounding box coordinates are provided.
The successor objects are also labeled, giving statistical information and the likelihood of relations among objects within the dataset. To further demonstrate the benefits of the dataset, it is tested on object recognition tasks by fine-tuning state-of-the-art deep convolutional neural networks such as VGGNet, InceptionV3, ResNet, and Xception. The possible data-type combinations for these models are compared in terms of time, accuracy, and loss. In the object recognition experiments, training with all-real images yields approximately 49% validation accuracy on simulation images. When training with all-synthetic images and validating on all-real images, accuracy drops below 10%. If the complete ADORESet is employed for training and validation, the hybrid validation accuracy reaches approximately 95%. These results further confirm that including real and synthetic images together in training and validation increases overall system accuracy and reliability.
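The per-class composition described above (2500 real plus 750 synthetic images, mixed before splitting) can be sketched as follows. The file names and the validation fraction are placeholders, not the dataset's actual layout:

```python
import random

# Sketch of assembling one class's hybrid train/val split in the spirit of
# ADORESet: 2500 real and 750 synthetic images, shuffled together first.
# File names and the 20% validation fraction are illustrative assumptions.
def hybrid_split(n_real=2500, n_synth=750, val_fraction=0.2, seed=0):
    images = [f"real_{i}.png" for i in range(n_real)]
    images += [f"synth_{i}.png" for i in range(n_synth)]
    rng = random.Random(seed)
    rng.shuffle(images)  # interleave real and synthetic before splitting
    n_val = int(len(images) * val_fraction)
    return images[n_val:], images[:n_val]  # (train, val)

train, val = hybrid_split()
```

Shuffling before the split is the point: both partitions then contain real and synthetic samples, which is the regime the reported ~95% hybrid accuracy corresponds to, as opposed to the all-real or all-synthetic training regimes.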
Enhancement of Vehicle Handling Based on Rear Suspension Geometry Using Taguchi Method
Studies have shown that the number of road accidents caused by rollover, both in Europe and in Turkey, is increasing [1]. Rollover-related accidents have therefore become a new target of vehicle dynamics research for both active and passive safety systems. This paper presents a method for optimizing rear suspension geometry using design of experiments and multibody simulation in order to reduce the risk of rollover. One major difference of this study from previous work is that it includes the statistical Taguchi method in order to increase the safety margin. Another difference from the literature is that it covers all design tools: model validation, optimization, and full-vehicle handling and ride-comfort tests. The rollover angle of the vehicle was selected as the cost function in the optimization algorithm, which also contains the roll stiffness and the height of the roll center. To form the cost function, five geometrical factors were selected as design variables. The ultimate aim is to minimize the cost function by increasing the roll center height and the suspension roll stiffness. To run the optimization routine, a rigid rear suspension mechanism used on a 7 m bus was modeled in the Adams/Car software. Opposite wheel travel analysis was performed as the optimization test method to simulate the vehicle passing over a bump. Then, to reach the minimum value of the cost function, the statistical Taguchi method was used to perform the design of experiments (DOE). In total, 27 experiments were performed according to the selected design variables; in each experiment, the roll center height and the roll stiffness were measured, and the cost function was calculated and recorded for comparison with subsequent iterations. The attachment points giving the minimum cost function value are expected to be the optimal coordinates for installing the suspension mechanism.
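The screening loop behind such a DOE can be sketched with coded factor levels and a placeholder cost. The toy cost model below only mimics the stated trend (cost falls as roll-center height and roll stiffness rise) and uses two factors at three levels; the paper's actual 27-run Taguchi array screens five factors against the Adams/Car model:

```python
import itertools

# Toy Taguchi-style screening sketch: coded three-level factors and an
# invented cost that decreases with roll-centre height and roll stiffness.
LEVELS = [-1, 0, 1]  # coded factor levels (low / nominal / high)

def cost(h_rc, k_roll):
    """Placeholder cost: lower when roll-centre height and stiffness are higher."""
    return 10.0 - 2.0 * h_rc - 1.5 * k_roll

# Full factorial over two of the factors (9 runs); an L27 orthogonal array
# would cover all five geometric factors in 27 runs.
runs = [(h, k, cost(h, k)) for h, k in itertools.product(LEVELS, repeat=2)]
best = min(runs, key=lambda r: r[2])  # run with the minimum cost
```

Selecting the minimum-cost run corresponds to picking the attachment-point combination expected to be optimal for installing the suspension mechanism.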
Design and Modelling of a Cable-Driven Parallel-Series Hybrid Variable Stiffness Joint Mechanism for Robotics
Robotics, particularly the humanoid research field, needs new mechanisms that meet the criteria imposed by compliance, workspace requirements, motion profile characteristics, and variable stiffness, using lightweight but robust designs. The mechanism proposed here addresses this problem with a parallel-series hybrid mechanism. The parallel term comes from two cable-driven plates supported by a compression spring in between; furthermore, a two-part concentric shaft, connected by a universal joint, passes through both plates. Because of the kinematic constraints of the universal joint, the mechanism can also be considered a serial chain. The mechanism has four degrees of freedom (DOF): pitch, roll, and yaw motions, plus translational movement along the z axis for stiffness adjustment. The kinematic model is obtained to define the workspace. The helical spring is analysed using Castigliano's theorem, and its bending and compression characteristics are presented and validated by finite element analysis (FEA). The dynamic model of the mechanism is then derived from the spring reaction forces and moments. Motion experiments are performed to validate both the kinematic and dynamic models. As a result, the proposed mechanism has potential uses in robotics, especially in humanoid robot joints, considering the requirements of this field.
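For the compression behaviour of a helical spring, the Castigliano-derived result is the standard axial rate k = G·d⁴ / (8·D³·n). A minimal sketch, with assumed example dimensions rather than the joint's actual spring:

```python
# Axial stiffness of a helical compression spring, k = G*d^4 / (8*D^3*n),
# as obtained from Castigliano's theorem. Dimensions below are assumed.
def spring_rate(G, d, D, n):
    """G: shear modulus [Pa], d: wire diameter [m],
    D: mean coil diameter [m], n: number of active coils."""
    return G * d**4 / (8 * D**3 * n)

# Example: a music-wire steel spring (G ~ 79.3 GPa), 2 mm wire,
# 20 mm mean coil diameter, 8 active coils.
k = spring_rate(G=79.3e9, d=2e-3, D=20e-3, n=8)  # N/m
```

In a variable-stiffness joint of this kind, translating along the shaft changes the spring's working configuration, which is what lets the same element supply an adjustable restoring moment.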
Parameter and density estimation from real-world traffic data: A kinetic compartmental approach
The main motivation of this work is to assess the validity of an LWR traffic flow model against measurements obtained from trajectory data, and to propose extensions that improve it. A discrete dynamical system is formulated to reproduce the evolution in time of the density of vehicles along a road, as observed in the measurements. The system is formulated as a chemical reaction network in which road cells are interpreted as compartments, the transfer of vehicles from one cell to another is seen as a chemical reaction between adjacent compartments, and the density of vehicles is seen as a concentration of reactant. Several degrees of flexibility can be considered for the parameters of this system, which essentially consist of the reaction rates between the compartments: a constant value, or a function of time and/or space. Density measurements from trajectory data are then interpreted as observations of the system's states at consecutive times. Optimal reaction rates are obtained by minimizing the discrepancy between the output of the system and the state measurements. The approach was tested on both simulated and real data, and proved successful in recreating the complexity of traffic flows despite the assumptions on the flux-density relation.
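The compartmental view described above can be sketched as a one-step explicit update: each road cell passes density downstream at its reaction rate. The rates, time step, and initial profile below are illustrative, not fitted values from the paper:

```python
# Minimal sketch of the compartmental traffic model: road cells as
# compartments, with density transferred to the downstream cell at a
# per-cell reaction rate. Rates, dt, and the initial profile are invented.
def step(density, rates, dt=0.1):
    """One explicit update: flux from cell i to i+1 is rates[i]*density[i]*dt."""
    flux = [rates[i] * density[i] * dt for i in range(len(density) - 1)]
    new = density[:]
    for i, f in enumerate(flux):
        new[i] -= f       # vehicles leave cell i ...
        new[i + 1] += f   # ... and enter cell i+1, conserving total density
    return new

rho = [1.0, 0.0, 0.0]   # all density initially in the first cell
rates = [0.5, 0.5]      # constant inter-compartment reaction rates
for _ in range(3):
    rho = step(rho, rates)
```

Fitting the model then amounts to choosing `rates` (constant, or time/space dependent) so that the simulated states match the densities observed from trajectory data at consecutive times.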
Measurement of fuzz fibers on fabric surface using image analysis methods
Fuzz, the fibers protruding from the fabric surface, is very important to appearance quality: it causes an unpleasant look and also leads to pilling, which makes the fabric's appearance and softness worse. However, fuzz on a fabric surface is mostly measured by subjective methods (human vision) rather than objective ones. In this study, an objective method using image analysis techniques was therefore developed for measuring fuzz on the fabric surface. The fuzz was also ranked and rated by experts in order to check the reliability of the measurement results. The correlation coefficient (r) between the rating value and the objective measurement value was 0.9, confirming the reliability of the method.
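The reliability check above is a Pearson correlation between expert ratings and the objective measurement. A self-contained sketch, with made-up paired values standing in for the study's data:

```python
import math

# Pearson correlation coefficient, the statistic (r) the study reports.
# The paired rating/measurement values below are invented examples.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

ratings = [1, 2, 3, 4, 5]                  # hypothetical expert ranks
measured = [0.9, 2.1, 2.9, 4.2, 4.9]       # hypothetical image-analysis values
r = pearson_r(ratings, measured)
```

An r near 1 means the objective image-analysis measure orders fabrics the same way the experts do, which is exactly the agreement the reported r = 0.9 expresses.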
Analysis of feature detector and descriptor combinations with a localization experiment for various performance metrics
The purpose of this study is to provide a detailed performance comparison of feature detector/descriptor methods, particularly when their various combinations are used for image matching. Localization experiments with a mobile robot in an indoor environment are presented as a case study, using 3090 query images and 127 dataset images. The study includes five feature detectors (features from accelerated segment test (FAST); oriented FAST and rotated binary robust independent elementary features (BRIEF), i.e. ORB; speeded-up robust features (SURF); scale invariant feature transform (SIFT); and binary robust invariant scalable keypoints (BRISK)) and five feature descriptors (BRIEF, BRISK, SIFT, SURF, and ORB). These methods were used in 23 different combinations, and meaningful, consistent comparison results were obtained using the performance criteria defined in this study. Each method was used independently as either feature detector or descriptor. The performance analysis shows the discriminative power of the various detector-descriptor combinations. The analysis covers five parameters: (i) accuracy, (ii) time, (iii) angle difference between keypoints, (iv) number of correct matches, and (v) distance between correctly matched keypoints. In a range of 60°, covering five rotational pose points for our system, the FAST-SURF combination had the lowest distance and angle-difference values and the highest number of matched keypoints. SIFT-SURF was the most accurate combination, with a 98.41% correct classification rate. The fastest algorithm was ORB-BRIEF, with a total running time of 21,303.30 s to match 560 images captured during motion against the 127 dataset images.
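The headline accuracy figure is a correct classification rate over the query set. A minimal sketch of that metric, with invented labels in place of the experiment's actual match results:

```python
# Correct classification rate, the accuracy metric used to compare
# detector/descriptor combinations. Labels below are invented examples.
def classification_rate(predicted, ground_truth):
    """Percentage of queries whose best-matching dataset image is correct."""
    correct = sum(p == g for p, g in zip(predicted, ground_truth))
    return 100.0 * correct / len(ground_truth)

# Over 3090 queries, a 98.41% rate corresponds to roughly 3041 correct matches.
rate = classification_rate([1, 1, 2, 3], [1, 1, 2, 2])
```

Computed per detector-descriptor pair alongside timing, this is the figure by which SIFT-SURF (98.41%) leads the accuracy ranking while ORB-BRIEF leads on speed.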