46 results for "Carfì, Alessandro"
Can Human-Inspired Learning Behaviour Facilitate Human–Robot Interaction?
The evolution of production systems for smart factories foresees a tight relation between human operators and robots. In particular, when a robot's task must be reconfigured, the operator needs an easy and intuitive way to do so. A useful tool for robot task reconfiguration is Programming by Demonstration (PbD), which allows human operators to teach a robot new tasks by showing it a number of examples. The article presents two studies investigating the role of the robot in PbD. A preliminary study compares standard PbD with human–human teaching and suggests that a collaborative robot should actively participate in the teaching process, as human practitioners typically do. The main study uses a Wizard of Oz approach to determine the effects of having the robot actively participate in teaching, specifically by controlling its end-effector. The results suggest that active, human-inspired behaviour can make PbD more intuitive.
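To make the PbD idea concrete, here is a minimal sketch in which a task model is learned as the pointwise mean of several demonstrated end-effector trajectories; the straight-line demonstrations, sampling, and averaging scheme are illustrative assumptions, not the setup used in the studies above.

```python
# Minimal Programming-by-Demonstration sketch: learn a task model as the
# pointwise mean of several demonstrated end-effector trajectories.
# The synthetic demos and simple averaging are illustrative assumptions.
import numpy as np

def learn_from_demonstrations(demos: list) -> np.ndarray:
    """Each demo is a (T, 3) array of end-effector positions sampled at the
    same rate; the learned task is their pointwise mean."""
    return np.stack(demos).mean(axis=0)

rng = np.random.default_rng(0)
nominal = np.linspace([0.0, 0.0, 0.0], [0.3, 0.1, 0.2], num=50)  # reaching motion
demos = [nominal + rng.normal(0.0, 0.005, nominal.shape) for _ in range(5)]

task = learn_from_demonstrations(demos)
print("learned start:", task[0].round(3), "learned goal:", task[-1].round(3))
```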
IFRA: A Machine Learning-Based Instrumented Fall Risk Assessment Scale Derived from an Instrumented Timed Up and Go Test in Stroke Patients
Background/Objectives: Falls represent a major health concern for stroke survivors, necessitating effective risk assessment tools. This study proposes the Instrumented Fall Risk Assessment (IFRA) scale, a novel screening tool derived from Instrumented Timed Up and Go (ITUG) test data, designed to capture mobility measures often missed by traditional scales. Methods: We employed a two-step machine learning approach to develop the IFRA scale: first, identifying predictive mobility features from ITUG data and, second, creating a stratification strategy to classify patients into low-, medium-, or high-fall-risk categories. This study included 142 participants, who were divided into training (including synthetic cases), validation, and testing sets (comprising 22 non-fallers and 10 fallers). IFRA’s performance was compared against traditional clinical scales (e.g., standard TUG and Mini-BESTest) using Fisher’s Exact test. Results: Machine learning analysis identified specific features as key predictors, namely vertical and medio-lateral acceleration, and angular velocity during walking and sit-to-walk transitions. IFRA demonstrated a statistically significant association with fall status (Fisher’s Exact test p = 0.004) and was the only scale to assign more than half of the actual fallers to the high-risk category, outperforming the comparative clinical scales in this dataset. Conclusions: This proof-of-concept study demonstrates IFRA’s potential as an automated, complementary approach for fall risk stratification in post-stroke patients. While IFRA shows promising discriminative capability, particularly for identifying high-risk individuals, these preliminary findings require validation in larger cohorts before clinical implementation.
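The two-step pipeline described above can be sketched in Python as follows; the aggregation of ITUG features into a risk score, the cut-off values, and the tiny synthetic test set are all illustrative assumptions, not values from the study. Only the evaluation with Fisher's Exact test mirrors the abstract directly.

```python
# Sketch of an IFRA-style pipeline: stratify patients into low/medium/high
# fall risk from mobility features, then test the association between
# high-risk assignment and actual fall status with Fisher's Exact test.
# Feature aggregation, thresholds, and the example data are assumptions.
import numpy as np
from scipy.stats import fisher_exact

def stratify(features: np.ndarray, low_cut: float, high_cut: float) -> np.ndarray:
    """Map a per-patient risk score to categories 0/1/2 (low/medium/high)."""
    score = features.mean(axis=1)  # placeholder aggregation of ITUG features
    return np.digitize(score, [low_cut, high_cut])

# Hypothetical test-set features (e.g., vertical/medio-lateral acceleration,
# angular velocity during walking and sit-to-walk) and fall outcomes.
X = np.random.default_rng(0).normal(size=(32, 3))
fell = np.random.default_rng(1).integers(0, 2, size=32).astype(bool)

risk = stratify(X, low_cut=-0.3, high_cut=0.3)
high = risk == 2

# 2x2 contingency table: high-risk assignment vs. actual fall status.
table = [[int(( high &  fell).sum()), int(( high & ~fell).sum())],
         [int((~high &  fell).sum()), int((~high & ~fell).sum())]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```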
A Novel Method to Compute the Contact Surface Area Between an Organ and Cancer Tissue
The contact surface area (CSA) quantifies the interface between a tumor and an organ and is a key predictor of perioperative outcomes in kidney cancer. However, existing CSA computation methods rely on shape assumptions and manual annotation. We propose a novel approach that uses 3D reconstructions from computed tomography (CT) scans to provide an accurate CSA estimate. Our method includes a segmentation protocol and an algorithm that processes the reconstructed meshes, and we provide an open-source implementation with a graphical user interface. Tested on synthetic data, the algorithm showed minimal error; it was then evaluated on data from 82 patients. In a double-blind study with two radiologists of different experience levels, we computed the CSA using both our approach and Hsieh's method, which relies on subjective CT scan measurements. We assessed the correlation between our approach and the expert radiologist's measurements, as well as the deviation of both our method and the less experienced radiologist from the expert's values. Although the less experienced radiologist's differences from the expert had a lower mean and variance, our method deviated only slightly from the expert's measurements, demonstrating its reliability and consistency. These findings are further supported by the results obtained on synthetic data.
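One plausible way to estimate a CSA from two reconstructed meshes is to sum the areas of tumor faces whose centroids lie within a small distance of the organ surface. The sketch below shows this idea using the trimesh library; the distance threshold, the centroid-based test, and the file names in the usage note are assumptions, not the paper's actual algorithm.

```python
# Sketch of a mesh-based contact surface area (CSA) estimate: sum the areas
# of tumor faces whose centroids lie close to the organ surface.
# The threshold and centroid heuristic are assumptions for illustration.
import trimesh

def contact_surface_area(tumor: trimesh.Trimesh,
                         organ: trimesh.Trimesh,
                         tol: float = 1.0) -> float:
    """Sum the areas of tumor faces whose centroids lie within `tol`
    (same units as the mesh, e.g. mm) of the organ surface."""
    centroids = tumor.triangles_center  # (n_faces, 3) face centroids
    # Distance from each tumor face centroid to the closest point on the organ.
    _, dist, _ = trimesh.proximity.closest_point(organ, centroids)
    return float(tumor.area_faces[dist < tol].sum())

# Usage sketch with hypothetical segmentation outputs:
# tumor = trimesh.load("tumor.stl"); kidney = trimesh.load("kidney.stl")
# print(contact_surface_area(tumor, kidney, tol=1.0), "mm^2")
```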
Gesture-based Human-Machine Interaction: Taxonomy, Problem Definition, and Analysis
The possibility for humans to interact with physical or virtual systems using gestures has been extensively explored by researchers and designers over the last twenty years, with the aim of providing new and intuitive interaction modalities. Unfortunately, the literature on gestural interaction is not homogeneous and is characterised by a lack of shared terminology. This leads to fragmented results and makes it difficult for research activities to build on state-of-the-art results and approaches. The analysis in this paper aims at creating a common conceptual design framework to support development efforts in gesture-based human-machine interaction. The main contributions of the paper can be summarised as follows: (i) we provide a broad definition for the notion of functional gesture in human-machine interaction, (ii) we design a flexible and expandable gesture taxonomy, and (iii) we put forward a detailed problem statement for gesture-based human-machine interaction. Finally, to support our main contributions, the paper presents and analyses the 83 most pertinent articles, classified on the basis of our taxonomy and problem statement.
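A flexible, expandable taxonomy of this kind could be encoded as a simple data structure for annotating articles; the specific axes and values below are illustrative assumptions, not the taxonomy proposed in the paper.

```python
# Sketch of how a gesture taxonomy could be encoded for article annotation;
# the axes ("sensing", "context") and values are hypothetical examples.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class GestureClass:
    """One node of the taxonomy: an axis and a value along that axis."""
    axis: str
    value: str

@dataclass
class FunctionalGesture:
    """A gesture annotated along the taxonomy's axes; new axes can be
    added without changing the schema, keeping the taxonomy expandable."""
    name: str
    labels: set = field(default_factory=set)

wave = FunctionalGesture("wave", {GestureClass("sensing", "vision"),
                                  GestureClass("context", "human-robot")})
print([f"{c.axis}={c.value}" for c in sorted(wave.labels, key=lambda c: c.axis)])
```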
Joint Prediction of Human Motions and Actions in Human-Robot Collaboration
Fluent human–robot collaboration requires robots to continuously estimate human behaviour and anticipate future intentions. This entails reasoning jointly about continuous movements and discrete actions, which are still largely modelled in isolation. In this paper, we introduce MA-HERP, a hierarchical and recursive probabilistic framework for the joint estimation and prediction of human movements and actions. The model combines: (i) a hierarchical representation in which movements compose into actions through admissible Allen interval relations, (ii) a unified probabilistic factorisation coupling continuous dynamics, discrete labels, and durations, and (iii) a recursive inference scheme inspired by Bayesian filtering, alternating top-down action prediction with bottom-up sensory evidence. We present a preliminary experimental evaluation based on neural models trained on musculoskeletal simulations of reaching movements, showing accurate motion prediction, robust action inference under noise, and computational performance compatible with on-line human–robot collaboration.
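The recursive filtering idea in the abstract can be illustrated with a discrete Bayes filter over action labels, alternating a top-down prediction step with a bottom-up correction from sensory evidence. The action set, transition matrix, and likelihoods below are assumptions for illustration, not the actual MA-HERP model.

```python
# Sketch of recursive Bayesian action inference: predict with a transition
# model (top-down), then correct with observation likelihoods (bottom-up).
# Actions, transition probabilities, and likelihoods are hypothetical.
import numpy as np

ACTIONS = ["reach", "grasp", "place"]        # hypothetical action labels
T = np.array([[0.8, 0.2, 0.0],               # assumed action transition model
              [0.0, 0.7, 0.3],
              [0.1, 0.0, 0.9]])

def predict(belief: np.ndarray) -> np.ndarray:
    """Top-down prediction: propagate the belief through the transition model."""
    return belief @ T

def update(belief: np.ndarray, likelihood: np.ndarray) -> np.ndarray:
    """Bottom-up correction: weight by the observation likelihood, renormalise."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

belief = np.full(len(ACTIONS), 1.0 / len(ACTIONS))
for likelihood in ([0.7, 0.2, 0.1], [0.3, 0.6, 0.1], [0.1, 0.3, 0.6]):
    belief = update(predict(belief), np.array(likelihood))
    print(ACTIONS[int(belief.argmax())], belief.round(2))
```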
The Effects of Selected Object Features on a Pick-and-Place Task: a Human Multimodal Dataset
We propose a dataset to study the influence of object-specific characteristics on human pick-and-place movements and to compare the quality of the motion kinematics extracted by various sensors. The dataset is also suitable for promoting a broader discussion of general learning problems in the hand-object interaction domain, such as intention recognition or motion generation, with applications in robotics. It consists of recordings of 15 subjects performing 80 repetitions of a pick-and-place action under various experimental conditions, for a total of 1200 pick-and-place movements. The data were collected with a multimodal setup composed of multiple cameras observing the actions from different perspectives, a motion capture system, and a wrist-worn inertial measurement unit. All the objects manipulated in the experiments are identical in shape, size, and appearance but differ in weight and liquid filling, which influences the carefulness required for their handling.
From Movement Kinematics to Object Properties: Online Recognition of Human Carefulness
When manipulating objects, humans finely adapt their motions to the characteristics of what they are handling. An attentive observer can thus infer hidden properties of the manipulated object, such as its weight, its temperature, and even whether it requires special care in manipulation. This study is a step towards endowing a humanoid robot with this last capability. Specifically, we study how a robot can infer online, from vision alone, whether or not the human partner is being careful when moving an object. We demonstrate that a humanoid robot can perform this inference with high accuracy (up to 81.3%) even with a low-resolution camera. Recognition was insufficient only for short movements without obstacles. Promptly recognising movement carefulness from observing the partner's action will allow robots to adapt their own actions on the object, showing the same degree of care as their human partners.
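Online carefulness recognition of this kind can be sketched as classifying each sliding window of tracked positions from simple velocity statistics; the features, window size, frame rate, synthetic data, and logistic model below are illustrative assumptions, not the study's actual classifier.

```python
# Sketch of online carefulness recognition from motion kinematics: classify
# sliding windows of 3D positions as careful/careless from speed statistics.
# Window size, frame rate, and synthetic training data are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

FS = 30.0        # assumed camera frame rate (Hz)
WINDOW = 15      # half-second sliding window

def window_features(positions: np.ndarray) -> np.ndarray:
    """Peak and mean speed over a window of 3D positions, shape (WINDOW, 3)."""
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) * FS
    return np.array([speed.max(), speed.mean()])

# Hypothetical training data: careful motions are slower and smoother.
rng = np.random.default_rng(0)
careful  = [rng.normal(0.0, 0.005, size=(WINDOW, 3)).cumsum(axis=0) for _ in range(50)]
careless = [rng.normal(0.0, 0.020, size=(WINDOW, 3)).cumsum(axis=0) for _ in range(50)]
X = np.array([window_features(w) for w in careful + careless])
y = np.array([1] * 50 + [0] * 50)      # 1 = careful

clf = LogisticRegression().fit(X, y)
# Online use: re-classify after every new frame completes a window.
print("careful" if clf.predict([window_features(careful[0])])[0] else "careless")
```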
Digital Twins for Human-Robot Collaboration: A Future Perspective
As the adoption of collaborative robots (cobots) grows across many sectors, so does the interest in integrating digital twins into human-robot collaboration (HRC). Digital twins (DT), virtual representations of physical systems and assets, can revolutionize HRC by enabling real-time simulation, monitoring, and control. In this article, we review the state of the art and present our perspective on the future of DT in HRC. We argue that DT will be crucial in increasing the efficiency and effectiveness of these systems, presenting compelling evidence and a concise vision of that future, together with insights into the advantages and challenges associated with DT integration.
A Social Robot with Inner Speech for Dietary Guidance
We explore the use of inner speech as a mechanism to enhance transparency and trust in social robots for dietary advice. In humans, inner speech structures thought processes and decision-making; in robotics, it improves explainability by making reasoning explicit. This is crucial in healthcare scenarios, where trust in robotic assistants depends both on accurate recommendations and on human-like dialogue, which makes interactions more natural and engaging. Building on this, we developed a social robot that provides dietary advice and equipped its architecture with inner speech capabilities to validate user input, refine its reasoning, and generate clear justifications. The system integrates large language models for natural language understanding and a knowledge graph for structured dietary information. By making decisions more transparent, our approach strengthens trust and improves human-robot interaction in healthcare. We validated the approach by measuring the computational efficiency of the architecture and by conducting a small user study assessing the reliability of inner speech in explaining the robot's behavior.
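The inner-speech loop described above can be sketched as a pipeline that validates the user's request, consults structured dietary knowledge, and voices its intermediate reasoning before answering. The toy knowledge fragment and the low-sugar rule below are assumptions; the paper's system uses large language models and a full knowledge graph.

```python
# Sketch of an inner-speech loop for dietary guidance: validate the request,
# consult structured knowledge, and expose the reasoning trace explicitly.
# The knowledge fragment and the suitability rule are hypothetical.
KNOWLEDGE = {  # hypothetical (food -> properties) fragment of a knowledge graph
    "grilled salmon": {"kcal": 280, "tags": {"high-protein", "low-sugar"}},
    "chocolate cake": {"kcal": 450, "tags": {"high-sugar"}},
}

def inner_speech(user_goal: str, candidate: str) -> list:
    """Return the robot's explicit reasoning trace for one candidate food."""
    trace = [f"The user asked for: {user_goal}. Is '{candidate}' known to me?"]
    props = KNOWLEDGE.get(candidate)
    if props is None:
        trace.append("I have no data on this food, so I should not recommend it.")
        return trace
    trace.append(f"It has {props['kcal']} kcal and tags {sorted(props['tags'])}.")
    verdict = "suitable" if "high-sugar" not in props["tags"] else "unsuitable"
    trace.append(f"Given a low-sugar goal, it is {verdict}; I will say so and why.")
    return trace

for line in inner_speech("a low-sugar dinner", "chocolate cake"):
    print("(inner speech)", line)
```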