213 results for "Cherubini, Andrea"
Sensor-Based Control for Collaborative Robots: Fundamentals, Challenges, and Opportunities
The objective of this paper is to present a systematic review of existing sensor-based control methodologies for applications that involve direct interaction between humans and robots, in the form of either physical collaboration or safe coexistence. To this end, we first introduce the basic formulation of the sensor-servo problem, and then present its most common approaches: vision-based, touch-based, audio-based, and distance-based control. Afterwards, we discuss and formalize the methods that integrate heterogeneous sensors at the control level. The surveyed body of literature is classified according to various factors, such as sensor type, sensor integration method, and application domain. Finally, we discuss open problems, potential applications, and future research directions.
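The sensor-servo formulation the survey builds on is not spelled out in the abstract. As a point of reference, below is a minimal Python sketch of the standard resolved-rate regulation law v = -gain * pinv(L) @ (s - s*), where L is the interaction matrix mapping robot velocity to feature velocity; all numeric values are illustrative placeholders, not taken from the paper.

    import numpy as np

    def sensor_servo_velocity(s, s_star, L, gain=0.5):
        # Drive the feature error e = s - s* to zero with the classic
        # resolved-rate law v = -gain * pinv(L) @ e, where L maps robot
        # velocity to feature velocity (s_dot = L @ v).
        e = s - s_star
        return -gain * np.linalg.pinv(L) @ e

    # Toy example: 4 scalar features, 6-DOF velocity command (placeholder values).
    rng = np.random.default_rng(0)
    L = rng.normal(size=(4, 6))          # placeholder interaction matrix
    s, s_star = np.array([0.2, -0.1, 0.05, 0.3]), np.zeros(4)
    print(sensor_servo_velocity(s, s_star, L))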
Experimental evidence of effective human–AI collaboration in medical decision-making
Artificial Intelligence (AI) systems are a valuable support for decision-making, with many applications, including in the medical domain. The interaction between MDs and AI enjoys renewed interest following the increased possibilities of deep learning devices. However, we still have limited evidence-based knowledge of the context, design, and psychological mechanisms that craft an optimal human–AI collaboration. In this multicentric study, 21 endoscopists reviewed 504 videos of lesions prospectively acquired from real colonoscopies. They were asked to provide an optical diagnosis with and without the assistance of an AI support system. Endoscopists were influenced by the AI (OR = 3.05), but not erratically: they followed the AI advice more when it was correct (OR = 3.48) than when it was incorrect (OR = 1.85). Endoscopists achieved this outcome through a weighted integration of their own and the AI's opinions, considering case-by-case estimations of the two reliabilities. This Bayesian-like rational behavior allowed the human–AI hybrid team to outperform both agents taken alone. We discuss the features of the human–AI interaction that determined this favorable outcome.
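The integration model itself is not given in the abstract. As a hedged sketch of how such reliability-weighted integration can be realized, the snippet below pools two probabilistic opinions by summing their log-odds with per-case reliability weights; all numbers are invented for illustration, and this is a generic pooling rule, not the model fitted in the study.

    import math

    def combine_opinions(p_human, p_ai, w_human, w_ai):
        # Reliability-weighted log-odds pooling of two probability estimates.
        logit = lambda p: math.log(p / (1.0 - p))
        z = w_human * logit(p_human) + w_ai * logit(p_ai)
        return 1.0 / (1.0 + math.exp(-z))

    # When the AI is judged more reliable on a case, its opinion dominates.
    print(combine_opinions(p_human=0.4, p_ai=0.8, w_human=0.5, w_ai=1.0))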
A Deep Learning Framework for Recognizing Both Static and Dynamic Gestures
Intuitive user interfaces are indispensable for interacting with human-centric smart environments. In this paper, we propose a unified framework that recognizes both static and dynamic gestures using simple RGB vision (without depth sensing). This feature makes it suitable for inexpensive human-robot interaction in social or industrial settings. We employ a pose-driven spatial attention strategy, which guides our proposed Static and Dynamic gestures Network (StaDNet). From the image of the human upper body, we estimate the person's depth, along with the regions of interest around his/her hands. The Convolutional Neural Network (CNN) in StaDNet is fine-tuned on a background-substituted hand gestures dataset. It is utilized to detect 10 static gestures for each hand as well as to obtain the hand image-embeddings. These are subsequently fused with the augmented pose vector and then passed to stacked Long Short-Term Memory blocks. Thus, human-centred frame-wise information from the augmented pose vector and from the left/right hand image-embeddings is aggregated in time to predict the dynamic gestures of the performing person. In a number of experiments, we show that the proposed approach surpasses the state-of-the-art results on the large-scale ChaLearn 2016 dataset. Moreover, we transfer the knowledge learned through the proposed methodology to the Praxis gestures dataset, and the obtained results also surpass the state of the art on this dataset.
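StaDNet's exact layers and sizes are not given in the abstract; the PyTorch sketch below only mirrors the described data flow (frame-wise hand image-embeddings fused with an augmented pose vector, then stacked LSTMs for temporal aggregation), with all dimensions invented.

    import torch
    import torch.nn as nn

    class GestureFusionSketch(nn.Module):
        # Illustrative data flow only: layer sizes are invented and do not
        # reproduce StaDNet's actual architecture.
        def __init__(self, emb=128, pose=32, hidden=256, n_static=10, n_dynamic=20):
            super().__init__()
            self.static_head = nn.Linear(emb, n_static)   # per-hand static gestures
            self.lstm = nn.LSTM(2 * emb + pose, hidden,
                                num_layers=2, batch_first=True)  # stacked LSTM
            self.dynamic_head = nn.Linear(hidden, n_dynamic)

        def forward(self, left_emb, right_emb, pose_vec):
            # left_emb, right_emb: (batch, time, emb); pose_vec: (batch, time, pose)
            static_logits = self.static_head(left_emb)    # frame-wise, left hand
            fused = torch.cat([left_emb, right_emb, pose_vec], dim=-1)
            seq, _ = self.lstm(fused)                     # aggregate over time
            return static_logits, self.dynamic_head(seq[:, -1])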
A Review of the Technology, Training, and Assessment Methods for the First Real-Time AI-Enhanced Medical Device for Endoscopy
Artificial intelligence (AI) has the potential to assist in endoscopy and improve decision-making, particularly in situations where humans may make inconsistent judgments. The performance assessment of medical devices operating in this context is a complex combination of bench tests, randomized controlled trials, and studies on the interaction between physicians and AI. We review the scientific evidence published about GI Genius, the first AI-powered medical device for colonoscopy to enter the market, and the device most widely tested by the scientific community. We provide an overview of its technical architecture, AI training and testing strategies, and regulatory path. In addition, we discuss the strengths and limitations of the current platform and its potential impact on clinical practice. The details of the algorithm architecture and the data used to train the AI device have been disclosed to the scientific community in the pursuit of a transparent AI. Overall, the first AI-enabled medical device for real-time video analysis represents a significant advancement in the use of AI for endoscopy and has the potential to improve the accuracy and efficiency of colonoscopy procedures.
Automatic Detection of White Matter Hyperintensities in Healthy Aging and Pathology Using Magnetic Resonance Imaging: A Review
White matter hyperintensities (WMH) are commonly seen in the brain of healthy elderly subjects and patients with several neurological and vascular disorders. A truly reliable and fully automated method for quantitative assessment of WMH on magnetic resonance imaging (MRI) has not yet been identified. In this paper, we review and compare the large number of automated approaches proposed for segmentation of WMH in the elderly and in patients with vascular risk factors. We conclude that, in order to avoid artifacts and exclude the several sources of bias that may influence the analysis, an optimal method should comprise a careful preprocessing of the images, be based on multimodal, complementary data, take into account spatial information about the lesions and correct for false positives. All these features should not exclude computational leanness and adaptability to available data.
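To make the review's recommendations concrete, here is a toy Python pipeline with the recommended ingredients (preprocessing, multimodal complementary data, false-positive control); the specific operations are simplistic stand-ins invented for illustration, not methods from the surveyed literature.

    import numpy as np
    from scipy import ndimage

    def segment_wmh_sketch(flair, t1, z_thresh=2.0, min_cluster=5):
        # flair, t1: co-registered 3-D arrays of the same shape.
        # Preprocessing stand-in: per-modality intensity normalisation.
        z = lambda img: (img - img.mean()) / img.std()
        flair_z, t1_z = z(flair), z(t1)
        # Multimodal, complementary rule: hyperintense on FLAIR, not on T1.
        candidates = (flair_z > z_thresh) & (t1_z < z_thresh)
        # Crude false-positive control: drop tiny isolated clusters.
        labels, n = ndimage.label(candidates)
        sizes = ndimage.sum(candidates, labels, index=range(1, n + 1))
        keep_ids = np.nonzero(sizes >= min_cluster)[0] + 1
        return np.isin(labels, keep_ids)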
The Sensor-Based Biomechanical Risk Assessment at the Base of the Need for Revising of Standards for Human Ergonomics
Due to the epochal changes introduced by "Industry 4.0", it is getting harder to apply the varying approaches for biomechanical risk assessment of manual handling tasks used to prevent work-related musculoskeletal disorders (WMDs) considered within the International Standards for ergonomics. In fact, innovative human–robot collaboration (HRC) systems are widening the number of work motor tasks that cannot be assessed. On the other hand, new sensor-based tools for biomechanical risk assessment could be used both for quantitative "direct instrumental evaluations" and for "rating of standard methods", allowing certain improvements over traditional methods. In this light, this Letter aims to establish the need for revising the standards for human ergonomics and biomechanical risk assessment by analyzing WMD prevalence and incidence; additionally, the strengths and weaknesses of the traditional methods listed within the International Standards for manual handling activities, and the challenges to be met in their revision, are considered. As a representative example, the discussion focuses on the lifting of heavy loads, where the revision should include the use of sensor-based tools for biomechanical risk assessment during lifting performed with the use of exoskeletons, by more than one person (team lifting), and when the traditional methods cannot be applied. The wearability of sensing and feedback devices, in addition to human augmentation technologies, allows for increasing workers' awareness of possible risks and enhancing effectiveness and safety during the execution of many manual handling activities.
Interdisciplinary evaluation of a robot physically collaborating with workers
Collaborative Robots (CoBots) are emerging as a promising technological aid for workers. To date, most CoBots merely share their workspace with their human partners, or collaborate with them without contact. We claim that robots would be much more beneficial if they physically collaborated with the worker on high-payload tasks. To move high payloads while remaining safe, the robot should use two or more lightweight arms. In this work, we address the following question: to what extent can robots help workers in physical human-robot collaboration tasks? To find an answer, we gathered an interdisciplinary group, spanning from an industrial end user to cognitive ergonomists, and including biomechanicians and roboticists. We drew inspiration from an industrial process carried out repetitively by workers of the SME HANKAMP (Netherlands). Eleven participants replicated the process, without and with the help of a robot. During the task, we monitored the participants' biomechanical activity. After the task, the participants completed a survey with usability and acceptability measures; seven workers of the SME completed the same survey. The results of our research are the following. First, by applying Potvin's method (for the first time in collaborative robotics), we show that the robot substantially reduces the participants' muscular effort. Second, we design and present an unprecedented method for measuring the robot's reliability and reproducibility in collaborative scenarios. Third, by correlating the worker's effort with the power measured by the robot, we show that the two agents act in energetic synergy. Fourth, a participant's increasing level of experience with robots shifts his/her focus from the robot's overall functionality towards finer expectations. Last but not least, workers and participants are willing to work with the robot and think it is useful.
REAL-Colon: A dataset for developing real-world AI applications in colonoscopy
Detection and diagnosis of colon polyps are key to preventing colorectal cancer. Recent evidence suggests that AI-based computer-aided detection (CADe) and computer-aided diagnosis (CADx) systems can enhance endoscopists' performance and boost colonoscopy effectiveness. However, most available public datasets primarily consist of still images or video clips, often at a down-sampled resolution, and do not accurately represent real-world colonoscopy procedures. We introduce the REAL-Colon (Real-world multi-center Endoscopy Annotated video Library) dataset: a compilation of 2.7M native video frames from sixty full-resolution, real-world colonoscopy recordings across multiple centers. The dataset contains 350k bounding-box annotations, each created under the supervision of expert gastroenterologists. Comprehensive patient clinical data, colonoscopy acquisition information, and polyp histopathological information are also included with each video. With its unprecedented size, quality, and heterogeneity, the REAL-Colon dataset is a unique resource for researchers and developers aiming to advance AI research in colonoscopy. Its openness and transparency facilitate rigorous and reproducible research, fostering the development and benchmarking of more accurate and reliable colonoscopy-related algorithms and models.
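A dataset of this shape is typically consumed by iterating frames and their bounding boxes. The loader below assumes a hypothetical per-video JSON annotation file; the file name and field names are invented, so both must be adapted to the actual REAL-Colon release format.

    import json
    from pathlib import Path

    def iter_polyp_boxes(annotation_file):
        # Hypothetical schema: a list of records, each with a frame id and
        # zero or more [x, y, w, h] boxes. Adapt to the published format.
        records = json.loads(Path(annotation_file).read_text())
        for rec in records:
            for box in rec.get("boxes", []):
                yield rec["frame_id"], box

    for frame_id, box in iter_polyp_boxes("video_0001_annotations.json"):
        print(frame_id, box)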
Aging of subcortical nuclei: Microstructural, mineralization and atrophy modifications measured in vivo using MRI
In the present study, we characterized the physiological aging of deep grey matter nuclei by simultaneously measuring quantitative magnetic resonance parameters sensitive to complementary tissue characteristics (volume atrophy, iron deposition, microstructural damage) in seven different structures in 100 healthy subjects. Large age-related variations were observed in the thalamus, putamen and caudate. No significant correlations with age were observed in the hippocampus, amygdala, pallidum, or accumbens. Multiple regression analyses of advanced imaging data revealed that the best predictors of physiological aging were the mean relaxation time (T2*) of the putamen and the volume and mean diffusivity of the thalamus. These three parameters accounted for over 70% of the age variance in a linear model comprising 100 healthy subjects, aged from 20 to 70 years. Importantly, the statistical analyses highlighted characteristic patterns of variation for the measurements in the various structures evaluated in this study. These findings contribute to establishing a baseline for comparison with pathological changes in the basal ganglia and thalamus.
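As an illustration of the kind of linear model reported (age regressed on putamen T2*, thalamus volume, and thalamus mean diffusivity), the snippet below fits ordinary least squares on synthetic stand-in data; the coefficients and resulting R² are invented, not the study's.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100
    # Columns: putamen T2*, thalamus volume, thalamus mean diffusivity (synthetic).
    X = rng.normal(size=(n, 3))
    age = 45 + X @ np.array([-5.0, -6.0, 7.0]) + rng.normal(scale=5.0, size=n)

    # Ordinary least squares with an intercept term.
    A = np.column_stack([np.ones(n), X])
    coef, *_ = np.linalg.lstsq(A, age, rcond=None)
    pred = A @ coef
    r2 = 1 - ((age - pred) ** 2).sum() / ((age - age.mean()) ** 2).sum()
    print(f"R^2 = {r2:.2f}")  # the study reports >70% explained variance on real data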
A Lyapunov-Stable Adaptive Method to Approximate Sensorimotor Models for Sensor-Based Control
In this article, we present a new scheme that approximates unknown sensorimotor models of robots by using feedback signals only. The uncalibrated sensor-based regulation problem is first formulated; then, we develop a computational method that distributes the model estimation problem amongst multiple adaptive units, each specialising in a local sensorimotor map. Unlike traditional estimation algorithms, the proposed method requires little data to train and constrain it (the number of required data points can be analytically determined) and has rigorous stability properties (the conditions to satisfy Lyapunov stability are derived). Numerical simulations and experimental results are presented to validate the proposed method.
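The multi-unit adaptive scheme itself is not detailed in the abstract. For reference, the classic Broyden secant rule below estimates a sensorimotor Jacobian from feedback alone; it is representative of the baseline family of uncalibrated methods such schemes extend, and all values are illustrative.

    import numpy as np

    def broyden_update(J, dq, ds, alpha=0.5):
        # After a joint move dq produced a feature change ds, correct the
        # Jacobian estimate so that J @ dq better matches ds.
        denom = dq @ dq
        if denom > 1e-12:
            J = J + alpha * np.outer(ds - J @ dq, dq) / denom
        return J

    # Toy use: recover a hidden 2x2 sensorimotor map from observed moves.
    true_J = np.array([[1.0, 0.5], [-0.3, 2.0]])
    J, rng = np.eye(2), np.random.default_rng(1)
    for _ in range(200):
        dq = rng.normal(scale=0.05, size=2)
        J = broyden_update(J, dq, true_J @ dq)
    print(np.round(J, 2))  # approaches true_J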