Catalogue Search | MBRL
Explore the vast range of titles available.
130 result(s) for "Rosenthal, Stephanie"
Is Someone in this Office Available to Help Me?
by Rosenthal, Stephanie; Dey, Anind K.; Veloso, Manuela
in Artificial Intelligence; Availability; Control
2012
Robots are increasingly autonomous in our environments, but they still must overcome limited sensing, reasoning, and actuating capabilities while completing services for humans. While some work has focused on robots that proactively request help from humans to reduce their limitations, the work often assumes that humans are supervising the robot and always available to help. In this work, we instead investigate the feasibility of asking for help from humans in the environment who benefit from its services. Unlike other human helpers that constantly monitor a robot’s progress, humans in the environment are not supervisors and a robot must proactively navigate to them to receive help. We contribute a study that shows that several of our environment occupants are willing to help our robot, but, as expected, they have constraints that limit their availability due to their own work schedules. Interestingly, the study further shows that an available human is not always in close proximity to the robot. We present an extended model that includes the availability of humans in the environment, and demonstrate how a navigation planner can incorporate this information to plan paths that increase the likelihood that a robot can find an available helper when it needs one. Finally, we discuss further opportunities for the robot to adapt and learn from the occupants over time.
Journal Article
Acquiring Accurate Human Responses to Robots’ Questions
2012
In task-oriented robot domains, a human is often designated as a supervisor to monitor the robot and correct its inferences about its state during execution. However, supervision is expensive in terms of human effort. Instead, we are interested in robots asking non-supervisors in the environment for state inference help. The challenge with asking non-supervisors for help is that they may not always understand the robot’s state or question and may respond inaccurately as a result. We identify four different types of state information that a robot can include to ground non-supervisors when it requests help—namely context around the robot, the inferred state prediction, prediction uncertainty, and feedback about the sensors used for the predicting the robot’s state. We contribute two wizard-of-oz’d user studies to test which combination of this state information increases the accuracy of non-supervisors’ responses. In the first study, we consider a block-construction task and use a toy robot to study questions regarding shape recognition. In the second study, we use our real mobile robot to study questions regarding localization. In both studies, we identify the same combination of information that increases the accuracy of responses the most. We validate that our combination results in more accurate responses than a combination that a set of HRI experts predicted would be best. Finally, we discuss the appropriateness of our found best combination of information to other task-driven robots.
Journal Article
Reports of the 2016 AAAI Workshop Program
by Fortuna, Blaz; Sanner, Scott; Son, Tran Cao
in Adaptive technology; Artificial intelligence; Conferences, meetings and seminars
2016
The Workshop Program of the Association for the Advancement of Artificial Intelligence's Thirtieth AAAI Conference on Artificial Intelligence (AAAI‐16) was held at the beginning of the conference, February 12–13, 2016. Workshop participants met and discussed issues with a selected focus, and the workshop provided an informal setting for active exchange among researchers, developers, and users on topics of current interest. The AAAI‐16 workshops were an excellent forum for exploring emerging approaches and task areas, for bridging the gaps between AI and other fields or between subfields of AI, for elucidating the results of exploratory research, or for critiquing existing approaches. The 15 workshops held at AAAI‐16 were Artificial Intelligence Applied to Assistive Technologies and Smart Environments (WS‐16‐01), AI, Ethics, and Society (WS‐16‐02), Artificial Intelligence for Cyber Security (WS‐16‐03), Artificial Intelligence for Smart Grids and Smart Buildings (WS‐16‐04), Beyond NP (WS‐16‐05), Computer Poker and Imperfect Information Games (WS‐16‐06), Declarative Learning Based Programming (WS‐16‐07), Expanding the Boundaries of Health Informatics Using AI (WS‐16‐08), Incentives and Trust in Electronic Communities (WS‐16‐09), Knowledge Extraction from Text (WS‐16‐10), Multiagent Interaction Without Prior Coordination (WS‐16‐11), Planning for Hybrid Systems (WS‐16‐12), Scholarly Big Data: AI Perspectives, Challenges, and Ideas (WS‐16‐13), Symbiotic Cognitive Systems (WS‐16‐14), and World Wide Web and Population Health Intelligence (WS‐16‐15).
Journal Article
Human-Centered Planning for Effective Task Autonomy
2012
Increasingly available mobile devices (e.g., mobile robots, smart phones) are becoming more intelligent in their ability to autonomously perform tasks for users. However, when deployed in complex human environments, these devices still face many sensing, reasoning, and actuation limitations. To overcome limitations, we propose symbiotic relationships as those in which the device can request help from humans in the environment while it performs tasks for them. Because the devices are performing tasks for humans, humans have incentive to help the device complete its tasks effectively. However, they may not always be available or willing to help. We introduce human-centered planning to model and reason about humans in the environment in addition to their own state and goals to determine how to act and whether, who, and how to seek help. The thesis first contributes an understanding of what and how to model humans in the environment through user studies. We first evaluate whether attributes such as availability and interruptibility affect willingness to help. Then, we contribute to the understanding of how to ask humans for help to increase the accuracy of their responses. We show that providing humans with device context, classification prediction and uncertainty, and additional feedback all increase the accuracy of human responses to device questions. Finally, we contribute algorithms to learn these models both through surveys and online while the device is performing tasks. The thesis then introduces human-centered conditional, deliberative, and replanning algorithms that use models of humans. We contribute conditional plans that include asking actions to enable devices to perform tasks that they could not otherwise perform. We then contribute a human-centered deliberative planner for a robot to use to determine which navigational path to take that minimizes its uncertainty and maximizes the likelihood of finding available human helpers. 
Finally, we contribute a replanning algorithm for a robot to determine which helper should travel to assist at a particular location, such as an elevator or kitchen. Through extensive experiments and deployments, in particular with a mobile service robot, this thesis shows that human-centered algorithms that trade off task performance against the costs of asking for and interrupting human helpers increase functionality while maintaining usability.
Dissertation
Impact of Explanation on Trust of a Novel Mobile Robot
2021
One challenge with introducing robots into novel environments is misalignment between supervisor expectations and reality, which can greatly affect a user's trust and continued use of the robot. We performed an experiment to test whether the presence of an explanation of expected robot behavior affected a supervisor's trust in an autonomous robot. We measured trust both subjectively through surveys and objectively through a dual-task experiment design to capture supervisors' neglect tolerance (i.e., their willingness to perform their own task while the robot is acting autonomously). Our objective results show that explanations can help counteract the novelty effect of seeing a new robot perform in an unknown environment. Participants who received an explanation of the robot's behavior were more likely to focus on their own task at the risk of neglecting their robot supervision task during the first trials of the robot's behavior compared to those who did not receive an explanation. However, this effect diminished after seeing multiple trials, and participants who received explanations were equally trusting of the robot's behavior as those who did not receive explanations. Interestingly, participants were not able to identify their own changes in trust through their survey responses, demonstrating that the dual-task design measured subtler changes in a supervisor's trust.
SalienTrack: providing salient information for semi-automated self-tracking feedback with model explanations
2022
Self-tracking can improve people's awareness of their unhealthy behaviors and support reflection to inform behavior change. Increasingly, new technologies make tracking easier, leading to large amounts of tracked data. However, much of that information is not salient for reflection and self-awareness. To tackle this burden for reflection, we created the SalienTrack framework, which aims to 1) identify salient tracking events, 2) select the salient details of those events, 3) explain why they are informative, and 4) present the details as manually elicited or automatically shown feedback. We implemented SalienTrack in the context of nutrition tracking. To do this, we first conducted a field study to collect photo-based mobile food tracking over 1-5 weeks. We then report how we used this data to train an explainable-AI model of salience. Finally, we created interfaces to present salient information and conducted a formative user study to gain insights about how SalienTrack could be integrated into an interface for reflection. Our key contributions are the SalienTrack framework, a demonstration of its implementation for semi-automated feedback in an important and challenging self-tracking context and a discussion of the broader uses of the framework.
UAV and Service Robot Coordination for Indoor Object Search Tasks
by Veloso, Manuela; Rosenthal, Stephanie; Konam, Sandeep
in Algorithms; Desks; Indoor environments
2017
Our CoBot robots have successfully performed a variety of service tasks in our multi-building environment, including accompanying people to meetings and delivering objects to offices, thanks to their navigation and localization capabilities. However, they lack the capability to visually search over desks and other confined locations for an object of interest. Conversely, an inexpensive GPS-denied quadcopter platform such as the Parrot ARDrone 2.0 could perform this object search task if it had access to reasonable localization. In this paper, we propose the concept of coordination between CoBot and the Parrot ARDrone 2.0 to perform service-based object search tasks, in which CoBot localizes and navigates to the general search areas carrying the ARDrone, and the ARDrone searches locally for objects. We propose a vision-based moving target navigation algorithm that enables the ARDrone to localize with respect to CoBot, search for objects, and return to the CoBot for future searches. We demonstrate our algorithm in indoor environments on several search trajectories.
Understanding Convolutional Networks with APPLE: Automatic Patch Pattern Labeling for Explanation
by Veloso, Manuela; Quah, Ian; Rosenthal, Stephanie
in Algorithms; Artificial neural networks; Image classification
2018
With the success of deep learning, recent efforts have been focused on analyzing how learned networks make their classifications. We are interested in analyzing the network output based on the network structure and information flow through the network layers. We contribute an algorithm for 1) analyzing a deep network to find neurons that are 'important' in terms of the network classification outcome, and 2) automatically labeling the patches of the input image that activate these important neurons. We propose several measures of importance for neurons and demonstrate that our technique can be used to gain insight into, and explain how, a network decomposes an image to make its final classification.
Towards Visual Explanations for Convolutional Neural Networks via Input Resampling
by Lengerich, Benjamin J; Xing, Eric P; Veloso, Manuela
in Activation; Artificial neural networks; Information flow
2017
The predictive power of neural networks often costs model interpretability. Several techniques have been developed for explaining model outputs in terms of input features; however, it is difficult to translate such interpretations into actionable insight. Here, we propose a framework to analyze predictions in terms of the model's internal features by inspecting information flow through the network. Given a trained network and a test image, we select neurons by two metrics, both measured over a set of images created by perturbations to the input image: (1) magnitude of the correlation between the neuron activation and the network output and (2) precision of the neuron activation. We show that the former metric selects neurons that exert large influence over the network output while the latter metric selects neurons that activate on generalizable features. By comparing the sets of neurons selected by these two metrics, our framework suggests a way to investigate the internal attention mechanisms of convolutional neural networks.
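The two neuron-selection metrics described in this last abstract can be sketched in a few lines. This is only a minimal illustration under stated assumptions, not the paper's implementation: the `select_neurons` helper, the fixed activation threshold, and the simplified precision definition (the fraction of a neuron's firings that coincide with high network output) are assumptions made for the sketch; the paper defines its metrics over perturbations of a specific input image.

```python
import numpy as np

def select_neurons(activations, outputs, threshold=0.5):
    """Score neurons over a set of perturbed input images.

    activations: (n_images, n_neurons) neuron activations, one row per
        perturbed version of the input image.
    outputs: (n_images,) network output (e.g., target class score).
    Returns two arrays of shape (n_neurons,): correlation magnitude and
    a simple precision score per neuron.
    """
    # Metric 1: magnitude of the correlation between each neuron's
    # activation and the network output across the perturbed images.
    act_c = activations - activations.mean(axis=0)
    out_c = outputs - outputs.mean()
    denom = np.linalg.norm(act_c, axis=0) * np.linalg.norm(out_c) + 1e-12
    correlation = np.abs(act_c.T @ out_c) / denom

    # Metric 2 (assumed definition): precision of the neuron activation,
    # here the fraction of images where the neuron fires (activation
    # above threshold) that also have a high network output.
    fires = activations > threshold
    high = outputs > np.median(outputs)
    precision = (fires & high[:, None]).sum(axis=0) / np.maximum(fires.sum(axis=0), 1)
    return correlation, precision
```

Comparing the two scores per neuron then separates neurons that strongly drive the output (high correlation) from neurons that fire on generalizable features (high precision), mirroring the comparison the abstract describes.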