Catalogue Search | MBRL
Search Results
115,099 result(s) for "Interactive systems"
Integrating Virtual Reality, EEG Signal, and Haptic Feedback to Support Post-Stroke Rehabilitation System
2025
One of the problems in post-stroke rehabilitation is the low effectiveness of supporting systems caused by a lack of feedback to the patient. This problem can be mitigated by increasing the immersiveness and interactivity of the system. This study utilized a system integrating Virtual Reality (VR), haptic feedback, and Electroencephalography (EEG). As shown in previous studies, a VR system can provide immersive visual feedback to the user in real time, while haptic feedback can draw the user's attention without disturbing the visual feedback. A BCI system has also been used as an interactive input to stimulate the user's brain nerves. This study integrates these three subsystems to work together in real time as a single system for post-stroke rehabilitation. The study also found that a system integrating VR, haptics, and BCI can meet the critical factors of an interactive system: presence, immersion, interactivity, satisfaction, and usefulness.
Journal Article
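The record above describes integrating VR, an EEG-based BCI, and haptic feedback into one real-time loop. The paper does not publish code; the following is a minimal sketch under that reading, with hypothetical stand-in device classes (EEGStream, VRScene, HapticGlove) that are not taken from the paper or any specific SDK.

```python
# Hypothetical sketch of a real-time VR + EEG (BCI) + haptic feedback loop.
# Device classes are illustrative stand-ins, not APIs from the paper.
import random
import time

class EEGStream:
    """Stand-in for an EEG acquisition/classification pipeline."""
    def read_motor_imagery(self) -> float:
        # Confidence in [0, 1] that the patient imagined a hand movement.
        return random.random()

class VRScene:
    """Stand-in for the VR rendering layer."""
    def animate_hand(self, progress: float) -> None:
        print(f"VR: hand animation at {progress:.0%}")

class HapticGlove:
    """Stand-in for the haptic actuator."""
    def vibrate(self, intensity: float) -> None:
        print(f"Haptics: vibration intensity {intensity:.2f}")

def rehabilitation_loop(cycles: int = 5, threshold: float = 0.6) -> None:
    eeg, vr, glove = EEGStream(), VRScene(), HapticGlove()
    for _ in range(cycles):
        confidence = eeg.read_motor_imagery()   # interactive BCI input
        if confidence >= threshold:
            vr.animate_hand(confidence)         # immersive visual feedback
            glove.vibrate(confidence)           # attention cue that does not occlude the view
        time.sleep(0.1)                         # keep the loop close to real time

if __name__ == "__main__":
    rehabilitation_loop()
```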
User-defined semantics for the design of IoT systems enabling smart interactive experiences
by Malizia Alessio, Desolda Giuseppe, Lanzilotti Rosa
in Automation, Cultural heritage, Cultural resources
2020
Automation in computing systems has always been considered a valuable solution to unburden the user. Internet of Things (IoT) technology best suits automation in different domains, such as home automation, retail, industry, and transportation, to name but a few. While these domains are strongly characterized by implicit user interaction, more recently, automation has also been adopted to provide interactive and immersive experiences that actively involve the users. IoT technology thus becomes the key for Smart Interactive Experiences (SIEs), i.e., immersive automated experiences created by orchestrating different devices to enable smart environments to fluidly react to the final users’ behavior. There are domains, e.g., cultural heritage, where these systems and SIEs can provide several benefits. However, experts in such domains, while intrigued by the opportunity to induce SIEs, face tough challenges in their everyday work activities when they are required to automate and orchestrate IoT devices without the necessary coding skills. This paper presents a design approach that tries to overcome these difficulties through the adoption of ontologies for defining Event-Condition-Action rules. More specifically, the approach enables domain experts to identify and specify properties of IoT devices through a user-defined semantics that, being closer to the domain experts’ background, makes it easier for them to automate the behavior of IoT devices. We also present a study comparing three different interaction paradigms conceived to support the specification of user-defined semantics through a “transparent” use of ontologies. Based on the results of this study, we derive some lessons learned on how the proposed paradigms help domain experts express their semantics, which in turn facilitates the creation of interactive applications enabling SIEs.
Journal Article
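The article above centers on Event-Condition-Action (ECA) rules that domain experts define over IoT devices. As an illustration only (the paper's ontology-based notation is not reproduced here), a minimal ECA rule engine might look like the sketch below; the device, event, and rule names are invented for the example.

```python
# Minimal Event-Condition-Action (ECA) sketch; event and device names are
# illustrative, not taken from the paper's ontology or any real IoT platform.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Event = Dict[str, object]

@dataclass
class ECARule:
    event_type: str                      # Event: what happened
    condition: Callable[[Event], bool]   # Condition: when the rule applies
    action: Callable[[Event], None]      # Action: what the environment should do

@dataclass
class RuleEngine:
    rules: List[ECARule] = field(default_factory=list)

    def dispatch(self, event: Event) -> None:
        for rule in self.rules:
            if event.get("type") == rule.event_type and rule.condition(event):
                rule.action(event)

# Example: when a visitor approaches an exhibit after opening hours, dim the spotlight.
engine = RuleEngine()
engine.rules.append(ECARule(
    event_type="visitor_detected",
    condition=lambda e: e.get("hour", 0) >= 18,
    action=lambda e: print(f"Dimming spotlight near {e['exhibit']}"),
))
engine.dispatch({"type": "visitor_detected", "exhibit": "Room 3", "hour": 19})
```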
Recent Advances in Flexible Piezoresistive Arrays: Materials, Design, and Applications
2023
Spatial distribution perception has become an important trend for flexible pressure sensors, which endows wearable health devices, bionic robots, and human–machine interactive interfaces (HMI) with more precise tactile perception capabilities. Flexible pressure sensor arrays can monitor and extract abundant health information to assist in medical detection and diagnosis. Bionic robots and HMI with higher tactile perception abilities will maximize the freedom of human hands. Flexible arrays based on piezoresistive mechanisms have been extensively researched due to their high pressure-sensing performance and simple readout principles. This review summarizes multiple considerations in the design of flexible piezoresistive arrays and recent advances in their development. First, frequently used piezoresistive materials and microstructures are introduced, along with various strategies to improve sensor performance. Second, pressure sensor arrays with spatial distribution perception capability are discussed in detail. Crosstalk is a particular concern for sensor arrays, so mechanical and electrical sources of crosstalk and the corresponding solutions are highlighted. Third, several processing methods are introduced, classified as printing, field-assisted, and laser-assisted fabrication. Next, representative applications of flexible piezoresistive arrays are presented, including human-interactive systems, healthcare devices, and other scenarios. Finally, outlooks on the development of piezoresistive arrays are given.
Journal Article
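The review above mentions the simple readout principle of piezoresistive arrays and the crosstalk problem. As a purely illustrative sketch not drawn from the review, a row-by-row scan of a resistive matrix can be outlined as follows; the resistance values and the resistance-to-pressure conversion are made up.

```python
# Illustrative row/column scan of a piezoresistive pressure-sensor array.
# Values and the conversion formula are invented for the example; real arrays
# also need crosstalk suppression, e.g. grounding unselected lines or adding a
# diode/transistor per cell.
from typing import List

def scan_array(resistances_kohm: List[List[float]], r0_kohm: float = 100.0) -> List[List[float]]:
    """Convert a matrix of cell resistances into relative pressure readings."""
    pressures = []
    for row in resistances_kohm:                    # drive one row line at a time
        pressures.append([
            max(0.0, (r0_kohm - r) / r0_kohm)       # lower resistance -> higher pressure
            for r in row                            # read each column of the driven row
        ])
    return pressures

if __name__ == "__main__":
    frame = [[100.0, 80.0, 100.0],
             [100.0, 40.0, 95.0]]
    for line in scan_array(frame):
        print(["%.2f" % p for p in line])
```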
Explaining machine learning models with interactive natural language conversations using TalkToModel
by Slack, Dylan; Krishna, Satyapriya; Lakkaraju, Himabindu
in 639/705/1042, 639/705/117, Datasets
2023
Practitioners increasingly use machine learning (ML) models, yet models have become more complex and harder to understand. To understand complex models, researchers have proposed techniques to explain model predictions. However, practitioners struggle to use explainability methods because they do not know which explanation to choose and how to interpret the explanation. Here we address the challenge of using explainability methods by proposing TalkToModel: an interactive dialogue system that explains ML models through natural language conversations. TalkToModel consists of three components: an adaptive dialogue engine that interprets natural language and generates meaningful responses; an execution component that constructs the explanations used in the conversation; and a conversational interface. In real-world evaluations, 73% of healthcare workers agreed they would use TalkToModel over existing systems for understanding a disease prediction model, and 85% of ML professionals agreed TalkToModel was easier to use, demonstrating that TalkToModel is highly effective for model explainability.
To ensure that a machine learning model has learned the intended features, it can be useful to have an explanation of why a specific output was given. Slack et al. have created a conversational environment, based on language models and feature importance, which can interactively explore explanations with questions asked in natural language.
Journal Article
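TalkToModel is described above as three components: a dialogue engine that parses natural language, an execution component that builds explanations, and a conversational interface. The sketch below is not the system's actual API; it is a toy illustration of that three-part structure, with placeholder parsing rules and placeholder explanation text.

```python
# Toy sketch of the three-component structure described for TalkToModel:
# dialogue engine -> execution component -> conversational interface.
# The parsing rules and "explanations" are placeholders, not the real system.

def dialogue_engine(utterance: str) -> dict:
    """Map a natural-language question to a structured request (very naive)."""
    text = utterance.lower()
    if "why" in text:
        return {"op": "explain_prediction"}
    if "important" in text:
        return {"op": "feature_importance"}
    return {"op": "unknown"}

def execution_component(request: dict) -> str:
    """Build the explanation that the conversation will present."""
    if request["op"] == "explain_prediction":
        return "The prediction was driven mainly by features A and B (placeholder output)."
    if request["op"] == "feature_importance":
        return "Global importance ranking: A > B > C (placeholder output)."
    return "Sorry, I did not understand the question."

def conversational_interface(utterance: str) -> str:
    """Glue the two components together for one conversational turn."""
    return execution_component(dialogue_engine(utterance))

if __name__ == "__main__":
    print(conversational_interface("Why was patient 12 predicted high-risk?"))
    print(conversational_interface("Which features are most important?"))
```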
Measuring perceived empathy in dialogue systems
2024
Dialogue systems (DSs), from Virtual Personal Assistants such as Siri, Cortana, and Alexa to state-of-the-art systems such as BlenderBot3 and ChatGPT, are already widely available, used in a variety of applications, and are increasingly part of many people’s lives. However, the task of enabling them to use empathetic language more convincingly is still an emerging research topic. Such systems generally make use of complex neural networks to learn the patterns of typical human language use, and the interactions in which the systems participate are usually mediated via either interactive text-based or speech-based interfaces. In human–human interaction, empathy has been shown to promote prosocial behaviour and improve interaction. In the context of dialogue systems, to advance the understanding of how perceptions of empathy affect interactions, it is necessary to bring greater clarity to how empathy is measured and assessed. Assessing the way dialogue systems create perceptions of empathy brings together a range of technological, psychological, and ethical considerations that merit greater scrutiny than they have received so far. However, there is currently no widely accepted evaluation method for determining the degree of empathy that any given system possesses (or, at least, appears to possess). Currently, different research teams use a variety of automated metrics, alongside different forms of subjective human assessment such as questionnaires, self-assessment measures, and narrative engagement scales. This diversity of evaluation practice means that, given two DSs, it is usually impossible to determine which of them conveys the greater degree of empathy in its dialogic exchanges with human users. Acknowledging this problem, the present article provides an overview of how empathy is measured in human–human interactions and considers some of the ways it is currently measured in human–DS interactions. Finally, it introduces a novel third-person analytical framework, called the Empathy Scale for Human–Computer Communication (ESHCC), to support greater uniformity in how perceived empathy is measured during interactions with state-of-the-art DSs.
Journal Article
CTTE: support for developing and analyzing task models for interactive system design
by Santoro, C.; Paterno, F.; Mori, G.
in Analytical models, Application software, Computer programs
2002
While task modeling and task-based design are entering current practice in the design of interactive software applications, there is still a lack of tools supporting the development and analysis of task models. Such tools should provide developers with ways to represent tasks, including their attributes, objects, and temporal and semantic relationships; to easily create, analyze, and modify such representations; and to simulate their dynamic behavior. In this paper, we present a tool, CTTE, that provides thorough support for developing and analyzing task models of cooperative applications, which can then be used to improve the design and evaluation of interactive software applications. We discuss how we have designed this environment and report on trials of its use.
Journal Article
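CTTE supports hierarchical task models whose subtasks are linked by temporal and semantic relationships; the ConcurTaskTrees notation it builds on uses temporal operators such as enabling and choice. The data structure below is a minimal, hypothetical illustration of such a model, not CTTE's own format or API.

```python
# Hypothetical representation of a hierarchical task model with temporal
# operators, loosely inspired by ConcurTaskTrees; not CTTE's file format.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Task:
    name: str
    operator: Optional[str] = None      # temporal operator combining the subtasks,
                                        # e.g. ">>" (enabling), "[]" (choice), "|||" (interleaving)
    subtasks: List["Task"] = field(default_factory=list)

    def show(self, depth: int = 0) -> None:
        label = self.name + (f"  [{self.operator}]" if self.operator else "")
        print("  " * depth + label)
        for sub in self.subtasks:
            sub.show(depth + 1)

# Example: withdrawing cash at an ATM, modelled as a sequence of enabled subtasks.
atm = Task("Withdraw cash", ">>", [
    Task("Insert card"),
    Task("Authenticate", ">>", [Task("Enter PIN"), Task("Confirm PIN")]),
    Task("Select amount"),
    Task("Collect cash and card"),
])
atm.show()
```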