Catalogue Search | MBRL
Explore the vast range of titles available.
172 result(s) for "human-system-interaction"
Mobile computer usability : an organizational personality perspective
The central thesis of this book is that to understand and enhance the usability of mobile computers, we must understand the union and continuity of the user's sociological (organizational) and psychological (personal) circumstances. Union and continuity constitute relationships that are not well understood, because previous researchers have not approached mobile usability from these premises. The book seeks to integrate the relationship between the user's sociological and psychological circumstances into a unified epistemology of mobile usability. The book's contributions are important because the nature of mobile computers and contemporary work practices increasingly draws the user's cognitive needs of existence (psychological frame) into the human-computer dyad that determines mobile usability. These union and continuity relationships matter for those who design, implement and manage mobile information systems in organisations and society. The contributions are also timely because mobile computing is becoming a predominant aspect of contemporary computing in organisations and society. The book's epistemology of mobile usability also suggests practical guidelines for the design, management, and implementation of mobile information systems in organisations and society.
Design of a Human-Centric Robotic System for User Support Based on Gaze Information
2025
Recent advancements in mechanization and automation have significantly transformed households and retail environments, with automated services becoming increasingly prevalent. In general, smart appliances utilizing IoT technology have gained widespread adoption, and computerized systems, such as self-checkout machines, are now commonplace in retail settings. However, these services require users to follow specific procedures and operate the systems according to predefined capabilities, which may exclude users who are unfamiliar with the systems or who require additional support. Although robots deliver essential services efficiently, their rigid designs limit their adaptability. By contrast, human service providers can flexibly tailor services by observing a customer's condition through visual and auditory cues. For robots to offer more inclusive and user-friendly services, they must be capable of assessing user conditions and adapting their behaviors accordingly. Therefore, this paper proposes a control support system that analyzes user gaze behavior during interactions with smart appliances to provide context-aware support. Gaze data were collected using HoloLens 2, a mixed reality device, allowing the system to deliver information tailored to the user's gaze direction. By providing an information support service through a robot based on an analysis of the user's gaze, the system could confirm the user's level of interest in the targeted environmental objects and, accordingly, provide a more convenient service tailored to the user. Finally, we discuss the effectiveness of the proposed human-centric robotic system through experiments.
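The record above does not give implementation details for mapping gaze to environmental objects; the following Python sketch illustrates one common approach under stated assumptions: angular proximity of an object to the gaze ray, plus per-object dwell time as a proxy for interest. All function names, parameters, and thresholds here are illustrative assumptions, not the authors' method.

```python
import numpy as np

def nearest_gazed_object(gaze_origin, gaze_dir, objects, max_angle_deg=5.0):
    """Return the label of the object closest to the gaze ray, or None.

    objects: dict mapping label -> 3D position (np.ndarray of shape (3,)).
    An object counts as "gazed at" when the angle between the gaze direction
    and the origin-to-object vector is below max_angle_deg (assumed threshold).
    """
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    best, best_angle = None, max_angle_deg
    for label, pos in objects.items():
        to_obj = pos - gaze_origin
        to_obj = to_obj / np.linalg.norm(to_obj)
        angle = np.degrees(np.arccos(np.clip(np.dot(gaze_dir, to_obj), -1.0, 1.0)))
        if angle < best_angle:
            best, best_angle = label, angle
    return best

def update_interest(dwell, gazed_label, dt):
    """Accumulate per-object dwell time as a simple proxy for user interest."""
    if gazed_label is not None:
        dwell[gazed_label] = dwell.get(gazed_label, 0.0) + dt
    return dwell
```

A robot controller built along these lines could then offer support for whichever object has accumulated the most recent dwell time.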
Journal Article
Designing voice user interfaces : principles of conversational experiences
\"Voice user interfaces (VUIs) are becoming all the rage today. But how do you build one that people can actually converse with? Whether you're designing a mobile app, a toy, or a device such as a home assistant, this practical book guides you through basic VUI design principles, helps you choose the right speech recognition engine, and shows you how to measure your VUI's performance and improve upon it\"--Back cover.
Can media richness and interaction act as stimulants to medical professionals’ learning persistence in MOOCs via fostering learning engagement?
2024
Purpose: The purpose of this study is to propose a research model based on the stimulus-organism-response (S-O-R) model to examine whether media richness (MR), human-system interaction (HSI) and human-human interaction (HHI), as technological feature antecedents to medical professionals' learning engagement (LE), can affect their learning persistence (LP) in massive open online courses (MOOCs).
Design/methodology/approach: Sample data for this study were collected from medical professionals at six university-/medical university-affiliated hospitals in Taiwan. A total of 600 questionnaires were distributed, and 309 (51.5%) usable questionnaires were analyzed using structural equation modeling.
Findings: This study confirmed that medical professionals' perceived MR, HSI and HHI in MOOCs positively affected their emotional LE, cognitive LE and social LE elicited by MOOCs, which together explained their LP in MOOCs. The results support all proposed hypotheses, and the research model accounts for 84.1% of the variance in medical professionals' LP in MOOCs.
Originality/value: This study uses the S-O-R model as a theoretical base to frame medical professionals' LP in MOOCs as a series of psychological processes affected by MR and interaction (i.e. HSI and HHI). Notably, three psychological constructs, emotional LE, cognitive LE and social LE, are adopted to represent the organism component of medical professionals' MOOC adoption. To date, hedonic/utilitarian concepts have more commonly been adopted as organisms in prior studies using the S-O-R model, and psychological constructs have received less attention. Hence, this study extends the S-O-R model to a valuable context and documents the contribution of psychological constructs in fully explaining how three types of technological features, as external stimuli, shape medical professionals' LP in MOOCs.
Journal Article
Personas : user focused design
People relate to other people [empathetically]; not to simplified types [stereotypes] or segments ... [This work] covers issues from interaction design within IT through issues surrounding product design, communication, and marketing.-- Project developers need to understand how users approach their product from the product's infancy. Developers should be able to envision the user via vivid depictions ["scenarios"], as if they -- with their different attitudes, desires, and habits -- were already using the product. [Includes] contributions from professionals from Australia, Brazil, [etc.] presenting real-world examples of the persona method.-- Back cover.
Time Orientation Technologies in Special Education
by García-Camino, Mercedes; Guillomía, Miguel Angel; Artigas, José Ignacio
in accessible interfaces, ambient assisted living (AAL), assistive technology
2019
A device to train children in time orientation has been designed, developed and evaluated. It is framed within a long-term cooperation between a university and special education schools. It uses a cognitively accessible time display: time left in the day is represented by a row of luminous elements that are initially all on, and the passage of time is represented by turning off one luminous element every 15 min. The agenda is displayed by relating time to tasks with standard pictograms for further accessibility. Notifications of upcoming tasks, both for management support and for anticipating changes, use visual and auditory information. The agenda can be described in an Augmentative and Alternative Communication pictogram language already used by the children, supporting individual and class activities on the agenda. Validation has been performed with 16 children in 12 classrooms of four special education schools. The evaluation methodology compares prior and posterior assessments based on the International Classification of Functioning, Disability and Health (ICF) of the World Health Organization (WHO), together with observation records. Results show consistent improvement in performance related to time orientation.
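The record above describes the display logic in prose only; as a rough illustration, the Python sketch below computes how many luminous elements should remain lit given the current time and an assumed start and end of the school day (both parameters are assumptions for illustration, not values from the paper).

```python
from datetime import datetime, time

MINUTES_PER_ELEMENT = 15  # one luminous element turns off every 15 minutes

def elements_remaining(now: datetime, day_start: time, day_end: time) -> int:
    """Number of luminous elements still on: one per 15-minute slot left in the day."""
    start = datetime.combine(now.date(), day_start)
    end = datetime.combine(now.date(), day_end)
    total_slots = int((end - start).total_seconds() // 60) // MINUTES_PER_ELEMENT
    elapsed_slots = max(0, int((now - start).total_seconds() // 60)) // MINUTES_PER_ELEMENT
    return max(0, total_slots - elapsed_slots)

# Example: at 10:37 on an assumed 09:00-17:00 day, 26 of the 32 elements are still on.
print(elements_remaining(datetime(2019, 5, 6, 10, 37), time(9, 0), time(17, 0)))
```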
Journal Article
Data integration by two-sensors in a LEAP-based Virtual Glove for human-system interaction
by Polsinelli Matteo; Theodoridou Eleni; Avola Danilo
in Artificial intelligence, Computer vision, Data integration
2021
Virtual Glove (VG) is a low-cost computer vision system that utilizes two orthogonal LEAP motion sensors to provide detailed 4D hand tracking in real time. VG can find many applications in the field of human-system interaction, such as remote control of machines or tele-rehabilitation. An innovative and efficient data-integration strategy for VG is proposed, based on velocity calculation, that selects data from one of the LEAPs at each time instant. When a joint of the hand model is hidden from a LEAP, its position is guessed and tends to flicker. Since VG uses two LEAP sensors, two spatial representations are available at each moment for each joint: the method selects, at each time instant, the one with the lower velocity. Choosing the smoother trajectory stabilizes VG and optimizes its precision, mitigates occlusions (parts of the hand, or handled objects, obscuring other hand parts) and, when both sensors see the same joint, reduces the number of outliers produced by hardware instabilities. The strategy is experimentally evaluated in terms of the reduction of outliers with respect to a previously used data selection strategy on VG, and the results are reported and discussed. In the future, an objective test set has to be designed and realized, also with the help of external precision positioning equipment, to allow a quantitative and objective evaluation of the gain in precision and, possibly, of the intrinsic limitations of the proposed strategy. Moreover, advanced Artificial Intelligence-based (AI-based) real-time data integration strategies, specific to VG, will be designed and tested on the resulting dataset.
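The selection rule described above (keep, for each joint, the sensor reading with the lower velocity) is simple enough to sketch. The following Python/NumPy snippet is a minimal illustration of that per-joint choice, assuming each sensor reports the same set of hand-model joints every frame; the array shapes and names are assumptions, not the authors' code.

```python
import numpy as np

def select_joint_positions(prev_a, curr_a, prev_b, curr_b, dt):
    """Per-joint selection between two LEAP sensors using the lower-velocity criterion.

    prev_*/curr_*: arrays of shape (n_joints, 3) holding each sensor's joint
    positions at the previous and current frame. For every joint, the reading
    whose estimated speed is lower (the smoother trajectory) is kept.
    """
    speed_a = np.linalg.norm(curr_a - prev_a, axis=1) / dt
    speed_b = np.linalg.norm(curr_b - prev_b, axis=1) / dt
    use_a = speed_a <= speed_b                      # boolean mask, one entry per joint
    return np.where(use_a[:, None], curr_a, curr_b)
```

The abstract does not specify which previous position the velocity is computed against (per-sensor history or the last fused estimate); the sketch uses per-sensor history.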
Journal Article
Effects of machine learning errors on human decision-making: manipulations of model accuracy, error types, and error importance
2024
This study addressed the cognitive impacts of providing correct and incorrect machine learning (ML) outputs in support of an object detection task. The study consisted of five experiments that manipulated the accuracy and importance of mock ML outputs. In each of the experiments, participants were given the T and L task with T-shaped targets and L-shaped distractors. They were tasked with categorizing each image as target present or target absent. In Experiment 1, they performed this task without the aid of ML outputs. In Experiments 2–5, they were shown images with bounding boxes, representing the output of an ML model. The outputs could be correct (hits and correct rejections), or they could be erroneous (false alarms and misses). Experiment 2 manipulated the overall accuracy of these mock ML outputs. Experiment 3 manipulated the proportion of different types of errors. Experiments 4 and 5 manipulated the importance of specific types of stimuli or model errors, as well as the framing of the task in terms of human or model performance. These experiments showed that model misses were consistently harder for participants to detect than model false alarms. In general, as the model’s performance increased, human performance increased as well, but in many cases the participants were more likely to overlook model errors when the model had high accuracy overall. Warning participants to be on the lookout for specific types of model errors had very little impact on their performance. Overall, our results emphasize the importance of considering human cognition when determining what level of model performance and types of model errors are acceptable for a given task.
Significance Statement
As machine learning (ML) algorithms are adopted into more and more contexts, it is important to consider how these tools impact their end users' workflow and decision-making. In particular, it is crucial to consider how ML errors can impact human performance. Past research on automated systems shows that automation failures can be catastrophic. Yet much of the current ML research assumes that the algorithms will make very few errors. No algorithm will ever perform perfectly under all circumstances, but there has been relatively little research on the impact of ML errors on human decision-making. In high-consequence domains where ML stands to greatly benefit users, we must also address the potential impact of ML errors, especially if those errors are rare and difficult for users to notice. In this paper, we report a series of experiments that used tightly controlled mock ML outputs to test the impact of ML errors on human performance in a target detection task. We found that models with low overall accuracy could improve human performance, but participants were less likely to notice ML errors when the errors were rare. Missed targets were harder for participants to detect than model false alarms, and higher proportions of misses among the ML errors led to lower human performance. Finally, warning participants to watch for missed targets did not improve their performance, although warning them about model errors in general had some benefit. Our findings have implications for future research and development of ML algorithms that can provide better support to human decision-making.
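The signal-detection vocabulary used in this record (hits, misses, false alarms, correct rejections) can be made concrete with a short sketch; the Python snippet below simply tallies those four outcome types for mock ML outputs against ground truth. The names and structure are illustrative, not the study's analysis code.

```python
def classify_outputs(target_present, model_flagged):
    """Tally hits, misses, false alarms, and correct rejections.

    target_present / model_flagged: equal-length sequences of booleans,
    True when a target is actually in the image (ground truth) and when
    the mock ML model drew a bounding box (model output), respectively.
    """
    counts = {"hit": 0, "miss": 0, "false_alarm": 0, "correct_rejection": 0}
    for present, flagged in zip(target_present, model_flagged):
        if present and flagged:
            counts["hit"] += 1
        elif present and not flagged:
            counts["miss"] += 1          # the error type participants found hardest to detect
        elif flagged:
            counts["false_alarm"] += 1
        else:
            counts["correct_rejection"] += 1
    return counts

# Example: 2 hits, 1 miss, 1 false alarm, 1 correct rejection
print(classify_outputs([True, True, True, False, False],
                       [True, True, False, True, False]))
```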
Journal Article