Catalogue Search | MBRL
27 result(s) for "Tag, Benjamin"
Poverty Traps in Online Knowledge-Based Peer-Production Communities
2023
Online knowledge-based peer-production communities, like question and answer sites (Q&A), often rely on gamification, e.g., through reputation points, to incentivize users to contribute frequently and effectively. These gamification techniques are important for achieving the critical mass that sustains a community and enticing new users to join. However, aging communities tend to build “poverty traps” that act as barriers for new users. In this paper, we present our investigation of 32 domain communities from Stack Exchange and our analysis of how different subjects impact the development of early user advantage. Our results raise important questions about the accessibility of knowledge-based peer-production communities. We consider the analysis results in the context of changing information needs and the relevance of Q&A in the future. Our findings inform policy design for building more equitable knowledge-based peer-production communities and increasing the accessibility to existing ones.
Journal Article
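The early-user advantage described in the abstract above can be illustrated with a toy rich-get-richer model; this is not the paper's analysis, and the proportional-win rule and per-answer point value are illustrative assumptions only.

```python
import random

def simulate_reputation(n_users, n_rounds, seed=0):
    """Toy rich-get-richer model of a Q&A community: each round one answer
    is accepted, and the probability a user wins is proportional to their
    current reputation (plus 1, so newcomers can win at all)."""
    rng = random.Random(seed)
    reputation = [0] * n_users
    for _ in range(n_rounds):
        weights = [r + 1 for r in reputation]
        winner = rng.choices(range(n_users), weights=weights)[0]
        reputation[winner] += 10  # hypothetical points per accepted answer
    return reputation

# Incumbents accumulate weight; a user joining after many rounds would
# start at weight 1 against them — the "poverty trap" in miniature.
early = simulate_reputation(5, 500)
```

Under this toy dynamic, small initial leads compound, which is the mechanism the abstract's "poverty trap" framing points at.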
Investigating mental wellbeing self-care in higher education using BERTopic modeling
by Ali, Mahmoud; Tag, Benjamin; Oppenlaender, Jonas
in Activities of daily living; BERTopic; Clinical Psychology
2025
Addressing the mental wellbeing of higher education students is urgent, given rising distress rates and significant help-seeking gaps. Students face various life challenges ranging from academic pressure and career concerns to global issues like climate change, all of which may negatively impact their mental wellbeing. While appropriate self-care can mitigate these challenges, understanding the strategies students use independently is key to developing accessible support. This article analyses contemporary triggers for mental distress and the corresponding self-care strategies adopted by students, based on data collected during the COVID-19 pandemic. We then discuss how these findings can inform the design of future digital mental wellbeing solutions. We conducted an online study with 810 participants, utilizing computational methods to analyse open-ended data. We present insights into prevalent challenges and self-care strategies, deriving direct implications for design. Finally, we discuss how technology designers can contribute to effective mental wellbeing solutions based on our findings.
Journal Article
Benchmarking commercial emotion detection systems using realistic distortions of facial image datasets
by Dingler, Tilman; Tag, Benjamin; Yang, Kangning
in Affect (Psychology); Algorithms; Artificial Intelligence
2021
Currently, there are several widely used commercial cloud-based services that attempt to recognize an individual’s emotions based on their facial expressions. Most research into facial emotion recognition has used high-resolution, front-oriented, full-face images. However, when images are collected in naturalistic settings (e.g., using smartphone’s frontal camera), these images are likely to be far from ideal due to camera positioning, lighting conditions, and camera shake. The impact these conditions have on the accuracy of commercial emotion recognition services has not been studied in full detail. To fill this gap, we selected five prominent commercial emotion recognition systems—Amazon Rekognition, Baidu Research, Face++, Microsoft Azure, and Affectiva—and evaluated their performance via two experiments. In Experiment 1, we compared the systems’ accuracy at classifying images drawn from three standardized facial expression databases. In Experiment 2, we first identified several common scenarios (e.g., partially visible face) that can lead to poor-quality pictures during smartphone use, and manipulated the same set of images used in Experiment 1 to simulate these scenarios. We used the manipulated images to again compare the systems’ classification performance, finding that the systems varied in how well they handled manipulated images that simulate realistic image distortion. Based on our findings, we offer recommendations for developers and researchers who would like to use commercial facial emotion recognition technologies in their applications.
Journal Article
A System for Computational Assessment of Hand Hygiene Techniques
by Dingler, Tilman; Tag, Benjamin; Yang, Kangning
in Computer applications; Hygiene; Nosocomial infections
2022
The World Health Organization (WHO) recommends a six-step hand hygiene technique. Although multiple studies have reported that this technique yields inadequate skin coverage outcomes, they have relied on manual labeling that provided low-resolution estimations of skin coverage outcomes. We have developed a computational system to precisely quantify hand hygiene outcomes and provide high-resolution skin coverage visualizations, thereby improving hygiene techniques. We identified frequently untreated areas located at the dorsal side of the hands around the abductor digiti minimi and the first dorsal interosseous. We also estimated that excluding Steps 3, 6R, and 6L from the six-step hand hygiene technique leads to cumulative coverage loss of less than 1%, indicating the potential redundancy of these steps. Our study demonstrates that the six-step hand hygiene technique could be improved to reduce the untreated areas and remove potentially redundant steps. Furthermore, our system can be used to computationally validate new proposed techniques, and help optimise hand hygiene procedures.
Journal Article
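The abstract above reports that excluding Steps 3, 6R, and 6L costs under 1% cumulative coverage. A minimal sketch of that kind of bookkeeping, with entirely hypothetical per-step figures (the paper's measured values are not reproduced here):

```python
# Hypothetical *exclusive* skin-coverage contributions per WHO step
# (percent of skin covered by that step and no other). Illustrative only.
exclusive_coverage = {
    "1": 40.0, "2": 25.0, "3": 0.3,
    "4": 15.0, "5": 10.0,
    "6R": 0.2, "6L": 0.4,
}

def coverage_loss(excluded):
    """Cumulative coverage lost when the given steps are skipped,
    assuming overlapping areas stay covered by the remaining steps."""
    return sum(exclusive_coverage[s] for s in excluded)

loss = coverage_loss(["3", "6R", "6L"])
print(f"Loss from skipping 3, 6R, 6L: {loss:.1f}%")  # ~0.9% in this toy example
```

Only the steps' *exclusive* contributions matter for this calculation, which is why high-resolution coverage maps (rather than manual labels) are needed to estimate them.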
Measuring Mobility and Room Occupancy in Clinical Settings: System Development and Implementation
2020
The use of location-based data in clinical settings is often limited to real-time monitoring. In this study, we aim to develop a proximity-based localization system and show how its longitudinal deployment can provide operational insights related to staff and patients' mobility and room occupancy in clinical settings. Such a streamlined data-driven approach can help in increasing the uptime of operating rooms and more broadly provide an improved understanding of facility utilization.
The aim of this study is to measure the accuracy of the system and algorithmically calculate measures of mobility and occupancy.
We developed a Bluetooth low energy, proximity-based localization system and deployed it in a hospital for 30 days. The system recorded the position of 75 people (17 patients and 55 staff) during this period. In addition, we collected ground-truth data and used them to validate system performance and accuracy. A number of analyses were conducted to estimate how people move in the hospital and where they spend their time.
Using ground-truth data, we estimated the accuracy of our system to be 96%. Using mobility trace analysis, we generated occupancy rates for different rooms in the hospital occupied by both staff and patients. We were also able to measure how much time, on average, patients spend in different rooms of the hospital. Finally, using unsupervised hierarchical clustering, we showed that the system could differentiate between staff and patients without training.
Analysis of longitudinal, location-based data can offer rich operational insights into hospital efficiency. In particular, they allow quick and consistent assessment of new strategies and protocols and provide a quantitative way to measure their effectiveness.
Journal Article
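The occupancy and time-in-room measures described above can be sketched from a timestamped proximity log; the log format, room names, and one-sighting-per-minute assumption below are illustrative, not the deployed system's.

```python
from collections import defaultdict

# Hypothetical proximity log: (minute, tag_id, room) sightings at a fixed
# 1-minute beacon interval. Real BLE localization (RSSI, etc.) not shown.
sightings = [
    (0, "staff-1", "OR-1"), (0, "patient-7", "ward"),
    (1, "staff-1", "OR-1"), (1, "staff-2", "OR-1"),
    (2, "patient-7", "ward"),
    (3, "staff-1", "corridor"),
]

def occupancy_rate(log, room, total_minutes):
    """Fraction of observed minutes with at least one tag in `room`."""
    occupied = {t for t, _, r in log if r == room}
    return len(occupied) / total_minutes

def time_in_rooms(log, person):
    """Minutes a given tag spent in each room (one sighting = one minute)."""
    minutes = defaultdict(int)
    for _, p, room in log:
        if p == person:
            minutes[room] += 1
    return dict(minutes)

print(occupancy_rate(sightings, "OR-1", 4))   # minutes 0 and 1 -> 0.5
print(time_in_rooms(sightings, "patient-7"))  # {'ward': 2}
```

Per-person room-time vectors like these are also the natural input for the unsupervised clustering the abstract mentions, since staff and patient mobility patterns differ sharply.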
Robot-Wearable Conversation Hand-off for Navigation
by Hosio, Simo; Visuri, Aku; Tag, Benjamin
in Applications programs; Human relations; Indoor environments
2026
Navigating large and complex indoor environments, such as universities, airports, and hospitals, can be cognitively demanding and requires attention and effort. While mobile applications provide convenient navigation support, they occupy the user's hands and visual attention, limiting natural interaction. In this paper, we explore conversation hand-off as a method for multi-device indoor navigation, where a Conversational Agent (CA) transitions seamlessly from a stationary social robot to a wearable device. We evaluated robot-only, wearable-only, and robot-to-wearable hand-off in a university campus setting using a within-subjects design with N=24 participants. We find that conversation hand-off is experienced as engaging, even though no performance benefits were observed, and most participants preferred the wearable-only system. Our findings suggest that the design of such re-embodied assistants should maintain a shared voice and state across embodiments. We demonstrate how conversational hand-offs can bridge cognitive and physical transitions, enriching human interaction with embodied AI.
When Ads Become Profiles: Uncovering the Invisible Risk of Web Advertising at Scale with LLMs
2026
Regulatory limits on explicit targeting have not eliminated algorithmic profiling on the Web, as optimisation systems still adapt ad delivery to users' private attributes. The widespread availability of powerful zero-shot multimodal Large Language Models (LLMs) has dramatically lowered the barrier for exploiting these latent signals for adversarial inference. We investigate this emerging societal risk, specifically how adversaries can now exploit these signals to reverse-engineer private attributes from ad exposure alone. We introduce a novel pipeline that leverages LLMs as adversarial inference engines to perform natural language profiling. Applying this method to a longitudinal dataset comprising over 435,000 Facebook ad impressions collected from 891 users, we conducted a large-scale study to assess the feasibility and precision of inferring private attributes from passive online ad observations. Our results demonstrate that off-the-shelf LLMs can accurately reconstruct complex user private attributes, including party preference, employment status, and education level, consistently outperforming strong census-based priors and matching or exceeding human social perception at only a fraction of the cost (223x lower) and time (52x faster) required by humans. Critically, actionable profiling is feasible even within short observation windows, indicating that prolonged tracking is not a prerequisite for a successful attack. These findings provide the first empirical evidence that ad streams serve as a high-fidelity digital footprint, enabling off-platform profiling that inherently bypasses current platform safeguards, highlighting a systemic vulnerability in the ad ecosystem and the urgent need for responsible web AI governance in the generative AI era. The code is available at https://github.com/Breezelled/when-ads-become-profiles.
Inside Out or Not: Privacy Implications of Emotional Disclosure
2024
Privacy is dynamic, sensitive, and contextual, much like our emotions. Previous studies have explored the interplay between privacy and context, privacy and emotion, and emotion and context. However, there remains a significant gap in understanding the interplay of these aspects simultaneously. In this paper, we present a preliminary study investigating the role of emotions in driving individuals' information sharing behaviour, particularly in relation to urban locations and social ties. We adopt a novel methodology that integrates context (location and time), emotion, and personal information sharing behaviour, providing a comprehensive analysis of how contextual emotions affect privacy. The emotions are assessed with both self-reporting and electrodermal activity (EDA). Our findings reveal that self-reported emotions influence personal information-sharing behaviour with distant social groups, while neutral emotions lead individuals to share less precise information with close social circles, a pattern potentially detectable with wrist-worn EDA. Our study helps lay the foundation for personalised emotion-aware strategies to mitigate oversharing risks and enhance user privacy in the digital age.
Understanding the Effects of Interaction on Emotional Experiences in VR
2026
Virtual reality has been effectively used for eliciting emotions, yet most research focuses on the intensity of affective responses rather than on how interaction influences those experiences. To address this gap, we advance a validated VR emotion-elicitation dataset through two key extensions. First, we add a new high-arousal, high-valence scene and validate its effectiveness in a within-subject study (N=24). Second, we incorporate interactive elements into each scene, creating both interactive and non-interactive versions to examine the impact of interaction on emotional responses. We evaluate interaction through a multimodal approach combining subjective ratings and physiological signals to capture both conscious and unconscious affective responses. Our evaluation study (N=84) shows that interaction not only amplifies emotions but also modulates them in context, supporting coping in negative scenes and enhancing enjoyment in positive scenes. These findings highlight the potential of scene-tailored interaction for different applications, where regulating emotions is as important as eliciting them.
AlphaPIG: The Nicest Way to Prolong Interactive Gestures in Extended Reality
2025
Mid-air gestures serve as a common interaction modality across Extended Reality (XR) applications, enhancing engagement and ownership through intuitive body movements. However, prolonged arm movements induce shoulder fatigue, known as "Gorilla Arm Syndrome", degrading user experience and reducing interaction duration. Although existing ergonomic techniques derived from Fitts' law (such as reducing target distance, increasing target width, and modifying control-display gain) provide some fatigue mitigation, their implementation in XR applications remains challenging due to the complex balance between user engagement and physical exertion. We present AlphaPIG, a meta-technique designed to Prolong Interactive Gestures by leveraging real-time fatigue predictions. AlphaPIG assists designers in extending and improving XR interactions by enabling automated fatigue-based interventions. Through adjustment of intervention timing and intensity decay rate, designers can explore and control the trade-off between fatigue reduction and potential effects such as decreased body ownership. We validated AlphaPIG's effectiveness through a study (N=22) implementing the widely-used Go-Go technique. Results demonstrated that AlphaPIG significantly reduces shoulder fatigue compared to non-adaptive Go-Go, while maintaining comparable perceived body ownership and agency. Based on these findings, we discuss positive and negative perceptions of the intervention. By integrating real-time fatigue prediction with adaptive intervention mechanisms, AlphaPIG constitutes a critical first step towards creating fatigue-aware applications in XR.
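The "intervention timing and intensity decay rate" knobs described above can be sketched as a fatigue-triggered gain that decays back to baseline; the threshold, boost, and decay values below are illustrative assumptions, not AlphaPIG's actual parameters.

```python
import math

def intervention_gain(fatigue, threshold=0.6, decay_rate=0.05, t_since=0.0):
    """Toy fatigue-triggered intervention: once predicted fatigue (0..1)
    crosses a threshold, scale the control-display gain above 1 so less
    physical arm movement is needed, decaying back toward 1 over time.
    Threshold, boost, and decay rate are hypothetical values."""
    if fatigue < threshold:
        return 1.0  # below threshold: no intervention, normal 1:1 mapping
    boost = 0.5  # hypothetical extra gain applied at trigger time
    return 1.0 + boost * math.exp(-decay_rate * t_since)
```

A slower decay keeps the assist active longer (more fatigue reduction); a faster decay returns control sooner (less risk to perceived body ownership) — the trade-off the abstract describes designers tuning.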