Catalogue Search | MBRL
12 result(s) for "Gergle, Darren"
Sensitive Sharing on Social Media: Exploring Willingness to Disclose PrEP Usage Among Adolescent Males Who Have Sex With Males
by Macapagal, Kathryn; Zheng, Weiwei; Moskowitz, David A.
in Adolescents, Antiretroviral drugs, Audiences
2020
Self-presentation, the process by which people disclose information about themselves to others, is fundamental to online interaction and research on communication technology. Technology often mediates the self-presentation process by obscuring who is in the audience via constrained cues and opaque feed algorithms that govern the visibility of social media content. This can make it risky to disclose sensitive or potentially stigmatizing information about oneself, because it could fall into the wrong hands or be seen by an unsupportive audience. Still, there are times when it is socially beneficial to disclose sensitive information, such as LGBTQ+ (lesbian, gay, bisexual, transgender, queer, and others) people expressing their identities or disclosing HIV status. Decisions about sensitive disclosure, moreover, can be even more complicated in today’s social media landscape with many platforms and audiences in play, particularly for younger users who often use many platforms. We lack a good understanding, however, of how people make these decisions. This article addresses questions about sensitive disclosure on social media through a survey study of adolescent men who have sex with men and their willingness to disclose on social media the use of pre-exposure prophylaxis (PrEP), an HIV prevention medication. Results suggest that perceived platform audience composition and platform features such as ephemerality play into disclosure decisions, as well as the perceived normativity of PrEP use among peers.
Journal Article
Harnessing Context Sensing to Develop a Mobile Intervention for Depression
2011
Mobile phone sensors can be used to develop context-aware systems that automatically detect when patients require assistance. Mobile phones can also provide ecological momentary interventions that deliver tailored assistance during problematic situations. However, such approaches have not yet been used to treat major depressive disorder.
The purpose of this study was to investigate the technical feasibility, functional reliability, and patient satisfaction with Mobilyze!, a mobile phone- and Internet-based intervention including ecological momentary intervention and context sensing.
We developed a mobile phone application and supporting architecture, in which machine learning models (ie, learners) predicted patients' mood, emotions, cognitive/motivational states, activities, environmental context, and social context based on at least 38 concurrent phone sensor values (eg, global positioning system, ambient light, recent calls). The website included feedback graphs illustrating correlations between patients' self-reported states, as well as didactics and tools teaching patients behavioral activation concepts. Brief telephone calls and emails with a clinician were used to promote adherence. We enrolled 8 adults with major depressive disorder in a single-arm pilot study to receive Mobilyze! and complete clinical assessments for 8 weeks.
Promising accuracy rates (60% to 91%) were achieved by learners predicting categorical contextual states (eg, location). For states rated on scales (eg, mood), predictive capability was poor. Participants were satisfied with the phone application and improved significantly on self-reported depressive symptoms (β_week = -.82, P < .001, per-protocol Cohen d = 3.43) and interview measures of depressive symptoms (β_week = -.81, P < .001, per-protocol Cohen d = 3.55). Participants also became less likely to meet criteria for major depressive disorder diagnosis (b_week = -.65, P = .03, per-protocol remission rate = 85.71%). Comorbid anxiety symptoms also decreased (β_week = -.71, P < .001, per-protocol Cohen d = 2.58).
Mobilyze! is a scalable, feasible intervention with preliminary evidence of efficacy. To our knowledge, it is the first ecological momentary intervention for unipolar depression, as well as one of the first attempts to use context sensing to identify mental health-related states. Several lessons learned regarding technical functionality, data mining, and software development process are discussed.
Clinicaltrials.gov NCT01107041; http://clinicaltrials.gov/ct2/show/NCT01107041 (Archived by WebCite at http://www.webcitation.org/60CVjPH0n).
Journal Article
Linguistic Similarity Within Centralized FLOSS Development
by Gaughan, Matthew; Shaw, Aaron; Gergle, Darren
in Linguistics, Open source software, Principal components analysis
2026
When free/libre and open source software (FLOSS) stewards centralize project development, they potentially undermine project sustainability and impact how contributors talk to each other. To study the relationship between steward-centralized development and contributor discussion, we compared the development of three Wikimedia platform features that the Wikimedia Foundation (WMF) built in MediaWiki. In a mixed-methods multi-case comparison, we used repository mining, linguistic style features, and principal component analysis to track MediaWiki feature development and issue discussions. Contrary to both our intuition and prior work, there were no identifiable differences in the linguistic style of WMF-affiliates and external contributors, even when feature development was guided by WMF contributions. From these results, we offer two provocations to the study of collaborative FLOSS development: (1) stewards dominate development according to their own use of specific project functionality; (2) centralized project development does not entail hierarchical language within project discussions.
Using job-shop scheduling tasks for evaluating collocated collaboration
by Kellar, Melanie; Hawkey, Kirstie; Mandryk, Regan
in Case studies, Collaboration, Computer Science
2008
Researchers have begun to explore tools that allow multiple users to collaborate across multiple devices in collocated environments. These tools often allow users to simultaneously place and interact with information on shared displays. Unfortunately, there is a lack of experimental tasks to evaluate the effectiveness of these tools for information coordination in such scenarios. In this article, we introduce job-shop scheduling as a task that could be used to evaluate systems and interactions within computer-supported collaboration environments. We describe properties that make the task useful, as well as evaluation measures that may be used. We also present two experiments as case studies to illustrate the breadth of scenarios in which this task may be applied. The first experiment shows the differences when users interact with different communicative gesturing schemes, while the second demonstrates the benefits of shared visual information on large displays. We close by discussing the general applicability of the tasks.
Journal Article
Discourse Processing in Technology-Mediated Environments
2018, 2017
How do we decide on the best technology or device for a given task or conversational goal? What are the contexts and conditions under which certain technologies and technology use flourish? This chapter addresses these questions by considering the changes taking place across the technological landscape—with a particular focus on communication technologies—and discussing the ways in which they challenge our current understanding of discourse processes. It focuses on ideas of grounding and mutual knowledge, and on understanding how these play out in the multimodal, multi-device, and multi-audience environments that are part of our everyday communication. Successful communication—whether face-to-face or technologically mediated—relies upon jointly constructed meaning and a common ground of mutually acknowledged beliefs, goals, and perspectives. The chapter also focuses on identifying important trends in the technological sphere that affect discourse processes, centering on four important developments: the rise in multimodality and mobility, mode choice and mode switching, social and network-based affordances, and new audience forms.
Book Chapter
Model Positionality and Computational Reflexivity: Promoting Reflexivity in Data Science
2022
Data science and machine learning provide indispensable techniques for understanding phenomena at scale, but the discretionary choices made when doing this work are often not recognized. Drawing from qualitative research practices, we describe how the concepts of positionality and reflexivity can be adapted to provide a framework for understanding, discussing, and disclosing the discretionary choices and subjectivity inherent to data science work. We first introduce the concepts of model positionality and computational reflexivity that can help data scientists to reflect on and communicate the social and cultural context of a model's development and use, the data annotators and their annotations, and the data scientists themselves. We then describe the unique challenges of adapting these concepts for data science work and offer annotator fingerprinting and position mining as promising solutions. Finally, we demonstrate these techniques in a case study of the development of classifiers for toxic commenting in online communities.
Beyond Words: An Experimental Study of Signaling in Crowdfunding
2024
Increasingly, crowdfunding is transforming financing for many people worldwide. Yet we know relatively little about how, why, and when funding outcomes are impacted by signaling between funders. We conduct two studies of N=500 and N=750 participants involved in crowdfunding to investigate the effect of certain characteristics of "crowd signals" on the decision to fund. We find that, under a variety of conditions, contributions of heterogeneous amounts arriving at varying time intervals are significantly more likely to be selected than homogeneous contribution amounts and times. The impact of signaling is strongest among participants who are susceptible to social influence. The effect is remarkably general across different project types, fundraising goals, participant interest in the projects, and participants' altruistic attitudes. Critically, the role of crowd signals in decision-making is typically unrecognized by participants. Our results underscore the fundamental nature of social signaling in crowdfunding, informing strategies for platforms, funders, and project creators.
The Agony of Opacity: Foundations for Reflective Interpretability in AI-Mediated Mental Health Support
by Suh, Jina; Pendse, Sachin R; Kruzan, Kaylee
in Artificial intelligence, Chatbots, Mental health
2026
Throughout history, a prevailing paradigm in mental healthcare has been one in which distressed people may receive treatment with little understanding around how their experience is perceived by their care provider, and in turn, the decisions made by their provider around how treatment will progress. Paralleling this offline model of care, people who seek mental health support from artificial intelligence (AI)-based chatbots are similarly provided little context for how their expressions of distress are processed by the model, and subsequently, any reasoning or theoretical grounding that may underlie model responses. People in severe distress who turn to AI chatbots for support thus find themselves caught between black boxes, contending with unique forms of agony that arise from these intersecting opacities. In this paper, we argue that the distinct psychological state of individuals experiencing severe mental distress uniquely necessitates a higher standard of end-user interpretability in comparison to general AI chatbot use. We propose a reflective interpretability approach to AI-mediated mental health support, which nudges users to engage in an agency-preserving and iterative process of reflection and interpretation of model outputs, towards creating meaning from interactions (rather than accepting outputs as directive instructions). Drawing on interpretability practices from four mental health fields (psychotherapy, crisis intervention, psychiatry, and care authorization), we describe concrete design approaches for reflective interpretability in AI-mediated mental health support, including role induction, prosocial advance directives, intervention titration, and well-defined mechanisms for recourse, alongside a discussion of potential risks and mitigation measures.
The Agony of Opacity: Foundations for Reflective Interpretability in AI-Mediated Mental Health Support
2025
Throughout history, a prevailing paradigm in mental healthcare has been one in which distressed people may receive treatment with little understanding around how their experience is perceived by their care provider, and in turn, the decisions made by their provider around how treatment will progress. Paralleling this offline model of care, people who seek mental health support from AI chatbots are similarly provided little context for how their expressions of distress are processed by the model, and subsequently, the logic that may underlie model responses. People in severe distress who turn to AI chatbots for support thus find themselves caught between black boxes, with unique forms of agony that arise from these intersecting opacities, including misinterpreting model outputs or attributing greater capabilities to a model than are yet possible, which has led to documented real-world harms. Building on empirical research from clinical psychology and AI safety, alongside rights-oriented frameworks from medical ethics, we describe how the distinct psychological state induced by severe distress can influence chatbot interaction patterns, and argue that this state of mind (combined with differences in how a user might perceive a chatbot compared to a care provider) uniquely necessitates a higher standard of interpretability in comparison to general AI chatbot use. Drawing inspiration from newer interpretable treatment paradigms, we then describe specific technical and interface design approaches that could be used to adapt interpretability strategies from four specific mental health fields (psychotherapy, community-based crisis intervention, psychiatry, and care authorization) to AI models, including consideration of the role of interpretability in the treatment process and tensions that may arise with greater interpretability.
"It's trained by non-disabled people": Evaluating How Image Quality Affects Product Captioning with VLMs
2025
Vision-Language Models (VLMs) are increasingly used by blind and low-vision (BLV) people to identify and understand products in their everyday lives, such as food, personal products, and household goods. Despite their prevalence, we lack an empirical understanding of how common image quality issues, like blur and misframing of items, affect the accuracy of VLM-generated captions and whether resulting captions meet BLV people's information needs. Grounded in a survey with 86 BLV people, we systematically evaluate how image quality issues affect captions generated by VLMs. We show that the best model recognizes products in images with no quality issues with 98% accuracy, but drops to 75% accuracy overall when quality issues are present, worsening considerably as issues compound. We discuss the need for model evaluations that center on disabled people's experiences throughout the process and offer concrete recommendations for HCI and ML researchers to make VLMs more reliable for BLV people.