Catalogue Search | MBRL
251 result(s) for "Young, Meg"
Municipal surveillance regulation and algorithmic accountability
2019
A wave of recent scholarship has warned about the potential for discriminatory harms of algorithmic systems, spurring an interest in algorithmic accountability and regulation. Meanwhile, parallel concerns about surveillance practices have already led to multiple successful regulatory efforts targeting surveillance technologies—many of which have algorithmic components. Here, we examine municipal surveillance regulation as offering lessons for algorithmic oversight. Taking the 2017 Seattle Surveillance Ordinance as our primary case study and surveying efforts across five other cities, we describe the features of existing surveillance regulation, including procedures for describing surveillance technologies in detail, requirements for public engagement, and processes for establishing acceptable uses. Although the Seattle Surveillance Ordinance was not intended to address algorithmic accountability, we find these considerations to be relevant to the law’s aim of surfacing disparate impacts of systems in use. We also find that in notable cases government employees did not identify regulated algorithmic surveillance technologies as reliant on algorithmic or machine learning systems, highlighting definitional gaps that could hinder future efforts toward algorithmic regulation. We argue that (i) finer-grained distinctions between types of information systems in the language of law and policy, and (ii) risk assessment tools integrated into their implementation would strengthen future regulatory efforts by rendering underlying algorithmic components more legible and accountable to political and community stakeholders.
Journal Article
Toward inclusive tech policy design: a method for underrepresented voices to strengthen tech policy documents
by Magassa, Lassana; Friedman, Batya; Young, Meg
in Adaptive technology, Augmented reality, Automotive bodies
2019
To be successful, policy must anticipate a broad range of constituents. Yet, all too often, technology policy is written with primarily mainstream populations in mind. In this article, drawing on Value Sensitive Design and discount evaluation methods, we introduce a new method—Diverse Voices—for strengthening pre-publication technology policy documents from the perspective of underrepresented groups. Cost-effective and high-impact, the Diverse Voices method intervenes by soliciting input from “experiential” expert panels (i.e., members of a particular stakeholder group and/or those serving that group). We first describe the method. Then we report on two case studies demonstrating its use: one with a white paper on augmented reality technology with expert panels on people with disabilities, people who were formerly or currently incarcerated, and women; and the other with a strategy document on automated driving vehicle technologies with expert panels on youth, non-car drivers, and extremely low-income people. In both case studies, panels identified significant shortcomings in the pre-publication documents that, if addressed, would mitigate some of the disparate impact of the proposed policy recommendations on these particular stakeholder groups. Our discussion includes reflection on the method, evidence for its success, its limitations, and future directions.
Journal Article
Access, Accountability, and Ownership in Government Use of Proprietary Systems
2020
When firms contract with public agencies to provide services, they regularly assert that some subset of their work is proprietary. For instance, companies may stake a claim over how information they produce is managed and shared. At the same time, governments in the state of Washington are subject to a strongly transparency-oriented state Public Records Act, under which members of the public can request access to government information, some of which is transacted or shared between government agencies and private firms. In this dissertation, I analyze two cases of contracting relationships between the public and private sectors in which data ownership is contested: (i) the contract between the transportation agencies behind the One Regional Card for All (ORCA) fare card in the Puget Sound region and their vendor Vix Technology and (ii) the would-be contract between King County Metro and Lyft Incorporated in support of a subsidized expansion of transit hub access. I report on data sharing as a site of competing claims over data control, with a focus on the technical, legal, and policy factors that enabled and constrained access to it. In both cases, I situate specific dataset access requests in the context of public agencies’ objectives to advance accountability, equity, and oversight. Meanwhile, firms asserted intellectual property protections like trade secret in an attempt to prevent data access. My data collection draws on my firsthand experience as a research assistant on a project to support cross-sector data sharing at the University of Washington called the Transportation Data Collaborative, as well as additional interviews I conducted and documents I requested under the Washington Public Records Act. With its high-transparency, permissive construction of data access, the Washington Public Records Act emerged as a key factor in both of my case studies, as it both afforded and constrained data sharing in moments when firms contested access on the basis of trade secret. I locate my descriptive case studies of these data flows within a broader scholarship of ambiguity and political struggle in the demarcation between “public” and “private.” Building on theory that understands these terms as always making normative (rather than descriptive) claims about the world, I explore how ownership emerged as a dominant rationale in assertions about how data should be accessed or shared. Across both cases, I observe that while data ownership is locally understood to be a means of asserting data control, in practice it is not dispositive of outcomes with respect to data access and sharing. In the conclusion, I highlight how data ownership fails as a data governance strategy and connect this finding to broader information policy debates.
Dissertation
Does big data serve policy? Not without context. An experiment with in silico social science
2023
The DARPA Ground Truth project sought to evaluate social science by constructing four varied simulated social worlds with hidden causality and unleashing teams of scientists to collect data, discover their causal structure, predict their future, and prescribe policies to create desired outcomes. This large-scale, long-term experiment in in silico social science, in which the ground truth of the simulated worlds was known, but not to us, reveals the limits of contemporary quantitative social science methodology. First, problem solving without a shared ontology—in which many world characteristics remain existentially uncertain—poses strong limits to quantitative analysis even when scientists share a common task, and suggests how those limits could become insurmountable without one. Second, data labels biased the associations our analysts made and the assumptions they employed, often away from the simulated causal processes those labels signified, suggesting limits on the degree to which analytic concepts developed in one domain may port to others. Third, the current standard for computational social science publication is a demonstration of novel causes, but this limits the relevance of models for solving problems and proposing policies, which benefit from the simpler and less surprising answers associated with the most important causes, or the combination of all causes. Fourth, most singular quantitative methods applied on their own did not help to solve most analytical challenges; we explored a range of established and emerging methods, including probabilistic programming, deep neural networks, systems of predictive probabilistic finite state machines, and more, to achieve plausible solutions. Despite these limitations, common to the current practice of computational social science, we find on the positive side that even imperfect knowledge can be sufficient to identify robust predictions if a more pluralistic approach is applied. Applying competing approaches through distinct subteams, including at one point the vast TopCoder.com global community of problem solvers, enabled discovery of many aspects of the relevant structure underlying the worlds that singular methods could not. Together, these lessons suggest how different a policy-oriented computational social science would be from the computational social science we have inherited. Computational social science that serves policy would need to endure more failure, sustain more diversity, maintain more uncertainty, and allow for more complexity than current institutions support.
Journal Article
Push, Pull, and Spill: A Transdisciplinary Case Study in Municipal Open Government
2016
Municipal open data raises hopes and concerns. The activities of cities produce a wide array of data, data that is vastly enriched by ubiquitous computing. Municipal data is opened as it is pushed to, pulled by, and spilled to the public through online portals, requests for public records, and releases by cities and their vendors, contractors, and partners. By opening data, cities hope to raise public trust and prompt innovation. Municipal data, however, is often about the people who live, work, and travel in the city. By opening data, cities raise concern for privacy and social justice.
This article presents the results of a broad empirical exploration of municipal data release in the City of Seattle. In this research, parties affected by municipal practices expressed their hopes and concerns for open data. City personnel from eight prominent departments described the reasoning, procedures, and controversies that have accompanied their release of data. All of the existing data from the online portal for the city were joined to assess the risk to privacy inherent in open data. Contracts with third parties involving sensitive or confidential data about residents of the city were examined for safeguards against the unauthorized release of data.
Results suggest the need for more comprehensive measures to manage the risk latent in opening city data. Cities should maintain inventories of data assets, produce data management plans pertaining to the activities of departments, and develop governance structures to deal with issues as they arise—centrally and amongst the various departments—with ex ante and ex post protocols to govern the push, pull, and spill of data. In addition, cities should consider conditioned access to pushed data, conduct audits and training around public records requests, and develop standardized model contracts to protect against the spill of data by third parties.
Journal Article
Push, Pull, and Spill
2015
Municipal open data raises hopes and concerns. The activities of cities produce a wide array of data, data that is vastly enriched by ubiquitous computing. Municipal data is opened as it is pushed to, pulled by, and spilled to the public through online portals, requests for public records, and releases by cities and their vendors, contractors, and partners. By opening data, cities hope to raise public trust and prompt innovation. Municipal data, however, is often about the people who live, work, and travel in the city. By opening data, cities raise concern for privacy and social justice.
This article presents the results of a broad empirical exploration of municipal data release in the City of Seattle. In this research, parties affected by municipal practices expressed their hopes and concerns for open data. City personnel from eight prominent departments described the reasoning, procedures, and controversies that have accompanied their release of data. All of the existing data from the online portal for the city were joined to assess the risk to privacy inherent in open data. Contracts with third parties involving sensitive or confidential data about residents of the city were examined for safeguards against the unauthorized release of data.
Results suggest the need for more comprehensive measures to manage the risk latent in opening city data. Cities should maintain inventories of data assets, produce data management plans pertaining to the activities of departments, and develop governance structures to deal with issues as they arise—centrally and amongst the various departments—with ex ante and ex post protocols to govern the push, pull, and spill of data. In addition, cities should consider conditioned access to pushed data, conduct audits and training around public records requests, and develop standardized model contracts to protect against the spill of data by third parties.
Journal Article
A Human-Centered Approach to Data Privacy: Political Economy, Power, and Collective Data Subjects
2016
Researchers find weaknesses in current strategies for protecting privacy in large datasets. Many anonymized datasets are reidentifiable, and norms for offering data subjects notice and consent overemphasize individual responsibility. Based on fieldwork with data managers in the City of Seattle, I identify ways that these conventional approaches break down in practice. Drawing on work from theorists in sociocultural anthropology, I propose that a Human-Centered Data Science move beyond concepts like dataset identifiability and sensitivity toward a broader ontology of who is implicated by a dataset, and new ways of anticipating how data can be combined and used.
Municipal Surveillance Regulation and Algorithmic Accountability
2019
A wave of recent scholarship has warned about the potential for discriminatory harms of algorithmic systems, spurring an interest in algorithmic accountability and regulation. Meanwhile, parallel concerns about surveillance practices have already led to multiple successful regulatory efforts targeting surveillance technologies—many of which have algorithmic components. Here, we examine municipal surveillance regulation as offering lessons for algorithmic oversight. Taking the 2017 Seattle Surveillance Ordinance as our primary case study and surveying efforts across five other cities, we describe the features of existing surveillance regulation, including procedures for describing surveillance technologies in detail, processes for public engagement, and processes for establishing acceptable uses. Although these surveillance-focused laws were not intended to address algorithmic accountability, we find these considerations to be relevant to the law’s aim of surfacing disparate impacts of systems in use. We also find that in notable cases, government employees did not identify regulated algorithmic surveillance technologies as reliant on algorithmic or machine learning systems, highlighting a definitional gap that could hinder future efforts toward algorithmic regulation. We argue that (i) finer-grained distinctions between types of analytic and information systems in the language of law and policy, and (ii) risk assessment tools integrated into their implementation would both strengthen future regulatory efforts by rendering underlying algorithmic components more legible and accountable to political and community stakeholders.
Defining AI in Policy versus Practice
by Huang, Karen; Young, Meg; Krafft, P M
in Artificial intelligence, Forest management, Human behavior
2019
Recent concerns about the harms of information technologies motivate consideration of regulatory action to forestall or constrain certain developments in the field of artificial intelligence (AI). However, definitional ambiguity hampers the possibility of conversation about this urgent topic of public concern. Legal and regulatory interventions require agreed-upon definitions, but consensus around a definition of AI has been elusive, especially in policy conversations. With an eye towards practical working definitions and a broader understanding of positions on these issues, we survey experts and review published policy documents to examine researcher and policy-maker conceptions of AI. We find that while AI researchers favor definitions of AI that emphasize technical functionality, policy-makers instead use definitions that compare systems to human thinking and behavior. We point out that definitions adhering closely to the functionality of AI systems are more inclusive of technologies in use today, whereas definitions that emphasize human-like capabilities are most applicable to hypothetical future technologies. As a result of this gap, ethical and regulatory efforts may overemphasize concern about future technologies at the expense of pressing issues with existing deployed technologies.
An Algorithmic Equity Toolkit for Technology Audits by Community Advocates and Activists
by Guetler, Vivian; Herman, Bernease; Dailey, Dharma
in Accountability, Algorithms, Artificial intelligence
2019
A wave of recent scholarship documenting the discriminatory harms of algorithmic systems has spurred widespread interest in algorithmic accountability and regulation. Yet effective accountability and regulation are stymied by a persistent lack of resources supporting public understanding of algorithms and artificial intelligence. Through interactions with a US-based civil rights organization and their coalition of community organizations, we identify a need for (i) heuristics that aid stakeholders in distinguishing between types of analytic and information systems in lay language, and (ii) risk assessment tools for such systems that begin by making algorithms more legible. The present work delivers a toolkit to achieve these aims. This paper both presents the Algorithmic Equity Toolkit (AEKit) as an artifact and details how our participatory process shaped its design. Our work fits within human-computer interaction scholarship as a demonstration of the value of HCI methods and approaches to problems in the area of algorithmic transparency and accountability.