Catalogue Search | MBRL
944 result(s) for "AUTOMATED DECISION MAKING"
The Challenges of AI in Administrative Law and the Need for Specific Legal Remedies: Analysis of Polish Regulations and Practice
by
Jakubek-Lalik, Jowanka
in
ADM, administrative law, administrative proceedings, AI, automated decision making, Polish public administration
2024
There are many new challenges to the classic approach to decision-making in administrative law. Public authorities are discovering the potential of AI systems to improve the efficiency and accuracy of administrative proceedings. However, automated decision-making (ADM) and AI-supported decision-making create new dilemmas, especially in relation to accountability, data protection, and general principles of administrative law. The benefits of AI should therefore be assessed together with the associated risks and threats, with adequate means for control and supervision. The use of AI tools is also growing in Polish public administration, as is interest in simplifying administrative proceedings and automating the issuance of administrative decisions. However, these trends should be carefully monitored, especially from the perspective of citizens' rights and of potential errors that differ from those in classical, non-automated administrative proceedings.
Purpose: This article examines the challenges of introducing AI tools into administrative law and proceedings, as well as the need for specific legal remedies. It asks whether the remedies are the same as in traditional administrative proceedings and whether the instruments provided in existing legislation suffice to ensure adequate protection of citizens' rights.
Methodology: The study combines an analysis of legislation and policies, desk research on practical examples, and insights from discussion at the EGPA 2024 Conference.
Findings: The findings focus on the analysis of existing legislation, both in terms of its applicability and its practical implementation, especially in light of AI use in public administration. The most important aspect is the link between the use of AI tools and the potential need to design new legal remedies, or adapt existing ones, in both imperious and non-imperious domains of public administration, with a special focus on ADM challenges.
Practical implications: The article addresses the new challenges AI poses to decision-making in administrative law. Through practical examples, it also discusses to what extent legal remedies should be tailored to AI tools and how human rights might be affected, necessitating protective measures. These implications matter not only from a legal standpoint but also for legal practitioners and public administration as a whole.
Originality and value: The article contributes to the discussion on the challenges of administrative proceedings and legal remedies in the era of AI. This topic is both highly relevant and timely, as the use of AI will undoubtedly shape the future of public administration proceedings and other activities.
Journal Article
Algorithmic Discrimination From the Perspective of Human Dignity
2024
Applications of artificial intelligence, algorithmic differentiation, and automated decision‐making systems aim to improve the efficiency of decision‐making for differentiating persons. However, they may also pose new risks to fundamental rights, including the risk of discrimination and potential violations of human dignity. Anti‐discrimination law is not only based on the principles of justice and equal treatment but also aims to ensure the free development of one’s personality and the protection of human dignity. This article examines developments in AI and algorithmic differentiation from the perspective of human dignity. Problems addressed include the expansion of the reach of algorithmic decisions, the potential for serious, systematic, or structural discrimination, the phenomenon of statistical discrimination and the treatment of persons not as individuals, deficits in the regulation of automated decisions and informed consent, the creation and use of comprehensive and personality‐constituting personal and group profiles, and the increase in structural dominance.
Journal Article
The Artificial Recruiter: Risks of Discrimination in Employers’ Use of AI and Automated Decision‐Making
by
Larsson, Stefan
,
White, James Merricks
,
Ingram Bogusz, Claire
in
Accountability
,
ADM and risks of discrimination
,
AI and accountability
2024
Extant literature points to how the risk of discrimination is intrinsic to AI systems owing to the dependence on training data and the difficulty of post hoc algorithmic auditing. Transparency and auditability limitations are problematic both for companies’ prevention efforts and for government oversight, both in terms of how artificial intelligence (AI) systems function and how large‐scale digital platforms support recruitment processes. This article explores the risks and users’ understandings of discrimination when using AI and automated decision‐making (ADM) in worker recruitment. We rely on data in the form of 110 completed questionnaires with representatives from 10 of the 50 largest recruitment agencies in Sweden and representatives from 100 Swedish companies with more than 100 employees (“major employers”). In this study, we made use of an open definition of AI to accommodate differences in knowledge and opinion around how AI and ADM are understood by the respondents. The study shows a significant difference between direct and indirect AI and ADM use, which has implications for recruiters’ awareness of the potential for bias or discrimination in recruitment. All of those surveyed made use of large digital platforms like Facebook and LinkedIn for their recruitment, leading to concerns around transparency and accountability—not least because most respondents did not explicitly consider this to be AI or ADM use. We discuss the implications of direct and indirect use in recruitment in Sweden, primarily in terms of transparency and the allocation of accountability for bias and discrimination during recruitment processes.
Journal Article
The Role of Automated Decision-Making in Modern Administrative Law: Challenges and Data Protection Implications
by
Rudolf, Grega
,
Kovač, Polonca
in
Accountability
,
Administrative law
,
administrative procedures, artificial intelligence, automated decision-making, good administration, legal principles, personal data protection
2024
Purpose: The integration of artificial intelligence (AI) into automated decision-making (ADM) represents a transformative moment in public administration. This paper explores the incorporation of ADM systems into administrative procedures, focusing on their impact on personal data protection and the fundamental principles underpinning administrative law.
Design/Methodology/Approach: Utilising a combination of descriptive, normative, and doctrinal research methods, the study draws on recent regulatory initiatives, analyses selected ADM use cases in Slovenia and abroad, and closely examines the 2023 Schufa case decided by the Court of Justice of the European Union (CJEU). By combining theoretical perspectives with practical insights, the research provides a comparative analysis within the context of EU and Slovenian legal frameworks.
Findings: The study assesses how ADM systems interact with, and potentially reshape, key principles of administrative and data protection law. It presents a clear picture of the legislative, organisational, and technological changes required to ensure that ADM systems align with existing legal frameworks.
Academic Contribution to the Field: By offering valuable guidance for public administration professionals, the paper enhances the understanding of implementing ADM technologies in administrative practice. Its insights can assist policymakers and legislators in crafting regulations that embrace the benefits of AI while ensuring these systems are subject to proper oversight.
Research/Practical/Social Implications: The deployment of ADM systems must align with legal principles to maintain transparency, accountability, and the protection of fundamental rights. This paper highlights the importance of not only understanding the legal implications but also ensuring that ADM technologies uphold standards of good governance.
Originality/Value: This research extends the boundaries of established legal frameworks and raises critical questions about how core principles of administrative and data protection law can adapt to new technologies. The challenge lies in leveraging AI to increase efficiency while ensuring these innovations respect individual rights, safeguard the public interest, and uphold standards of good administration and governance.
Journal Article
The “black box” at work
2020
An oversized reliance on big data-driven algorithmic decision-making systems, coupled with a lack of critical inquiry regarding such systems, combines to create the paradoxical "black box" at work. The "black box" simultaneously demands a higher level of transparency from the worker in regard to data collection, while shrouding the decision-making in secrecy, making employer decisions even more opaque to the worker. To access employment, the worker is commanded to divulge highly personal information, and when hired, must submit further still to algorithmic processes of evaluation which make authoritative claims as to the worker's productivity. Furthermore, in and out of the workplace, the worker is governed by an invisible data-created leash deploying wearable technology to collect intimate worker data. At all stages, the worker is confronted with a lack of transparency, accountability, or explanation as to the inner workings or even the logic of the "black box" at work. This data revolution of the workplace is alarming for several reasons: (1) the "black box" at work not only serves to conceal disparities in hiring but could also allow for a level of "data-laundering" that beggars any notion of equal opportunity in employment, and (2) there exists the danger of a "mission creep" attitude to data collection that allows for pervasive surveillance, contributing to the erosion of both the personhood and autonomy of workers. Thus, the "black box" at work not only enables worker domination in the workplace, it also deprives the worker of Rawlsian justice.
Journal Article
In AI we trust? Perceptions about automated decision-making by artificial intelligence
by
Helberger, Natali
,
Araujo, Theo
,
Kruikemeier, Sanne
in
Algorithms
,
Artificial intelligence
,
Automation
2020
Fueled by ever-growing amounts of (digital) data and advances in artificial intelligence, decision-making in contemporary societies is increasingly delegated to automated processes. Drawing from social science theories and from the emerging body of research about algorithmic appreciation and algorithmic perceptions, the current study explores the extent to which personal characteristics can be linked to perceptions of automated decision-making by AI, and the boundary conditions of these perceptions, namely the extent to which such perceptions differ across media, (public) health, and judicial contexts. Data from a scenario-based survey experiment with a national sample (N = 958) show that people are by and large concerned about risks and have mixed opinions about the fairness and usefulness of automated decision-making at a societal level, with general attitudes influenced by individual characteristics. Interestingly, decisions taken automatically by AI were often evaluated as on par with, or even better than, those of human experts for specific decisions. Theoretical and societal implications of these findings are discussed.
Journal Article
Measuring Algorithmic Fairness
2020
Algorithmic decision making is both increasingly common and increasingly controversial. Critics worry that algorithmic tools are not transparent, accountable, or fair. Assessing the fairness of these tools has been especially fraught, as it requires that we agree about what fairness is and what it requires. Unfortunately, we do not. The technological literature is now littered with a multitude of measures, each purporting to assess fairness along some dimension. Two types of measures stand out. According to one, algorithmic fairness requires that the score an algorithm produces should be equally accurate for members of legally protected groups (blacks and whites, for example). According to the other, algorithmic fairness requires that the algorithm produce the same percentage of false positives or false negatives for each of the groups at issue. Unfortunately, there is often no way to achieve parity in both these dimensions. This fact has led to a pressing question: which type of measure should we prioritize, and why?
This Article makes three contributions to the debate about how best to measure algorithmic fairness: one conceptual, one normative, and one legal. Equal predictive accuracy ensures that a score means the same thing for each group at issue. As such, it relates to what one ought to believe about a scored individual. Because questions of fairness usually relate to action, not belief, this measure is ill-suited as a measure of fairness. This is the Article's conceptual contribution. Second, this Article argues that parity in the ratio of false positives to false negatives is a normatively significant measure. While a lack of parity in this dimension is not constitutive of unfairness, this measure provides important reasons to suspect that unfairness exists. This is the Article's normative contribution. Interestingly, improving the accuracy of algorithms overall will lessen this unfairness. Unfortunately, a common assumption that anti-discrimination law prohibits the use of racial and other protected classifications in all contexts is inhibiting those who design algorithms from making them as fair and accurate as possible. This Article's third contribution is to show that the law poses less of a barrier than many assume.
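The two measures this abstract contrasts are straightforward to compute. Below is a minimal Python sketch, not taken from the Article, using hypothetical labels and predictions for two groups: per-group predictive accuracy of a positive score (positive predictive value, the first measure) versus per-group false positive and false negative rates (the second measure). All array values and the function name are illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the Article) of the two fairness
# measures: score accuracy parity (PPV) vs. error-rate parity (FPR/FNR).
import numpy as np

def group_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Compute fairness-relevant metrics for one group."""
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    return {
        # Measure 1: does a positive score mean the same thing per group?
        "ppv": tp / (tp + fp) if (tp + fp) else float("nan"),
        # Measure 2: parity in false positive / false negative rates.
        "fpr": fp / (fp + tn) if (fp + tn) else float("nan"),
        "fnr": fn / (fn + tp) if (fn + tp) else float("nan"),
    }

# Hypothetical true outcomes (y) and algorithmic predictions (yhat).
y_a, yhat_a = np.array([1, 1, 0, 0, 1, 0]), np.array([1, 0, 1, 0, 1, 0])
y_b, yhat_b = np.array([1, 0, 0, 0, 1, 1]), np.array([1, 1, 0, 0, 0, 1])

for name, (y, yhat) in {"A": (y_a, yhat_a), "B": (y_b, yhat_b)}.items():
    print(name, group_metrics(y, yhat))
```

When the groups' base rates of the predicted outcome differ, equalising PPV and equalising FPR/FNR generally cannot both be achieved, which is the impossibility the Article's pressing question turns on.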
Journal Article
The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems
2022
This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to explain why this systemic exclusion is of moral concern and to offer a solution to address it.
Journal Article
Towards Transparency by Design for Artificial Intelligence
by
Fosch-Villaronga, Eduard
,
Lutz, Christoph
,
Tamò-Larrieux, Aurelia
in
Artificial Intelligence
,
Automation
,
Biomedical Engineering and Bioengineering
2020
In this article, we develop the concept of Transparency by Design, which serves as practical guidance in helping promote the beneficial functions of transparency while mitigating its challenges in automated decision-making (ADM) environments. With the rise of artificial intelligence (AI) and the ability of AI systems to make automated and self-learned decisions, a call for transparency of how such systems reach decisions has echoed within academic and policy circles. The term transparency, however, relates to multiple concepts, fulfills many functions, and holds different promises that struggle to be realized in concrete applications. Indeed, the complexity of transparency for ADM reveals a tension between transparency as a normative ideal and its translation to practical application. To address this tension, we first conduct a review of transparency, analyzing its challenges and limitations concerning automated decision-making practices. We then look at the lessons learned from the development of Privacy by Design as a basis for developing the Transparency by Design principles. Finally, we propose a set of nine principles to cover relevant contextual, technical, informational, and stakeholder-sensitive considerations. Transparency by Design is a model that helps organizations design transparent AI systems by integrating these principles in a step-by-step manner and as an ex-ante value, not as an afterthought.
Journal Article
The black box problem revisited. Real and imaginary challenges for automated legal decision making
by
Jakubiec, Marek
,
Furman, Michał
,
Kucharzyk, Bartłomiej
in
Algorithms
,
Artificial intelligence
,
Automation
2024
This paper addresses the black box problem in artificial intelligence (AI) and the related problem of the explainability of AI in the legal context. We argue, first, that the black box problem is, in fact, a superficial one, as it results from an overlap of four different – albeit interconnected – issues: the opacity problem, the strangeness problem, the unpredictability problem, and the justification problem. Thus, we propose a framework for discussing both the black box problem and the explainability of AI. We argue further that, contrary to often-defended claims, the opacity issue is not a genuine problem. We also dismiss the justification problem. Further, we describe the tensions involved in the strangeness and unpredictability problems and suggest some ways to alleviate them.
Journal Article