Asset Details
The factive assumption: reframing epistemic justification for opaque AI in clinical medicine
by
Wells, Sarah Scriven
in
Epistemology
2025
Journal Article
Overview
Introduction
The deployment of opaque machine learning (ML) models in clinical settings raises a critical problem of epistemic justification. Standard approaches such as explainability often fail because they rely on a "factive assumption" requiring access to a model's internal mechanisms. This paper challenges that assumption and investigates an alternative foundation for justification.

Materials and Methods
This paper employs conceptual analysis, drawing on epistemology and explainable AI (xAI). We critique access-based justification via Gettier-style problems and synthesise Catherine Elgin's non-factive theory of understanding with recent work on clinical AI. This involves analysing critiques of intelligibility (Fleisher, 2022) and the proposal for interactive Toy Surrogate Models (TSMs) (Páez, 2024).

Results
The factive assumption sets an unattainable standard, leading to fragile, artefactual understanding. Treating post-hoc explainability tools as analogous to scientific idealisations is shown to be flawed. A non-factive account of understanding, on which the clinician can "grasp" and reason with model outputs via tools such as TSMs, provides a more robust epistemic warrant. Justification is thus relocated from model fidelity to the clinician's structured, counterfactual reasoning within norm-governed practices.

Conclusion
Factive standards of explainability are neither attainable nor necessary for justifying opaque clinical AI. A more philosophically sustainable approach grounds justification in non-factive, practice-based understanding. This framework reorients responsibility from the model to the clinician's cognitive engagement, aligning the use of AI with existing professional norms for managing uncertainty.
Publisher
BMJ Publishing Group LTD
Subject
Epistemology