Catalogue Search | MBRL
Explore the vast range of titles available.
37 result(s) for "AI service focus"
Using AI and big data analytics to support entrepreneurial decisions in the digital economy
2025
Despite extensive research on AI’s theoretical benefits in entrepreneurship, few studies compare machine learning models’ effectiveness using real-world data or address challenges like model interpretability and overfitting. This study investigates how AI-driven big data analytics enhances entrepreneurial decision-making in the digital economy by evaluating four machine learning models—Decision Trees, Random Forest, Gradient Boosting, and Histogram-Based Gradient Boosting—to predict AI service focus. The results reveal that Gradient Boosting outperformed others with a testing R² of 0.9914, identifying company reputation and location as the most influential predictors of AI adoption. These findings challenge assumptions about organizational size’s role in digitalization, emphasizing the strategic value of brand and geography. Key limitations include overfitting in Decision Trees and Random Forest, and reliance on static datasets that constrain real-time adaptability. The results demonstrate AI’s potential to reduce uncertainty in entrepreneurial strategy, offering actionable insights for market entry and investment decisions. Future research should incorporate real-time data streams and hybrid AI-human frameworks to improve generalizability.
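To make the modelling setup concrete, the following is a minimal sketch (not the authors' code) of the comparison this abstract describes: the four named scikit-learn regressors are fitted on a training split and ranked by testing R². The dataset, feature columns (e.g., company reputation, location), target variable, and train/test split are placeholders assumed for illustration.

```python
# Hedged sketch of the model comparison described in the abstract; not the
# authors' code. Dataset, features, and target are illustrative placeholders.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import (
    RandomForestRegressor,
    GradientBoostingRegressor,
    HistGradientBoostingRegressor,
)
from sklearn.metrics import r2_score


def compare_models(X, y, seed=42):
    """Fit the four models named in the study and return their testing R² scores."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=seed
    )
    models = {
        "Decision Tree": DecisionTreeRegressor(random_state=seed),
        "Random Forest": RandomForestRegressor(random_state=seed),
        "Gradient Boosting": GradientBoostingRegressor(random_state=seed),
        "Hist. Gradient Boosting": HistGradientBoostingRegressor(random_state=seed),
    }
    scores = {}
    for name, model in models.items():
        model.fit(X_train, y_train)
        scores[name] = r2_score(y_test, model.predict(X_test))  # testing R²
    return scores
```

Comparing training and testing R² per model would also surface the overfitting the abstract reports for the Decision Tree and Random Forest models.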
Journal Article
Bridging the AI-Literacy Gap in Health Care: Qualitative Analysis of the Flanders Case Study
by De Vos, Maarten; Chatzichristos, Christos; De Backere, Femke
in Adult, Applications of AI, Artificial Intelligence
2025
Building on the assertion that nearly every clinician will eventually use artificial intelligence (AI), this study provides a triangulated qualitative analysis of the requirements, challenges, and prospects for integrating AI into routine health care practice. Despite advancements, many health care professionals report a self-perceived lack of proficiency in comprehending, critically evaluating, and ethically deploying AI tools. This skills gap contributes to cautious and uneven adoption across clinical settings.
While addressing key research questions, the study investigates the necessary prerequisites, barriers, and opportunities for AI adoption and specific training priorities that medical staff require. The study is uniquely focused on the health care workforce, moving beyond the predominant emphasis in the literature on medical students.
Situated in Flanders, Belgium, a recognized innovation leader but with moderate lifelong learning participation, this research combines 15 semistructured expert interviews, a regional survey of 134 health care professionals, and 3 co-interpretive focus groups with 39 stakeholders, all conducted in 2024.
The results expose modest generational divides and, more markedly, occupational ones. For instance, 85.07% (114/134) of survey respondents expressed interest in introductory AI courses tailored to health care, while 80% (107/134) of them sought practical, job-relevant AI skills. However, only 13.8% (19/134) of clinicians felt that their training adequately prepared them for AI integration. Notably, younger professionals (<30 years of age) were most eager to engage with AI but also expressed greater concern about job displacement, while older professionals (>50 years of age) prioritized reducing administrative burden. Physicians and dentists reported higher self-assessed AI knowledge, whereas nurses and physiotherapists showed the lowest familiarity. The survey also revealed differences in preferred learning formats, with doctors favoring flexible, asynchronous learning and nurses emphasizing the need for accredited, employer-supported training during work hours. Ethics, though emphasized in academic literature, ranked low in training interest among most practitioners, except for younger and palliative care professionals. Focus group participants confirmed the need for clear regulatory guidance and access to accredited, practically oriented training. A significant insight was that nurses often lacked institutional support and funding for training, despite their pivotal role in AI-enabled workflows.
Taken together, these findings indicate that a one-size-fits-all approach to AI education in health care is unlikely to be effective. By triangulating insights across research stages, this study highlights the need for occupation-specific, accessible, and accredited AI training programs that bridge gaps in digital literacy and align with practical clinical priorities. The qualitative insights obtained can inform policy and training priorities in light of the European Union (EU) AI literacy mandates, while highlighting persistent gaps in workforce preparation.
Journal Article
Professionals’ responses to the introduction of AI innovations in radiology and their implications for future adoption: a qualitative study
by Stavropoulou, Charitini; Scarbrough, Harry; Baker, Adrian
in Adoption, AI innovation, Artificial intelligence
2021
Background
Artificial Intelligence (AI) innovations in radiology offer a potential solution to the increasing demand for imaging tests and the ongoing workforce crisis. Crucial to their adoption is the involvement of different professional groups, namely radiologists and radiographers, who work interdependently but whose perceptions and responses towards AI may differ. We aim to explore the knowledge, awareness and attitudes towards AI amongst professional groups in radiology, and to analyse the implications for the future adoption of these technologies into practice.
Methods
We conducted 18 semi-structured interviews with 12 radiologists and 6 radiographers from four breast units in National Health Services (NHS) organisations and one focus group with 8 radiographers from a fifth NHS breast unit, between 2018 and 2020.
Results
We found that radiographers and radiologists vary with respect to their awareness and knowledge around AI. Through their professional networks, conference attendance, and contacts with industry developers, radiologists receive more information and acquire more knowledge of the potential applications of AI. Radiographers instead rely more on localized personal networks for information. Our results also show that although both groups believe AI innovations offer a potential solution to workforce shortages, they differ significantly regarding the impact they believe it will have on their professional roles. Radiologists believe AI has the potential to take on more repetitive tasks and allow them to focus on more interesting and challenging work. They are less concerned that AI technology might constrain their professional role and autonomy. Radiographers showed greater concern about the potential impact that AI technology could have on their roles and skills development. They were less confident of their ability to respond positively to the potential risks and opportunities posed by AI technology.
Conclusions
In summary, our findings suggest that professional responses to AI are linked to existing work roles, but are also mediated by differences in knowledge and attitudes attributable to inter-professional differences in status and identity. These findings question broad-brush assertions about the future deskilling impact of AI which neglect the need for AI innovations in healthcare to be integrated into existing work processes subject to high levels of professional autonomy.
Journal Article
Exploring Gender Bias in AI for Personalized Medicine: Focus Group Study With Trans Community Members
by Buslón, Nataly; Rios, Oriol; Perera del Rosario, Simón
in Adult, Analysis, Artificial Intelligence
2025
This paper explores the perception and application of artificial intelligence (AI) for personalized medicine within the trans community, an often-overlooked demographic in the broader scope of precision medicine. Despite growing advancements in AI-driven health care solutions, little research has been dedicated to understanding how these technologies can be tailored to meet the unique health care needs of trans individuals. Addressing this gap is crucial for ensuring that precision medicine is genuinely inclusive and effective for all populations.
This study aimed to identify the specific challenges, obstacles, and potential solutions associated with the deployment of AI technologies in the development of personalized medicine for trans people. This research emphasizes a trans-inclusive and multidisciplinary perspective, highlighting the importance of cultural competence and community engagement in the design and implementation of AI-driven health care solutions.
A communicative methodology was applied in this study, prioritizing the active involvement of end-users and stakeholders through egalitarian dialogue that recognizes and values cultural intelligence. The methodological design included iterative consultations with trans community representatives to cocreate the research workflow and adapt data collection instruments accordingly. This participatory approach ensured that the perspectives and lived experiences of trans individuals were integral to the research process. Data collection was conducted through 3 focus groups with 16 trans adults, aimed at discussing the challenges, risks, and transformative potential of AI in precision medicine.
Analysis of the focus group discussions revealed several critical barriers impacting the integration of AI in personalized medicine for trans people, including concerns around data privacy, biases in algorithmic decision-making, and the lack of tailored health care data reflective of trans experiences. Participants expressed apprehensions about potential misdiagnoses or inappropriate treatments due to cisnormative data models. However, they also identified opportunities for AI to enhance health care outcomes, advocating for community-led data collection initiatives and improved algorithmic transparency. Proposed solutions included enhancing datasets with trans-specific health markers, incorporating community voices in AI development processes, and prioritizing ethical frameworks that respect gender diversity.
This study underscores the necessity for a trans-inclusive approach to precision medicine, facilitated by AI technologies that are sensitive to the health care needs and lived realities of trans people. By addressing the identified challenges and adopting community-driven solutions, AI has the potential to bridge existing health care gaps and improve the quality of life for trans individuals. This research contributes to the growing discourse on equitable health care innovation, calling for more inclusive AI design practices that extend the benefits of precision medicine to marginalized communities.
Journal Article
Nurse Researchers’ Experiences and Perceptions of Generative AI: Qualitative Semistructured Interview Study
by Jin, Shuai; Tong, Ling; Xiao, Qian
in Adoption and Change Management of eHealth Systems, Adult, AI Governance and Policy
2025
With the rapid development and iteration of generative artificial intelligence, the growing popularity among nurse researchers of such groundbreaking tools, exemplified by ChatGPT (OpenAI), is generating passionate debate and intrigue. Although there has been qualitative research on generative artificial intelligence in other fields, little is known about the experiences and perceptions of nurse researchers; this study seeks to report on the topic.
This study aimed to describe the experiences and perceptions of generative artificial intelligence among Chinese nurse researchers, as well as provide a reference for the application of generative artificial intelligence in nursing research in the future.
Semistructured interviews were used to collect data in this qualitative study. The interviews focused mainly on nurse researchers' cognition of, experience with, and future expectations for the use of generative artificial intelligence. Twenty-seven nurse researchers, recruited through purposive and snowball sampling, were included in the study: 7 nursing faculty researchers, 10 nursing graduate students, and 10 clinical nurse researchers. Data were analyzed using inductive content analysis.
Five themes and 12 subthemes were categorized from 27 original interview documents as follows: (1) diverse reflections on human-machine symbiosis, which includes the interplay between substitution and assistance, researchers shaping the potential of generative artificial intelligence, and acceptance of generative artificial intelligence with alacrity; (2) multiple factors of the usage experience, including individual characteristics and various usage scenarios; (3) research paradigm reshaping in the infancy stage, which involves full-process groundbreaking assistive tools and emergence of new research paths; (4) application risks of generative artificial intelligence, including intrinsic limitations of generative artificial intelligence and academic integrity and medical ethics; and (5) the co-improvement of technology and literacy, which concerns reinforcement needs for generative artificial intelligence literacy, development of nursing research generative artificial intelligence and urgent need for artificial intelligence-generated content detection tools. In this context, the first 4 themes form the rocket of the human-machine symbiosis journey. Only when humans fully leverage the advantages of machines (generative artificial intelligence) and overcome their shortcomings can this human-machine symbiosis journey reach the correct future direction (fifth theme).
This study explored the experiences and perceptions of nurse researchers interacting with generative artificial intelligence, which was a "symbiotic journey" full of twists and turns, and provides a reference and basis for achieving harmonious coexistence between nurse researchers and generative artificial intelligence in the future. Nurse researchers, policy makers, and application developers can use the conclusions of this study to further promote the application of generative artificial intelligence in nursing research, policy making, and product development.
Journal Article
Developing a Typology of Women's Attitudes Towards AI Use in the BreastScreen Programme—A Qualitative Study With BreastScreen Victoria Clients
2025
Background
There is growing scientific evidence supporting the potential of artificial intelligence (AI) to enhance breast cancer screening by improving the accuracy and efficiency of mammography interpretation. Aligned with this, several empirical studies, predominantly quantitative, have explored lay women's perceptions of AI in breast screening, often framing attitudes in binary terms—positive or negative. This approach can overlook the complexity and nuance of women's views.
Aim
This article aims to unpack that complexity by developing a typology of women's attitudes towards the use of AI in the breast screening service. It builds on Birkland's (2019) information and communication technology (ICT) user typology among older adults and further explores the relationship between the attitude types and varying levels of AI acceptability.
Method
Adopting an interpretative qualitative research approach, we conducted a combination of focus groups, paired interviews and one‐on‐one interviews with 26 women who had participated in the BreastScreen programme in Victoria, Australia. Data were thematically analysed using inductive coding.
Findings
The analysis identified four attitude types—Enthusiast, Practicalist, Traditionalist and Guardian. Each type reflected unique motivations and experiences that shaped each participant's acceptance and rejection of AI. Most participants were classified as either Enthusiasts or Practicalists, indicating a generally high or moderate level of AI acceptance. Enthusiasts viewed AI as an exciting and necessary progression, and Practicalists valued its practical utility as a useful tool. Both groups shared the belief that AI represents the future of healthcare, underpinned by technological advancement. Traditionalists, on the other hand, expressed a preference for the status quo, advocating for the exclusive role of human doctors. Guardians typically had higher levels of AI knowledge and advocated for a cautious approach, citing social and ethical concerns about AI integration.
Conclusion
The typology illustrates that the BreastScreen Victoria clients' attitudes towards AI are more nuanced and dynamic than a simple positive–negative dichotomy. Recognising these perspectives is critical for designing AI implementation strategies that are sensitive to the needs and concerns of care recipients.
Patient or Public Contribution
This study was shaped by extensive stakeholder engagement with BreastScreen Victoria and its consumer representatives from the outset. Research materials were collaboratively developed and reviewed, ensuring the study design was fit‐for‐purpose.
Journal Article
Ethical Knowledge, Challenges, and Institutional Strategies Among Medical AI Developers and Researchers: Focus Group Study
by Fantus, Sophia; Wang, Tianci; Li, Jinxu
in Alzheimer's disease, Artificial intelligence, Artificial Intelligence - ethics
2026
As artificial intelligence (AI) becomes increasingly embedded in clinical decision-making and preventive care, it is urgent to address ethical concerns such as bias, privacy, and transparency to protect clinician and patient populations. Although prior research has examined the perspectives of medical AI stakeholders, including clinicians, patients, and health system leaders, far less is known about how medical AI developers and researchers understand and engage with ethical challenges as they develop AI tools. This gap is consequential because developers' ethical awareness, decision-making, and institutional environments influence how AI tools are conceptualized and deployed in practice. Thus, it is essential to understand how developers perceive these issues and what supports they identify as necessary for ethical AI development.
The objectives of the study were twofold: (1) to examine medical AI developers' and researchers' knowledge, attitudes, and experiences with AI ethics; and (2) to identify recommendations to enhance and strengthen interpersonal and institutional ethics-focused training and support.
We conducted 2 semistructured focus groups (60-90 minutes each) in 2024 with 13 AI developers and researchers affiliated with 5 US-based academic institutions. Participants' work spanned a wide variety of medical AI applications, including Alzheimer disease prediction, clinical imaging, electronic health records analysis, digital health, counseling and behavioral health, and genotype-phenotype modeling. Focus groups were conducted via Microsoft Teams, recorded, and transcribed verbatim. We applied conventional qualitative content analysis to inductively identify emerging concepts, categories, and themes. Coding was performed independently by 3 researchers, with consensus reached through iterative team meetings.
The analysis identified four key themes: (1) AI ethics knowledge acquisition: participants reported learning about ethics informally through peer-reviewed literature, reviewer feedback, social media, and mentorship rather than through structured training; (2) ethical encounters: participants described recurring ethical challenges related to data bias, patient privacy, generative AI use, commercialization pressures, and a tendency for research environments to prioritize model accuracy over ethical reflection; (3) reflections on ethical implications: participants expressed concern about downstream effects on patient care, clinician autonomy, and model generalizability, noting that rapid technological innovation outpaces regulatory and evaluative processes; and (4) strategies to mitigate ethical concerns: recommendations included clearer institutional guidelines, ethics checklists, interdisciplinary collaboration, multi-institutional data sharing, enhanced institutional review board support, and the inclusion of bioethicists as members of the AI research team.
Medical AI developers and researchers recognize significant ethical challenges in their work but lack structured training, resources, and institutional mechanisms to address them. Findings of this study underscore the need for institutions to consider embedding ethics into research processes through practical tools, mentorship, and interdisciplinary partnerships. Strengthening these supports is essential to preparing the next generation of developers to design and deploy ethical AI in health care.
Journal Article
Using Generative AI to Co-Design Digital Mental Health Interventions With Adolescents in Rural South Africa: Qualitative Thematic Analysis of Participatory Workshops
by Moffett, Bianca; Makhubela, Princess; Nkuna, Tamera
in Adolescent, Anxiety, Artificial Intelligence
2025
Digital mental health interventions (DMHIs) offer a scalable approach to address adolescent depression and anxiety. User-centered coproduction can optimize acceptability and engagement, but it is often resource-intensive. Advances in generative artificial intelligence (GenAI) create new opportunities for involving adolescents in co-design, yet research on its feasibility and acceptability, particularly in low-resource settings, remains underexplored.
This study aimed to explore adolescents' experiences and perspectives of using GenAI to co-design stories, images, and music for the Kuamsha app (Sea Monster), a gamified DMHI that teaches behavioral activation through interactive narratives and peer support.
Overall, 2 participatory workshops and focus group discussions were conducted with 23 adolescents (aged 15-19 years) in rural Mpumalanga, South Africa. Participants were guided to use 3 GenAI tools to create digital content: ChatGPT (OpenAI) for text-to-story, MidJourney (MidJourney Inc) for text-to-image, and Soundful (Soundful Inc) for music generation. Data were audio-recorded, translated, transcribed, and triangulated with the facilitator's observation notes. Thematic analysis was used to explore key themes.
Almost all participants (22/23, 96%) had no prior exposure to GenAI. The majority (20/23, 87%) described the creative process as enjoyable and engaging, with most (21/23, 91%) reporting that creating music improved their mood. Adolescents expressed autonomy and ownership of the process, with more than half (14/23, 61%) personalizing outputs to reflect their identities and aspirations. All participants (23/23, 100%) preferred artificial intelligence (AI)-generated images over the cartoon-like illustrations of the Kuamsha app, and most (19/23, 83%) preferred AI-generated music. Story preferences were more mixed, with about a quarter of participants (6/23, 26%) recalling that Kuamsha's narratives contained embedded lessons that were not integrated into the ChatGPT outputs. Most adolescents (18/23, 78%) required support with prompt construction, and more than half (13/23, 57%) noted cultural biases in AI outputs, particularly in images. Most participants (17/23, 74%) expressed interest in using AI for schoolwork and creative projects, while a minority (6/23, 26%) preferred to limit use to personal applications. Concerns about fairness and the displacement of human creativity were also raised.
GenAI shows promise for enhancing adolescent engagement in the coproduction of DMHIs and enabling culturally relevant and personalized content. However, reliance on human support and persistent algorithmic biases remain limitations. Further research should explore the integration of therapeutic principles into AI-generated media and strategies to mitigate bias.
Journal Article
A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare
by Barda, Amie J.; Hochheiser, Harry; Horvat, Christopher M.
in Analysis, Artificial intelligence, Child
2020
Background
There is an increasing interest in clinical prediction tools that can achieve high prediction accuracy and provide explanations of the factors leading to increased risk of adverse outcomes. However, approaches to explaining complex machine learning (ML) models are rarely informed by end-user needs, and user evaluations of model interpretability are lacking in the healthcare domain. We used extended revisions of previously published theoretical frameworks to propose a framework for the design of user-centered displays of explanations. This new framework served as the basis for qualitative inquiries and design review sessions with critical care nurses and physicians that informed the design of a user-centered explanation display for an ML-based prediction tool.
Methods
We used our framework to propose explanation displays for predictions from a pediatric intensive care unit (PICU) in-hospital mortality risk model. Proposed displays were based on a model-agnostic, instance-level explanation approach based on feature influence, as determined by Shapley values. Focus group sessions solicited critical care provider feedback on the proposed displays, which were then revised accordingly.
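The explanation approach named here (model-agnostic, instance-level Shapley values for feature influence) can be sketched with the shap library. This is an illustrative stand-in under assumed inputs, not the authors' PICU model or display framework: the prediction function, background sample, and the single encounter to be explained are all placeholders.

```python
# Illustrative sketch of a model-agnostic, instance-level Shapley-value
# explanation of feature influence; placeholders throughout, not the
# authors' PICU model or display framework.
import shap


def explain_prediction(predict_fn, X_background, x_instance):
    """Return per-feature Shapley contributions for one prediction.

    predict_fn:   any callable mapping a 2-D feature array to model outputs
                  (e.g., model.predict_proba for a risk classifier)
    X_background: small reference sample used to estimate feature effects
    x_instance:   the single case to explain, shape (1, n_features)
    """
    explainer = shap.KernelExplainer(predict_fn, X_background)
    # One contribution per feature (per output class for probability models);
    # positive values push the predicted risk up, negative values push it down.
    return explainer.shap_values(x_instance)
```

A display built on such values could then rank features by the magnitude of their contributions for the individual patient, which is the kind of instance-level information the proposed explanation displays present.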
Results
The proposed displays were perceived as useful tools in assessing model predictions. However, specific explanation goals and information needs varied by clinical role and level of predictive modeling knowledge. Providers preferred explanation displays that required less information processing effort and could support the information needs of a variety of users. Providing supporting information to assist in interpretation was seen as critical for fostering provider understanding and acceptance of the predictions and explanations. The user-centered explanation display for the PICU in-hospital mortality risk model incorporated elements from the initial displays along with enhancements suggested by providers.
Conclusions
We proposed a framework for the design of user-centered displays of explanations for ML models. We used the proposed framework to motivate the design of a user-centered display of an explanation for predictions from a PICU in-hospital mortality risk model. Positive feedback from focus group participants provides preliminary support for the use of model-agnostic, instance-level explanations of feature influence as an approach to understand ML model predictions in healthcare and advances the discussion on how to effectively communicate ML model information to healthcare providers.
Journal Article
Centering Public Perceptions on Translating AI Into Clinical Practice: Patient and Public Involvement and Engagement Consultation Focus Group Study
by Shah, Sudhir; Stavropoulou, Charitini; Lammons, William
in Advisors, Applied research, Artificial intelligence
2023
Artificial intelligence (AI) is widely considered to be the new technical advancement capable of a large-scale modernization of health care. Considering AI’s potential impact on the clinician-patient relationship, health care provision, and health care systems more widely, patients and the wider public should be a part of the development, implementation, and embedding of AI applications in health care. Failing to establish patient and public involvement and engagement (PPIE) can limit AI’s impact. This study aims to (1) understand patients’ and the public’s perceived benefits and challenges of AI and (2) clarify how to best conduct PPIE in projects on translating AI into clinical practice, given public perceptions of AI. We conducted this qualitative PPIE focus-group consultation in the United Kingdom. A total of 17 public collaborators representing 7 National Institute for Health and Care Research Applied Research Collaborations across England participated in 1 of 3 web-based semistructured focus group discussions. We explored public collaborators’ understandings, experiences, and perceptions of AI applications in health care. Transcripts were coanalyzed iteratively with 2 public coauthors using thematic analysis. We identified 3 primary deductive themes with 7 corresponding inductive subthemes. Primary theme 1, advantages of implementing AI in health care, had 2 subthemes: system improvements and improved quality of patient care and shared decision-making. Primary theme 2, challenges of implementing AI in health care, had 3 subthemes: challenges with security, bias, and access; public misunderstanding of AI; and lack of human touch in care and decision-making. Primary theme 3, recommendations on PPIE for AI in health care, had 2 subthemes: experience, empowerment, and raising awareness; and acknowledging and supporting diversity in PPIE.
Journal Article