Catalogue Search | MBRL
Explore the vast range of titles available.
527 result(s) for "Generative artificial intelligence impact"
Generative artificial intelligence in manufacturing: opportunities for actualizing Industry 5.0 sustainability goals
by Vilkas, Mantas; Amran, Azlan; Iranmanesh, Mohammad
in Advanced manufacturing technologies; Artificial intelligence; Collaboration
2024
Purpose
This study offers practical insights into how generative artificial intelligence (AI) can enhance responsible manufacturing within the context of Industry 5.0. It explores how manufacturers can strategically maximize the potential benefits of generative AI through a synergistic approach.
Design/methodology/approach
The study developed a strategic roadmap by employing a mixed qualitative-quantitative research method involving case studies, interviews and interpretive structural modeling (ISM). The roadmap visualizes and elucidates the mechanisms through which generative AI can contribute to advancing the sustainability goals of Industry 5.0.
Findings
Generative AI has demonstrated the capability to promote various sustainability objectives within Industry 5.0 through ten distinct functions. These functions address multiple facets of manufacturing, ranging from providing data-driven production insights to enhancing the resilience of manufacturing operations.
Practical implications
Although each identified generative AI function independently contributes to responsible manufacturing under Industry 5.0 and can be leveraged on its own, the functions reinforce one another when employed systematically in a specific order. Manufacturers are advised to exploit these complementarities to maximize the benefits.
Originality/value
This study is among the first to provide practical insights into how generative AI enhances the sustainability performance of manufacturers within the Industry 5.0 framework. The proposed strategic roadmap suggests prioritization orders, guiding manufacturers in deciding where and for what purpose to integrate generative AI.
Journal Article
Examining Science Education in ChatGPT: An Exploratory Study of Generative Artificial Intelligence
2023
The advent of generative artificial intelligence (AI) offers transformative potential in the field of education. The study explores three main areas: (1) How did ChatGPT answer questions related to science education? (2) How could educators utilise ChatGPT in their science pedagogy? and (3) How has ChatGPT been utilised in this study, and what are my reflections about its use as a research tool? This exploratory research applies a self-study methodology to investigate the technology. Impressively, ChatGPT's output often aligned with key themes in the research. However, as it currently stands, ChatGPT runs the risk of positioning itself as the ultimate epistemic authority, where a single truth is assumed without proper grounding in evidence or sufficient qualification. Key ethical concerns associated with AI include its potential environmental impact, issues related to content moderation, and the risk of copyright infringement. It is important for educators to model responsible use of ChatGPT, prioritise critical thinking, and be clear about expectations. ChatGPT is likely to be a useful tool for educators designing science units, rubrics, and quizzes. Educators should critically evaluate any AI-generated resource and adapt it to their specific teaching contexts. ChatGPT was used as a research tool in this study to assist with editing and to experiment with making the research narrative clearer. The paper is intended to act as a catalyst for a broader conversation about the use of generative AI in science education.
Journal Article
Generative artificial intelligence, human creativity, and art
2024
Abstract
Recent artificial intelligence (AI) tools have demonstrated the ability to produce outputs traditionally considered creative. One such system is text-to-image generative AI (e.g. Midjourney, Stable Diffusion, DALL-E), which automates humans’ artistic execution to generate digital artworks. Utilizing a dataset of over 4 million artworks from more than 50,000 unique users, our research shows that over time, text-to-image AI significantly enhances human creative productivity by 25% and increases the value as measured by the likelihood of receiving a favorite per view by 50%. While peak artwork Content Novelty, defined as focal subject matter and relations, increases over time, average Content Novelty declines, suggesting an expanding but inefficient idea space. Additionally, there is a consistent reduction in both peak and average Visual Novelty, captured by pixel-level stylistic elements. Importantly, AI-assisted artists who can successfully explore more novel ideas, regardless of their prior originality, may produce artworks that their peers evaluate more favorably. Lastly, AI adoption decreased value capture (favorites earned) concentration among adopters. The results suggest that ideation and filtering are likely necessary skills in the text-to-image process, thus giving rise to “generative synesthesia”—the harmonious blending of human exploration and AI exploitation to discover new creative workflows.
Journal Article
The importance of transparency: Declaring the use of generative artificial intelligence (AI) in academic writing
by Tang, Arthur; Tam, Wilson; Li, Kin‐Kit
in Academic writing; Artificial Intelligence; Authorship
2024
The integration of generative artificial intelligence (AI) into academic research writing has revolutionized the field, offering powerful tools like ChatGPT and Bard to aid researchers in content generation and idea enhancement. We explore the current state of transparency regarding generative AI use in nursing academic research journals, emphasizing the need for authors to explicitly declare the use of generative AI in the manuscript. Out of 125 nursing studies journals, 37.6% required explicit statements about generative AI use in their author guidelines. No significant differences in impact factor or journal category were found between journals with and without such a requirement. A similar evaluation of journals in the medicine, general and internal category showed a lower percentage (14.5%) requiring disclosure of generative AI usage. Declaring generative AI tool usage is crucial for maintaining transparency and credibility in academic writing. Additionally, extending the requirement for AI usage declarations to journal reviewers can enhance the quality of peer review and combat predatory journals in the academic publishing landscape. Our study highlights the need for active participation from nursing researchers in discussions surrounding the standardization of generative AI declaration in academic research writing.
Journal Article
Factuality challenges in the era of large language models and opportunities for fact-checking
by Ciampaglia, Giovanni Luca; Chakraborty, Tanmoy; DiResta, Renee
in Access to information
2024
The emergence of tools based on large language models (LLMs), such as OpenAI’s ChatGPT and Google’s Gemini, has garnered immense public attention owing to their advanced natural language generation capabilities. These remarkably natural-sounding tools have the potential to be highly useful for various tasks. However, they also tend to produce false, erroneous or misleading content—commonly referred to as hallucinations. Moreover, LLMs can be misused to generate convincing, yet false, content and profiles on a large scale, posing a substantial societal challenge by potentially deceiving users and spreading inaccurate information. This makes fact-checking increasingly important. Despite their issues with factual accuracy, LLMs have shown proficiency in various subtasks that support fact-checking, which is essential to ensure factually accurate responses. In light of these concerns, we explore issues related to factuality in LLMs and their impact on fact-checking. We identify key challenges, imminent threats and possible solutions to these factuality issues. We also thoroughly examine these challenges, existing solutions and potential prospects for fact-checking. By analysing the factuality constraints within LLMs and their impact on fact-checking, we aim to contribute to a path towards maintaining accuracy at a time of confluence of generative artificial intelligence and misinformation.
Large language models (LLMs) present challenges, including a tendency to produce false or misleading content and the potential to create misinformation or disinformation. Augenstein and colleagues explore issues related to factuality in LLMs and their impact on fact-checking.
Journal Article
How will AI text generation and processing impact sustainability reporting? Critical analysis, a conceptual framework and avenues for future research
by Dimes, Ruth; Molinari, Matteo; de Villiers, Charl
in Accounting; Annual reports; Artificial intelligence
2024
Purpose
The ability of generative artificial intelligence (AI) tools such as ChatGPT to produce convincing, human-like text has major implications for the future of corporate reporting, including sustainability reporting. As the importance of sustainability reporting continues to grow, this study aims to critically analyse the benefits and pitfalls of automated text generation and processing.
Design/methodology/approach
This study develops a conceptual framework to delineate the field, assess the implications and form the basis for the generation of research questions. This study uses Alvesson and Deetz’s critical framework, considering insight (a review of literature and practice in the field), critique (consideration of the influences on the production and use of non-financial information and the implications for assurers of such information) and transformative redefinition (considering the implications of generative AI for sustainability reporting and proposing a research agenda).
Findings
This study highlights the implications of generative AI for sustainability accounting, reporting, assurance and report usage, including the risk of AI facilitating greenwashing, and the importance of more research on the use of AI for these matters.
Practical implications
The paper highlights to stakeholders the implications of AI for all aspects of sustainability reporting, including accounting, reporting, assurance and usage of reports.
Social implications
The implications of AI need to be understood by society at large; this paper facilitates that understanding.
Originality/value
This study critically analyses the potential use of AI for sustainability reporting, constructs a conceptual framework to delineate the field and develops a research agenda.
Journal Article
How to Bell the Cat? A Theoretical Review of Generative Artificial Intelligence towards Digital Disruption in All Walks of Life
by Mondal, Subhra; Vrana, Vasiliki G.; Das, Subhankar
in Artificial intelligence; Bard AI; business
2023
Generative Artificial Intelligence (GAI) has brought revolutionary changes to the world, enabling businesses to create new experiences by combining virtual and physical worlds. As the use of GAI grows along with the Metaverse, academics, researchers, and industry communities are exploring its possibilities. From ChatGPT by OpenAI to Bard AI by Google, GAI is a leading technology in physical and virtual business platforms. This paper focuses on GAI's economic and societal impact and the challenges it poses. Businesses must rethink their operations and strategies to create hybrid physical and virtual experiences using GAI. This study proposes a framework that can help business managers develop effective strategies to enhance their operations. It analyzes early applications of GAI in multiple sectors to inform the development of future customer solutions, and it explores how GAI can help businesses create new value propositions and customer experiences, as well as the possibilities it opens for digital communication and information technology. A research agenda is proposed for developing GAI for business management to enhance organizational efficiency. The results are intended to stimulate a constructive conversation about the potential of GAI across business sectors to improve customer experience.
Journal Article
Generative AI’s environmental costs are soaring — and mostly secret
2024
First-of-its-kind US bill would address the environmental costs of the technology, but there’s a long way to go.
Journal Article
Generative AI models should include detection mechanisms as a condition for public release
by Chatila, Raja; Pedreschi, Dino; Eyers, David
in Artificial intelligence; Generative artificial intelligence; Legislation
2023
The new wave of 'foundation models'—general-purpose generative AI models for producing text (e.g., ChatGPT) or images (e.g., MidJourney)—represents a dramatic advance in the state of the art for AI. But their use also introduces a range of new risks, which has prompted an ongoing conversation about possible regulatory mechanisms. Here we propose a specific principle that should be incorporated into legislation: any organization developing a foundation model intended for public use must demonstrate a reliable detection mechanism for the content it generates, as a condition of its public release. The detection mechanism should be made publicly available in a tool that allows users to query, for an arbitrary item of content, whether the item was generated (wholly or partly) by the model. In this paper, we argue that this requirement is technically feasible and would play an important role in reducing certain risks from new AI models in many domains. We also outline several options for the tool's design, and summarize points where further input from policymakers and researchers would be required.
Journal Article
Mapping the global evidence around the use of ChatGPT in higher education: A systematic scoping review
by Ahmad, Sohail; Bhutta, Sadia Muzaffar; Ansari, Aisha Naz
in Analysis; Book publishing; Chatbots
2024
The recent development of AI chatbots, specifically ChatGPT, has attracted dramatic attention from users, as evidenced by ongoing discussion among the education fraternity. We argue that before drawing any conclusions, it is important to understand how ChatGPT is being used in higher education across the globe. This paper makes a significant contribution by systematically reviewing the global literature on the use of ChatGPT in higher education using PRISMA guidelines. We included 69 studies in the analysis based on inclusion and exclusion criteria. We present the scope of the published literature in three aspects: (i) contextual, (ii) methodological, and (iii) disciplinary. Most of the studies were carried out in high-income countries (HICs) (n = 53; 77%) and represent the field of higher education broadly (n = 37; 54%) without specifying a discipline, while only a few were based on empirical data (n = 19; 27%). The findings from the included studies reveal that ChatGPT serves as a convenient tool to assist teachers, students, and researchers in various tasks. While the specific uses vary, the underlying motivation remains consistent: seeking personal benefits and reducing academic burdens. Teachers use it for personal and professional learning and resource generation, while students use it as a personal tutor for various learning purposes. However, concerns related to accuracy, reliability, academic integrity, and potential negative effects on cognitive and social development were consistently highlighted in many studies. To address these concerns, we propose a comprehensive framework for universities, along with directions for future research in higher education, as an optimal response.
Journal Article