58 result(s) for "Generative AI impact"
Generative artificial intelligence in manufacturing: opportunities for actualizing Industry 5.0 sustainability goals
Purpose: This study offers practical insights into how generative artificial intelligence (AI) can enhance responsible manufacturing within the context of Industry 5.0. It explores how manufacturers can strategically maximize the potential benefits of generative AI through a synergistic approach.
Design/methodology/approach: The study developed a strategic roadmap by employing a mixed qualitative-quantitative research method involving case studies, interviews and interpretive structural modeling (ISM). This roadmap visualizes and elucidates the mechanisms through which generative AI can contribute to advancing the sustainability goals of Industry 5.0.
Findings: Generative AI has demonstrated the capability to promote various sustainability objectives within Industry 5.0 through ten distinct functions. These multifaceted functions address multiple facets of manufacturing, ranging from providing data-driven production insights to enhancing the resilience of manufacturing operations.
Practical implications: While each identified generative AI function independently contributes to responsible manufacturing under Industry 5.0, and leveraging them individually is a viable strategy, the functions synergistically enhance each other when employed systematically in a specific order. Manufacturers are advised to leverage these functions strategically, drawing on their complementarities to maximize their benefits.
Originality/value: This study breaks new ground by providing early practical insights into how generative AI enhances the sustainability performance of manufacturers within the Industry 5.0 framework. The proposed strategic roadmap suggests prioritization orders, guiding manufacturers' decisions about where and for what purpose to integrate generative AI.
Generative artificial intelligence, human creativity, and art
Recent artificial intelligence (AI) tools have demonstrated the ability to produce outputs traditionally considered creative. One such system is text-to-image generative AI (e.g. Midjourney, Stable Diffusion, DALL-E), which automates humans’ artistic execution to generate digital artworks. Utilizing a dataset of over 4 million artworks from more than 50,000 unique users, our research shows that over time, text-to-image AI significantly enhances human creative productivity by 25% and increases artwork value, as measured by the likelihood of receiving a favorite per view, by 50%. While peak artwork Content Novelty, defined as focal subject matter and relations, increases over time, average Content Novelty declines, suggesting an expanding but inefficient idea space. Additionally, there is a consistent reduction in both peak and average Visual Novelty, captured by pixel-level stylistic elements. Importantly, AI-assisted artists who successfully explore more novel ideas, regardless of their prior originality, may produce artworks that their peers evaluate more favorably. Lastly, AI adoption decreased the concentration of value capture (favorites earned) among adopters. The results suggest that ideation and filtering are likely necessary skills in the text-to-image process, thus giving rise to “generative synesthesia”: the harmonious blending of human exploration and AI exploitation to discover new creative workflows.
Generative AI models should include detection mechanisms as a condition for public release
The new wave of ‘foundation models’—general-purpose generative AI models for producing text (e.g., ChatGPT) or images (e.g., MidJourney)—represents a dramatic advance in the state of the art for AI. But their use also introduces a range of new risks, which has prompted an ongoing conversation about possible regulatory mechanisms. Here we propose a specific principle that should be incorporated into legislation: that any organization developing a foundation model intended for public use must demonstrate a reliable detection mechanism for the content it generates, as a condition of its public release. The detection mechanism should be made publicly available in a tool that allows users to query, for an arbitrary item of content, whether the item was generated (wholly or partly) by the model. In this paper, we argue that this requirement is technically feasible and would play an important role in reducing certain risks from new AI models in many domains. We also outline a number of options for the tool’s design, and summarize a number of points where further input from policymakers and researchers would be required.
How to Bell the Cat? A Theoretical Review of Generative Artificial Intelligence towards Digital Disruption in All Walks of Life
Generative Artificial Intelligence (GAI) has brought revolutionary changes to the world, enabling businesses to create new experiences by combining virtual and physical worlds. As the use of GAI grows along with the Metaverse, it is explored by academics, researchers, and industry communities for its endless possibilities. From ChatGPT by OpenAI to Bard AI by Google, GAI is a leading technology in physical and virtual business platforms. This paper focuses on GAI’s economic and societal impact and the challenges it poses. Businesses must rethink their operations and strategies to create hybrid physical and virtual experiences using GAI. This study proposes a framework that can help business managers develop effective strategies to enhance their operations. It analyzes the initial applications of GAI in multiple sectors to promote the development of future customer solutions and explores how GAI can help businesses create new value propositions and experiences for their customers, and the possibilities of digital communication and information technology. A research agenda is proposed for developing GAI for business management to enhance organizational efficiency. The results highlight a healthy conversation on the potential of GAI in various business sectors to improve customer experience.
From Google Gemini to OpenAI Q* (Q-Star): A Survey on Reshaping the Generative Artificial Intelligence (AI) Research Landscape
This comprehensive survey explores the evolving landscape of generative Artificial Intelligence (AI), with a specific focus on recent technological breakthroughs and the gathering advancements toward possible Artificial General Intelligence (AGI). It critically examines the current state and future trajectory of generative AI, exploring how innovations in developing actionable and multimodal AI agents with the ability to scale their “thinking” in solving complex reasoning tasks are reshaping research priorities and applications across various domains, and it also offers an impact analysis on the generative AI research taxonomy. This work assesses the computational challenges, scalability, and real-world implications of these technologies while highlighting their potential to drive significant progress in fields like healthcare, finance, and education. Our study also addresses the emerging academic challenges posed by the proliferation of both AI-themed and AI-generated preprints, examining their impact on the peer-review process and scholarly communication. The study highlights the importance of incorporating ethical and human-centric methods in AI development, ensuring alignment with societal norms and welfare, and outlines a strategy for future AI research that focuses on a balanced and conscientious use of generative AI as its capabilities continue to scale.
ChatGPT: The End of Online Exam Integrity?
This study addresses the significant challenge posed by the use of Large Language Models (LLMs) such as ChatGPT on the integrity of online examinations, focusing on how these models can undermine academic honesty by demonstrating their latent and advanced reasoning capabilities. An iterative self-reflective strategy was developed for invoking critical thinking and higher-order reasoning in LLMs when responding to complex multimodal exam questions involving both visual and textual data. The proposed strategy was demonstrated and evaluated on real exam questions by subject experts and the performance of ChatGPT (GPT-4) with vision was estimated on an additional dataset of 600 text descriptions of multimodal exam questions. The results indicate that the proposed self-reflective strategy can invoke latent multi-hop reasoning capabilities within LLMs, effectively steering them towards correct answers by integrating critical thinking from each modality into the final response. Meanwhile, ChatGPT demonstrated considerable proficiency in being able to answer multimodal exam questions across 12 subjects. These findings challenge prior assertions about the limitations of LLMs in multimodal reasoning and emphasise the need for robust online exam security measures such as advanced proctoring systems and more sophisticated multimodal exam questions to mitigate potential academic misconduct enabled by AI technologies.
How will AI text generation and processing impact sustainability reporting? Critical analysis, a conceptual framework and avenues for future research
Purpose: The ability of generative artificial intelligence (AI) tools such as ChatGPT to produce convincing, human-like text has major implications for the future of corporate reporting, including sustainability reporting. As the importance of sustainability reporting continues to grow, this study aims to critically analyse the benefits and pitfalls of automated text generation and processing.
Design/methodology/approach: This study develops a conceptual framework to delineate the field, assess the implications and form the basis for the generation of research questions. It uses Alvesson and Deetz’s critical framework, considering insight (a review of literature and practice in the field), critique (consideration of the influences on the production and use of non-financial information and the implications for assurers of such information) and transformative redefinition (considering the implications of generative AI for sustainability reporting and proposing a research agenda).
Findings: This study highlights the implications of generative AI for sustainability accounting, reporting, assurance and report usage, including the risk of AI facilitating greenwashing, and the importance of more research on these matters.
Practical implications: The paper highlights to stakeholders the implications of AI for all aspects of sustainability reporting, including accounting, reporting, assurance and usage of reports.
Social implications: The implications of AI need to be understood in society, which this paper facilitates.
Originality/value: This study critically analyses the potential use of AI for sustainability reporting, constructs a conceptual framework to delineate the field and develops a research agenda.
Green MLOps to Green GenOps: An Empirical Study of Energy Consumption in Discriminative and Generative AI Operations
This study presents an empirical investigation into the energy consumption of discriminative and generative AI models within real-world MLOps pipelines. For discriminative models, we examine various architectures and hyperparameters during training and inference and identify energy-efficient practices. For generative AI, large language models (LLMs) are assessed, with a focus primarily on energy consumption across different model sizes and varying service requests. Our study employs software-based power measurements, ensuring ease of replication across diverse configurations, models, and datasets. We analyse multiple models and hardware setups to uncover correlations among various metrics, identifying key contributors to energy consumption. The results indicate that, for discriminative models, optimising architectures, hyperparameters, and hardware can significantly reduce energy consumption without sacrificing performance. For LLMs, energy efficiency depends on balancing model size, reasoning complexity, and request-handling capacity, as larger models do not necessarily consume more energy when utilisation remains low. This analysis provides practical guidelines for designing green and sustainable ML operations, emphasising energy consumption and carbon-footprint reductions while maintaining performance. This paper can serve as a benchmark for accurately estimating total energy use across different types of AI models.
Transformative potentials of generative artificial intelligence: Should international entrepreneurial enterprises adopt GEN.AI?
This article presents a brief synopsis followed by key concepts and keywords to give the reader an overview of the article. Following a regular abstract and significant keywords, the “Introduction” section discusses four topics related to, and influential in, iSMEs’ global competitiveness and competition. The “Further developments” section explores the internationalization of artificial intelligence (AI) in general and also examines both the interaction and potential impact of generative AI (GEN.AI) on internationalized SMEs (iSMEs) in particular. The “Literature review” section examines the two critical topics of iSMEs’ openness to, and ability to afford, AI’s costs from the perspectives of three entrepreneurship theories: Causation, Effectuation and Bricolage. The “Discussion and implications” section follows, and the “Conclusion” section appears at the end.
Bridging knowledge gap: the contribution of employees’ awareness of AI cyber risks comprehensive program to reducing emerging AI digital threats
Purpose: In the modern digital realm, while artificial intelligence (AI) technologies pave the way for unprecedented opportunities, they also give rise to intricate cybersecurity issues, including threats like deepfakes and unanticipated AI-induced risks. This study aims to address the insufficient exploration of AI cybersecurity awareness in the current literature.
Design/methodology/approach: Using in-depth surveys across varied sectors (N = 150), the authors analyzed the correlation between the absence of AI risk content in organizational cybersecurity awareness programs and its impact on employee awareness.
Findings: A significant AI-risk knowledge void was observed among users: despite frequent interaction with AI tools, a majority remain unaware of specialized AI threats. A pronounced knowledge gap existed between those who are trained in AI risks and those who are not, and it was more apparent among non-technical personnel and in sectors managing sensitive information.
Research limitations/implications: This study paves the way for thorough research, allowing awareness initiatives to be refined and tailored to distinct industries.
Practical implications: It is imperative for organizations to emphasize AI risk training, especially among non-technical staff. Industries handling sensitive data should be at the forefront.
Social implications: Ensuring employees are aware of AI-related threats can lead to a safer digital environment for both organizations and society at large, given the pervasive nature of AI in everyday life.
Originality/value: Unlike most papers about AI risks, the authors do not rely on subjective data from second-hand papers, but use objective, authentic data from their own up-to-date anonymous survey.