45 result(s) for "Google Gemini"
Large language models as assistance for glaucoma surgical cases: a ChatGPT vs. Google Gemini comparison
Purpose: The aim of this study was to assess the capability of ChatGPT-4 and Google Gemini to analyze detailed glaucoma case descriptions and suggest an accurate surgical plan. Methods: Retrospective analysis of 60 medical records of surgical glaucoma cases, divided into "ordinary" (n = 40) and "challenging" (n = 20) scenarios. Case descriptions were entered into the ChatGPT and Bard (now Gemini) interfaces with the question "What kind of surgery would you perform?", repeated three times to analyze the consistency of the answers. After collecting the answers, we assessed the level of agreement with the unified opinion of three glaucoma surgeons. We also graded the quality of the responses from 1 (poor quality) to 5 (excellent quality) according to the Global Quality Score (GQS) and compared the results. Results: ChatGPT's surgical choices were consistent with those of the glaucoma specialists in 35/60 cases (58%), compared with 19/60 (32%) for Gemini (p = 0.0001). Gemini was unable to complete the task in 16 cases (27%). Trabeculectomy was the most frequent choice for both chatbots (53% and 50% for ChatGPT and Gemini, respectively). In "challenging" cases, ChatGPT agreed with the specialists in 9/20 choices (45%), outperforming Google Gemini (4/20, 20%). Overall, GQS scores were 3.5 ± 1.2 for ChatGPT and 2.1 ± 1.5 for Gemini (p = 0.002). The difference was even more marked for the "challenging" cases alone (Gemini 1.5 ± 1.4 vs. ChatGPT 3.0 ± 1.5, p = 0.001). Conclusion: ChatGPT-4 analyzed glaucoma surgical cases well, whether ordinary or challenging. Google Gemini, by contrast, showed strong limitations in this setting, with high rates of imprecise or missing answers.
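The abstract reports per-model agreement counts with the specialist consensus (35/60 vs. 19/60) and a p-value, but does not name the statistical test. Below is a minimal sketch of one standard option for this kind of 2x2 comparison, Fisher's exact test; the choice of test is an assumption, not the authors' stated method.

```python
# Illustrative sketch (not the authors' code): compare the two chatbots'
# agreement rates with the specialists using the counts from the abstract.
from scipy.stats import fisher_exact

# Rows: ChatGPT, Gemini; columns: agreed with specialists, did not agree.
table = [[35, 60 - 35],
         [19, 60 - 19]]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```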
Google Gemini as a next generation AI educational tool: a review of emerging educational technology
This emerging technology report discusses Google Gemini as a multimodal generative AI tool and presents its potential to reshape future educational technology. It introduces Gemini and its features, including its versatility in processing text, image, audio, and video inputs and generating diverse content types. The report reviews recent empirical studies, the technology in practice, and the relationship between Gemini and the educational landscape. It further explores Gemini's relevance for future educational endeavors and practical applications in emerging technologies, and it discusses the significant challenges and ethical considerations that must be addressed to ensure its responsible and effective integration into education.
A comparative study of Google Gemini and ChatGPT in enhancing English language learning for EFL learners: A case study of the English research writing course
This study compared the effectiveness of Google Gemini and ChatGPT for enhancing the academic writing skills of English as a Foreign Language (EFL) learners in an eight-week research writing course at two Thai universities. Using a quasi-experimental approach, 80 third-year learners from a private university (HCU) and a public university (NRRU) were assigned to use either ChatGPT or Google Gemini for writing tasks. Pre- and post-intervention writing assessments, Likert-scale questionnaires inspired by the Technology Acceptance Model, and semi-structured interviews were employed to measure writing gains, learner attitudes, and learning engagement. The results indicate that both tools effectively improved linguistic accuracy and essay structure, and that Gemini outperformed ChatGPT in multimodal feedback and source integration, particularly among rural learners (p < .05, Cohen's d = 0.62). However, overdependence and plagiarism risk were cited as concerns. Qualitative findings suggest increased learner confidence and motivation, but also point to the need to adapt implementation in rural areas because of cultural and technological constraints. The study underscores AI's promise in EFL writing instruction while emphasizing instructor guidance and AI literacy in Thai contexts. Pedagogical implications include hybrid AI models for low-resource settings, with calls for further longitudinal research.
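The abstract cites an effect size (Cohen's d = 0.62) from the pre/post writing assessments. As a brief aside, the sketch below shows how such an effect size is conventionally computed from two groups' gain scores; the variable names and sample data are illustrative assumptions, not the study's data.

```python
# Illustrative sketch: Cohen's d with a pooled standard deviation.
import numpy as np

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Standardized mean difference between two independent groups."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * group_a.var(ddof=1) +
                  (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2)
    return (group_a.mean() - group_b.mean()) / np.sqrt(pooled_var)

# Hypothetical post-minus-pre writing score gains for each tool's group.
gemini_gains = np.array([12, 15, 9, 14, 11, 13])
chatgpt_gains = np.array([8, 10, 7, 11, 9, 8])
print(f"Cohen's d = {cohens_d(gemini_gains, chatgpt_gains):.2f}")
```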
Discordance in Drug–Drug Interaction Alerts for Antidotes: Comparative Analysis of Electronic Databases and Interpretive Insights from AI Tools
Thitipon Yaowaluk,1 Supawit Tangpanithandee,2 Pinnakarn Techapichetvanich,2 Phisit Khemawoot2
1 Drug Information Service and Siriraj Poison Control Center, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand; 2 Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Samut Prakan, Thailand
Correspondence: Phisit Khemawoot, Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bang Phli, Samut Prakan, 10540, Thailand, Tel/Fax +66 28395161, Email phisit.khe@mahidol.ac.th
Background: Drug-drug interactions (DDIs) are a critical clinical concern, especially when multiple medications, including antidotes, are administered. Despite their lifesaving potential, antidotes may interact harmfully with other drugs, yet few studies have specifically investigated DDIs involving antidotes.
Purpose: This study evaluated potential DDIs between commonly prescribed medications and antidotes using two widely used electronic databases and artificial intelligence (AI) tools, and assessed the concordance between these platforms.
Materials and Methods: A descriptive analysis was conducted using 50 frequently prescribed medications from the ClinCalc DrugStats Database (2022) and the major antidotes reported by the California Poison Control Center. Potential interactions were assessed with Micromedex and WebMD as electronic databases, and with ChatGPT and Google Gemini as representative AI tools. DDI severity levels and documentation quality were recorded, and database/AI agreement was analyzed using the kappa statistic.
Results: Overall, 154 potential DDI pairs were identified by the databases (Micromedex: 100; WebMD: 118). Nineteen DDIs were classified as severe by both databases. The overall agreement between the databases was poor (kappa = −0.126, p = 0.008), indicating significant discrepancies in DDI severity classification. The main mechanisms associated with severe DDIs were serotonin syndrome and QT prolongation, with methylene blue and psychiatric medications as major contributors. When the 19 severe DDIs were evaluated, the AI models generally aligned with the more severe rating in cases of database discordance, often with severity-oriented justifications, reflecting a conservative approach to resolving discordant DDI information.
Conclusion: Numerous potential DDIs between prescribed drugs and antidotes were identified, with notable inconsistencies between the two databases and the AI tools. This underscores the need to harmonize DDI evaluation criteria across drug information systems and to promote clinicians' awareness of inter-database variability. Incorporating comprehensive DDI screening and shared decision-making is essential for safe and effective patient care.
Keywords: antidote, drug-drug interaction, Micromedex, WebMD, ChatGPT, Google Gemini
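The kappa statistic is the abstract's core agreement measure. A minimal sketch of how such an agreement check can be run on categorical severity labels is shown below; the label set and sample data are illustrative assumptions, not the study's records.

```python
# Illustrative sketch: Cohen's kappa between two databases' severity
# ratings for the same DDI pairs, measuring agreement beyond chance.
from sklearn.metrics import cohen_kappa_score

micromedex = ["severe", "moderate", "severe", "minor", "moderate", "severe"]
webmd      = ["moderate", "severe", "severe", "moderate", "minor", "minor"]

kappa = cohen_kappa_score(micromedex, webmd)
print(f"Cohen's kappa = {kappa:.3f}")  # negative values mean worse-than-chance agreement
```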
INVESTIGATION AND COMPARISON OF CHATGPT AND GOOGLE GEMINI EFFICIENCY IN EDUCATION, DESIGN AND ENGINEERING ANALYSIS
In this paper, the potential use of ChatGPT and Google Gemini in engineering education, design, and analysis is investigated and compared through a multiple case study method, drawing also on the results of other studies. For several different engineering cases, it is illustrated how students and engineers can use these chatbots. Their capabilities, limitations, advantages, and disadvantages in engineering research and education are discussed, and practical strategies for their effective use by engineering students, professors, and designers are presented. The results show that supplying sufficient and clear input information plays a decisive role in obtaining an appropriate response. The two chatbots are not very successful with complex engineering problems, and in some cases they defer to expert consultation or recommend using other engineering software. On the other hand, for simple computational engineering problems, ChatGPT usually does not give correct answers, unlike Google Gemini. Overall, these chatbots can significantly facilitate the work of engineers in design, calculations, material selection, and other tasks, and can also provide significant assistance in education.
AI Versus Human Lexicographers: A Comparative Analysis of Translation Strategies for Arabic Collocations and Cultural References
Heliel (2000) identified a set of Arabic collocations and cultural references that have consistently challenged human lexicographers, many of whom opted simply to omit them from their compilations, including the widely used Al-Mawrid Arabic-English dictionary. Deploying Pedersen's (2011) taxonomy of translation strategies, this study interpretatively evaluated 150 English translations of this set. The translations were mined from three Arabic-English bilingual dictionaries compiled by human lexicographers (Baalbaki, 2001; Abu-Ssaydeh, 2013; Hafiz, 2004), and two further sets were generated by the two leading artificial intelligence (AI)-powered systems, ChatGPT and Google Gemini. The findings reveal a striking contrast in strategy use: while human lexicographers frequently omitted difficult phrases, the AI tools provided complete translations for all expressions (100%). Despite their relatively recent development and launch, the two AI systems exhibited lexicographical capabilities comparable to human lexicographers, particularly in adopting target-oriented strategies. This suggests that such tools could complement traditional lexicography by enhancing coverage, efficiency, and contextual richness. A hybrid "H-AI" approach may well be the way forward.
Comparative performance of artificial intelligence models in rheumatology board-level questions: evaluating Google Gemini and ChatGPT-4o
Objectives: This study evaluates the performance of two AI models, ChatGPT-4o and Google Gemini, in answering rheumatology board-level questions, comparing their effectiveness, reliability, and applicability in clinical practice. Method: A cross-sectional study was conducted using 420 rheumatology questions from the BoardVitals question bank, excluding 27 questions containing visual data. Both AI models categorized the questions by difficulty (easy, medium, hard) and answered them. The reliability of the answers was assessed by asking each question a second time. The accuracy, reliability, and difficulty categorization of the models' responses were analyzed. Results: ChatGPT-4o answered 86.9% of the questions correctly, significantly outperforming Google Gemini's 60.2% accuracy (p < 0.001). When the questions were asked a second time, the success rate was 86.7% for ChatGPT-4o and 60.5% for Google Gemini. Both models mainly categorized questions as medium difficulty. ChatGPT-4o showed higher accuracy in several rheumatology subfields, notably Basic and Clinical Science (p = 0.028), Osteoarthritis (p = 0.023), and Rheumatoid Arthritis (p < 0.001). Conclusions: ChatGPT-4o significantly outperformed Google Gemini on rheumatology board-level questions, demonstrating its strength in situations requiring complex, specialized knowledge of rheumatological diseases. The performance of both models decreased as question difficulty increased. This study demonstrates the potential of AI in clinical applications and suggests that its use as a tool to assist clinicians may improve healthcare efficiency in the future. Future studies using real clinical scenarios and real board questions are recommended.
Key Points:
• ChatGPT-4o significantly outperformed Google Gemini in answering rheumatology board-level questions, achieving 86.9% accuracy compared with 60.2%.
• For both AI models, the correct answer rate decreased as question difficulty increased.
• AI models show potential as tools to assist clinicians and improve healthcare efficiency.
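The headline comparison here is two accuracy proportions over the same question set. The sketch below shows one common way to test such a difference, an unpaired two-proportion z-test; since both models answered the same questions, a paired test such as McNemar's would also be defensible, so the test choice and the reconstructed counts are assumptions.

```python
# Illustrative sketch (not the study's code): compare overall accuracy
# of the two models using the percentages reported in the abstract.
from statsmodels.stats.proportion import proportions_ztest

n_questions = 420
correct = [round(0.869 * n_questions), round(0.602 * n_questions)]  # ChatGPT-4o, Gemini

z_stat, p_value = proportions_ztest(count=correct, nobs=[n_questions] * 2)
print(f"z = {z_stat:.2f}, p = {p_value:.2e}")
```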
Evaluation of the accuracy and readability of ChatGPT-4 and Google Gemini in providing information on retinal detachment: a multicenter expert comparative study
Background: Large language models (LLMs) such as ChatGPT-4 and Google Gemini show potential for patient health education, but concerns about their accuracy require careful evaluation. This study evaluates the readability and accuracy of ChatGPT-4 and Google Gemini in answering questions about retinal detachment. Methods: Comparative study analyzing responses from ChatGPT-4 and Google Gemini to 13 retinal detachment questions, categorized by difficulty level (D1, D2, D3). Masked responses were reviewed by ten vitreoretinal specialists and rated on correctness, errors, thematic accuracy, coherence, and overall quality. The analysis included the Flesch Reading Ease Score and word and sentence counts. Results: Both AI tools required college-level reading comprehension across all difficulty levels. Google Gemini was easier to understand (p = 0.03), while ChatGPT-4 provided more correct answers to the more difficult questions (p = 0.0005) with fewer serious errors. ChatGPT-4 scored highest on the most challenging questions, showing superior thematic accuracy (p = 0.003). ChatGPT-4 outperformed Google Gemini on 8 of 13 questions, with higher overall quality grades at the easiest (p = 0.03) and hardest (p = 0.0002) levels, although its grades declined as question difficulty increased. Conclusions: ChatGPT-4 and Google Gemini effectively address queries about retinal detachment, offering mostly accurate answers with few critical errors, though patients need a higher education level for comprehension. AI tools may contribute to improving medical care by quickly providing accurate and relevant healthcare information.
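The Flesch Reading Ease Score mentioned in the methods is a simple closed-form formula. Below is a minimal sketch of it; the syllable counter is a rough vowel-group heuristic assumption, whereas published studies typically rely on validated readability tools.

```python
# Illustrative sketch of the Flesch Reading Ease Score:
# FRES = 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count contiguous vowel groups (assumption)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

sample = "The retina can detach from the back of the eye. This needs urgent care."
print(f"FRES = {flesch_reading_ease(sample):.1f}")  # higher means easier to read
```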
OpenAI ChatGPT vs Google Gemini: A study of AI chatbots’ writing quality evaluation and plagiarism checking
This study explores the writing quality of two AI chatbots, OpenAI ChatGPT and Google Gemini. It assesses the quality of texts generated for five essay models with the T.E.R.A. software, focusing on ease of understanding, readability, and reading level as measured by the Flesch-Kincaid formula. Thirty essays were generated, 15 from each chatbot, and evaluated for plagiarism using two free detection tools (SmallSEOTools and Check-Plagiarism) and one paid tool (Turnitin). The findings revealed that both ChatGPT and Gemini performed well on word concreteness but showed weaknesses in narrativity. ChatGPT was stronger in referential and deep cohesion, while Gemini scored relatively higher in narrativity, syntactic simplicity, and word concreteness. However, a significant concern was the degree of plagiarism detected in texts from both tools, with ChatGPT's essays showing a higher likelihood of plagiarism than Gemini's. These findings highlight the potential limitations and risks of relying on AI-generated writing.
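For readers unfamiliar with how overlap-based plagiarism checks work in principle, the sketch below shows one simple measure, word n-gram Jaccard similarity. This is purely an illustrative assumption; it is not how SmallSEOTools, Check-Plagiarism, or Turnitin work internally.

```python
# Illustrative sketch: text overlap via word 3-gram Jaccard similarity.
def ngrams(text: str, n: int = 3) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a: str, b: str, n: int = 3) -> float:
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

essay = "Artificial intelligence is transforming how students write essays today."
source = "Artificial intelligence is transforming how teachers grade essays today."
print(f"3-gram Jaccard similarity = {jaccard_similarity(essay, source):.2f}")
```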
Solving Econometric Problems Using Generative Artificial Intelligence Models: A Comparative Analysis of ChatGPT and Gemini
The article presents a comprehensive study and comparative analysis of the capabilities of modern generative artificial intelligence models applied to practical tasks in econometric modeling. The study covers models of two architectural tiers: "advanced" versions with enhanced reasoning capabilities (Google Gemini 2.5 Pro and ChatGPT-5 Thinking + Study) and their optimized "light" counterparts (Google Gemini 2.5 Flash and the basic ChatGPT-5 model). The empirical basis of the study was real data from the Ukrainian residential real estate market: a representative sample of 100 properties including both quantitative and qualitative variables. The experimental methodology involved sequentially executing the full cycle of econometric research: preliminary data processing, exploratory analysis and visualization, construction of a multifactor linear regression model, diagnostics for multicollinearity and heteroscedasticity, calculation of elasticity indicators for economic interpretation, and testing the predictive capability of the model on a test sample. Results obtained with the generative models were verified against benchmark calculations performed manually in MS Excel. The experiment revealed a significant difference in the performance of the examined models. Pro/Thinking-class models (Gemini 2.5 Pro, ChatGPT-5 Thinking) demonstrated absolute mathematical accuracy, correctly calculating regression coefficients, the coefficient of determination, the F-statistic, and average and marginal efficiency indicators. In contrast, the basic and "light" versions (Gemini 2.5 Flash, ChatGPT-5) showed a tendency toward critical errors, including hallucinations in the form of fabricated data, loss of context when processing large datasets, and an inability to independently validate input information. A common weakness across all tested models was the qualitative classification of heteroscedasticity types, along with a tendency to ignore macro-level indicators in favor of micro-level analysis of individual variables. The study concludes that, at the current stage of development, generative artificial intelligence cannot fully replace humans; however, "advanced" models can be used effectively as an auxiliary tool for automating routine operations, writing code, and preliminary data processing, provided that a specialist verifies the results.
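The workflow this abstract describes (fit a multifactor regression, then check multicollinearity and heteroscedasticity) maps directly onto standard library calls. Below is a minimal sketch of that pipeline; the housing variables and synthetic data are illustrative assumptions, not the Ukrainian dataset used in the article.

```python
# Illustrative sketch: OLS regression with VIF (multicollinearity) and
# Breusch-Pagan (heteroscedasticity) diagnostics on synthetic housing data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
n = 100  # mirrors the sample size mentioned in the abstract
df = pd.DataFrame({
    "area_m2": rng.uniform(30, 120, n),
    "rooms": rng.integers(1, 5, n),
    "floor": rng.integers(1, 25, n),
})
df["price"] = 500 * df["area_m2"] + 2000 * df["rooms"] + rng.normal(0, 5000, n)

X = sm.add_constant(df[["area_m2", "rooms", "floor"]])
model = sm.OLS(df["price"], X).fit()

vifs = [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])]
bp_stat, bp_pvalue, _, _ = het_breuschpagan(model.resid, X)

print(model.params)
print("VIFs:", [f"{v:.2f}" for v in vifs])
print(f"Breusch-Pagan p = {bp_pvalue:.3f}")  # small p suggests heteroscedasticity
```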