Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
97 result(s) for "Academic writing Data processing."
Writing history in the digital age
\"Writing History in the Digital Age began as a one-month experiment in October 2010, featuring chapter-length essays by a wide array of scholars with the goal of rethinking traditional practices of researching, writing, and publishing, and the broader implications of digital technology for the historical profession. The essays and discussion topics were posted on a WordPress platform with a special plug-in that allowed readers to add paragraph-level comments in the margins, transforming the work into socially networked texts. This first installment drew an enthusiastic audience, over 50 comments on the texts, and over 1,000 unique visitors to the site from across the globe, with many who stayed on the site for a significant period of time to read the work. To facilitate this new volume, Jack Dougherty and Kristen Nawrotzki designed a born-digital, open-access platform to capture reader comments on drafts and shape the book as it developed. Following a period of open peer review and discussion, the finished product now presents 20 essays from a wide array of notable scholars, each examining (and then breaking apart and reexamining) how digital and emergent technologies have changed the ways that historians think, teach, author, and publish\"-- Provided by publisher.
Passions, Pedagogies, and 21st Century Technologies
by
Hawisher, Gail
in
Academic writing
,
Academic writing -- Study and teaching -- Data processing
,
Academic writing -- Study and teaching -- Technological innovations
1999
Gail Hawisher and Cynthia Selfe created a volume that set the agenda in the field of computers and composition scholarship for a decade. The technology changes that scholars of composition studies faced as the new century opened couldn't have been more deserving of passionate study. While we have always used technologies (e.g., the pencil) to communicate with each other, the electronic technologies we now use have changed the world in ways that we have yet to identify or appreciate fully. Likewise, the study of language and literate exchange, even our understanding of terms like literacy, text, and visual, has changed beyond recognition, challenging even our capacity to articulate them. As Hawisher, Selfe, and their contributors engage these challenges and explore their importance, they "find themselves engaged in the messy, contradictory, and fascinating work of understanding how to live in a new world and a new century." The result is a broad, deep, and rewarding anthology of work still among the standard works of computers and composition study.
Evaluation of Large Language Model Performance and Reliability for Citations and References in Scholarly Writing: Cross-Disciplinary Study
by
Lu, Caide
,
Mugaanyi, Joseph
,
Cheng, Sumei
in
Analysis
,
Artificial Intelligence
,
Computational linguistics
2024
Large language models (LLMs) have gained prominence since the release of ChatGPT in late 2022.
The aim of this study was to assess the accuracy of citations and references generated by ChatGPT (GPT-3.5) in two distinct academic domains: the natural sciences and humanities.
Two researchers independently prompted ChatGPT to write an introduction section for a manuscript and include citations; they then evaluated the accuracy of the citations and Digital Object Identifiers (DOIs). Results were compared between the two disciplines.
Ten topics were included: 5 in the natural sciences and 5 in the humanities. A total of 102 citations were generated, with 55 in the natural sciences and 47 in the humanities. Among these, 40 citations (72.7%) in the natural sciences and 36 citations (76.6%) in the humanities were confirmed to exist (P=.42). Significant disparities were found in DOI presence between the natural sciences (39/55, 70.9%) and the humanities (18/47, 38.3%), along with significant differences in DOI accuracy between the two disciplines (18/55, 32.7% vs 4/47, 8.5%). DOI hallucination was more prevalent in the humanities (42/47, 89.4%). The Levenshtein distance was significantly higher in the humanities than in the natural sciences, reflecting the lower DOI accuracy.
ChatGPT's performance in generating citations and references varies across disciplines. Differences in DOI standards and disciplinary nuances contribute to performance variations. Researchers should consider the strengths and limitations of artificial intelligence writing tools with respect to citation accuracy. The use of domain-specific models may enhance accuracy.
Journal Article
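The study above quantifies how far a generated DOI deviates from the real one using the Levenshtein distance. Purely as an illustration (not the authors' code, and with made-up DOI strings), a minimal Python sketch of that comparison:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Hypothetical example: a model-generated DOI vs. the DOI of the paper it claims to cite.
generated = "10.1000/j.example.2021.01.003"
actual    = "10.1000/j.example.2021.001.03"
print(levenshtein(generated, actual))  # larger distance = less accurate DOI
```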
Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text
by
Elsaid, Khaled
,
Almeer, Saeed
,
Elkhatat, Ahmed M.
in
Academic integrity
,
Access to Information
,
AI content detection tools
2023
The proliferation of artificial intelligence (AI)-generated content, particularly from models like ChatGPT, presents potential challenges to academic integrity and raises concerns about plagiarism. This study investigates the capabilities of various AI content detection tools in discerning between human- and AI-authored content. Fifteen paragraphs each from ChatGPT Models 3.5 and 4 on the topic of cooling towers in the engineering process and five human-written control responses were generated for evaluation. AI content detection tools developed by OpenAI, Writer, Copyleaks, GPTZero, and CrossPlag were used to evaluate these paragraphs. Findings reveal that the AI detection tools were more accurate in identifying content generated by GPT 3.5 than GPT 4. However, when applied to human-written control responses, the tools exhibited inconsistencies, producing false positives and uncertain classifications. This study underscores the need for further development and refinement of AI content detection tools as AI-generated content becomes more sophisticated and harder to distinguish from human-written text.
Journal Article
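The evaluation described above amounts to scoring each detector's verdicts against known ground truth. A hedged Python sketch with fabricated detector outputs (not the study's data or tools) shows the kind of tally behind accuracy and false-positive figures:

```python
# Illustrative only: comparing a detector's labels for 35 paragraphs against ground truth.
from sklearn.metrics import confusion_matrix

# 1 = flagged as AI-generated, 0 = judged human-written (all values hypothetical).
ground_truth = [1] * 15 + [1] * 15 + [0] * 5                 # GPT-3.5, GPT-4, human controls
detector_out = [1] * 14 + [0] + [1] * 9 + [0] * 6 + [0, 1, 0, 0, 1]

tn, fp, fn, tp = confusion_matrix(ground_truth, detector_out).ravel()
print(f"true positives={tp}, false positives={fp} (human text wrongly flagged)")
```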
A Comprehensive Meta-analysis of Handwriting Instruction
2016
While there are many ways to author text today, writing with paper and pen (or pencil) is still quite common at home and work, and predominates writing at school. Because handwriting can bias readers' judgments about the ideas in a text and impact other writing processes, like planning and text generation, it is important to ensure students develop legible and fluent handwriting. This meta-analysis examined true- and quasi-experimental intervention studies conducted with K-12 students to determine if teaching handwriting enhanced legibility and fluency and resulted in better writing performance. When compared to no instruction or non-handwriting instructional conditions, teaching handwriting resulted in statistically greater legibility (ES=0.59) and fluency (ES=0.63). Motor instruction did not produce better handwriting skills (ES=0.10 for legibility and −0.07 for fluency), but individualizing handwriting instruction (ES=0.69) and teaching handwriting via technology (ES=0.85) resulted in statistically significant improvements in legibility. Finally, handwriting instruction produced statistically significant gains in the quality (ES=0.84), length (ES=1.33), and fluency of students' writing (ES=0.48). The findings from this meta-analysis provide support for one of the assumptions underlying the Simple View of Writing (Berninger et al., Journal of Educational Psychology, 94, 291–304, 2002): text transcription skills are an important ingredient in writing and writing development.
Journal Article
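The effect sizes (ES) reported in the abstract above are standardized mean differences pooled across studies. As a purely illustrative Python sketch of how such values are typically computed and combined (invented study statistics, not the meta-analysis data):

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference between treatment and control groups."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def fixed_effect_pool(effects, variances):
    """Inverse-variance weighted average of per-study effect sizes."""
    weights = [1 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Hypothetical legibility outcomes from three studies (placeholder numbers).
studies = [
    dict(mean_t=4.2, mean_c=3.5, sd_t=1.1, sd_c=1.2, n_t=30, n_c=28),
    dict(mean_t=3.9, mean_c=3.3, sd_t=0.9, sd_c=1.0, n_t=45, n_c=44),
    dict(mean_t=4.5, mean_c=3.6, sd_t=1.3, sd_c=1.1, n_t=25, n_c=26),
]
effects = [cohens_d(**s) for s in studies]
# Approximate sampling variance of d for each study.
variances = [(s["n_t"] + s["n_c"]) / (s["n_t"] * s["n_c"]) + d**2 / (2 * (s["n_t"] + s["n_c"]))
             for s, d in zip(studies, effects)]
print(round(fixed_effect_pool(effects, variances), 2))
```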
The Potential and Concerns of Using AI in Scientific Research: ChatGPT Performance Evaluation
by
Itmazi, Jamil
,
Ayyoub, Abedalkarim
,
Mousa, Allam
in
Academic achievement
,
Artificial intelligence
,
Chatbots
2023
Background: Artificial intelligence (AI) has many applications in various aspects of our daily life, including health, criminal, education, civil, business, and liability law. One aspect of AI that has gained significant attention is natural language processing (NLP), which refers to the ability of computers to understand and generate human language.
Objective: This study aims to examine the potential for, and concerns of, using AI in scientific research. For this purpose, high-impact research articles were generated by analyzing the quality of reports generated by ChatGPT and assessing the application's impact on the research framework, data analysis, and the literature review. The study also explored concerns around ownership and the integrity of research when using AI-generated text.
Methods: A total of 4 articles were generated using ChatGPT, and thereafter evaluated by 23 reviewers. The researchers developed an evaluation form to assess the quality of the articles generated. Additionally, 50 abstracts were generated using ChatGPT and their quality was evaluated. The data were subjected to ANOVA and thematic analysis to analyze the qualitative data provided by the reviewers.
Results: When using detailed prompts and providing the context of the study, ChatGPT would generate high-quality research that could be published in high-impact journals. However, ChatGPT had a minor impact on developing the research framework and data analysis. The primary area needing improvement was the development of the literature review. Moreover, reviewers expressed concerns around ownership and the integrity of the research when using AI-generated text. Nonetheless, ChatGPT has a strong potential to increase human productivity in research and can be used in academic writing.
Conclusions: AI-generated text has the potential to improve the quality of high-impact research articles. The findings of this study suggest that decision makers and researchers should focus more on the methodology part of the research, which includes research design, developing research tools, and analyzing data in depth, to draw strong theoretical and practical implications, thereby establishing a revolution in scientific research in the era of AI. The practical implications of this study can be used in different fields such as medical education to deliver materials to develop the basic competencies for both medicine students and faculty members.
Journal Article
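The methods above subject reviewer ratings to ANOVA. Purely as an illustration (with fabricated scores rather than the study's data), a one-way ANOVA comparing ratings of the four generated articles might look like this:

```python
# Illustrative only: one-way ANOVA over hypothetical reviewer quality scores.
from scipy.stats import f_oneway

article_scores = [
    [4.1, 3.8, 4.4, 3.9, 4.2],   # reviewers' ratings of article 1 (made up)
    [3.5, 3.9, 3.7, 4.0, 3.6],   # article 2
    [4.3, 4.5, 4.1, 4.4, 4.0],   # article 3
    [3.2, 3.6, 3.4, 3.1, 3.5],   # article 4
]
f_stat, p_value = f_oneway(*article_scores)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```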
How is ChatGPT acknowledged in academic publications?
2024
This study analysed the acknowledgment of ChatGPT in 1,759 academic publications indexed in Scopus and Web of Science up to August 2024. Around 80% of acknowledgments were related to text editing and proofreading, while only 5.3% mentioned ChatGPT for non-editorial research support, such as data analysis or programming. A small portion (3.5%) of researchers acknowledged ChatGPT for drafting sections of manuscripts. About two-thirds of corresponding authors who acknowledged ChatGPT were from non-English-speaking countries, and 75% of the publications with ChatGPT acknowledgments were published between January and August 2024. These findings suggest that ChatGPT was primarily acknowledged for language enhancement rather than more complex research applications, although some researchers may not have found it necessary to mention its use in their publications, highlighting the need for transparency from journals and publishers.
Journal Article
What do the differences and commonalities in doctoral dissertation acknowledgments across disciplines reveal?
by
Han, Jingwen
,
Yang, Kexin
,
Zhuang, Huibin
in
Academic Dissertations as Topic
,
Authorship
,
Beliefs, opinions and attitudes
2025
Acknowledgments in academic dissertations occupy a unique role within scholarly communication. Prior research has investigated acknowledgments through lenses such as funding attribution, genre analysis, and linguistic features. This study examines acknowledgments in doctoral dissertations from Chinese universities, organized by broad disciplinary categories. Utilizing BERTopic modeling, the research identifies topic keywords embedded within dissertation acknowledgments. Furthermore, computational linguistics techniques are employed to quantitatively evaluate the content and stylistic attributes of these acknowledgments, complemented by hierarchical clustering analysis to explore cross-disciplinary similarities. The topic modeling results indicate that acknowledgments by Chinese doctoral students frequently convey emotional reflections and exhibit distinct disciplinary traits. Additionally, hierarchical clustering shows that disciplines with similar characteristics exhibit greater similarity in the content and writing style of their acknowledgments, indicating that academic training influences researchers’ writing to some degree. This study seeks to catalyze further scholarly inquiry into this domain, advocating for expanded investigations from perspectives including psychology, neuroscience, and cross-cultural studies.
Journal Article
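The methodology above combines BERTopic topic modeling with hierarchical clustering of per-discipline characteristics. A minimal, hypothetical Python sketch of such a pipeline, assuming acknowledgment texts are available as local files and using invented per-discipline feature vectors (not the study's data):

```python
from pathlib import Path

import numpy as np
from bertopic import BERTopic
from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical input: one UTF-8 text file per dissertation acknowledgment.
docs = [p.read_text(encoding="utf-8") for p in Path("acknowledgments").glob("*.txt")]

# Topic modeling: embed the texts, cluster them, and extract topic keywords.
topic_model = BERTopic(language="multilingual")  # Chinese and English texts assumed
topics, _ = topic_model.fit_transform(docs)
print(topic_model.get_topic_info().head())

# Hierarchical clustering over invented per-discipline feature vectors
# (e.g., mean sentiment, mean length, share of an "emotional reflection" topic).
disciplines = ["Humanities", "Social sciences", "Engineering", "Natural sciences"]
features = np.array([
    [0.62, 310, 0.41],
    [0.58, 295, 0.38],
    [0.47, 240, 0.22],
    [0.45, 235, 0.20],
])
Z = linkage(features, method="ward")
print(dict(zip(disciplines, fcluster(Z, t=2, criterion="maxclust"))))
```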
Systematic Review: How Technology Supports Collaborative Writing Learning in Higher Education
by
Aldresti, Fitri
,
Suyitno, Imam
,
Widyartono, Didin
in
Academic achievement
,
Academic writing
,
Authorship
2025
Technology-enhanced collaborative academic writing (TECAW) in higher education has gained increasing attention due to its potential to enhance students’ academic writing skills through interaction, shared authorship, and structured pedagogical support. Framing collaborative academic writing (CAW) as a pedagogical process, this systematic literature review explores how digital technologies and instructional strategies have been utilised to support students' engagement across the writing phases. A total of 27 peer-reviewed empirical studies, published between 2014 and 2024 and indexed in the Scopus database, were analysed using the PRISMA 2020 framework to ensure methodological rigour and transparency. The findings identified twenty types of technologies applied across the three phases of CAW including prewriting, in-writing, and post-writing. These technologies were categorised into five groups: collaborative study tools, classroom-based technologies, cloud-based word processors and shared documents, network-based social computing, and supporting tools. Frequently utilised platforms, including Google Docs, Moodle, Zoom, and WhatsApp, functioned either as interactive collaborative spaces that foster communication and idea co-construction or as task-supporting tools that facilitate drafting, feedback, and revision activities. In parallel, six core instructional strategies were identified: prewriting activities, scaffolding, peer review and feedback, collaborative revising and editing, reflective tasks, and collaborative note-taking. These strategies were systematically mapped across the writing phases, supporting not only the technical aspects of writing but also promoting collaborative interaction, critical thinking, and reflective learning practices. Importantly, the review highlights that successful TECAW implementation requires the intentional orchestration of technologies and instructional designs to align with the pedagogical goals at each stage of collaborative writing. The review emphasises that the effective integration of technology in CAW must be intentionally aligned with the pedagogical objectives at each stage of writing, ensuring that tools not only enhance task performance but also strengthen students' collaborative engagement and academic writing development. Overall, this study offers valuable insights for educators and researchers seeking to design student-centred, technology-supported writing instruction that reflects evolving digital pedagogies in higher education.
Journal Article
A corpus-based analysis of noun modifiers in L2 writing: The respective impact of L2 proficiency and L1 background
2025
Complex noun phrases, as a distinctive feature of academic writing, pose an important learning task for L2 learners. Noun modifiers are the primary means of constructing complex noun phrases. Due to the development of natural language processing (NLP) technologies in recent years, noun phrase complexity, which is a micro-syntactic complexity indicator reflecting the complexity and diversity of clausal and phrasal structures, has emerged as an important research topic. This study applies Bayesian regression with informative priors to analyze the use of English noun modifiers by L2 learners of different proficiency levels and L1 backgrounds through the exploration of the EF Cambridge Open Language Database (EFCAMDAT) corpus. It finds that L2 proficiency has a significant impact on the development of noun phrase complexity in non-academic writing, while the influence of L1 background is observable but limited. It thus concludes that as second language proficiency increases, learners tend to converge towards a common grammatical competence that transcends their native linguistic frameworks.
Journal Article
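The study above fits Bayesian regressions with informative priors to corpus measures of noun phrase complexity. As a rough, hypothetical sketch of that kind of model (not the paper's specification, and with simulated data standing in for EFCAMDAT), using PyMC:

```python
# Hypothetical sketch: Bayesian regression with informative priors predicting a
# noun-phrase-complexity measure from L2 proficiency and L1 background.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n = 200
proficiency = rng.integers(1, 7, size=n)      # placeholder proficiency levels 1-6
l1_group = rng.integers(0, 2, size=n)         # placeholder binary L1 grouping
npc = 0.3 * proficiency + 0.1 * l1_group + rng.normal(0, 0.5, size=n)  # simulated noun modifiers per NP

with pm.Model():
    # Informative priors: a modest positive proficiency effect is expected,
    # while the L1 effect is expected to be small.
    b_prof = pm.Normal("b_prof", mu=0.25, sigma=0.1)
    b_l1 = pm.Normal("b_l1", mu=0.0, sigma=0.1)
    intercept = pm.Normal("intercept", mu=0.0, sigma=1.0)
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    mu = intercept + b_prof * proficiency + b_l1 * l1_group
    pm.Normal("npc_obs", mu=mu, sigma=sigma, observed=npc)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(idata.posterior["b_prof"].mean().item(), idata.posterior["b_l1"].mean().item())
```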