2,205 result(s) for "Prompting"
A Comparison of Video Prompting to Least-to-Most Prompting among Children with Autism and Intellectual Disability
Students with autism spectrum disorder (ASD) and intellectual disability (ID) may experience challenges when learning tasks that are complex and require numerous steps. This difficulty can lead to employment issues for this population of learners. Therefore, researchers have explored methods to teach employment-related tasks to students with ASD and ID. Two such procedures are video prompting (VP) and least-to-most prompting. These procedures are frequently combined as an intervention package to boost student responding. The purpose of this study was to explore which of these interventions was more effective and efficient when used to teach office tasks to individuals with ASD and ID. Three adolescent students participated in this study. Using the adapted alternating treatments design, we found that VP was more effective and efficient for two participants, whereas least-to-most prompting was more effective but less efficient for the remaining participant. Implications for research and practice are discussed.
Detection of GPT-4 Generated Text in Higher Education: Combining Academic Judgement and Software to Identify Generative AI Tool Misuse
This study explores the capability of academic staff, assisted by the Turnitin Artificial Intelligence (AI) detection tool, to identify the use of AI-generated content in university assessments. Twenty-two experimental submissions were produced using OpenAI's ChatGPT tool, with prompting techniques used to reduce the likelihood of AI detectors identifying AI-generated content. These submissions were marked by 15 academic staff members alongside genuine student submissions. Although the AI detection tool identified 91% of the experimental submissions as containing AI-generated content, only 54.8% of the content was identified as AI-generated, underscoring the challenges of detecting AI content when advanced prompting techniques are used. When academic staff members marked the experimental submissions, only 54.5% were reported to the academic misconduct process, emphasising the need for greater awareness of how the results of AI detectors may be interpreted. Grades were similar between student submissions and AI-generated content (AI mean grade: 52.3; student mean grade: 54.4), showing the capability of AI tools to produce human-like responses in real-life assessment situations. Recommendations include adjusting overall strategies for assessing university students in light of the availability of new generative AI tools. This may include reducing reliance on assessments where AI tools may be used to mimic human writing, or using AI-inclusive assessments. Comprehensive training must be provided for both academic staff and students so that academic integrity may be preserved.
Video Self-Prompting and Mobile Technology to Increase Daily Living and Vocational Independence for Students with Autism Spectrum Disorders
Three male high school students with autism spectrum disorders participated in this study. Vocational and daily living skills were taught using video prompting via an iPhone. Specifically, using a washing machine, making noodles, and using a copy machine were taught. A multiple probe design across behaviors, replicated across participants, was used to evaluate the effectiveness of the intervention. Results indicate that the three participants improved across all behaviors by increasing the percentage of steps performed independently. This study introduces a novel approach to using instructional video, in that two of the three students were able to learn how to self-prompt with the iPhone and ultimately teach themselves the target skills. Maintenance probes were also conducted, and the iPhone had to be returned to all three participants for two of the three behaviors before performance returned to criterion levels. In addition to study limitations, implications for practice for video self-prompting are discussed.
Five ways to increase the effectiveness of instructional video
This paper reviews five ways to increase the effectiveness of instructional video and one way not to use instructional video. People learn better from an instructional video when the onscreen instructor draws graphics on the board while lecturing (dynamic drawing principle), the onscreen instructor shifts eye gaze between the audience and the board while lecturing (gaze guidance principle), the lesson contains prompts to engage in summarizing or explaining the material (generative activity principle), a demonstration is filmed from a first-person perspective (perspective principle), or subtitles are added to a narrated video that contains speech in the learner’s second language (subtitle principle). People do not learn better from a multimedia lesson when interesting but extraneous video is added (seductive details principle). Additional work is needed to determine the conditions under which these principles apply and the underlying learning mechanisms.
Exploring prompting for dialectical machine translation: a focus on north Jordanian Arabic
Dialectal variations are common across many languages, and machine translation between dialects and the standard form of the language (or other languages) is crucial for effective communication with speakers of these dialects. Prompting Large Language Models (LLMs) for Machine Translation (MT) has gained popularity. However, its efficacy for dialectal MT, particularly in comparison to fine-tuning, remains underexplored, especially for regional dialects that lack parallel training and evaluation data. This study presents a new parallel dataset between Modern Standard Arabic (MSA) and the dialect of Irbid, the largest city in northern Jordan, specifically within the travel domain. This dataset, an extension of the MADAR multi-dialect corpus, comprises 12,000 entries translated by native speakers of the Irbid dialect. We also describe the guidelines and evaluation process employed to collect this dataset and present several analyses within this article. Additionally, we investigate the effectiveness of prompting LLMs, particularly GPT-4o-mini, in performing MT under zero-shot and few-shot learning settings, including the use of dialect-tolerant prompts and constraints, and compare these methods to fine-tuning approaches. Results indicate that prompting, particularly few-shot learning with an optimal number of exemplars, consistently outperforms fine-tuning in our tests. Utilizing several versions of T5 and mBART50 for fine-tuning, we compared their performance with that of GPT-4o-mini, which was employed for prompting. The comparative analysis reveals a notable improvement margin, with Bilingual Evaluation Understudy (BLEU), Crosslingual Optimized Metric for Evaluation of Translation (COMET), and Recall-Oriented Understudy for Gisting Evaluation–Longest Common Subsequence (ROUGE-L) scores surpassing those of the best fine-tuned model by margins of 11.89, 0.2476, and 1.18, respectively. These findings underscore the potential of Few-Shot Prompting (FSP) in effectively addressing dialectal MT challenges.
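The few-shot setup described in this abstract can be sketched as a simple prompt-assembly step; the exemplar pairs, placeholder sentences, and prompt wording below are illustrative assumptions, not the authors' actual prompts or items from the MADAR-based dataset.

```python
# Minimal sketch of few-shot prompt construction for dialectal MT
# (Modern Standard Arabic -> Irbid dialect). Exemplars are placeholders.

def build_few_shot_prompt(exemplars, source_sentence):
    """Assemble a few-shot translation prompt from (source, target) pairs."""
    lines = ["Translate Modern Standard Arabic into the Irbid (north Jordanian) dialect."]
    for src, tgt in exemplars:
        lines.append(f"MSA: {src}")
        lines.append(f"Irbid: {tgt}")
    # The new sentence goes last, leaving the target slot open for the model.
    lines.append(f"MSA: {source_sentence}")
    lines.append("Irbid:")
    return "\n".join(lines)

exemplars = [
    ("<msa example 1>", "<irbid example 1>"),
    ("<msa example 2>", "<irbid example 2>"),
]
prompt = build_few_shot_prompt(exemplars, "<new msa sentence>")
```

The resulting string would be sent to the LLM as-is; varying the number of exemplar pairs is how the "optimal number of exemplars" comparison in the abstract would be run.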
Artificial intelligence prompt engineering as a new digital competence: Analysis of generative AI technologies such as ChatGPT
Objective: The article aims to offer a thorough examination and comprehension of the challenges and prospects connected with artificial intelligence (AI) prompt engineering. Our research aimed to create a theoretical framework that would highlight optimal approaches in the field of AI prompt engineering.
Research Design & Methods: This research utilized a narrative and critical literature review and established a conceptual framework derived from existing literature, taking into account both academic and practitioner sources. This article should be regarded as a conceptual work that emphasizes best practices in the domain of AI prompt engineering.
Findings: Based on a deep and extensive query of the academic and practitioner literature on the subject, as well as the professional press and Internet portals, we identified various insights for effective AI prompt engineering. We provide specific prompting strategies.
Implications & Recommendations: The study revealed the profound implications of AI prompt engineering across various domains such as entrepreneurship, art, science, and healthcare. We demonstrated how the effective crafting of prompts can significantly enhance the performance of large language models (LLMs), generating more accurate and contextually relevant results. Our findings offer valuable insights for AI practitioners, researchers, educators, and organizations integrating AI into their operations, emphasizing the need to invest time and resources in prompt engineering. Moreover, we contributed the AI PROMPT framework to the field, providing clear and actionable guidelines for text-to-text prompt engineering.
Contribution & Value Added: The value of this study lies in its comprehensive exploration of AI prompt engineering as a digital competence. By building upon existing research and prior literature, this study aimed to provide a deeper understanding of the intricacies involved in AI prompt engineering and its role as a digital competence.
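The abstract mentions specific text-to-text prompting strategies without reproducing them. A common generic pattern is a structured prompt with labeled fields; the field names below (role, context, task, output format) are widespread prompt-engineering conventions, not the article's AI PROMPT framework itself, whose components are not given here.

```python
# Generic illustration of a structured text-to-text prompt template.
# Field names are common conventions, not the article's framework.

def structured_prompt(role, context, task, output_format):
    """Render labeled prompt fields into a single prompt string."""
    return (
        f"Role: {role}\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Output format: {output_format}"
    )

example = structured_prompt(
    role="You are a financial analyst.",
    context="Quarterly revenue figures for a small retailer.",
    task="Summarize the main revenue trends in three bullet points.",
    output_format="A plain-text bulleted list.",
)
```

Separating the fields makes prompts easier to audit and to vary one element at a time, which is the kind of deliberate crafting the abstract argues improves LLM output quality.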
Examining the Effects of Parent-Created and Parent-Implemented Video Prompting to Teach Daily Living Skills to an Adolescent with Autism
Teaching parents to create their own video prompting (VP) interventions and implement them to help their children learn daily living tasks at home can be empowering for parents. Using a multiple-probe-across-three-tasks design, we examined the effects of parent-created and parent-implemented VP with an error correction strategy on teaching three daily living tasks to a 14-year-old child with autism spectrum disorder (ASD). Following a one-time training and continuous coaching, a parent successfully created a VP intervention for all three tasks and implemented VP with error correction with high fidelity. Following the intervention, the child with ASD learned to complete the daily living tasks with high levels of accuracy and maintained task completion at a 1-week follow-up.
Progressive Self-Prompting Segment Anything Model for Salient Object Detection in Optical Remote Sensing Images
With the continuous advancement of deep neural networks, salient object detection (SOD) in natural images has made significant progress. However, SOD in optical remote sensing images (ORSI-SOD) remains a challenging task due to the diversity of objects and the complexity of backgrounds. The primary challenge lies in generating robust features that can effectively integrate both global semantic information for salient object localization and local spatial details for boundary reconstruction. Most existing ORSI-SOD methods rely on pre-trained CNN- or Transformer-based backbones to extract features from ORSIs, followed by multi-level feature aggregation. Given the significant differences between ORSIs and the natural images used in pre-training, the generalization capability of these backbone networks is often limited, resulting in suboptimal performance. Recently, prompt engineering has been employed to enhance the generalization ability of networks in the Segment Anything Model (SAM), an emerging vision foundation model that has achieved remarkable success across various tasks. Despite its success, directly applying the SAM to ORSI-SOD without prompts from manual interaction remains unsatisfactory. In this paper, we propose a novel progressive self-prompting model based on the SAM, termed PSP-SAM, which generates both internal and external prompts to enhance the network and overcome the limitations of SAM in ORSI-SOD. Specifically, domain-specific prompting modules, consisting of both block-shared and block-specific adapters, are integrated into the network to learn domain-specific visual prompts within the backbone, facilitating its adaptation to ORSI-SOD. Furthermore, we introduce a progressive self-prompting decoder module that performs prompt-guided multi-level feature integration and generates stage-wise mask prompts progressively, enabling the prompt-based mask decoders outside the backbone to predict saliency maps in a coarse-to-fine manner. 
The entire network is trained end-to-end with parameter-efficient fine-tuning. Extensive experiments on three benchmark ORSI-SOD datasets demonstrate that our proposed network achieves state-of-the-art performance.
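The domain-specific adapters this abstract describes inserting into a frozen backbone typically follow a bottleneck design. The sketch below shows that general pattern (down-projection, nonlinearity, up-projection, residual connection) in plain NumPy; the dimensions and weights are illustrative assumptions, not PSP-SAM's actual modules.

```python
import numpy as np

# Minimal sketch of a bottleneck adapter: features are projected down to a
# small dimension, passed through a ReLU, projected back up, and added to
# the input (residual). Only these small matrices would be trained, which
# is what makes the fine-tuning parameter-efficient.

rng = np.random.default_rng(0)

def adapter(x, w_down, w_up):
    """x: (tokens, dim) features; returns features of the same shape."""
    h = np.maximum(x @ w_down, 0.0)   # down-project + ReLU
    return x + h @ w_up               # up-project + residual connection

dim, bottleneck = 64, 8
w_down = rng.normal(scale=0.02, size=(dim, bottleneck))
w_up = rng.normal(scale=0.02, size=(bottleneck, dim))
features = rng.normal(size=(16, dim))
out = adapter(features, w_down, w_up)
```

Because the adapter starts near an identity mapping (small random weights plus the residual path), inserting it leaves the pre-trained backbone's behavior mostly intact at initialization.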
A Systematic Instructional Approach to Teaching Finance Vocabulary to Students with Moderate-to-Significant Disabilities
Federal law and judicial rulings in the United States direct educators to provide special education services to students with disabilities that enable them to demonstrate meaningful progress, considering their circumstances. The services are to comprise evidence-based practices and must account for students' unique learning characteristics and the time allotted for instruction. Accordingly, this paper reports on two interconnected investigations involving four high school students with autism and an intellectual disability who were taught to read and define finance vocabulary via a systematic instructional approach presented during short-duration lessons (5–8 min). A multiple-probe, nonconcurrent single-case design established a functional relationship between the lessons and the students' vocabulary acquisition. All four students learned to read their targeted words. One student demonstrated acquisition of all the definitions, whereas the other three demonstrated variable acquisition before the study was discontinued at the end of the school year. The students also demonstrated variable skill maintenance and generalization. The results suggest an appropriate structure for a short-duration lesson and a corresponding research agenda for investigating parameters associated with its effectiveness and efficiency. The study offers teachers instructing students with moderate-to-significant disabilities a practical, evidence-based instructional strategy that accounts for their time management challenges. Furthermore, the strategy's framework offers a theoretical basis for investigating the impacts of increased academic learning time and practice opportunities.