Search Results

3,531 results for "Computer Assisted Testing"
Validity arguments for diagnostic assessment using automated writing evaluation
Two examples demonstrate an argument-based approach to validating diagnostic assessment using automated writing evaluation (AWE). Criterion®, developed by Educational Testing Service, analyzes students' papers grammatically and provides sentence-level error feedback. An interpretive argument was developed for its use as part of the diagnostic assessment process in undergraduate university English for academic purposes (EAP) classes. The Intelligent Academic Discourse Evaluator (IADE) was developed for use in graduate EAP university classes, where the goal was to help students improve their discipline-specific writing. The validation for each was designed to support claims about the intended purposes of the assessments. The authors present the interpretive argument for each and show some of the data gathered as backing for the respective validity arguments, which cover the range of inferences one would make in claiming validity of the interpretations, uses, and consequences of diagnostic AWE-based assessments.
A critical deconstruction of computer-based test application in Turkish State University
Artificial Intelligence (AI) is growing, as can be seen not only in the rising recognition of assistance tools such as Siri (Apple) but also in the newly introduced Google Voiced Translator. Yet some crucial capabilities must still be supplied before it can act as a proxy for a real instructor: imagination, creativity, and spontaneity. Automated assessment involving AI is one recent educational practice. It speeds up exam grading, eliminates human prejudice, and is as precise as human assessors. However, it has encountered much criticism in the education community, in our case the English as a foreign language (EFL) learning community. This phenomenological inquiry therefore examined Turkish EFL students' and instructors' conceptions of the Versant English Test (VET), an automated test of spoken and written language that operates by means of AI software. Using semi-structured interview questions and a focus-group discussion, the study adopted a qualitative research design to collect the required data. The findings show that EFL university students developed negative attitudes towards the VET and that the VET is not a reliable and valid test, because the same questions were observed to appear repeatedly in the computer-based test. In addition, copying and pasting similar sentences produced better results, which decreased the validity and reliability of the test. Another important finding was that the test was reported to measure only students' memory skills, not their language skills. Moreover, the curriculum was entirely incongruent with the content of the test, which caused severe washback for EFL learners.
A Systematic Review of Automatic Question Generation for Educational Purposes
While exam-style questions are a fundamental educational tool serving a variety of purposes, manual construction of questions is a complex process that requires training, experience, and resources. This, in turn, hinders and slows down the use of educational activities (e.g. providing practice questions) and new advances (e.g. adaptive testing) that require a large pool of questions. To reduce the expense of manual question construction and to satisfy the need for a continuous supply of new questions, automatic question generation (AQG) techniques were introduced. This review extends a previous review of the AQG literature published up to late 2014. It includes 93 papers, published between 2015 and early 2019, that tackle the automatic generation of questions for educational purposes. The aims of this review are to provide an overview of the AQG community and its activities, summarise the current trends and advances in AQG, highlight the changes that the area has undergone in recent years, and suggest areas for improvement and future opportunities for AQG. Similar to what was found previously, there is little focus in the current literature on generating questions of controlled difficulty, enriching question forms and structures, automating template construction, improving presentation, and generating feedback. Our findings also suggest the need to further improve experimental reporting, harmonise evaluation metrics, and investigate other evaluation methods that are more feasible.
A systematic review of research on cheating in online exams from 2010 to 2021
In recent years, online learning has received more attention than ever before. One of the most challenging aspects of online education is student assessment, since academic integrity can be violated through various cheating behaviors in online examinations. Although a considerable number of literature reviews exist about online learning, no comparable review provides comprehensive insight into cheating motivations, cheating types, cheating detection, and cheating prevention in the online setting. The current study is a review of 58 publications about online cheating, published from January 2010 to February 2021. We present a categorization of the research and show topic trends in the field of online exam cheating. The study can serve as a valuable reference for educators and researchers working in the field of online learning who seek a comprehensive view of cheating mitigation, detection, and prevention.
A Systematic Review on AI-based Proctoring Systems: Past, Present and Future
There have been giant leaps in the field of education in the past one to two years. Schools and colleges are transitioning online to provide more resources to their students, and the COVID-19 pandemic has given students more opportunities to learn and improve themselves at their own pace. Online proctoring services (part of assessment) are also on the rise, and AI-based proctoring systems (henceforth AIPS) have taken the market by storm. Online proctoring systems (henceforth OPS) generally make use of online tools to maintain the sanctity of the examination. While most of this software uses various modules, the sensitive information it collects raises concerns among the student community. Various psychological, cultural, and technological parameters need to be considered while developing AIPS. This paper systematically reviews existing AI-based and non-AI-based proctoring systems. Through a systematic search of the Scopus, Web of Science, and ERIC repositories, 43 papers published between 2015 and 2021 were identified. We addressed four primary research questions focusing on the existing architecture of AIPS, the parameters to be considered for AIPS, trends and issues in AIPS, and the future of AIPS. Our 360-degree analysis of OPS and AIPS reveals that the security issues associated with AIPS are multiplying and are a cause of legitimate concern. Major issues include security and privacy concerns, ethical concerns, trust in AI-based technology, lack of training in the use of the technology, cost, and more. It is difficult to know whether the benefits of these online proctoring technologies outweigh their risks. The most reasonable conclusion we can reach at present is that ethically justifying these technologies and their various capabilities requires us to rigorously ensure, to the best of our abilities, that a balance is struck between the concerns and the possible benefits. To the best of our knowledge, there is no comparable analysis of AIPS and OPS. Our work further addresses the issues in AIPS from both human and technological perspectives. It also lists key points and new technologies that have only recently been introduced but could significantly impact online education and OPS in the years to come.
Impact of technostress on academic productivity of university students
In the last two decades there has been increasing interest among researchers in understanding the negative effects of technology. Technostress, or stress induced by technology, is extensively reported in the literature among working professionals. Even though digital devices have proliferated in academia, there is a dearth of studies examining the prevalence of technostress and its impact among students. This study examines the prevalence of technostress among a younger population, in the age group of 18–28 years. Using a sample of 673 Indian private university students, this study cross-validated the technostress instrument. Increased use of technology in higher education has compelled students to complete all their academic work, including assessments, using technology; technology-enhanced learning applications such as learning management systems, MOOCs, and digital exam devices require students to develop ICT skills. The study also investigates the impact of technostress on the academic productivity of students. The findings reveal that the technostress instrument is valid for use in the academic context with minor modifications, that students experienced moderate levels of technostress, and that technostress had a negative impact on students' academic productivity.
A Particle Swarm Optimization Approach to Composing Serial Test Sheets for Multiple Assessment Criteria
To accurately analyze students' learning problems, composed test sheets must meet multiple assessment criteria, such as the ratio of relevant concepts to be evaluated, the average discrimination degree, the difficulty degree, and the estimated testing time. Furthermore, to precisely evaluate the improvement of students' learning performance over a period of time, a series of relevant test sheets needs to be composed. In this paper, a particle swarm optimization-based approach is proposed to improve the efficiency of composing near-optimal serial test sheets from very large item banks while meeting multiple assessment criteria. From the experimental results, we conclude that the approach is effective in composing near-optimal serial test sheets from large item banks and hence can support the evaluation of students' learning status.
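As a rough illustration of the kind of optimization this abstract describes, the sketch below uses a binary particle swarm to select items from a toy item bank so that the assembled sheet approaches target average difficulty and discrimination. The item bank, targets, cost weights, and PSO constants are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch (not the paper's algorithm): binary PSO that picks items
# from a toy item bank so the test sheet approaches target average difficulty
# and discrimination. All parameters below are illustrative assumptions.
import math
import random

random.seed(0)

# Toy item bank: (difficulty, discrimination) per item.
BANK = [(random.uniform(0.2, 0.9), random.uniform(0.1, 1.0)) for _ in range(200)]
TARGET_DIFF, TARGET_DISC, SHEET_SIZE = 0.6, 0.7, 40

def cost(selection):
    """Penalty for deviating from the target difficulty, discrimination, and length."""
    chosen = [BANK[i] for i, picked in enumerate(selection) if picked]
    if not chosen:
        return float("inf")
    avg_diff = sum(d for d, _ in chosen) / len(chosen)
    avg_disc = sum(a for _, a in chosen) / len(chosen)
    return (abs(avg_diff - TARGET_DIFF) + abs(avg_disc - TARGET_DISC)
            + 0.01 * abs(len(chosen) - SHEET_SIZE))

def compose_sheet(n_particles=30, iters=150, w=0.7, c1=1.5, c2=1.5):
    """Binary PSO: each particle is a 0/1 vector marking which items are selected."""
    particles = [[random.random() < 0.2 for _ in BANK] for _ in range(n_particles)]
    velocity = [[0.0] * len(BANK) for _ in range(n_particles)]
    best_local = [p[:] for p in particles]
    best_global = min(best_local, key=cost)[:]
    for _ in range(iters):
        for p in range(n_particles):
            for j in range(len(BANK)):
                velocity[p][j] = (w * velocity[p][j]
                                  + c1 * random.random() * (best_local[p][j] - particles[p][j])
                                  + c2 * random.random() * (best_global[j] - particles[p][j]))
                # Sigmoid transfer turns the velocity into a selection probability.
                particles[p][j] = random.random() < 1.0 / (1.0 + math.exp(-velocity[p][j]))
            if cost(particles[p]) < cost(best_local[p]):
                best_local[p] = particles[p][:]
                if cost(best_local[p]) < cost(best_global):
                    best_global = best_local[p][:]
    return best_global

sheet = compose_sheet()
print("items selected:", sum(sheet), "cost:", round(cost(sheet), 4))
```

In the paper the criteria also include concept-coverage ratios and estimated testing time, and a series of sheets is composed jointly; the sketch only shows a single-sheet skeleton of the idea.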
Impact of coronavirus pandemic on the Indian education sector: perspectives of teachers on online teaching and assessments
Purpose: In India, the COVID-19 outbreak has been declared an epidemic in all states and union territories. To combat COVID-19, a lockdown was imposed on March 25, 2020, which adversely affected the education system in the country. It shifted the traditional education system to an educational technology (EdTech) model in which teaching and assessments are conducted online. This paper aims to identify the barriers faced by teachers during online teaching and assessment in different home environment settings in India. Design/methodology/approach: Interpretative phenomenological analysis (IPA), a qualitative research methodology, was used in this research. The study was conducted among teachers working in the government and private universities of Uttarakhand, India. Semi-structured in-depth interviews were conducted with 19 teachers to collect data on the barriers they faced during online teaching and assessment. ATLAS.ti, version 8, was used to analyze the interview data. Findings: The findings revealed four categories of barriers faced by teachers during online teaching and assessments. Under home environment settings, a lack of basic facilities, external distraction, and family interruption during teaching and assessment were the major issues reported. Institutional support barriers such as the budget for purchasing advanced technologies, a lack of training, a lack of technical support, and a lack of clarity and direction were also reported. Teachers also faced technical difficulties, grouped under a lack of technical support, including a lack of technical infrastructure, limited awareness of online teaching platforms, and security concerns. Teachers' personal problems, including a lack of technical knowledge, negative attitudes, difficulty integrating courses with technology, and a lack of motivation, were identified as the fourth category dampening their engagement in online teaching and assessments. Practical implications: The findings can be helpful to the regulatory authorities and employers of higher education institutions planning to adopt online teaching as a regular activity in the future. The insights gained can help them revisit their existing policy frameworks by designing new strategies and technical structures to assist teachers in successfully embracing EdTech to deal with any crisis in the future. Originality/value: Many authors have conducted research addressing the problems faced by students in online teaching and learning during COVID-19 in India. To the best of the authors' knowledge, this is the first study that addresses the challenges faced by teachers during online teaching and assessment in home environment settings using qualitative (IPA) techniques. The current study fills this gap by contributing to the literature on online teaching and assessment in home environment settings during the pandemic.
TechCheck: Development and Validation of an Unplugged Assessment of Computational Thinking in Early Childhood Education
There is a need for developmentally appropriate Computational Thinking (CT) assessments that can be implemented in early childhood classrooms. We developed a new instrument, TechCheck, for assessing CT skills in young children that does not require prior knowledge of computer programming. TechCheck is based on developmentally appropriate CT concepts and uses a multiple-choice "unplugged" format that allows it to be administered to whole classes, or in online settings, in under 15 minutes. This design allows assessment of a broad range of abilities and avoids conflating coding with CT skills. We validated the instrument in a cohort of 5–9-year-old students (N = 768) participating in a research study involving a robotics coding curriculum. TechCheck showed good reliability and validity according to measures of classical test theory and item response theory. Discrimination between skill levels was adequate. Difficulty was suitable for first graders and low for second graders. The instrument showed differences in performance related to race/ethnicity. TechCheck scores correlated moderately with a previously validated CT assessment tool (TACTIC-KIBO). Overall, TechCheck has good psychometric properties, is easy to administer and score, and discriminates between children of different CT abilities. Implications, limitations, and directions for future work are discussed.
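For readers unfamiliar with the classical-test-theory measures mentioned in this abstract, the snippet below computes one common reliability index, Cronbach's alpha, on a made-up score matrix; the data are illustrative only and unrelated to TechCheck's actual results.

```python
# Hypothetical sketch of a classical-test-theory reliability index (Cronbach's
# alpha). The score matrix is invented illustrative data, not the study's data.
def cronbach_alpha(scores):
    """scores: list of per-student lists of item scores (all the same length)."""
    n_items = len(scores[0])

    def variance(values):
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

    item_variances = [variance([row[j] for row in scores]) for j in range(n_items)]
    total_variance = variance([sum(row) for row in scores])
    return (n_items / (n_items - 1)) * (1 - sum(item_variances) / total_variance)

example = [  # 5 students x 4 dichotomously scored items
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
]
print(round(cronbach_alpha(example), 3))
```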
E-proctored exams during the COVID-19 pandemic: A close understanding
Researchers have focused on evaluating and exploring the online examination experience during the COVID-19 pandemic, but understanding of students' perceptions of e-proctoring tools within that experience is still limited. This study explores students' attitudes and concerns during their first experience using an e-proctoring tool in their final exams during the COVID-19 pandemic. It also highlights the tool's impact on students' performance, to guide educational institutions towards appropriate practices going forward, especially as the pandemic is expected to have far-reaching consequences. A mixed-methods analysis was used to examine heterogeneous sources of data, including self-reported data and officially documented data. The data were analyzed through a qualitative analysis of a focus group and quantitative analyses of survey questions and exam attempts. In June 2020, students participated in a focus group to elaborate on their attitudes and concerns pertaining to their e-proctoring experience. Based on the preliminary outcomes, a survey was developed and distributed to a purposive sample (n = 106) of students from information technology majors who had taken at least one e-proctored exam during the COVID-19 pandemic. Finally, 21 online exams with 815 total attempts were analyzed to assess how well students performed under e-proctored testing. The study's findings shed light on students' perceptions of their e-proctoring experience, including their predominant concerns over privacy and various environmental and psychological factors. The research also highlights challenges in implementing the e-proctoring tool as well as its impact on students' performance.