105 result(s) for "ASSESSING STUDENT"
“I Never Realized…”: Shared Outcomes of Different Student-Faculty Partnership Approaches to Assessing Student Learning Experiences and Evaluating Faculty Teaching
The two programs featured here support extra-classroom pedagogical partnership between faculty and students and focus on developing approaches both to assessing students’ learning experiences and to formatively evaluating instructors’ teaching. Although different in terms of institutional context and structure, both programs foster shared outcomes for student partners, including greater appreciation for the work of teaching, deeper engagement in their own learning, and pursuit of greater equity. After reviewing student-faculty partnership in assessing enrolled students’ learning and evaluation of teaching, we compare and contrast our two programs. We then draw on student reflections to present the shared outcomes for student partners.
Measures of People's Beliefs About Knowledge and Learning
Different measures of people's beliefs about the nature of knowledge and learning are described along with the theories upon which they are based. The initial theories tended to be unidimensional developmental theories and their measuring instruments lengthy, in-depth interviews. More recently, multidimensional theories have emerged that provide a more complex theory of epistemology. Recent measuring instruments are attempting to address these complexities using paper-and-pencil questionnaires that are easy to score. The measuring instruments differ considerably on what dimensions they measure and researchers are urged to select carefully the one that best matches their purpose. Specific descriptive information is provided regarding selected instruments to aid researchers in their choices. Both theory and psychometric properties are considered critical issues for the decision-making process.
Closing Deals with Hamlet's Help: Assessing the Instrumental Value of an English Degree
Critics of contemporary higher education frequently overlook an important dimension of assessment of student learning: namely, the social and economic consequences used to justify the assessment measures in the first place. This essay argues that meaningful student assessment must take into account the unintended, transferable utility of liberal higher education. The authors, from a large master’s-comprehensive state university, use a recent survey of alumni of their English degree program from as far back as the 1960s to assess the importance that the degree has had in the lives of former students. Believing that disciplinary differences may help us to understand the navigational courses that emerge as seemingly nonlinear and unpredictable paths from the college degree to the life after college, the authors use students’ responses to identify how, where, and what students have used from their English courses in their most recent professions and, in turn, the limitations of current value-added assessments such as the Collegiate Learning Assessment (CLA).
Hearing Students: The Complexity of Understanding What They Are Saying, Showing, and Doing
Teachers are expected today to assess student understanding as an integral part of instruction, using a combination of various assessment methods and tools, among which are observing students solve problems in class and listening to their mathematical discussions. The aim of our study is to explore what it might mean for a teacher to hear students and to interpret their talk and actions. Analysis of an interview with Ruth--an experienced elementary school teacher--after she observed two of her students solve a mathematics problem, suggests four types of her interpretation: describing, explaining, assessing and justifying. This analysis illustrates the complexity of the way Ruth hears her students, as is indicated even in the relatively simple case of describing. Using various sources of data we also analyze different characteristics of Ruth's hearing for the describing and explaining types of interpretation and examine possible resources for her over-hearing, compatible-hearing, under-hearing, non-hearing and biased-hearing.
Assessing Sector Performance and Inequality in Education
This book gathers in one volume all the information needed to use ADePT Edu, the software platform created by the World Bank for the reporting and analysis of education indicators and education inequality. It includes a primer on education data availability, an operating manual for the software, a technical explanation of all the education indicators generated, and an overview of global education inequality using ADePT Edu. The World Bank developed ADePT Edu to fill the need for a user-friendly program designed to give everyone the ability to organize and analyze education data from households. ADePT Edu can be used with any household survey with the aid of its user-friendly interface, generating education tables and graphics that comply with international standards for performance indicators. Because this volume is a compendium, its chapters can be consulted independently of one another, depending on users' needs.
Diagnostic Feedback in Language Assessment
This chapter contains sections titled: Introduction; Definitions and Scope; Large-Scale Assessment Context; Classroom Assessment Context; New Approaches; Challenges; Conclusion; References
Development and validation of a questionnaire to assess the implementation of physical education programs in Chinese junior high schools
Background: Students' physical fitness has long been a focus of the Chinese government, and schools are an important means of improving it. Many studies have examined the implementation of physical education in schools, often using self-made questionnaires, but most do not adequately validate those instruments. The purpose of this study was therefore to develop a questionnaire assessing the implementation level of physical education programs in Chinese junior high schools and to test its reliability and validity. Method: The questionnaire content was developed from Annex 1 of the Assessment Measures for Physical Education in Primary and Secondary Schools, issued by the Ministry of Education of China in 2014, and was refined based on feedback from an expert panel and pre-test participants. Five expert reviewers initially assessed the questionnaire's validity; we then collected data from 350 participants and conducted exploratory factor analysis (EFA) to explore the factor structure of the initial version. One week later, 40 of the 350 participants were randomly selected to assess test-retest reliability. Results: The I-CVI and kappa values from the expert review indicate very high reliability and strong agreement among experts. The EFA results indicate that the five dimensions of the questionnaire are highly reliable. For test-retest reliability, the Pearson correlation coefficients between the initial and retest data for each dimension all exceed 0.7, with significance probability values below 0.05, reaching the significance level; these results show that the questionnaire has good stability.
Conclusions: This study concluded that the questionnaire's 5 dimensions and 38 items had high reliability and validity, and that it could serve as a preliminary tool for measuring the implementation level of physical education programs in Chinese junior high schools. However, future research should explore whether adjustments are needed to suit different regions and cultures.
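The test-retest criterion described in this abstract (Pearson r above 0.7 between administrations one week apart) can be sketched in a few lines of Python. This is a minimal illustration, not the study's analysis code; the score lists are invented example data.

```python
# Minimal sketch: test-retest reliability as a Pearson correlation
# between two administrations of the same instrument.
# The scores below are hypothetical illustrative data.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical dimension scores for the same respondents at time 1 and time 2
test = [4.0, 3.5, 4.2, 2.8, 3.9, 4.5, 3.1, 3.7]
retest = [3.8, 3.6, 4.0, 3.0, 4.1, 4.4, 3.3, 3.5]

r = pearson(test, retest)
print(f"test-retest r = {r:.3f}")  # values above 0.7 suggest good stability
```

In practice one would also check the correlation's p-value (e.g. with `scipy.stats.pearsonr`), as the study does when reporting significance probabilities below 0.05.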
Exploring secondary school students’ computational thinking experiences enriched with block-based programming activities: An action research
The importance of developing computational thinking (CT) skills has motivated many instructional practices and research studies. A substantial literature exists on CT and its related skills, yet studies that evaluate students' CT skills both quantitatively and qualitatively in real school settings remain rare. This action research examines the impact of block-based programming activities used to improve the CT skills of 5th- and 6th-grade students over a 14-week period. Both quantitative and qualitative data were collected: Computational Thinking Test (CTT) pre- and post-tests, teacher journals, and student observations. The quantitative findings showed that learning processes enriched with block-based programming significantly improved the students' CT scores, while the qualitative findings showed that the block-based programming activities not only increased the students' motivation toward the lesson but also increased their active participation during these lessons. The majority of the challenges students encountered stemmed from the need for other skills (e.g., mathematical skills) rather than from programming-related skills.
Portfolios versus exams: a study to gauge the better student assessment tool
Portfolio assessment is a method used by teachers to evaluate their students' academic performance by assigning several assignments and/or projects to work on during the semester. It is an alternative to exams, an assessment tool whose validity and efficacy are seldom questioned. This study scrutinizes both methods of assessment to determine which is more accurate and viable for adoption in tertiary education. A cohort of sixty 20-22-year-old university students in the College of Engineering at the American University of Sharjah participated in this study. They completed a questionnaire comprising 12 questions concerning the academic, mental health, and professional benefits that students can enjoy as a result of being tested by means of portfolio assessment instead of exams. The research question is as follows: Is portfolio assessment a generally more viable method of evaluating university students' academic performance than exams in terms of the potential academic, mental health, and professional benefits which such an assessment affords students? Overall, portfolio assessment was found to be a much more appropriate method of assessing university students than exams. The results of the study have implications for university professors, education experts, and examiners.
Expert and preservice secondary teachers’ competencies for noticing student thinking about modelling
This study examined how expert and novice (preservice) teachers solved mathematical modelling tasks as well as how they noticed written artifacts of student thinking that were in response to the mathematical modelling tasks. Some teachers in both groups were aware of the openness and underdetermination of the modelling tasks and that these characteristics implied that underlying assumptions needed to be made in order to solve the tasks. Indeed, nearly all of the expert teachers addressed the need to make assumptions for the Seashell Task. When examining student work, both expert and preservice teachers interpreted positive aspects in the students’ solutions and provided feedback to students. However, almost all of the expert teachers responded by asking questions, whereas around a third of preservice teachers directly corrected students’ mistakes and another third pointed out mistakes without correcting them. This study provides a new angle to study teachers’ mathematical competencies, the assessment of modelling competencies, and considerations for the development of teachers’ modelling competencies.