Search Results

38,253 result(s) for "PERFORMANCE ASSESSMENTS"
A literature review of type I SLCA—making the logic underlying methodological choices explicit
Purpose: The Social Life Cycle Assessment guidelines (UNEP-SETAC 2009) distinguish two different SLCA approaches, type I and type II. Few comprehensive and analytical reviews have been undertaken to examine the multiplicity of approaches that have been developed within type I SLCA. This paper takes on the task of exploring the evaluation methods used in type I SLCA methods.
Methods: A critical literature review was undertaken, covering a total of 32 reviewed articles published between 2006 and 2015. Articles were selected either because they make explicit reference to type I, performance reference points (PRPs), corporate behavior assessment, or social performance assessment, or because their assessment methods generate a result located at the same point as the inventory data with regard to the impact pathway. The selected articles were analyzed with a focus on the inventory data used, the aggregation of inventory data on the functional unit, and the type of characterization and weighting methods used. This analysis made explicit the often implicit logic underlying the evaluation methods and identified the common denominators of type I SLCA.
Results and discussion: The analysis highlighted the multiplicity of approaches comprised within type I SLCA today, both in terms of the data collected (in particular, its positioning along the impact pathway), the presence of optional steps such as the scaling of inventory data on the functional unit (FU), and the different characterization and weighting steps. With regard to data collection, the review highlighted that the further indicators are positioned along the impact pathway, the harder it is to justify the link between them and the activities of companies in the product system. The analysis also suggested that an important differentiating factor among type I SLCA methods lies in "what the inventory data is assessed against" at the characterization step and how it is ultimately weighted. To illustrate this, a typology of six characterization methods and five types of weighting methods was presented.
Conclusions: It is worth identifying which approaches are most appropriate for the various questions that SLCA aims to answer. One question that arises is which approaches are most likely to tell us anything about the impact of a product system on social well-being. This question is particularly relevant in the absence of well-documented impact pathways between activities within product systems and impacts on social well-being.
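The characterization step described above, in which inventory data is scored against performance reference points (PRPs) and then weighted, can be sketched as follows. This is a minimal, assumed illustration; the indicator names, reference values, and weights are hypothetical and do not come from any of the reviewed methods.

```python
# Hypothetical type I SLCA characterization sketch: inventory indicators are
# scored against performance reference points (PRPs), then weighted.
# All names and values below are illustrative assumptions.

PRPS = {
    "weekly_working_hours": 48,   # e.g. an ILO-style reference point (assumed)
    "living_wage_ratio": 1.0,
}

def characterise(inventory, prps):
    """Score each indicator: 1 if it meets its reference point, 0 otherwise."""
    scores = {}
    for indicator, value in inventory.items():
        reference = prps[indicator]
        better_is_lower = indicator == "weekly_working_hours"
        meets = value <= reference if better_is_lower else value >= reference
        scores[indicator] = 1 if meets else 0
    return scores

def weighted_result(scores, weights):
    """Aggregate characterized scores with a simple weighted average."""
    return sum(scores[k] * weights[k] for k in scores) / sum(weights.values())

inventory = {"weekly_working_hours": 52, "living_wage_ratio": 1.1}
weights = {"weekly_working_hours": 0.5, "living_wage_ratio": 0.5}
scores = characterise(inventory, PRPS)
print(scores, weighted_result(scores, weights))
```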
Digital twin-based life-cycle seismic performance assessment of a long-span cable-stayed bridge
Long-span cable-stayed bridges often have a design service life of more than a hundred years, during which they may experience multiple earthquake events and accumulate seismic damage if they are located in seismic-prone regions. Earthquake occurrence is discretely and randomly distributed over the life cycle of a long-span cable-stayed bridge and often causes sudden drops in the structural performance instead of yearly fixed seismic performance degradation. This study thus proposes a digital twin-based life-cycle seismic performance assessment method for long-span cable-stayed bridges. The major components of this method include: (1) a seismic hazard analysis-based generation method of earthquake occurrence sequence; (2) a digital twin-based structural response prediction method considering lifetime earthquake occurrence and sequence; and (3) a service life quantification method. The proposed method is applied to a scaled long-span cable-stayed bridge with a series of shake table tests. The results show that the digital twin can closely reproduce the life-cycle seismic response of the bridge under sequential earthquakes. The proposed assessment method provides a more intuitive presentation of the life-cycle seismic damage accumulation process and a more accurate estimation of the service life of a long-span cable-stayed bridge.
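As a rough illustration of the first component named above, the sketch below generates a random earthquake occurrence sequence over a 100-year service life and applies sudden, event-driven performance drops. The annual rate, damage magnitudes, and normalised performance scale are illustrative assumptions, not values from the study.

```python
# Minimal sketch of an earthquake occurrence sequence over a bridge's life
# cycle: events arrive as a Poisson process and each causes a sudden
# performance drop. Rate and damage model are assumed, not from the paper.

import random

def simulate_life_cycle(service_life=100, annual_rate=0.05, seed=1):
    random.seed(seed)
    performance = 1.0               # normalised structural performance (assumed scale)
    history = [(0.0, performance)]
    t = 0.0
    while True:
        t += random.expovariate(annual_rate)   # waiting time to next event (years)
        if t > service_life:
            break
        drop = random.uniform(0.02, 0.15)      # sudden, event-driven damage (assumed)
        performance = max(0.0, performance - drop)
        history.append((round(t, 1), round(performance, 3)))
    return history

print(simulate_life_cycle())
```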
The impact of high stakes oral performance assessment on students' approaches to learning: a case study
This paper presents findings from a case study on the impact of high stakes oral performance assessment on third year mathematics students' approaches to learning (Entwistle and Ramsden, Understanding student learning, 1983). We chose oral performance assessment because this mode of assessment differs substantially from written exams in its dialogic nature, and because variation of assessment methods is seen as very important in an otherwise very uniform assessment diet. We found that students perceived the assessment to require conceptual understanding over memory and were more likely to employ revision strategies conducive to deep learning (akin to conceptual understanding) when preparing for the oral performance assessment than when preparing for a written exam. Moreover, they reported having engaged and interacted in lectures more than they otherwise would have, another characteristic conducive to deep learning approaches. We conclude by suggesting some implications for the summative assessment of mathematics at university level.
A critical evaluation, challenges, and future perspectives of using artificial intelligence and emerging technologies in smart classrooms
The term "Smart Classroom" has evolved over time and nowadays reflects the technological advancements incorporated into educational spaces. The rapid advances in technology, and the need to create more efficient and creative classes that support both in-class and remote activities, have led to the integration of Artificial Intelligence and smart technologies in smart classes. In this paper we discuss the concept of Artificial Intelligence in Education and present a literature review of smart classroom technology, with an emphasis on emerging technologies such as AI-related technologies. As part of this survey, we present key technologies used for effective class management that enhance the convenience of classroom environments, the different types of smart teaching aids used during the educational process, and automated performance assessment technologies. Apart from discussing a variety of technological accomplishments in each of the aforementioned areas, the role of AI is discussed, allowing readers to comprehend the importance of AI in key technologies related to smart classes. Furthermore, through a SWOT analysis, the Strengths, Weaknesses, Opportunities, and Threats of adopting AI in smart classes are presented, and the future perspectives and challenges of utilizing AI-based techniques in smart classes are discussed. This survey targets educators and AI professionals, so that the former are informed about the potential and limitations of AI in education, while the latter can draw inspiration from the challenges and peculiarities of educational AI-based systems.
7803 Audit on assessment of clinic letters in the Department of Paediatrics of a District General Hospital using the Sheffield Assessment Indicator for Letters (SAIL)
Why did you do this work? My motivation was centered on the crucial role of clinic letters in ensuring effective communication between healthcare professionals. Clear and comprehensive clinic letters are essential for maintaining continuity of care and minimizing the risk of miscommunication. By evaluating the quality and completeness of these letters in the Department of Paediatrics at a District General Hospital in Derby, using the Sheffield Assessment Indicator for Letters (SAIL) as a benchmark [1], we aimed to identify and address gaps in documentation practices.
What did you do? From August 1 to 30, 2023, a retrospective analysis was conducted on 30 randomly selected clinic letters from the hospital system, with 15 written by consultants and 15 by middle-grade doctors. The SAIL standard was used to evaluate the letters against 20 specific criteria across 7 domains. Each letter received a score out of 20 based on the criteria met, and results were analyzed using Excel. In March 2024, an intervention was implemented to improve the quality of letters. This included creating and displaying posters with guidelines for effective letter writing, strategically placed near every computer used by middle-grade doctors and consultants. Training sessions focused on best practices in clinical documentation were conducted for both groups. Following this intervention, a second cycle was conducted from April 1 to May 3, 2024, focusing on the domains identified for improvement in the initial audit.
What did you find? In the first audit cycle, only 1 of 15 (4%) letters written by consultants required improvement in overall assessment. In contrast, letters from middle-grade doctors showed more significant issues: 3 (20%) needed clarity enhancements, 2 (12%) had management issues and 2 (12%) required improvement in overall assessment. The second audit cycle targeted the identified areas for improvement: overall assessment, management, and clarity. The second audit cycle suggested that the intervention yielded positive results, with consultants showing improvements across all domains. For middle-grade doctors, the second audit cycle showed notable progress in the areas of clarity and overall assessment: only 1 (6%) of letters required improvement in overall assessment, and 2 (12%) had room for improvement in clarity. However, the management domain still presented a challenge, with 3 (20%) of letters needing improvement in this area.
[Abstract 7803 Figure 1]
What does it mean? This audit highlights the effectiveness of simple interventions, such as posters and training sessions, in enhancing clinical communication. It demonstrates that low-cost, straightforward strategies can lead to measurable improvements when properly implemented. However, the limited sample size may restrict the generalizability of the findings. Despite this, the audit successfully identifies areas needing continued focus, particularly management documentation for middle-grade doctors. The results can guide future training initiatives and serve as a model for similar improvements in clinical documentation.
Reference: 1. Crossley JGM, Howe A, Newble D, Jolly B, Davies HA. Sheffield assessment instrument for letters (SAIL): performance assessment using outpatient letters. 2002.
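A toy sketch of the scoring logic described above (20 criteria, a score out of 20 per letter, cohort summaries by grade) is given below. The criterion names, the data, and the summary function are hypothetical illustrations, not the published SAIL instrument or the audit's actual dataset.

```python
# Illustrative SAIL-style scoring sketch: each letter is checked against 20
# criteria and scored out of 20, then results are summarised per grade.
# Criterion names and sample data are hypothetical placeholders.

def sail_score(criteria_met):
    """criteria_met: dict of criterion name -> bool for the 20 criteria."""
    return sum(1 for met in criteria_met.values() if met)

def mean_score(letters, grade):
    """letters: list of (grade, criteria dict); returns mean score for a grade."""
    scores = [sail_score(criteria) for g, criteria in letters if g == grade]
    return sum(scores) / len(scores) if scores else None

# Hypothetical audit data: one toy letter per grade.
letters = [
    ("consultant", {f"criterion_{i}": i < 18 for i in range(20)}),
    ("middle_grade", {f"criterion_{i}": i < 15 for i in range(20)}),
]
print(mean_score(letters, "consultant"), mean_score(letters, "middle_grade"))
```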
Employing automatic analysis tools aligned to learning progressions to assess knowledge application and support learning in STEM
We discuss transforming STEM education using three aspects: learning progressions (LPs), constructed response performance assessments, and artificial intelligence (AI). Using LPs to inform instruction, curriculum, and assessment design helps foster students' ability to apply content and practices to explain phenomena, which reflects deeper science understanding. To measure progress along these LPs, performance assessments combining elements of disciplinary ideas, crosscutting concepts and practices are needed. However, these tasks are time-consuming and expensive to score and provide feedback for. Artificial intelligence (AI) makes it possible to validate the LPs and evaluate performance assessments for many students quickly and efficiently. The evaluation provides a report describing student progress along the LP and the supports needed to attain a higher LP level. We suggest using unsupervised and semi-supervised machine learning (ML) and generative AI (GAI) at early LP validation stages to identify relevant proficiency patterns and start building an LP. We further suggest employing supervised ML and GAI to develop targeted LP-aligned performance assessments for more accurate performance diagnosis at advanced LP validation stages. Finally, we discuss employing AI to design automatic feedback systems that provide personalized feedback to students and help teachers implement LP-based learning. We discuss the challenges of realizing these tasks and propose future research avenues.
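As a rough sketch of the unsupervised step suggested above, the example below clusters a handful of constructed responses to surface proficiency patterns that might seed LP levels. The responses, the two-cluster assumption, and the use of TF-IDF with k-means are illustrative choices, not the authors' pipeline; real LP validation would pair such clusters with expert coding.

```python
# Hedged sketch: cluster constructed responses to look for proficiency
# patterns that could seed LP levels. Data and cluster count are assumed.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "the ball falls because gravity pulls it toward earth",
    "gravity is a force between masses that accelerates the ball downward",
    "the ball falls because it is heavy",
    "heavy things fall",
]

# Represent responses as TF-IDF vectors, then group them into two clusters.
vectors = TfidfVectorizer().fit_transform(responses)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
for response, label in zip(responses, labels):
    print(label, response)
```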
Implementation of PROMETHEE Method for Employee Performance Assessment System
PT Trimba Engineering is a company operating in the field of repair and maintenance of modules for various telecommunications, transmission, and aviation navigation devices. The firm conducts an annual audit to assess employee performance, and the outcomes determine whether employees receive an annual bonus. Performance assessment is divided into five achievement categories: Special, Excellent, Good, Enough, and Poor. Only employees whose assessment outcome is Special (two times salary), Excellent (one time salary), or Good (half salary) receive a bonus. The existing assessment over these five achievement categories still suffers from errors in data management, produces no ranking of assessment outcomes, and focuses only on the first criterion, even though an employee who scores well there is not necessarily excellent on the other criteria. In this research, the assessment uses the PROMETHEE method, based on three criteria: technical performance, task execution, and personality, each with twenty-one sub-criteria. The final employee performance ranking is obtained by using PROMETHEE to compare each alternative against the others, computing the deviations in order to calculate the leaving flow, entering flow, and net flow values.
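A minimal PROMETHEE II sketch of the flow computation named above is shown below. The usual (strict) preference function, the three equal weights, and the toy employee scores are illustrative assumptions, not the paper's twenty-one sub-criteria or its preference functions.

```python
# PROMETHEE II sketch: pairwise preferences, leaving flow, entering flow,
# and net flow for ranking alternatives. Scores and weights are hypothetical.

def promethee_ii(scores, weights):
    """scores: per-alternative lists of criterion values; weights: per-criterion weights."""
    n = len(scores)
    total = sum(weights)

    def pref(a, b):
        # Usual criterion: any positive difference on a criterion counts fully.
        return sum(w for sa, sb, w in zip(scores[a], scores[b], weights) if sa > sb) / total

    leaving = [sum(pref(a, b) for b in range(n) if b != a) / (n - 1) for a in range(n)]
    entering = [sum(pref(b, a) for b in range(n) if b != a) / (n - 1) for a in range(n)]
    net = [lv - en for lv, en in zip(leaving, entering)]
    return leaving, entering, net

# Hypothetical example: three employees scored on technical performance,
# task execution, and personality, weighted equally.
scores = [[80, 70, 90], [75, 85, 80], [90, 60, 70]]
weights = [1.0, 1.0, 1.0]
leaving, entering, net = promethee_ii(scores, weights)
ranking = sorted(range(len(net)), key=lambda i: net[i], reverse=True)
print(leaving, entering, net, ranking)
```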
Subgrade performance assessment for rigid runway using long-term pavement performance database
Maintaining the desired subgrade performance is an effective way to reduce runway pavement deterioration. Due to a lack of extensive field test data, the life-cycle performance of runway subgrade has not been fully understood. In order to quantitatively estimate subgrade condition, a novel method of evaluating subgrade performance was developed and validated using 726 sets of Heavy Weight Deflectometer (HWD) test data from ten runway sections. Statistical analysis demonstrates that the structural behaviour of rigid runway subgrade follows a normal distribution in different service stages and can be efficiently evaluated by the subgrade performance index (ψ). The results of factor analysis show that Accumulated Air Traffic Volume (ATV) during service life is the major cause of spatial variations in subgrade condition. Within the designed service period of the runway, the results validate that sea-reclaimed subgrade degrades faster in the initial stage of service life, while thicker pavement better protects the subgrade soil over the long term. In addition, differences in applied loads and pavement thickness give rise to subgrade performance variation in the longitudinal direction. Meanwhile, the comparison between the main and the less trafficked test lines in the transverse direction reveals that aircraft impacts play a positive role in resisting the natural fatigue process. Using the suggested method, the subgrade performance of HWD test points can be categorized into four levels, from "Excellent" and "Good" to "Fair" and "Poor", based on the ψ value. This helps airport agencies make scientific decisions on Maintenance and Rehabilitation (M&R) treatment by calculating the effective area of the envelope (β) using the ratio of subgrade performance (η).
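The four-level categorization described above can be sketched roughly as below. The ψ thresholds and the simple share-above-target summary (standing in for the η ratio) are illustrative assumptions; the paper derives its own breakpoints and the envelope area β from the HWD data.

```python
# Hedged sketch: classify HWD test points into the four levels named in the
# abstract by their subgrade performance index (psi). Thresholds are assumed.

PSI_THRESHOLDS = [(0.9, "Excellent"), (0.75, "Good"), (0.6, "Fair")]

def classify_psi(psi):
    """Map a psi value to a performance level using the assumed thresholds."""
    for threshold, level in PSI_THRESHOLDS:
        if psi >= threshold:
            return level
    return "Poor"

def share_at_or_above(psi_values, target_level="Good"):
    """Share of test points at or above a target level, a stand-in for the
    ratio-of-subgrade-performance idea (eta) mentioned in the abstract."""
    order = ["Poor", "Fair", "Good", "Excellent"]
    target = order.index(target_level)
    count = sum(1 for p in psi_values if order.index(classify_psi(p)) >= target)
    return count / len(psi_values)

points = [0.95, 0.82, 0.71, 0.55, 0.88]
print([classify_psi(p) for p in points], share_at_or_above(points))
```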
Risk-based integrated performance assessment framework for public-private partnership infrastructure projects
Public-private partnerships (PPPs) play a pivotal role in global infrastructure development, significantly impacting economic growth. However, a notable research gap exists in addressing risk management adequately within the performance assessment of PPP projects, particularly in developing nations like Pakistan. This study addresses this gap by developing an integrated performance assessment framework (IPAF) that remedies the lack of structured risk management in PPP project evaluations. The purpose is to devise a systematic methodology for assessing PPP project performance, with a keen emphasis on robust risk management criteria. The methodology integrates 16 performance measures (PMs) aligned with key performance indicators (KPIs), covering the triple constraints of projects (cost, time and quality) during the feasibility, execution, and operation and maintenance phases of the project life cycle. Additionally, it incorporates an analysis of 10 prominent risks, spanning the financial, environmental, operational, construction, legal and governmental dimensions inherent to PPP projects. The IPAF not only identifies these risks but also offers calculated mitigation strategies to enhance overall project performance. Emphasising alignment with project objectives, stakeholder engagement and contextual factors, the framework aids decision-makers, project managers and policymakers in making informed decisions throughout the project lifecycle. Furthermore, this study contributes a systematic approach to addressing the critical link between risk management and project performance in PPP projects. By bridging this gap, the IPAF fosters enhanced project outcomes, thereby contributing to the advancement of infrastructure development practices in both developed and developing contexts.
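One way such a framework might combine phase-level performance measures with risk scores is sketched below. The measure names, weights, risk penalty, and aggregation rule are all hypothetical illustrations of the general idea, not the IPAF itself.

```python
# Hypothetical sketch of an integrated performance score: weighted
# performance measures per project phase, discounted by averaged risk scores.

def phase_score(measures):
    """measures: dict of PM name -> (score in [0, 1], weight)."""
    total_weight = sum(w for _, w in measures.values())
    return sum(s * w for s, w in measures.values()) / total_weight

def integrated_score(phase_scores, risk_scores, risk_weight=0.3):
    """Combine mean phase performance with a mean risk penalty (assumed rule)."""
    performance = sum(phase_scores.values()) / len(phase_scores)
    risk_penalty = sum(risk_scores.values()) / len(risk_scores)
    return (1 - risk_weight) * performance - risk_weight * risk_penalty

phases = {
    "feasibility": phase_score({"cost_estimate_accuracy": (0.8, 2), "approval_time": (0.6, 1)}),
    "execution": phase_score({"schedule_variance": (0.7, 2), "quality_compliance": (0.9, 1)}),
}
risks = {"financial": 0.4, "environmental": 0.2}
print(integrated_score(phases, risks))
```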
Embodied carbon in construction materials: a framework for quantifying data quality in EPDs
Embodied carbon constitutes a significant portion of a building's greenhouse gas (GHG) emissions and is a key challenge for the construction and real estate sectors. Embodied carbon includes construction product manufacturing, building construction, material replacement and end of life. During the specification and procurement stage, designers and contractors have the opportunity to prioritize products with lower carbon footprints. Environmental product declarations (EPDs) are a growing source of environmental data in the construction products market, and are increasingly being used for (1) environmental performance assessment of buildings and (2) product comparison for procurement decisions during the later stages of building design. An obstacle to identifying and purchasing lower embodied carbon products is a lack of data quality and transparency in EPDs: EPDs vary widely in their data quality and specificity, which can lead to inaccurate and misleading comparisons. A new method is presented to account quantitatively for estimates of variation in underlying data specificity in EPDs, to enable fairer comparisons between EPDs and to motivate the reporting of actual variability and uncertainty in EPDs. The application of this approach can help purchasers assess EPDs quantitatively.
Practice relevance: Life-cycle assessments (LCAs) and LCA data can be used within the construction sector to evaluate buildings and to assist in design, specification and procurement decision-making. A new method is presented to support the assessment of comparability of functionally equivalent materials and products during the specification and procurement stage. Given the known variation and lack of precision within EPDs, this method provides quantitative metrics that correlate with a qualitative interpretation of EPD precision. This method can be used by anyone who is using EPD data to make product comparisons at the specification and procurement stage:
* It provides more confidence in choosing low-carbon material or product options when comparing between functionally equivalent options.
* It can incentivize product manufacturers and LCA practitioners to improve data quality and transparently report known variation in their EPDs.
* It may also motivate manufacturers to reduce GHGs from their products and processes.
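The sketch below illustrates the general idea of widening each declared value by a data-quality-dependent uncertainty band before comparing EPDs. The uncertainty factors, the specificity categories, and the interval-overlap rule are assumptions made for illustration; they are not the quantitative metrics defined in the paper.

```python
# Hedged sketch: compare two EPDs by widening each declared global warming
# potential (GWP) into an interval based on data specificity, then only
# calling one "lower" if the intervals do not overlap. Factors are assumed.

UNCERTAINTY_FACTOR = {
    "product_specific": 0.10,   # facility- and product-specific data (assumed)
    "industry_average": 0.30,   # sector-average background data (assumed)
}

def gwp_interval(declared_gwp, specificity):
    """Return a (low, high) band around the declared GWP."""
    spread = declared_gwp * UNCERTAINTY_FACTOR[specificity]
    return declared_gwp - spread, declared_gwp + spread

def confidently_lower(epd_a, epd_b):
    """True only if A's upper bound is below B's lower bound."""
    return gwp_interval(*epd_a)[1] < gwp_interval(*epd_b)[0]

concrete_a = (310.0, "product_specific")   # kg CO2e per m3, hypothetical
concrete_b = (360.0, "industry_average")
print(gwp_interval(*concrete_a), gwp_interval(*concrete_b), confidently_lower(concrete_a, concrete_b))
```

Because the industry-average declaration carries a wider assumed band, the intervals overlap and the comparison is inconclusive, which mirrors the paper's point that nominal EPD values alone can mislead procurement decisions.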