Catalogue Search | MBRL
16,821 result(s) for "Assessment centers"
Personality Traits, Organizational Support, and Anxiety in Assessment Center Process
by Zulkarnain, Zulkarnain; Rahmadani, Vivi Gusrini; Novliadi, Ferry
in Anxiety; Assessment centers; Data collection
2026
Assessment centers are widely used in human resource management to assess employees' suitability for specific job roles, including promotions and selections. Despite their effectiveness in predicting candidates' success, previous research has shown that individuals undergoing assessment center procedures often experience elevated anxiety levels. The study involved 241 officers functioning as assistant managers, all of whom had previously undergone assessment center procedures. Surveys were administered to collect relevant data for analysis. Hierarchical regression analysis was employed to analyze the collected data and examine the relationships between participants' personality traits, the extent of organizational support they received, and their anxiety levels during the assessment center process. The results of the hierarchical regression analysis revealed a clear connection between participants' personality traits and the level of organizational support they received during the assessment center process. The findings have significant implications for addressing the issue of employee anxiety within assessment center processes. Organizations can take practical steps to reduce anxiety's impact on candidates by understanding how personality traits and organizational support influence anxiety levels. Implementing strategies that leverage personality traits and foster organizational support can enhance the overall effectiveness of assessment center evaluations and improve the well-being of individuals undergoing these assessments.
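The hierarchical regression described above can be sketched as a pair of nested OLS models whose R² values are compared; the variable names, effect sizes, and simulated data below are illustrative, not the study's:

```python
import numpy as np

def r_squared(X, y):
    """In-sample R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 241  # sample size from the study
personality = rng.normal(size=(n, 2))  # two trait scores (illustrative)
support = rng.normal(size=(n, 1))      # perceived organizational support
anxiety = (0.4 * personality[:, 0] - 0.5 * support[:, 0]
           + rng.normal(size=n))

# Step 1: personality traits only; Step 2: traits plus support.
r2_step1 = r_squared(personality, anxiety)
r2_step2 = r_squared(np.hstack([personality, support]), anxiety)
delta_r2 = r2_step2 - r2_step1  # incremental variance explained by support
```

The quantity of interest in a hierarchical analysis is `delta_r2`: how much variance the second block explains beyond the first.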
Journal Article
Assessment centers do not measure competencies: Why this is now beyond reasonable doubt
2024
Although assessment centers (ACs) are usually designed to measure stable competencies (i.e., dimensions), doubt about whether or not they reliably do so has endured for 70 years. Addressing this issue in a novel way, several published Generalizability (G) theory studies have sought to isolate the multiple sources of variance in AC ratings, including variance specifically concerned with competencies. Unlike previous research, these studies can provide a definitive answer to the AC construct validity issue. In this article, the historical context for the construct validity debate is set out, and the results of four large-scale G-theory studies of ACs are reviewed. It is concluded that these studies demonstrate, beyond reasonable doubt, that ACs do not reliably measure stable competencies, but instead measure general, and exercise-related, performance. The possibility that ACs measure unstable competencies is considered, and it is suggested that evidence that they do so may reflect an artefact of typical AC design rather than a “real” effect. For ethical, individual, and organizational reasons, it is argued that the use of ACs to measure competencies can no longer be justified and should be halted.
Journal Article
Generating Dynamic Reports on Competency Clusters Using Text Generation and Paraphrasing Approaches
2025
An assessment center is an evaluation tool whose implementation requires multiple simulations, multiple assessors, and a process for aggregating assessment scores. Currently, the competency cluster dynamics report, one component of the assessment report, is written manually by an assessor. This study proposes Natural Language Generation (NLG) as an alternative way to create the competency cluster dynamics report, using three approaches to build the NLG model. The template-based model creates paragraphs directly from tabular data by defining a sentence frame and filling it with the relevant information. The data-to-text approach transforms tabular data into a flat string format (linearization) used as model input. Finally, the text-to-text approach trains a pre-trained paraphrasing language model, using the template-based output as its input. Following quantitative and qualitative evaluations, the text-to-text model produced the best report output: in fluency it matched human-written reports, and in faithfulness and coherence it scored higher than they did. Meanwhile, the template-based model outperformed both human-written reports and the text-to-text model in faithfulness, while the data-to-text approach had the lowest evaluation scores of all models.
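The template-based approach described above can be sketched as a sentence frame filled from one row of tabular data; the frame, cluster names, scores, and threshold here are illustrative assumptions, not the paper's templates:

```python
def render_cluster_report(name, score, scale_max=5):
    """Fill a fixed sentence frame with values from one table row.

    The frame wording and the 70% threshold are illustrative.
    """
    level = ("a strength" if score >= 0.7 * scale_max
             else "an area for development")
    return (f"In the {name} cluster, the assessee scored "
            f"{score} out of {scale_max}, indicating {level}.")

# One frame per table row, concatenated into a paragraph.
rows = [("Leadership", 4), ("Analytical Thinking", 2)]
report = " ".join(render_cluster_report(n, s) for n, s in rows)
```

Output of a template-based model is faithful by construction (every number comes from the table), which matches the faithfulness result reported above.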
Journal Article
Any slice is predictive? On the consistency of impressions from the beginning, middle, and end of assessment center exercises and their relation to performance
by Breil, Simon M.; Ingold, Pia V.; Heimann, Anna Luca
in Assessment centers; Cognition & reasoning; Hypotheses
2024
This study generates new insights on the role of initial impressions in assessment centers. Drawing from the “thin slices” of behavior paradigm in personality and social psychology, we investigate to what extent initial impressions of assessees—based on different slices of assessment center exercises (i.e., two minutes at the beginning, middle, and end of AC exercises)—are consistent across and within AC exercises, and are relevant for predicting assessment center performance and job performance. Employed individuals ( N = 223) participated in three interactive assessment center exercises, while being observed and evaluated by trained assessors. Based upon video-recordings of all assessment center exercises, a different, untrained group of raters subsequently provided ratings of their general initial impressions of assessees for the beginning, middle, and end of each exercise. As criterion measure, supervisors rated assessees’ job performance. Results show that initial impressions in assessment centers are (a) relatively stable, (b) consistently predict assessment center performance across different slices of behavior (i.e., across the three time points and exercises), and (c) mostly relate to job performance.
Journal Article
Assessment centers in the virtual age: validity and fairness in gender and age
by Mariani, Marco Giovanni; Rizzo, Barbara; Petruzziello, Gerardo
in Assertiveness; Assessment centers; Candidates
2025
Purpose: This study focused on integrating digital methodologies in personnel selection processes and explored the psychometric properties of virtual assessment centers as alternatives to traditional ones. We evaluated the validity of a virtual application of an assessment center and its fairness in gender-based evaluations.
Design/methodology/approach: We collected data from 120 managers at an Italian company undergoing evaluations through an online assessment center. This virtual platform administered tests in English across three exercise phases to gauge competencies such as teamwork, decision-making, execution and assertiveness. The study employed Pearson correlation indexes, principal component analysis, McDonald’s omega, binary logistic regression, MANOVA, chi-square and the 4/5th rule for statistical analyses to assess the validity and gender fairness of the virtual assessment method.
Findings: The results indicate that virtual assessment centers show promising validity and do not exhibit adverse impacts based on gender or age. This suggests that they are a fair method for evaluating candidates. These results support the potential of digital methodologies to serve effectively in personnel selection processes, offering theoretical and practical implications for future application and research.
Originality/value: This article contributes novel insights into the evolving field of digital personnel selection processes. By systematically evaluating the validity and gender fairness of virtual assessment centers, the research addresses a significant gap in existing literature, providing a foundational basis for further exploration into virtual assessment methodologies.
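The 4/5th (four-fifths) rule mentioned among the analyses is simple to state: the selection rate of the focal group divided by that of the reference group should be at least 0.8, otherwise adverse impact is suspected. A minimal sketch with made-up counts:

```python
def four_fifths_rule(sel_focal, n_focal, sel_ref, n_ref):
    """Adverse-impact check: ratio of selection rates must be >= 0.8.

    Group labels and counts used below are illustrative placeholders.
    """
    rate_focal = sel_focal / n_focal
    rate_ref = sel_ref / n_ref
    ratio = rate_focal / rate_ref
    return ratio, ratio >= 0.8

# e.g. 30 of 60 candidates selected in one group vs. 35 of 60 in the other
ratio, fair = four_fifths_rule(30, 60, 35, 60)  # ratio = 30/35 ≈ 0.857
```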
Journal Article
Talent quotient: development and validation of a measurement scale
by Yogalakshmi, J.A; Supriya, M.V
in Assessment centers; Assessment Centers (Personnel); Cognitive Ability
2020
Purpose: The aim of the current study was to develop and validate a measure for identifying talent in the workplace. This is a gap long identified by researchers in this field.
Design/methodology/approach: Hinkin's methodology was adopted for the establishment of a psychometrically sound measure. A 16-item scale for assessing the construct was developed. The reliability and validity were established by analyzing content adequacy, convergent validity, divergent validity and external validity. Primary data were collected from employees signaled as talent by their organization.
Findings: The study yielded a six-factor structure for the scale. These factors accounted for 66.8 percent of observed variance. All six dimensions, namely calling orientation, critical insight, continuous learning, collaboration, cohesiveness and challenge drive, established acceptable reliability and validity.
Social implications: The research provides a precise definition of the talent construct. Identification and retention of individuals with a high talent quotient is a critical challenge for organizations. This measurement scale makes identifying talent possible.
Originality/value: This research developed a reliable and valid measurement scale for the talent construct, which provides a precise definition of the construct. This simple, sound scale could be useful at both the individual and organizational levels. It helps individuals to identify and focus on critical areas for achieving talent status, and organizations benefit through better human resource management practice. Identification and retention of talent are essential to career management. Overall, the scale also satisfies the urgent need in talent management research for a clear definition of the talent construct.
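Scale-reliability checks like those described are commonly summarized with Cronbach's alpha. This minimal sketch uses simulated item scores (not the study's data) to show the computation:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulate 4 items that share one latent factor, 100 respondents.
rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 1))
scores = latent + 0.5 * rng.normal(size=(100, 4))
alpha = cronbach_alpha(scores)  # high, since items share a strong factor
```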
Journal Article
Encouraging residents’ professional development and career planning: the role of a development-oriented performance assessment
by Bustraan, Jacqueline; de Beaufort, Arnout J.; Velthuis, Sophie I.
in Analysis; Assessment centers; Behavioral Objectives
2018
Background
Current postgraduate medical training programmes fall short regarding residents' development of generic competencies (communication, collaboration, leadership, professionalism) and reflective and deliberate practice. Paying attention to these non-technical skills in a structural manner during postgraduate training could result in a workforce better prepared for practice. A development-oriented performance assessment (PA), which assists residents with assessing their performance and deliberately planning learning activities, could potentially contribute to filling this gap. This study aims to explore residents' experiences with the PA.
Methods
We conducted a qualitative interview study with 16 residents from four different medical specialties who participated in the PA, scheduled halfway postgraduate training. The PA was conducted by an external facilitator, a psychologist, and focused specifically on professional development and career planning. Residents were interviewed 6 months after the PA. Data were analysed using the framework method for qualitative analysis.
Results
Residents found the PA to be of additional value for their training. The overarching merit was the opportunity to evaluate competencies not usually addressed in workplace-based assessments and progress conversations. In addition, the PA proved a valuable tool for assisting residents with reflecting upon their work and formulating their learning objectives and activities. Residents reported increased awareness of capacity, self-confidence and enhanced feelings of career-ownership. An important factor contributing to these outcomes was the relationship of trust with the facilitator and programme director.
Conclusion
The PA is a promising tool in fostering the development of generic competencies and reflective and deliberate practice. The participating residents, facilitator and programme directors were able to contribute to a safe learning environment away from the busy workplace. The facilitator plays an important role by providing credible and informative feedback. Commitment of the programme director is important for the implementation of developmental plans and learning activities.
Journal Article
The Relevance of Emotional Intelligence in Personnel Selection for High Emotional Labor Jobs
by Hock, Michael; Herpertz, Sarah; Schütz, Astrid
in Assessment centers; Aviation - manpower; Biology and Life Sciences
2016
Although a large number of studies have pointed to the potential of emotional intelligence (EI) in the context of personnel selection, research in real-life selection contexts is still scarce. The aim of the present study was to examine whether EI would predict Assessment Center (AC) ratings of job-relevant competencies in a sample of applicants for the position of a flight attendant. Applicants' ability to regulate emotions predicted performance in group exercises. However, there were inconsistent effects of applicants' ability to understand emotions: Whereas the ability to understand emotions had a positive effect on performance in interview and role play, the effect on performance in group exercises was negative. We suppose that the effect depends on task type and conclude that tests of emotional abilities should be used judiciously in personnel selection procedures.
Journal Article
A Meta-Analysis of the Criterion-Related Validity of Assessment Center Dimensions
by Arthur Jr., Winfred; Edens, Pamela S.; Day, Eric Anthony
in Assessment centers; Assessment Centers (Personnel); Cognitive ability
2003
We used meta‐analytic procedures to investigate the criterion‐related validity of assessment center dimension ratings. By focusing on dimension‐level information, we were able to assess the extent to which specific constructs account for the criterion‐related validity of assessment centers. From a total of 34 articles that reported dimension‐level validities, we collapsed 168 assessment center dimension labels into an overriding set of 6 dimensions: (a) consideration/awareness of others, (b) communication, (c) drive, (d) influencing others, (e) organizing and planning, and (f) problem solving. Based on this set of 6 dimensions, we extracted 258 independent data points. Results showed a range of estimated true criterion‐related validities from .25 to .39. A regression‐based composite consisting of 4 out of the 6 dimensions accounted for the criterion‐related validity of assessment center ratings and explained more variance in performance (20%) than Gaugler, Rosenthal, Thornton, and Bentson (1987) were able to explain using the overall assessment center rating (14%).
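A bare-bones version of the meta-analytic step is the sample-size-weighted mean correlation across studies (the authors additionally apply artifact corrections, which are omitted here); the validities and sample sizes below are made up for illustration:

```python
def weighted_mean_r(rs, ns):
    """Sample-size-weighted mean correlation across studies."""
    return sum(r * n for r, n in zip(rs, ns)) / sum(ns)

# Hypothetical dimension-level validities from three studies
rs = [0.25, 0.39, 0.30]
ns = [120, 200, 80]
r_bar = weighted_mean_r(rs, ns)  # (30 + 78 + 24) / 400 = 0.33
```

Weighting by sample size gives large studies more influence, which is the basic rationale behind pooling 258 independent data points rather than averaging study-level results naively.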
Journal Article
A list-scheduling heuristic for the short-term planning of assessment centers
2018
Many companies operate assessment centers to help them select candidates for open job positions. During the assessment process, each candidate performs a set of tasks and is evaluated by assessors. Additional constraints such as preparation and evaluation times, actors' participation in tasks, no-go relationships, and prescribed time windows for lunch breaks add to the complexity of planning such assessment processes. We propose a multi-pass list-scheduling heuristic for this novel planning problem; to this end, we develop novel procedures for devising appropriate scheduling lists and for generating a feasible schedule. Computational results for a set of example problems that represent or are derived from real cases indicate that the heuristic generates optimal or near-optimal schedules within relatively short CPU times.
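A generic list-scheduling heuristic of the kind the paper builds on can be sketched as: order tasks by a priority list, then assign each task to the earliest-available resource. The paper's extra constraints (time windows, no-go relations, lunch breaks) are not modeled here, and the tasks and durations are illustrative:

```python
import heapq

def list_schedule(durations, priority, n_machines):
    """Assign tasks (in priority order) to the earliest-free machine.

    durations: {task: duration}; priority: sort key for the task list.
    Returns {task: (machine, start, end)} and the makespan.
    """
    free_at = [(0, m) for m in range(n_machines)]  # (free time, machine)
    heapq.heapify(free_at)
    schedule = {}
    for task in sorted(durations, key=priority):
        t, m = heapq.heappop(free_at)
        schedule[task] = (m, t, t + durations[task])
        heapq.heappush(free_at, (t + durations[task], m))
    makespan = max(end for _, _, end in schedule.values())
    return schedule, makespan

durations = {"interview": 3, "role_play": 2, "case_study": 4, "presentation": 1}
# Longest-processing-time-first priority list, two assessors
sched, makespan = list_schedule(durations, lambda t: -durations[t], 2)
```

A multi-pass variant, as in the paper, would run this inner loop with several different priority lists and keep the best feasible schedule found.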
Journal Article