Although a center is typically a place where something occurs, an assessment center is not so much a place as it is a method. A key principle of this method is multiple-attribute assessment. That is, assessment focuses on multiple attributes or dimensions relevant to an individual’s overall performance. Another key principle is that assessment is not based on any single method. Rather, evaluations are derived from assessment across multiple methods. In an assessment center procedure, an individual’s performance is observed in multiple situations or exercises. Performance across these exercises serves as the basis for an assessment of several performance-related dimensions. A unique characteristic of this approach is that small groups of people are evaluated simultaneously, as opposed to one person at a time. Another important characteristic of the assessment center approach is that final assessments are usually made by a group of assessors observing performance and working together to reach a consensus.
Assessment Center Background
The history of assessment centers as a formal approach to assessment begins with the work of the World War II-era Office of Strategic Services (OSS). A wartime intelligence agency, the OSS was concerned with the selection of officers for intelligence positions. Candidates for these positions were evaluated over a week of interviews, tests, and exercises intended to ascertain whether they had the necessary capabilities (e.g., mental ability, motivation, physical stamina, emotional stability, stress resistance). Following the war, AT&T modified and adapted this approach for use in managerial selection. AT&T’s managerial assessment centers demonstrated considerable value in predicting advancement through the organizational hierarchy. As a result, the assessment center has traditionally been viewed as a tool for selection into managerial jobs. However, because multiple individuals are evaluated simultaneously, assessment centers are particularly advantageous for the direct evaluation of interpersonal variables. Consequently, assessment centers are a potentially valuable selection tool for any job in which social skills are of particular importance, and they are being used more frequently for purposes other than managerial assessment. Today, assessment centers are used in numerous private and public organizations to evaluate thousands of people each year. For example, assessment centers have been used to evaluate salespeople, teachers and principals, engineers, rehabilitation counselors, police officers and firefighters, and candidates for various customer service positions, as well as for managerial selection. They have also been used to assist high school and college students with career planning. In addition, while predominantly used as a selection tool, assessment centers are increasingly used for training and development purposes. Specifically, assessment centers are often used as a tool for identifying individuals’ relative strengths and weaknesses with respect to key performance domains.
Assessment Center Dimensions
Most assessment centers are designed specifically for the jobs and/or organizations in which they are used. Assessment center “dimensions” are typically developed around organizationally specific values and practices derived from a systematic analysis of the job and/or organization. Thus, specific assessment centers are relatively idiosyncratic with respect to the performance dimensions being assessed. The dimensions might represent personal traits, job-specific competencies, or general knowledge or skill constructs. A recent review of the assessment center literature, for example, compiled a list of more than 168 different labels for performance dimensions assessed across different assessment centers. However, examination of these labels indicated that the vast majority could be assigned to one of seven general categories: communication skills, problem-solving skills, consideration and awareness of others, ability to influence others, organizing and planning ability, drive, and tolerance for stress or uncertainty.
Assessment related to communication skills focuses on the extent to which an individual conveys oral and written information and responds to questions and challenges. Problem solving focuses on the extent to which an individual gathers information; understands relevant technical and professional information; effectively analyzes data and information; generates viable options, ideas, and solutions; selects supportable courses of action for problems and situations; uses available resources in new ways; and generates and recognizes imaginative solutions. Consideration and awareness of others focuses on the extent to which an individual’s actions reflect a consideration for the feelings and needs of others as well as an awareness of the impact and implications of decisions relevant to other components both inside and outside the organization. Influencing others focuses on the extent to which an individual persuades others to do something or adopt a point of view in order to produce desired results, and takes action in which the dominant influence is the individual’s own convictions rather than the influence of others’ opinions. Organizing and planning focuses on (a) the extent to which an individual systematically arranges his or her own work and resources as well as that of others for efficient task accomplishment and (b) the extent to which an individual anticipates and prepares for the future. Drive focuses on the extent to which an individual generates and maintains a high activity level, sets high performance standards and persists in their achievement, and expresses the desire to advance to higher job levels. And finally, tolerance for stress or uncertainty focuses on the extent to which an individual maintains effectiveness in diverse situations under varying degrees of pressure, opposition, and disappointment.
Assessment Center Exercises
Assessment center exercises are performance tests designed as samples or abstractions of the jobs for which individuals are being assessed. Although assessment centers typically do not include high-fidelity work samples or simulations, the exercises are designed to reflect major job components. While there is a great deal of variability across assessment centers with respect to the exact content of exercises, the form of these exercises is relatively consistent. Almost all assessment center exercises can be classified into one of six types: leaderless group discussions, in-basket exercises, case analyses, interviews, presentations, and one-on-one role plays.
Leaderless group discussions are probably the most commonly used and well-known assessment center exercises. There are many variations of this type of exercise. Typically, a group of four to eight participants is given a problem to solve, a time limit in which to do so, and a requirement to develop a written solution agreed to by all members of the group. Specific roles may be assigned to the various group members; however, no one is assigned the role of leader or chair. Rather, leadership behaviors must emerge during the discussion. A common variant is the competitive leaderless group discussion, which adds the requirement of persuading others to adopt a particular solution or outcome while still maintaining the requirement of a consensus decision.
In-basket exercises are another commonly used exercise. These exercises are not group exercises. Rather, they are designed to simulate administrative work and are performed individually. They usually include a simulated set of memos, messages, e-mails, letters, and reports, such as might accumulate in a manager’s “in-basket,” as well as other reference material (e.g., organizational charts, personal calendars). The materials are usually interrelated and vary with respect to complexity and urgency. Participants are typically asked to play the role of a person new to the job, working alone with the goal of trying to clear the in-basket. Scoring protocols differ across in-basket exercises but often include a follow-up interview in which participants are asked to explain their approaches to the exercise and reasons for actions taken.
Although leaderless group discussions and in-baskets are the most commonly used exercises, many assessment centers include other exercises, such as one-on-one role plays, case analyses, and presentations.
In addition, assessment centers often include interviews. Unlike more typical employment interviews, however, interviews used in assessment centers are usually highly structured and may incorporate role play or other simulated components.
Typical assessment centers include three to five exercises to assess anywhere from 3 to 25 performance dimensions. Exercises may be of different types or variations of a common type (e.g., a competitive and a non-competitive leaderless group discussion). Similarly, dimensions may represent different major categories (as presented above) or subcomponents of the major categories (e.g., oral communication and written communication). The general strategy is for exercises and dimensions to be crossed such that each exercise allows for an assessment of each performance dimension. Often, however, particular dimensions are not assessed in a particular exercise (e.g., written communication may not be assessed in an interview).
Participants in an assessment center are observed by one or more “assessors” in each of the exercises. Assessors observe and record participant behaviors relevant to each performance dimension to be assessed. An assessor may also directly interact with participants either as a role player or as an interviewer. After all of the exercises have been completed, assessors typically meet as a group to review and discuss each participant’s performance across exercises. The goal of this discussion is to generate a consensus rating representing each participant’s standing on each performance dimension. In addition, an overall performance rating may be generated.
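To make the exercise-by-dimension structure and the rating process more concrete, the following minimal sketch (in Python, using entirely hypothetical exercises, dimensions, assessor counts, and scores) shows one way such ratings could be organized and rolled up into dimension-level and overall scores. In practice, final ratings emerge from the assessors' consensus discussion rather than from a mechanical average, so the code is purely illustrative.

```python
# Illustrative sketch only: hypothetical ratings on a 1-5 scale.
# Final dimension ratings normally come from assessor consensus discussion;
# the averages below merely show how the data line up across exercises.
from statistics import mean

# ratings[exercise][dimension] -> scores from the assessors who observed
# that exercise (some dimensions are not assessed in every exercise,
# e.g., written communication in an interview).
ratings = {
    "leaderless_group_discussion": {
        "communication": [4, 3],
        "influencing_others": [3, 3],
        "problem_solving": [4, 4],
    },
    "in_basket": {
        "problem_solving": [3, 2],
        "organizing_and_planning": [4, 3],
    },
    "interview": {
        "communication": [5, 4],
        "drive": [4, 4],
    },
}

# Pool all observations of each dimension across exercises and assessors.
by_dimension = {}
for exercise_scores in ratings.values():
    for dimension, scores in exercise_scores.items():
        by_dimension.setdefault(dimension, []).extend(scores)

dimension_ratings = {dim: mean(scores) for dim, scores in by_dimension.items()}
overall_rating = mean(dimension_ratings.values())

for dim, score in sorted(dimension_ratings.items()):
    print(f"{dim}: {score:.1f}")
print(f"overall: {overall_rating:.1f}")
```

The nested mapping mirrors the crossed design described above: each exercise contributes evidence on several dimensions, and each dimension is (ideally) observed in more than one exercise.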
Assessment Center Design
Although there is a great deal of variability with respect to the design and implementation of assessment centers, several design issues are important for the ultimate success of the assessment process. These issues are the number of performance dimensions assessed, the number of assessors needed, assessor qualifications, and assessor training.
As noted above, the number of performance dimensions evaluated in a given assessment center ranges from 3 to 25, with an average of approximately 9. However, research suggests that assessors may have difficulty differentiating among a large number of performance dimensions. In one study, assessors were responsible for rating 3, 6, or 9 dimensions; those asked to evaluate 3 or 6 dimensions provided more accurate assessments than those asked to rate 9. This suggests that when assessors must evaluate a large number of dimensions, the cognitive demands placed on them make it difficult to process information at the dimension level, resulting in less accurate assessments. It appears that 5 to 7 dimensions may be optimal for a given assessment center.
A similar concern exists with respect to the number of assessors needed to evaluate participants effectively. As the number of participants a given assessor must observe and evaluate in a given exercise increases, so do the demands placed on that assessor, making it more difficult to evaluate each participant accurately. Although some assessment centers have required each assessor to simultaneously observe and evaluate as many as four participants, the typical ratio of participants to assessors is 2:1. Exceeding this ratio should be considered very carefully.
Another potentially important design factor focuses on the assessors. What qualifications should assessors have? Assessors may be psychologists, human resource specialists, job incumbents, or managers, and assessor teams may combine these backgrounds. In general, assessors should be good observers, objective, and articulate. Some authors posit that as a result of their education and training, psychologists and similarly trained human resource professionals are better equipped to observe, record, and evaluate behavior. Alternatively, managers and other job incumbents or experts may have more practical knowledge of the job as well as the organization and its policies. There are both costs and benefits associated with different types of assessors, and these must be weighed carefully in the assessment center design process.
Regardless of assessor qualifications, assessment center ratings are inherently judgmental in nature. Consequently, assessor training is a crucial element in the development and design of assessment centers. A recent review of assessment center practices indicates that assessors may receive anywhere from one day to two weeks of training. The type of training is also an important consideration. There is a consensus in the literature that frame-of-reference training is a highly effective approach to assessor training. Frame-of-reference training typically involves emphasizing the multidimensionality of performance, defining the performance dimensions, providing sample behavioral incidents representing each dimension (along with the level of performance each incident represents), and offering practice and feedback in using these standards to evaluate performance. The goal of frame-of-reference training is for assessors to share and use common conceptualizations of performance when making evaluations. Irrespective of the training approach used, more extensive assessor training is generally associated with more effective assessment.
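As a purely hypothetical illustration of the practice-and-feedback component, the sketch below compares a trainee assessor's practice ratings of recorded behavioral incidents against expert "target" ratings. The dimensions, scores, and accuracy indices shown are assumptions made for illustration, not part of any standard frame-of-reference protocol.

```python
# Hypothetical example: comparing a trainee assessor's practice ratings
# (1-5 scale) with expert "target" ratings for the same recorded incidents.
from statistics import mean, pstdev

def pearson(xs, ys):
    """Plain Pearson correlation; assumes nonzero variance in both lists."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

# Target (expert consensus) and trainee ratings for eight practice incidents.
targets = {"communication": [4, 2, 5, 3, 4, 2, 3, 5],
           "problem_solving": [3, 4, 2, 5, 3, 4, 2, 4]}
trainee = {"communication": [4, 3, 5, 3, 3, 2, 4, 5],
           "problem_solving": [2, 4, 3, 5, 3, 3, 2, 4]}

for dim in targets:
    errors = [abs(t - r) for t, r in zip(targets[dim], trainee[dim])]
    print(f"{dim}: mean absolute deviation = {mean(errors):.2f}, "
          f"correlation with targets = {pearson(targets[dim], trainee[dim]):.2f}")
```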
Assessment Center Validity
In general, assessment centers demonstrate positive utility as a tool for selection as well as training and development. A great deal of research has examined the validity of assessment center ratings with respect to job performance. Content-related methods of validation are regularly used in assessment center development in an effort to meet professional and legal requirements; in essence, both dimensions and exercises are derived from a content sampling of the job. An important component of a manager’s job, for example, may be participating in meetings with peers, and leaderless group discussions serve as a representation of this aspect of the job. Evidence supporting the criterion-related validity of assessment center ratings is also consistently documented. Research suggests that assessment center ratings of specific performance dimensions, as well as overall assessment center ratings, predict job-related criteria such as supervisory performance ratings, sales performance, promotion rate, and salary progression. Evidence for the construct-related validity of assessment center dimensions, however, has been less promising. Assessment centers are designed to evaluate individuals on distinct dimensions of job performance across situations or exercises; research, however, has indicated that exercise rather than dimension factors emerge in the evaluation of participants. Thus, a lack of evidence of convergent validity, as well as a partial lack of evidence of discriminant validity, has been extensively reported in the literature. These findings have led to a prevailing view that assessment center ratings demonstrate criterion-related validity while lacking construct-related validity. Research continues to examine this issue, as it is inherently inconsistent with the current unitarian view of validity.
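The convergent/discriminant pattern described above can be illustrated with a simplified multitrait-multimethod style comparison. The sketch below uses simulated ratings (Python with NumPy; the effect sizes are invented for illustration, and the analysis is far simpler than the factor-analytic approaches typically used in this literature): same-dimension, different-exercise correlations index convergent validity, while same-exercise, different-dimension correlations reflect the exercise effects that undermine discriminant validity.

```python
# Simplified multitrait-multimethod style check on simulated ratings.
# All effect sizes below are invented purely to reproduce the commonly
# reported pattern (exercise effects stronger than dimension effects).
import numpy as np

rng = np.random.default_rng(0)
n = 200                                              # participants
exercises = ["LGD", "in_basket", "interview"]
dimensions = ["communication", "problem_solving", "drive"]

person = rng.normal(size=(n, 1, 1))                  # overall person effect
exercise_effect = rng.normal(size=(n, len(exercises), 1))
dimension_effect = rng.normal(size=(n, 1, len(dimensions)))
noise = rng.normal(size=(n, len(exercises), len(dimensions)))
ratings = 0.3 * person + 0.8 * exercise_effect + 0.3 * dimension_effect + 0.4 * noise

flat = ratings.reshape(n, -1)                        # columns ordered exercise-major
corr = np.corrcoef(flat, rowvar=False)

convergent, discriminant = [], []
for i in range(flat.shape[1]):
    for j in range(i + 1, flat.shape[1]):
        ex_i, dim_i = divmod(i, len(dimensions))
        ex_j, dim_j = divmod(j, len(dimensions))
        if dim_i == dim_j and ex_i != ex_j:
            convergent.append(corr[i, j])            # same dimension, different exercise
        elif ex_i == ex_j and dim_i != dim_j:
            discriminant.append(corr[i, j])          # same exercise, different dimension

print(f"mean same-dimension, different-exercise r: {np.mean(convergent):.2f}")
print(f"mean same-exercise, different-dimension r: {np.mean(discriminant):.2f}")
```

Under these simulated conditions, the same-exercise correlations exceed the same-dimension correlations, which is the pattern the construct-validity literature describes.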
Over the past several decades, assessment centers have enjoyed increasing popularity. Their validity is undoubtedly partially responsible for this popularity, and assessment centers tend to be well accepted by both organizational decision makers and participants. At the same time, this approach to assessment can be labor-intensive and thus costly: in addition to the time required of participants, assessment centers require a considerable investment of time on the part of assessors. Despite these costs, assessment centers present a unique tool for the assessment of individual differences.
See also:
- Career centers
- Career counseling
- Individual career management
- Career planning workshops
- Computer-based career support systems
References:
- Arthur, W. Jr., Day, E. A., McNelly, T. L. and Edens, P. S. 2003. “A Meta-analysis of the Criterion-related Validity of Assessment Center Dimensions.” Personnel Psychology 56:125-154.
- Arthur, W. Jr., Woehr, D. J. and Maldegan, R. M. 2000. “Convergent and Discriminant Validity of Assessment Center Dimensions: A Conceptual and Empirical Reexamination of the Assessment Center Construct-related Validity Paradox.” Journal of Management 26:813-835.
- Bray, D. W. and Grant, D. L. 1966. “The Assessment Center in the Measurement of Potential for Business Management.” Psychological Monographs: General and Applied 80(17): 1-27.
- Gaugler, B. B. and Thornton, G. C. III. 1989. “Number of Assessment Center Dimensions as a Determinant of Assessor Generalizability of the Assessment Center Ratings.” Journal of Applied Psychology 74:611-618.
- Lievens, F. 1998. “Factors Which Improve the Construct Validity of Assessment Centers: A Review.” International Journal of Selection and Assessment 6:141-152.
- Spychalski, A. C., Quinones, M. A., Gaugler, B. B. and Pohley, K. 1997. “A Survey of Assessment Center Practices in Organizations in the United States.” Personnel Psychology 50:71-90.
- Task Force on Assessment Center Guidelines. 1989. “Guidelines and Ethical Considerations for Assessment Center Operations.” Public Personnel Management 18:457-470.
- Woehr, D. J. and Arthur, W. Jr. 2003. “The Construct-related Validity of Assessment Center Ratings: A Review and Meta-analysis of the Role of Methodological Factors.” Journal of Management 29:231-258.