Part III: Administering a Multirater EQ 360 2.0

Planning the EQ 360 2.0 Assessment Process

Selecting EQ 360 2.0 Raters

As part of an EQ 360 2.0 assessment, raters must be recruited for each individual who is to be assessed (also referred to as the “participant” in this guide). Recruiting can be handled by the administrator, the participant, or both. Whoever is involved in the recruiting process must understand that the interpretability and the usefulness of EQ 360 2.0 results rest on the careful assignment of raters to the correct rater groups. There are five rater groups: (1) managers, (2) peers, (3) direct reports, (4) family/friends, and (5) others. These groups will be described in detail later in this section.

In accordance with the multirater approach, each rater group should include as many raters as possible. For each participant, it is recommended that a minimum of three raters be included in each rater group involved in the assessment, although it is common for the manager group to contain only one manager. Including a sufficient number of raters in each group not only increases the quality of feedback obtained from the groups that participate in the assessment, but also helps ensure the confidentiality of the responses provided by each rater within the group. If there are too few raters (i.e., fewer than three) in the peers, direct reports, or family/friends groups to meet the confidentiality requirement, all raters in that group should be assigned to the “other” rater group. Bear in mind that the quality of feedback increases with the number of raters, and the robustness of the overall assessment increases with the number of rater groups involved.
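The reassignment rule above can be sketched as a short helper. This is purely illustrative and not part of the EQ 360 2.0 platform; the group names, threshold constant, and function are hypothetical.

```python
MIN_RATERS = 3  # minimum raters per group to preserve confidentiality

def apply_confidentiality_rule(groups):
    """Fold any peers, direct reports, or family/friends group with fewer
    than MIN_RATERS raters into the 'other' group. The manager group is
    exempt, since a single manager is common."""
    result = {"manager": list(groups.get("manager", [])),
              "other": list(groups.get("other", []))}
    for name in ("peers", "direct_reports", "family_friends"):
        raters = list(groups.get(name, []))
        if 0 < len(raters) < MIN_RATERS:
            # Too few raters to report separately; merge into "other".
            result["other"].extend(raters)
        else:
            result[name] = raters
    return result
```

For example, a participant with one manager, two peers, and three direct reports would have the two peers reported under “other,” while the direct report group is kept intact.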

Selecting the right raters is generally not a straightforward process. Deciding who will choose the raters, who to choose as raters, and what criteria to use for both can be difficult. Rater selection is generally handled by the organization (i.e., Human Resources), the participant’s manager, or the participant. It is important, however, that organizations allow participant input into the rater selection process. For example, the participant might select several peers, clients, direct reports, and knowledgeable co-workers, and the participant’s manager or Human Resources then selects several more to round out the rater groups.

Some participants might choose certain raters to minimize negative feedback, either in the hope of being seen in a positive light or in the belief that positive results may improve their chances of a promotion or monetary gain. A representative of the organization should speak to participants before they complete the inventories to make clear that the assessment is for developmental purposes only. This way, participants can seek honest feedback from the right people, receive information that might truly be helpful, and not worry about hurting their chances of promotion or financial gain.

Be sure to consider the purpose of the EQ 360 2.0 assessment and the type of information required from it when selecting raters. Determine the number of raters that will provide the most robust information for the participant. The more diligently the raters are selected, the more reliable and useful the results will be.

Issues to Consider When Selecting Raters

Choose raters who...

  1. are credible and trustworthy. Raters with these characteristics will give fair and accurate information. These individuals generally will not inflate results that are not deserved and will not be malicious or opportunistic.
  2. work closely with the person being assessed. These people are in a better position to rate the individual’s performance in different settings.
  3. have worked with the person being assessed for some time (more than 1 year). They will be more familiar with how the individual performs in different areas.
  4. have worked with the individual being assessed for a shorter period of time (less than 1 year); their ratings will be more likely to reflect the current context and be less swayed by history.
  5. represent as many different groups as needed. Each rater group offers a perspective of performance that raters from the other groups may not observe, and certain rater groups are better than others at rating certain aspects of behavior (e.g., direct reports are the targets and therefore the best observers of leadership).

Rater Definitions

In most organizations, assessments will be based on the manager, peer, and direct report groups. Although the administrator should strive to maximize the number of raters per group, many participants will have only one manager, and some will not have individuals who report directly to them.

Discuss with your client whether the rater definition you use will be narrow or broad so that results from similar raters can be pooled and interpreted in a meaningful way. For smaller organizations with fewer potential raters, a broad definition may be most appropriate. For larger organizations, however, potential raters will be more plentiful, allowing a narrower approach to defining rater groups. Broad and narrow rater definitions are suggested in the following examples.

Manager

Peer

Direct Report

Family/Friend

Other