Chapter 10 Administrative Indexes

The administrative indexes are designed to give practitioners an overall view of a respondent’s response patterns. These indexes will highlight potential issues related to random or inattentive responding.

10.1 Response percentages

Response percentages indicate how often a respondent uses each of the five response options (for most items these options are: “Strongly like”, “Like”, “Indifferent”, “Dislike”, “Strongly dislike”). By examining these response percentages, a practitioner can see the general levels of like and dislike responses across the entire inventory.

Response percentages are also used as inputs for some of the flags discussed below.
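
As a rough illustration of how these percentages are computed, the sketch below tallies option usage from a single respondent’s item responses. It assumes responses are coded 1 (“Strongly dislike”) through 5 (“Strongly like”) with omitted items stored as None; the function name and data layout are illustrative, not the scoring engine’s actual implementation.

```python
from collections import Counter

# Assumed coding of the five response options (1 = Strongly dislike ... 5 = Strongly like).
OPTIONS = {1: "Strongly dislike", 2: "Dislike", 3: "Indifferent",
           4: "Like", 5: "Strongly like"}

def response_percentages(responses):
    """Percentage of answered items falling in each of the five response options.

    `responses` is a list of integer codes (1-5), with None for omitted items.
    """
    answered = [r for r in responses if r is not None]
    counts = Counter(answered)
    return {label: 100.0 * counts.get(code, 0) / len(answered)
            for code, label in OPTIONS.items()}

# Example with a short, made-up response string:
print(response_percentages([5, 4, 4, 3, None, 2, 4, 1, 3, 3]))
```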

To view the means and standard deviations for each of the five response percentages in the GRS, see Table 10.1.

Table 10.1: Response percentage descriptives by group. Values are mean (SD) response percentages.

Group            | n       | Strongly dislike | Dislike     | Indifferent | Like        | Strongly like
Combined GRS     | 100,000 | 17.2 (17.8)      | 20.9 (13.4) | 23.5 (12.6) | 26.6 (11.0) | 11.8 (10.3)
Female           | 50,000  | 20.7 (18.8)      | 21.2 (13.9) | 21.4 (11.8) | 25.1 (10.3) | 11.5 (9.9)
Male             | 50,000  | 13.6 (16.1)      | 20.6 (12.9) | 25.5 (13.1) | 28.1 (11.4) | 12.1 (10.7)
Asian            | 4,297   | 14.9 (16.9)      | 19.6 (13.8) | 25.9 (14.7) | 27.2 (12.1) | 12.3 (11.7)
Black            | 7,139   | 17.9 (20.1)      | 19.8 (15.8) | 22.5 (13.9) | 25.4 (12.1) | 14.4 (12.6)
Hispanic         | 11,257  | 18.3 (19.4)      | 20.3 (14.9) | 23.3 (13.3) | 25.1 (11.3) | 13.0 (11.6)
Indian           | 969     | 11.8 (14.7)      | 18.7 (12.7) | 25.5 (13.9) | 29.0 (11.3) | 14.9 (12.7)
Middle Eastern   | 970     | 16.9 (18.0)      | 20.3 (13.8) | 23.4 (12.4) | 26.3 (11.5) | 13.0 (10.6)
Native American  | 6,784   | 17.3 (19.5)      | 19.3 (15.0) | 23.4 (14.0) | 25.4 (11.8) | 14.4 (13.0)
Pacific Islander | 543     | 17.2 (18.0)      | 17.9 (12.1) | 25.1 (13.5) | 26.7 (11.5) | 13.0 (11.2)
White            | 69,891  | 17.1 (17.2)      | 21.3 (12.6) | 23.4 (12.0) | 27.0 (10.6) | 11.1 (9.3)


To view the lower and upper bounds (plus or minus 2 standard deviations from the mean) for each of the five response percentages in the GRS, see Table 10.2. Because the mean minus 2 standard deviations is negative for most response percentages, the lower bounds are reported as zero; a response percentage of zero is therefore within the normal range for most options.

Table 10.2: Response percentage lower and upper bounds by group. Values are lower–upper bounds (mean ± 2 SD; negative lower bounds are reported as 0).

Group            | n       | Strongly dislike | Dislike | Indifferent | Like     | Strongly like
Combined GRS     | 100,000 | 0–52.8           | 0–47.7  | 0–48.7      | 4.6–48.6 | 0–32.4
Female           | 50,000  | 0–58.3           | 0–49.0  | 0–45.0      | 4.5–45.7 | 0–31.3
Male             | 50,000  | 0–45.8           | 0–46.4  | 0–51.7      | 5.3–50.9 | 0–33.5
Asian            | 4,297   | 0–48.7           | 0–47.2  | 0–55.3      | 3.0–51.4 | 0–35.7
Black            | 7,139   | 0–58.1           | 0–51.4  | 0–50.3      | 1.2–49.6 | 0–39.6
Hispanic         | 11,257  | 0–57.1           | 0–50.1  | 0–49.9      | 2.5–47.7 | 0–36.2
Indian           | 969     | 0–41.2           | 0–44.1  | 0–53.3      | 6.4–51.6 | 0–40.3
Middle Eastern   | 970     | 0–52.9           | 0–47.9  | 0–48.2      | 3.3–49.3 | 0–34.2
Native American  | 6,784   | 0–56.3           | 0–49.3  | 0–51.4      | 1.8–49.0 | 0–40.4
Pacific Islander | 543     | 0–53.2           | 0–42.1  | 0–52.1      | 3.7–49.7 | 0–35.4
White            | 69,891  | 0–51.5           | 0–46.5  | 0–47.4      | 5.8–48.2 | 0–29.7
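
These bounds lend themselves to a simple range check. The sketch below is a minimal illustration using the Combined GRS bounds from Table 10.2; the dictionary layout and function name are assumptions, and an operational check would use the bounds for the appropriate comparison group.

```python
# Lower and upper bounds for the Combined GRS group (Table 10.2), keyed by option.
BOUNDS = {
    "Strongly dislike": (0.0, 52.8),
    "Dislike": (0.0, 47.7),
    "Indifferent": (0.0, 48.7),
    "Like": (4.6, 48.6),
    "Strongly like": (0.0, 32.4),
}

def out_of_range_options(percentages, bounds=BOUNDS):
    """Return the response options whose usage falls outside the group's normal range."""
    return [option for option, pct in percentages.items()
            if not bounds[option][0] <= pct <= bounds[option][1]]
```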

10.2 Flags for different response patterns

There are several “flags” designed to highlight cases where a respondent has responded inattentively or is deliberately responding randomly.

10.2.1 Consistency index and flag

The consistency index is designed to detect inattentive or random responding. In the previous version of the Strong assessment, this was done by counting highly correlated item pairs where responses were within one point of each other.

The Strong 244 assessment refines this approach, using 106 highly correlated (r >= .50) item pairs. The consistency index was built by the following steps (a brief code sketch follows the list):

  1. Computing the difference between the responses to each item pair (e.g., with options scored 1 through 5, responding “Strongly like” to one item and “Indifferent” to the other gives a difference of 5 - 3 = 2)
  2. Using these difference scores in a logistic regression that compares the GRS respondents against 10,000 cases of randomly generated data
  3. Setting a consistency flag threshold such that it detects 95% of randomly generated cases. In the GRS, this threshold flags approximately 2% of cases.
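
The scoring side of the index can be sketched as follows. The item pairs, regression weights, and cut point shown here are placeholders only; the actual 106 pairs, coefficients, and threshold were derived from the GRS and the randomly generated comparison data as described above. Responses are assumed to be coded 1 through 5.

```python
import math

# Placeholder item pairs (indices into the 1-5 coded response vector); the
# Strong 244 uses 106 highly correlated pairs.
ITEM_PAIRS = [(3, 87), (10, 152), (41, 200)]

# Placeholder logistic-regression weights, intercept, and cut point.
WEIGHTS = [0.45, 0.52, 0.38]
INTERCEPT = -4.0
FLAG_THRESHOLD = 0.5

def pair_differences(responses, pairs=ITEM_PAIRS):
    """Absolute difference (0-4) between the responses to each correlated item pair."""
    return [abs(responses[i] - responses[j]) for i, j in pairs]

def consistency_flag(responses):
    """Flag the respondent if the model judges the response pattern likely random."""
    diffs = pair_differences(responses)
    logit = INTERCEPT + sum(w * d for w, d in zip(WEIGHTS, diffs))
    prob_random = 1.0 / (1.0 + math.exp(-logit))
    return prob_random >= FLAG_THRESHOLD
```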

10.2.2 One response option 90%

This flag checks to see if a respondent has used one of the five response options at least 90% of the time. The assumption is that this indicates someone who is not taking the assessment seriously.

10.2.3 Two response options 100%

Similar to the one response option 90% flag, this flag checks whether a respondent used only two of the five response options across the entire assessment. The assumption is that this indicates someone who is not taking the assessment seriously.
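
Both this flag and the one response option 90% flag are simple checks on how a respondent’s answers are distributed across the five options. A minimal sketch, assuming responses coded 1 through 5 with omitted items stored as None:

```python
from collections import Counter

def option_usage_flags(responses, single_option_cutoff=0.90):
    """Flag heavy use of one option (>= 90% of answers) or use of at most two options."""
    answered = [r for r in responses if r is not None]
    counts = Counter(answered)
    return {
        "one_option_90": max(counts.values()) / len(answered) >= single_option_cutoff,
        "two_options_100": len(counts) <= 2,
    }
```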

10.2.4 Too many omits

Respondents are allowed to skip up to 10 items on the Strong assessment. Typically, respondents and practitioners will not see cases with this flag, because respondents who have not completed enough items cannot generate reports. All GRS cases are flagged here because the Strong 244 assessment contains 11 new items, and the entire GRS consists of respondents who completed the 2004 Strong assessment, which did not include those items.
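
The check itself is a straightforward count of omitted items; a minimal sketch under the same response-coding assumptions as above:

```python
def too_many_omits(responses, max_omits=10):
    """Flag respondents who skipped more than the allowed 10 items."""
    return sum(1 for r in responses if r is None) > max_omits
```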

10.2.5 Flag frequencies in the GRS

Percentages of GRS cases triggering each of the four administrative index flags are presented in Table 10.3. It is rare for a respondent who completed the assessment honestly to trigger any of the first three flags; the Too many omits flag applies to all GRS cases for the reason described above.

Table 10.3: Administrative indexes: Percentages of GRS with each flag

Consistency index | One option 90% | Two options 100% | Too many omits
1.9               | 0.3            | 0.3              | 100

10.3 Occupation RIASEC total scores

Although each respondent to the Strong 244 receives a GOT code based on their GOT scale results, many practitioners have found it useful to calculate another GOT code based on the respondent’s top occupations. The original method was developed by Judith Grutter of GS Consultants and was taught in Strong certification programs.

The Strong 244 employs a modified approach to calculating occupation RIASEC total scores (a code sketch follows the list below):

  • Take the GOT codes associated with the respondent’s top 20 occupations.
  • For each single-letter code, assign four points to that letter.
  • For each multi-letter code, assign:
      • Three points to the first letter
      • Two points to the second letter
      • Three points to the third letter, if it exists
  • Sum the points for each letter. These are the raw RIASEC totals.
  • Calculate baseline points and divide the respondent’s point totals by these baselines. This adjustment is needed because our current set of occupations has an uneven distribution of GOT codes (e.g., more Social and Conventional occupations).
  • Calculate the proportions of adjusted points and present them on the profile as percentages.
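
The steps above translate directly into a small scoring routine. The sketch below is illustrative: the point assignments follow the list above, but the occupation codes and baseline totals in the usage example are made up, and actual baselines reflect the GOT-code distribution of the full occupation list.

```python
from collections import defaultdict

THEMES = "RIASEC"
MULTI_LETTER_POINTS = [3, 2, 3]  # points for the first, second, and third letters

def raw_riasec_totals(top_occupation_codes):
    """Point totals per theme from the GOT codes of the respondent's top occupations."""
    totals = defaultdict(float)
    for code in top_occupation_codes:
        if len(code) == 1:
            totals[code] += 4
        else:
            for letter, points in zip(code, MULTI_LETTER_POINTS):
                totals[letter] += points
    return totals

def riasec_percentages(top_occupation_codes, baseline_totals):
    """Baseline-adjusted theme points, expressed as percentages that sum to 100."""
    raw = raw_riasec_totals(top_occupation_codes)
    adjusted = {t: raw.get(t, 0.0) / baseline_totals[t] for t in THEMES}
    total = sum(adjusted.values())
    return {t: 100.0 * adjusted[t] / total for t in THEMES}

# Illustrative usage with made-up codes (20 in practice) and baselines:
codes = ["SE", "SEC", "A", "IR", "CES"]
baselines = {"R": 40, "I": 45, "A": 35, "S": 70, "E": 55, "C": 60}
print(riasec_percentages(codes, baselines))
```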

In short, the percentages on the Career Satisfaction Report represent the proportion of the respondent’s top occupations falling in each theme, relative to the baseline for that theme.

This set of scores can be used by practitioners as an additional way to generate exploration possibilities with their clients.