The administrative indexes give practitioners an overall view of a respondent’s response patterns and highlight potential issues such as random or inattentive responding.
Response percentages indicate how often a respondent uses each of the five response options (for most items these options are: “Strongly like”, “Like”, “Indifferent”, “Dislike”, “Strongly dislike”). By examining these response percentages, a practitioner can see the general levels of like and dislike responses across the entire inventory.
Response percentages are also used as inputs for some of the flags discussed below.
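As a sketch, response percentages might be computed as follows. The option labels and function name here are illustrative, not part of the actual Strong scoring code:

```python
from collections import Counter

# Assumed response scale; the actual Strong item coding may differ.
OPTIONS = ["Strongly like", "Like", "Indifferent", "Dislike", "Strongly dislike"]

def response_percentages(responses):
    """Percentage of completed items answered with each of the five options.

    `responses` is a list of option labels, one per completed item.
    """
    counts = Counter(responses)
    total = len(responses)
    return {opt: 100.0 * counts[opt] / total for opt in OPTIONS}

# Example: a respondent with mostly "like"-side responses.
profile = response_percentages(["Like", "Like", "Indifferent", "Strongly like"])
```

Options the respondent never used come back as 0%, which matters when these percentages feed the flags below.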
To view the mean and standard deviations for each of the five response percentages in the GRS, see Table 10.1.
To view the lower and upper bounds (plus or minus 2 standard deviations from the mean) for each of the five response percentages in the GRS, see Table 10.2. Note that zero (as a lower bound) is within the normal range for most response percentages.
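The normal-range bounds can be derived directly from the means and standard deviations in Table 10.1. A minimal sketch, using made-up numbers rather than the actual GRS values:

```python
def normal_range(mean, sd):
    """Lower and upper bounds at plus or minus 2 standard deviations.

    A negative lower bound is floored at zero, since a response
    percentage cannot be negative.
    """
    return max(0.0, mean - 2 * sd), mean + 2 * sd

# Illustrative values only; see Table 10.2 for the actual GRS bounds.
low, high = normal_range(12.0, 8.0)  # -> (0.0, 28.0)
```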
Several “flags” are designed to highlight cases where a respondent has responded inattentively or is deliberately responding at random.
The consistency indicator is designed to detect inattentive or random responding. In the previous version of the Strong assessment, this was done by counting highly correlated item pairs where responses were within one point of each other.
The Strong 244 assessment refines this approach, using 106 highly correlated (r >= .50) item pairs. The consistency index was built by:
- Computing the differences between each of the item pairs (e.g., responding “Strongly like” to one item and “Indifferent” to another is 5-3=2)
- Using these difference scores in a logistic regression that compares the GRS respondents against 10,000 cases of randomly generated data
- Setting a consistency flag threshold such that it detects 95% of randomly generated cases. In the GRS, this threshold flags approximately 2% of cases.
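The steps above can be sketched as follows. The item pairs, model coefficients, and 0.5 threshold are all placeholders; the actual index uses the 106 Strong 244 pairs and coefficients fit against 10,000 randomly generated cases:

```python
import math

def pair_differences(responses, pairs):
    """Absolute difference scores for correlated item pairs.

    `responses` maps item id -> numeric response (1-5, where
    5 = "Strongly like"); `pairs` is a list of (item_a, item_b)
    tuples. The pairs and coding here are illustrative, not the
    actual 106 Strong 244 pairs.
    """
    return [abs(responses[a] - responses[b]) for a, b in pairs]

def consistency_flag(diffs, weights, intercept, threshold=0.5):
    """Flag a case using a fitted logistic model.

    `weights` and `intercept` stand in for coefficients estimated by
    regressing real cases against randomly generated data; the values
    passed below are invented for illustration.
    """
    z = intercept + sum(w * d for w, d in zip(weights, diffs))
    p_random = 1.0 / (1.0 + math.exp(-z))  # probability the case is random
    return p_random >= threshold

diffs = pair_differences({1: 5, 2: 3, 3: 4, 4: 4}, [(1, 2), (3, 4)])
# diffs == [2, 0]
```

Large difference scores on highly correlated pairs push the predicted probability toward “random,” which is the intuition behind the index.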
This flag checks whether a respondent used any one of the five response options for at least 90% of items. The assumption is that such a respondent is not taking the assessment seriously.
Similar to the one-response-option-90% flag, this flag checks whether a respondent used only two of the five response options across the entire assessment. Again, the assumption is that such a respondent is not taking the assessment seriously.
Respondents may skip up to 10 items on the Strong assessment. Practitioners will rarely see this flag in practice, because respondents who have not completed enough items cannot generate reports. Every GRS case is flagged here, however: the Strong 244 contains 11 new items, and the GRS consists entirely of respondents who completed the 2004 Strong assessment, which did not include those items.
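The option-usage and omission flags described above might be sketched as follows; the edge-case handling (e.g., tied counts) is an assumption:

```python
from collections import Counter

def usage_flags(responses, total_items, max_omits=10):
    """Administrative flags based on option usage and omissions.

    `responses` is the list of options actually chosen (omitted items
    excluded); `total_items` is the full item count of the form.
    The 90% / two-option / 10-omit rules follow the text above.
    """
    counts = Counter(responses)
    n = len(responses)
    return {
        "one_option_90": any(c / n >= 0.9 for c in counts.values()),
        "two_options_100": len(counts) <= 2,
        "too_many_omits": total_items - n > max_omits,
    }

flags = usage_flags(["Like"] * 95 + ["Dislike"] * 5, total_items=100)
# flags["one_option_90"] and flags["two_options_100"] are both True
```

Note that a respondent who triggers the one-option flag necessarily triggers the two-option flag as well under this sketch.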
Percentages of the four administrative index flags in the GRS are presented in Table 10.3. It is rare for a respondent who completed the assessment honestly to trigger any of the four flags.
Although each respondent to the Strong 244 receives a GOT code based on their GOT scale results, many practitioners have found it useful to calculate another GOT code based on the respondent’s top occupations. The original method was developed by Judith Grutter of GS Consultants and was taught in Strong certification programs.
The Strong 244 employs a modified approach to calculating occupation RIASEC total scores:
- Take the GOT codes associated with the respondent’s top 20 occupations.
- For each single-letter code, assign four points to that letter.
- For each multi-letter code, assign:
  - Three points to the first letter.
  - Two points to the second letter.
  - One point to the third letter, if it exists.
- Sum the points for each letter. These are the raw RIASEC totals.
- Calculate baseline points and divide the respondent’s point totals by these baselines. This adjustment is needed because our current set of occupations has an uneven distribution of GOT codes (e.g., relatively more Social and Conventional occupations).
- Calculate the proportions of adjusted points and present on the profile as percentages.
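The point assignment and baseline adjustment might look like the following sketch. The baseline values are invented, and the one-point weight for a third letter is an assumption (a descending 4/3/2/1 pattern), not a confirmed detail of the Strong 244 scoring:

```python
from collections import Counter

# Hypothetical per-theme baselines; the actual values depend on the
# GOT-code distribution across the current occupation list.
BASELINES = {"R": 1.0, "I": 1.0, "A": 1.0, "S": 1.5, "E": 1.0, "C": 1.5}

def code_points(code):
    """Points per letter: 4 for a single-letter GOT code, otherwise
    3 / 2 / 1 for the first, second, and (if present) third letter."""
    if len(code) == 1:
        return {code: 4}
    return {letter: pts for letter, pts in zip(code, (3, 2, 1))}

def theme_percentages(top_codes):
    """Adjusted RIASEC percentages from the GOT codes of the
    respondent's top occupations (normally the top 20)."""
    raw = Counter()
    for code in top_codes:
        raw.update(code_points(code))          # raw RIASEC totals
    adjusted = {t: raw[t] / BASELINES[t] for t in "RIASEC"}
    total = sum(adjusted.values())
    return {t: 100.0 * adjusted[t] / total for t in "RIASEC"}

# Example with 4 occupations for brevity (the method uses the top 20).
pcts = theme_percentages(["SE", "S", "AI", "CSE"])
```

Dividing by the baselines keeps overrepresented themes (here Social and Conventional) from dominating the percentages simply because more occupations carry those codes.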
In short, the percentages on the Career Satisfaction Report represent the share of the respondent’s top occupations falling in each theme, adjusted for the baseline distribution of GOT codes.
This set of scores can be used by practitioners as an additional way to generate exploration possibilities with their clients.