ACT staff. The panel first develops a broad definition of
a skill area and identifies the lowest and highest level of
the skill that is worthwhile to measure. The panel then
identifies examples of tasks within this broadly defined
skill domain and narrows that domain to those examples
that are important for job performance across a wide
range of jobs. Next, the tasks are organized into
“strands,” aspects of the general skill domain that each
pertain to a single concept to be measured. The strands
assessed in Reading for
Information, for example, include “choosing main ideas
or details,” “understanding word meanings,” “applying
instructions,” and “applying information and reasoning.”
The strands are also divided into levels based on the
variables believed to cause a task to be more or less
difficult. In general, at the low end of a strand a few
simple things must be attended to, whereas at the high
end, many things must be attended to and a person must
process information to apply it to more complex
situations. In the “applying instructions” strand of
Reading for Information, for example, at the lower levels
employees need only apply instructions to clearly
described situations. At the higher levels, however,
employees must not only understand instructions in
which the wording is more complex, meanings are more
subtle, and multiple steps and conditionals are involved,
but must also apply these instructions to new situations.
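The structure just described, a skill area containing strands, each
with difficulty levels, maps naturally onto a small data structure.
The following Python sketch is purely illustrative; the class names,
level numbers, and descriptors are hypothetical stand-ins rather than
actual WorkKeys definitions.

```python
# Illustrative model of the skill-definition hierarchy. The level
# numbers and descriptors below are hypothetical, not WorkKeys text.
from dataclasses import dataclass, field

@dataclass
class Strand:
    name: str
    # Maps a level number to the variables believed to make tasks
    # at that level more or less difficult.
    levels: dict[int, str] = field(default_factory=dict)

@dataclass
class SkillArea:
    name: str
    strands: list[Strand] = field(default_factory=list)

reading = SkillArea(
    name="Reading for Information",
    strands=[
        Strand(name="applying instructions",
               levels={1: "apply instructions to clearly described "
                          "situations",
                       5: "apply complex, multi-step, conditional "
                          "instructions to new situations"}),
    ],
)
```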
Test Specifications
Using the skill definitions described above, the ACT
WorkKeys development team works on the
specifications, outlining in more detail the skills the
assessment will measure and how the items will become
more complex as the skill levels increase. Each level is
defined in terms of its characteristics, and exemplar test
items are created to illustrate it. While it is sometimes
appropriate to assign content to a unique level, in most
cases the complexity of the stimulus and question
determines the level to which a particular test item is
assigned.
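One way to picture this assignment logic is as a simple rule in which
the harder of the stimulus and the question drives the item's level,
with a content-based override for the rarer case noted above. This is
a hypothetical sketch of the idea, not ACT's actual procedure; the
function and its complexity scale are invented for illustration.

```python
# Hypothetical illustration of level assignment: the harder of the
# stimulus and the question governs, unless the content itself is
# tied to a unique level. Not ACT's actual procedure.
def assign_level(stimulus_complexity: int,
                 question_complexity: int,
                 content_level: int | None = None) -> int:
    if content_level is not None:
        # The rarer case: content appropriate to only one level.
        return content_level
    # The usual case: overall complexity determines the level.
    return max(stimulus_complexity, question_complexity)

assert assign_level(4, 6) == 6                    # question drives the level
assert assign_level(4, 6, content_level=5) == 5   # content-based override
```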
WorkKeys test specifications for the multiple-choice
assessments are unlike the test blueprints used in
education. They are not a list of the content topics or
objectives to be covered and the number of test items to
be assigned to each. Rather, they are more like the
scoring rubrics used for holistic scoring of
constructed-response assessments (White, 1994).
Similarly, the alternatives for a single multiple-choice
question may include multiple content classifications,
modeling a well-integrated curriculum; this makes the
typical approach to test blueprints, which assumes that
each item measures only one objective, inappropriate.
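The contrast can be made concrete with a toy example. In the sketch
below, the blueprint view assigns each item exactly one objective, so
per-objective counts are well defined, while the rubric-like view lets
one item's alternatives carry several content classifications at once.
The item IDs are hypothetical; the classification labels reuse the
strand names given earlier.

```python
# Toy contrast between a one-objective-per-item blueprint and the
# rubric-like WorkKeys specifications. Item IDs are invented.
from collections import Counter

# Blueprint view: each item counts toward exactly one objective,
# so items per objective can simply be tallied.
blueprint = {"item_01": "choosing main ideas or details",
             "item_02": "understanding word meanings"}
print(Counter(blueprint.values()))

# Rubric-like view: a single item's alternatives may touch several
# content classifications at once, so a per-objective tally would
# double-count items and the blueprint approach breaks down.
item_classifications = {
    "item_01": {"choosing main ideas or details",
                "applying information and reasoning"},
    "item_02": {"understanding word meanings"},
}
```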
Prototyping
After development of the general test specifications,
ACT test development associates (TDAs) begin writing
items for the prototype test. All the items must be written
to meet the test specifications and must correspond to the
respective skill levels of the test. Enough prototype test
items are produced to create one full-length test form
(usually 30 to 40 items) for the skill area.
Each prototype test form (one per skill area) is
administered to at least two groups of high school
students and two groups of employees. Typically, one
group of students and one group of employees are from
the same city. The second groups are drawn from another
state with a different setting (for example, if the first
groups are from a suburban area, the second may be from
an inner city). The
number of examinees varies according to the test format,
with more being used for multiple-choice tests than for
constructed-response tests. Typically, at least 200
students and 60 employees are divided across the two
administration sites for each multiple-choice prototype
test form.
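The sampling floors described in this paragraph can be written down as
a small configuration check. In the sketch below, only the
multiple-choice minimums (200 students and 60 employees divided across
two sites) come from the text; the function itself and its handling of
test formats are assumptions for illustration.

```python
# Configuration check for prototype samples. The multiple-choice
# minimums come from the text; everything else here is an assumption.
MIN_SAMPLE = {"multiple-choice": {"students": 200, "employees": 60}}

def sample_ok(test_format: str,
              students_by_site: list[int],
              employees_by_site: list[int]) -> bool:
    floors = MIN_SAMPLE[test_format]
    return (len(students_by_site) >= 2                  # two student groups
            and len(employees_by_site) >= 2             # two employee groups
            and sum(students_by_site) >= floors["students"]
            and sum(employees_by_site) >= floors["employees"])

# 120 + 90 students and 35 + 30 employees across two sites: acceptable.
print(sample_ok("multiple-choice", [120, 90], [35, 30]))
```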
During the prototype process, TDAs interview the
examinees to gather their reactions to the test instrument,
which helps ACT evaluate the functioning of the test
specifications. Questions such as whether the prototype
items were too hard, too easy, or tested skills outside the
realm of the specifications must be answered before
development can move to the pretesting stage. In addition
to the examinees, who are asked to provide comments and
suggestions about the prototype test form, educators and
employers are invited to review and comment on it.
Based on all the information from prototype testing, the
test specifications are adjusted if necessary, and
additional prototype studies may be conducted. When the
prototype process is completed satisfactorily, a written
guide for item writers is prepared.
Pretesting
For the pretesting phase, ACT contracts with
numerous freelance item writers who produce a large
number of items, which ACT staff edit to meet the
content, cognitive, and format standards. WorkKeys item
writers must be familiar with various work situations and
have insight into the use of a particular skill in different
employment settings because both content and contextual