The Kuder assessments are subject to ongoing research to ensure their reliability and validity. This process is guided by Dr. Hoi Suen, distinguished professor emeritus of educational psychology at The Pennsylvania State University, whose areas of specialization include psychometrics, educational assessment, and evaluation.

All Kuder assessments – in both their original and localized forms – are designed to meet or exceed the reliability, validity, and fairness requirements of the 1999 and 2014 editions of the Standards for Educational and Psychological Testing, issued jointly by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education.

Below are some frequently asked questions along with our responses.


Kuder Career Assessment Research and Evaluation FAQs

What is the purpose of career assessments?

We think Dr. JoAnn Harris-Bowlsbey said it best in her Kuder Blog post on the topic.

The most important use of career assessment results is to assist individuals at a given point in time to identify their interests, skills, and/or work values in order to make the next educational or vocational choice in the sequence that makes up career development.

There is a danger, however. Assessments may lead clients to believe that the process of career planning is simplistic or that assessments can tell them definitively what to do. Assessments are better understood as tools that can guide the process of career exploration and career discernment.

The hallmark of formal assessments is that they have been subjected to scientific rigor; that is, authors, researchers, and publishers have invested professional expertise, time, and money to develop a quality product. They have performed research on the assessment instruments to assure quality and to document the properties (such as reliability and validity) that each instrument possesses.
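As a concrete illustration of one such property, internal-consistency reliability is often summarized with Cronbach's alpha. The sketch below, written in Python with invented Likert-type response data, shows how that coefficient is computed; it illustrates the general technique, not any specific Kuder procedure.

```python
# A minimal sketch of Cronbach's alpha, a common internal-consistency
# reliability estimate, computed from invented Likert-type responses.
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """responses: one row per respondent, one column per item."""
    k = responses.shape[1]
    item_var = responses.var(axis=0, ddof=1)       # per-item variances
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

# Hypothetical data: 5 respondents answering 4 Likert-type items.
data = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 3],
    [5, 5, 4, 4],
    [1, 2, 2, 1],
    [3, 3, 4, 3],
])
print(f"alpha = {cronbach_alpha(data):.2f}")
```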

Why does Kuder conduct ongoing research on its assessments?

We believe career assessments serve a critical role in career exploration and career development. We conduct ongoing research on the Kuder assessments in order to ensure that they’re of high quality, that they remain relevant for today’s students and adults, and that they’re bias-free, reliable, and valid.

We also conduct this research to ensure that our assessments continue to meet or exceed the Standards for Educational and Psychological Testing issued jointly by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education.

How often does Kuder review its assessments?

Our assessments are reviewed at least every five years.

How does Kuder ensure that its assessments are as free from bias as possible?

We seek broad cultural diversity in our student testing groups to ensure our assessments are as free from bias as possible. The process begins with input from a small group of selected individuals of diverse backgrounds. For example, in our current research on the interests and skills confidence assessments, we began with input from people in three distinct geographic regions of the United States with varying socioeconomic contexts. Later stages of research will include data collected from thousands of individuals who broadly represent the population of the United States, and various statistical methods will be applied to test and verify that the assessments remain as free from bias as possible.
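One widely used family of such methods is differential item functioning (DIF) analysis. The sketch below applies the Mantel-Haenszel procedure, a standard DIF check, to simulated data; the groups, score strata, and responses are invented for illustration and do not represent Kuder's actual analysis pipeline.

```python
# A minimal sketch of a Mantel-Haenszel DIF check on hypothetical
# dichotomous item data. The groups, strata, and simulated responses
# below are invented for illustration only.
import numpy as np

def mantel_haenszel_odds_ratio(item, group, strata):
    """Common odds ratio for one item across matched score strata.

    item:   0/1 responses to the studied item
    group:  0 = reference group, 1 = focal group
    strata: matching variable (e.g., total score on the remaining items)
    """
    num = den = 0.0
    for s in np.unique(strata):
        m = strata == s
        a = np.sum((group[m] == 0) & (item[m] == 1))  # reference, endorsed
        b = np.sum((group[m] == 0) & (item[m] == 0))  # reference, not endorsed
        c = np.sum((group[m] == 1) & (item[m] == 1))  # focal, endorsed
        d = np.sum((group[m] == 1) & (item[m] == 0))  # focal, not endorsed
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    return num / den  # values far from 1.0 flag possible DIF

# Simulated, unbiased data: 400 respondents in two hypothetical groups.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 400)
ability = rng.normal(size=400)                   # matching proxy
item = (ability + rng.normal(size=400) > 0).astype(int)
strata = np.digitize(ability, [-1.0, 0.0, 1.0])  # four matched strata
print(f"MH odds ratio = {mantel_haenszel_odds_ratio(item, group, strata):.2f}")
```

In practice, examinees are usually matched on total test score, and odds ratios are often rescaled (for example, to the ETS delta metric) before an item is flagged for review.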

What is Kuder’s assessment review process?

Our assessment review process begins with a series of think-alouds and focus groups.

The think-aloud protocol is a standard data collection technique often used in qualitative research (cf. Ericsson & Simon, 1993), and it has been demonstrated to be an effective aid for improving the design of items in surveys, questionnaires, and self-report tools (e.g., Collins, 2003; Desimone & Le Floch, 2004; Li & Suen, 2013). The process helps determine whether particular aspects of an item are meaningful or appropriate. During a think-aloud session, a qualified researcher typically engages one participant at a time. Each participant is invited to read the assessment items aloud and say whatever comes to mind at that moment. Every session is audio-recorded, and the transcripts are subsequently analyzed for micro-pauses and other auditory cues that, while not verbalized, may flag an item for further consideration.
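As a rough illustration of that transcript-analysis step, the sketch below flags long silences between hypothetical timestamped transcript segments so a researcher can revisit those points; the segment format and the two-second threshold are assumptions, not part of Kuder's protocol.

```python
# A minimal sketch that scans hypothetical timestamped transcript
# segments for silences longer than a threshold. The segment format
# and the 2-second cutoff are assumptions for illustration.
from typing import NamedTuple

class Segment(NamedTuple):
    start: float  # seconds from the start of the recording
    end: float
    text: str

def flag_pauses(segments, min_gap=2.0):
    """Yield (gap_seconds, preceding_text) for each long inter-segment pause."""
    for prev, curr in zip(segments, segments[1:]):
        gap = curr.start - prev.end
        if gap >= min_gap:
            yield gap, prev.text

transcript = [
    Segment(0.0, 3.1, "I enjoy fixing mechanical things..."),
    Segment(6.8, 9.0, "Hmm, what does 'mechanical' mean here?"),
    Segment(9.4, 12.0, "I'll answer 'like'."),
]
for gap, text in flag_pauses(transcript):
    print(f"{gap:.1f}s pause after: {text!r}")
```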

Focus groups are conducted in an open discussion format and involve multiple participants at the same time. Our research team uses prompt questions to start the conversation. We show participants a list of original items along with alternative versions we're considering, then request feedback on both. There's a lot of back-and-forth in a focus group setting. As with the think-alouds, our interactions with focus group participants are recorded and later analyzed.

These initial research stages lead to proposed changes in Kuder's assessments. All proposed changes are filtered and further refined by Kuder's panel of experts. Field testing the vetted changes provides the evidence we need either to release an updated version of the assessment or to return for further revision until it meets or exceeds standards.

What types of information does Kuder examine when updating its assessments?

We want to make sure the assessment items are up to date in content and relevant to assessment-takers. So one aspect we look at very closely is the way in which our assessment items are worded. Evolving language, emerging technology, changing cultural norms, and generational trends are among the factors that can affect the wording of an item. For example, the ways in which we store and access music, movies, and other audio-visual content have changed radically over Kuder's eight decades of career assessment – from vinyl records, to tapes, to discs, to tiny hard drives, to flash memory, to streaming. As this technology evolves, we want to be sure that any related assessment items still make sense and measure what they are intended to measure.

In our focus groups, we're interested in observing how participants understand and interpret each item, and the factors that shape how they respond.


References

Collins, D. (2003). Pretesting survey instruments: An overview of cognitive methods. Quality of Life Research, 12(3), 229-238.

Desimone, L. M., & Le Floch, K. C. (2004). Are we asking the right questions? Using cognitive interviews to improve surveys in education research. Educational Evaluation and Policy Analysis, 26(1), 1-22.

Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data. Cambridge, MA: MIT Press.

Li, H., & Suen, H. K. (2013). Constructing and validating a Q-matrix for cognitive diagnostic analyses of a reading test. Educational Assessment, 18(1), 1-25.