Now more than ever, candidate experience drives the selection of pre-hire assessment tools. Onboarding begins in the earliest stages of recruitment, when candidates are first exposed to the company's culture and branding and get a taste of what to expect if their application is successful. If the assessment process is unengaging, laborious, or opaque, candidates are unlikely to view the organisation in a positive light.
Nowhere is this issue more apparent than when we measure aptitudes. Candidates are often exposed to items that are not relevant to them: test items might be too hard or too easy, either frustrating candidates or giving them reason to doubt the hiring company's process.
As painful as it can be to admit, this fundamental demand—to minimise the time and effort involved in assessment—can be at odds with some of the best practices in test development and interpretation. While it would be ideal to have unobtrusive and effortless assessments, this has to be balanced with the need to ensure the reliability (accuracy, consistency) of the tools we use.
Classical Test Theory is a foundational approach to test construction, validation, and interpretation, one that is primarily concerned with the statistical properties of whole tests. This approach is convenient and relatively intuitive. It paves the way for widespread test use and makes interpretation easy for end users. One consequence of this approach, however, is that a test's reliability is increased simply by adding more items; conversely, shortening a test tends to diminish its reliability. A simplistic reading of this is essentially "the longer the candidate spends on a test, the better."
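The link between test length and reliability can be made concrete with the Spearman-Brown prophecy formula, which Classical Test Theory uses to predict how reliability changes when a test is lengthened or shortened. A minimal sketch follows; the starting reliability of 0.70 is a hypothetical value chosen for illustration:

```python
def spearman_brown(reliability: float, length_factor: float) -> float:
    """Predicted reliability when a test is made length_factor times as long."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

base = 0.70  # assumed reliability of the original test (illustrative)

# Doubling the number of items raises the predicted reliability...
doubled = spearman_brown(base, 2.0)   # ~0.82
# ...while halving the test lowers it.
halved = spearman_brown(base, 0.5)    # ~0.54

print(f"doubled: {doubled:.2f}, halved: {halved:.2f}")
```

This is precisely the tension in the paragraph above: under a whole-test lens, cutting items to save candidates time carries a direct reliability cost.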
This tension between practicality and best practice can be summed up in a simple question.
Q: How do we maximise the accuracy of our results while ensuring candidates spend no more time than necessary on arduous tests of their abilities?
A: We tailor the test to the candidate.
An alternative approach to test development, known as Item Response Theory, provides a methodological framework for studying the properties of individual test items. From this perspective, we ask questions such as:
- How informative are test items?
- Are these the right test items for this assessment scenario?
- How certain can we be that a candidate has a particular score or ability level, given their response to this question?
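These questions have quantitative answers. Under a common IRT model, the two-parameter logistic (2PL), each item has a discrimination and a difficulty parameter, and its Fisher information tells us how much it can teach us about a candidate at a given ability level. The sketch below uses illustrative parameter values, not figures from any real test:

```python
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL probability of a correct response at ability theta
    (a = discrimination, b = difficulty)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta: float, a: float, b: float) -> float:
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

# An item of medium difficulty (b = 0) is most informative for
# candidates near theta = 0, and tells us little about candidates
# far above or below that level.
for theta in (-2.0, 0.0, 2.0):
    print(f"theta={theta:+.1f}  info={item_information(theta, a=1.5, b=0.0):.3f}")
```

This is why an item that is far too easy or too hard for a given candidate is not just frustrating: statistically, it contributes almost nothing to our estimate of their ability.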
This item-level view offers a way out. Knowing exactly what a question tells us about a person allows us to tailor the test to them and their specific capabilities. These tests, called adaptive tests, respond to the candidate's answers: they assess what a particular response reveals about the person, then serve up new questions appropriate to that person's ability.
By administering only the most relevant and informative questions, we minimise the time required to complete an assessment, improving candidates' experiences of the process and their views of the recruiting organisation itself.
The article was first published by Luke on his LinkedIn page.
Consultant at Central Test, London