Quantitative & qualitative measurements of success
What are the top methods for successful recruitment? The League Ladder of Recruitment Predictiveness ranks structured interviews, biodata scoring, and cognitive ability and integrity testing among the most accurate.
Having a strong, research-backed recruitment process can reduce the chances of hiring someone who’s not a good fit for the job, the team or the organisation.
Learn more about the strong and weak predictors for accurate recruitment.
Hi, Andrew from SACS, and welcome to video number four in our sequence on Candidate Attraction and Evaluation.
In previous videos we dealt with the question of candidate attraction, and we are now turning our attention to candidate evaluation.
And the first thing that we want to talk about is the ranking of the accuracy of recruitment techniques.
And it may interest you to know that there’s been a lot of research on this topic.
The way that this research operates is that you assess candidates using recruitment techniques.
And you hire them, and then let’s say, a year later, you go back and you find out whether they succeeded or not.
Now, there are a range of means to assess whether a person has succeeded or not. Some of them might be quantitative.
So a quantitative success measure might be, let’s say you’re hiring a salesperson and you simply add up the number of dollars worth of stuff that they sell.
Or let’s say they’re planting trees, you might add up the number of hectares of trees that they planted.
Those are examples of quantitative methods of assessment of success.
But there are qualitative measures of assessment of success. And the qualitative measures of assessment of success might be you get a supervisor rating.
So 10 means that this is an absolutely fantastic candidate who’s worked out really well for us.
Zero means as bad as possible, maybe the person’s left us, or we had to fire that person, or whatever. Five is sort of somewhere in the middle.
So they’re qualitative measures.
There are also measures of things like counterproductive work behaviours. Has this person stolen things? Has this person taken sick days when they’re not really sick?
All of those kinds of things can be assessed and added up quantitatively or qualitatively.
So this research is typically not done in laboratories. It’s typically done in the real world of real world jobs.
And we’ve done a lot of this research in Australia and New Zealand, in partnership with academic partners such as Deakin University in many cases.
But the concept is that what you’re trying to do is to find out what is going on in the real world, not in some psychology laboratory, let’s say.
Real world research
Now there are a number of things that you find when you do this research.
The results are surprisingly consistent across geographical locations.
So what works in New York tends to work in Auckland in New Zealand, tends to work in Beijing in China.
So there are certain things that seem to be good predictors of success at work no matter where you are.
The second thing is that there is surprising consistency across job types and industry sectors.
So in effect, the kinds of characteristics that can be used to accurately predict whether somebody’s likely to be a successful lecturer in a university are surprisingly similar to those that predict whether somebody’s likely to be a successful, say, road worker.
In other words, there are certain characteristics that seem to come up again and again and again as being predictors of success.
And if you think of it, maybe that’s not such a farfetched idea, because let’s think of whatever job you’re talking about.
Is it good if somebody’s kind of hardworking and committed? Oh, actually yes it is.
Is it good if somebody’s kind of cheerful and optimistic? Well, there’s lots of research evidence, both at work and in areas away from work that optimists tend to succeed better than people who are kind of gloomy, and pessimistic, and cynical.
So there are certain things that are really consistent.
And in the remaining videos, we’ll take you through some of those characteristics that are known to be consistent across different sectors.
League Ladder of Recruitment Predictiveness
Let’s look at what the results say (refer to video).
This thing we call the League Ladder of Recruitment Predictiveness.
If you’re in Sydney or Queensland, you might call it the League Table of Recruitment Predictiveness.
It’s just a ranking of the various recruitment methods that are listed here on the left.
They’re ranked from high to low, and the way that we rank them is through this thing here: validity.
That little r means a correlation coefficient.
So the correlation coefficient works as follows.
If it’s close to zero, what that means is that there is no relationship between the recruitment method and whether the person succeeds or fails.
If it is high, a long way from zero, what that means is that it is an accurate predictor.
The higher it is, the more accurate the prediction.
Now, these correlation coefficients can theoretically go from zero to one.
They can also go to minus one, but I won’t bother with that at the moment.
It’s from zero to one.
So zero means no prediction at all, one would mean perfect prediction.
And of course you never get perfect prediction in the real world.
So firstly, let’s look at the things that are shown not to be accurate predictors (refer to video).
If we go right to the bottom of the table here, we’ll see that age is a very weak predictor.
Its correlation is very close to zero.
Graphology is also a very weak predictor.
Well, age is an important thing.
I mean, I’m approaching 60, I’m actually approaching it from the wrong side to be truthful.
But age is not a good predictor, so don’t discriminate in favour of older people or younger people.
The relationship with job performance is practically zero.
Graphology, graphology is handwriting analysis, and you might even wonder why it’s there.
Well, the answer is that in certain places in Europe, it’s very common to have candidates write something and then send the sample to a graphologist to find out about them.
But as you can see from the correlation, it’s terribly inaccurate.
Years of education is also a very weak predictor.
This surprises people.
Well, I’ve got a PhD; does that make me a better employee? Well, probably not, according to the research.
If you’ve got the necessary minimum qualification to do the job, that’s what matters. Let’s say you’re an accountant and you need to be a member of a professional society, so you’ve got a ticket.
I’m a psychologist, so I’ve got a ticket. Psychologists with PhDs can’t be assumed to be better than psychologists with the bare minimum requirement to do the job.
You’ll also see that a person’s interests are a very weak predictor.
Now, you shouldn’t really ask people about their interests in Australia and New Zealand because it’s not job related, and in fact, it touches on a form of discrimination based on private life.
So if a person believes that you haven’t given them a job because they’re into macrame or dog breeding, then you’ve got a problem from a legal point of view.
So it’s better not to ask questions about interests.
What about years of experience? Now, years of job experience is often used as a predictor of performance.
“This person’s only got two years of experience; that person’s got five.”
Well, don’t discriminate on the basis of years of experience, because there are other things that are much more accurate than years of experience.
I know that certain people really care about years of experience, but the research doesn’t support that that’s a good way of recruiting.
Then we have references.
Now that’s interesting, isn’t it?
So references come very low down, and you’ll see that the correlation here is 0.26 (refer to video).
Now, what’s going on there, 0.26 correlation.
Firstly, what does 0.26 even mean? 0.26 is a mathematical result from a correlation equation.
And it’s not a very meaningful thing to talk about until you square it.
Because when you square 0.26, so you multiply 0.26 by 0.26, what you get is kind of an accuracy figure.
It’s really statistically shared variance.
So what percentage of the variance of one thing is shared by the other? Now, if you square 0.26, you get something like 0.07, which means that reference checks are 7% accurate or 93% wrong.
Now, reference checks are really quite weak in that light, aren’t they? I mean, 7% is not really something that would make you feel really confident.
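The squaring step above is simple enough to check yourself. Here is a quick sketch using the r values quoted in this video:

```python
# Squaring a validity coefficient (r) gives shared variance:
# the percentage of later performance the recruitment method accounts for.
def shared_variance(r: float) -> float:
    """Return r squared, expressed as a percentage."""
    return r * r * 100

print(round(shared_variance(0.26)))  # reference checks: about 7
print(round(shared_variance(0.5)))   # best structured interviews: 25
```

The same arithmetic is behind every accuracy percentage mentioned in this video: square the correlation, then read it as a percentage.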
So why? Well, for a couple of reasons. And I think the main one is that why would they tell you the truth?
They’ve got a relationship with a person who is being reference checked.
They’ve got no relationship with you as a prospective hire.
And many people will not want to hurt the career prospects of the person that they are providing a reference for.
So look, reference checks are worth doing.
We recommend them, but let’s be honest, they really rarely give you information that is helpful.
Having covered the things that are not very accurate predictors, let’s go to the top of the table.
And what we see here is cognitive ability and integrity testing (refer to video).
What do we mean by cognitive ability and integrity testing?
Cognitive ability has been called a range of things. Aptitude tests, IQ tests, intelligence tests. It’s measuring how smart the person is.
This, by itself, is one of the best predictors of job performance that’s been found ever.
And it doesn’t seem to relate just to university professors, it also relates to relatively menial jobs. The smarter a person is, the more likely they are to succeed in any job. Now, of course, if the job is, as I said, a road worker before, and this person is a genius, then they may not stick at the road working for very long, and maybe it’s a good idea to find them a job that’s more suited to the talents that they have. But cognitive ability is a good predictor.
What about integrity? Integrity testing is a form of assessment which has been used for decades, particularly in the United States of America, and it is now being used much more widely here in Australia and New Zealand.
An integrity test often asks questions about bad things that a person has done in the past.
So have you stolen things? Have you been unkind to people?
Now, people are shocked that people would answer accurately to these kinds of things, but you know what, they do every day.
People sitting in front of a computer screen don’t get the visual cues to indicate when they should lie to you.
I mean, if you’re interviewing somebody and you say, “Have you been nasty to people?” the answer invariably is no.
But I’ve been in a position on numerous occasions where you ask a candidate that question, or something related, in an interview, and then you go back to their answers in an integrity test, and you find out that they have been unkind to people.
They’ve confessed it in their online questionnaire.
Now there are reasons for this.
There are psychological reasons for this. And one of them is the way that people learn to lie.
Lying is an incredibly important social skill.
Anyone who has worked with people who have autism spectrum disorder will find that they very often haven’t learned the social skills to know when it’s appropriate to tell an untruth, and they’ve learned the rule: you just don’t lie.
So they will say things that other people will find uncomfortable sometimes because they haven’t developed that social skill.
People develop the social skill by watching the visual cues of other people.
If you don’t have the visual cues like sitting in front of a computer filling in a questionnaire, people are much more likely to be accurate with their responses.
And I know it’s strange, but integrity tests ask blunt questions about things like: have you ever taken sick days when you’re not really sick? People answer those accurately. And certainly, our clients regularly reject people on the basis of that.
So integrity testing is worth doing.
If we look down at some of the other combinations, such as cognitive ability plus structured interviews: we’ll be talking about structuring interviews in subsequent videos.
So structuring interviews is about doing a competency analysis and then writing behavioural interview questions to match that competency analysis.
We’ll show you exactly how to do that in subsequent videos.
But that’s about as accurate as interviews can get. Now, the best correlation that you can get out of this form of interview is 0.5.
Most interviews are right around about 0.35 or something like that. 0.35 squared works out to about 12% accuracy.
If you think of 0.5 as an accuracy for an interview, that’s about 25% accuracy.
If you use a combination of psych tests and the best form of interviews, you’ll get up to about 50% accuracy.
I mean, 50% doesn’t sound like much, but two things. One, it’s the best that anybody’s ever been able to consistently achieve. And two, it’s not just 50% right, it’s 50% better than chance.
I mean, you would make some decisions appropriately based on chance, but this combination of 50% better than chance is the best that we’ve ever been able to achieve consistently.
Now, if you look at the top methods from a predictiveness point of view, you’ll see that pretty much all of them have some form of assessment in them.
Why is that? Well, the honest truth is, human beings don’t always make great decisions.
And if you’re interested in this topic, read the book by the great Daniel Kahneman, “Thinking, Fast and Slow,” where he gives hundreds of examples of the various forms of bias that apply to recruitment and other decision processes.
Human beings make their best decisions when they make them algorithmically.
So algorithms are decision techniques.
Psych testing is an algorithmic technique.
And so we’ll show you how to use psych testing to take away the impressionism of recruitment.
“This person will be a good candidate because I like them, or they’ll be a culture match.” You’ll make some subjective judgement. Some people are good at doing this, but most people are not. And very few people are good at doing it consistently, because unlike a mechanism like a psych test and an algorithm, people’s ability to make these judgements varies.
I don’t know about you but I don’t tend to make great decisions when I’m fatigued on a Friday afternoon, for instance. Whereas maybe earlier in the week, when I’m fresh, I’m a little bit better at concentrating and can make better decisions.
Coming up
Algorithms like psych testing help us enormously.
And certainly, from a longitudinal research point of view, you can markedly improve the success of your hires, and also reduce the error rate of your hires by virtue of adding algorithms such as psych testing to your process.
So in the next videos, we’ll be pursuing the question of candidate evaluation further, and we’ll be talking about firstly, how to score applications.
Biodata scoring is a process of putting together a simple algorithm to evaluate the CVs that you receive. You may well receive lots of CVs. I mean, we regularly receive 80, 90, 150 for certain jobs.
So how do you whittle that down to let’s say 10? Biodata scoring is a way of doing that.
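As a taste of what’s coming, here is a minimal sketch of what a biodata-scoring pass over a pile of applications might look like. The criteria, weights, and field names are hypothetical placeholders, not SACS’s actual method; in practice the criteria would come from a competency analysis of the role:

```python
from dataclasses import dataclass

@dataclass
class Application:
    name: str
    has_required_qualification: bool
    relevant_competencies: int   # count of role competencies evidenced in the CV
    local_work_rights: bool

def biodata_score(app: Application) -> int:
    """Score an application against the same simple, job-related criteria.
    The weights here are illustrative only."""
    score = 0
    if app.has_required_qualification:
        score += 3
    score += min(app.relevant_competencies, 5)  # cap so one factor can't dominate
    if app.local_work_rights:
        score += 2
    return score

applications = [
    Application("A", True, 4, True),
    Application("B", False, 2, True),
    Application("C", True, 6, False),
]

# Rank the pool and keep the top of the list for interview.
shortlist = sorted(applications, key=biodata_score, reverse=True)
```

The point is simply that every application is scored against the same job-related criteria, so cutting 150 CVs down to a shortlist of 10 becomes a mechanical step rather than an impression.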
Join us for the next video to find out how to do that.
Watch the next video in this series to find out more about Candidate Attraction & Evaluation:
And watch the previous video here:
And if you’d like some help evaluating your next hire, contact us about our Psychometric Assessment Tools.