AIHR Live – Episode 6 with Ryne Sherman
This week I had the pleasure of discussing topics such as assessments, gamification and candidate experience with Ryne Sherman, Chief Science Officer at Hogan Assessment Systems. Enjoy!
Erik: Welcome, everyone, to a new episode of AIHR Live. I’m here with Ryne Sherman. Ryne, how are you doing?
Ryne: Hey, I’m great, Erik, good to see you.
Erik: Well, good to see you, too, and thanks for being here. I’m very excited about this episode. Can you please start with a brief introduction?
Ryne: Sure, Ryne Sherman, I’m the Chief Science Officer at Hogan Assessment Systems. We are a global leader in personality assessment for selection and for leadership development.
Erik: Fantastic, and I’ll start with the simplest question, and then we’ll take a deeper dive. Assessments are not a new thing; even 50 years ago there was already a tremendous amount of research on them. Why are assessments so important when it comes to selecting high-performing professionals?
Ryne: Yeah, so assessments are really important for a couple of reasons. Number one, they give you a data-driven answer to really important questions: who should we hire for this job? Who should be in high-potential programs? Who should we develop as leaders? What are the kinds of things we need to develop in our leaders? Assessments are the only way to really get an objective, data-driven answer to those questions. Of course, you can try to do the same thing with interviews, but we know there’s lots of research showing how interviews are biased. You can do the same thing with resumes, but again, even there, bias creeps in. Assessments are really the only way to put everybody on a level playing field. For example, on our assessments, men and women get the same scores, black and white candidates get the same scores, people who speak different languages get the same scores. So assessments are really the only way to level the playing field and make sure everybody is evaluated fairly and in a data-driven way.
Erik: So in the end, it’s about making a better selection whenever you hire someone, for example through an interview. We’re all quite familiar, I think, with the limitations of the interview, and an assessment would then be a tool to support that. Or can it also replace the interview?
Ryne: Yeah, sure. From my point of view, I would never suggest you hire solely based on assessments. Assessments are part of the equation: you may use personality assessments, you may use cognitive assessments, you may use work history. Of course, experience in a field matters, educational background matters; there are lots of factors that go into a hiring decision, a promotion, or any sort of personnel decision. But I think assessments play a key role in that equation, and in fact, I think many companies make a mistake by not using them.
Erik: You’re jumping right into my next question. We know there’s a vast body of literature about the effectiveness of assessments as an addition to the traditional structured or unstructured interview, and yet my feeling is that assessments are still underused. Is that the case, and if so, why are assessments underused?
Ryne: Yeah, I think there are several reasons. One is that psychology has, in part, hurt itself in some regards, because we haven’t really established ourselves, so we’re looked at as pseudo-scientific, even though the scientific rigor behind assessments is as strong as the scientific rigor behind anything you would see in medicine, for example. So that’s one reason: people think, well, this isn’t really science, this isn’t like going to see a doctor. But as I said, the predictive power of assessments is as strong as the predictive power of a nasal decongestant making you feel better when you’re sick. The data show that pretty clearly. So that’s one part: the lack of prestige or status that psychologists have compared to, say, medical doctors. But there are other reasons. People think assessments are too difficult or too long, or they wonder how to implement them. We get lots of questions about the length of assessments. People say candidates will never spend 30 or 45 minutes taking an assessment to apply for a job, but the data we have show that people are perfectly happy to take those assessments, particularly if they know it’s for their own career development or for selection for a job. So there are a lot of myths about assessments taking too long.

There’s also a lot of data, not even just our own, on dropout rates, people who drop out early in an assessment process. That’s what a lot of people tell us they worry about: really good candidates not completing the assessments, so we miss them. But the data we have, and the data some other folks have collected, show really clearly that the people who drop out of the assessment process ended up not being good fits for the job anyway. If they had kept going, it wasn’t going well; it’s like being in an interview, realizing this is kind of a disaster and not really working out, and deciding to quit now. That’s exactly what we see with those assessments. People also think they’re too expensive, or they wonder about ROI. We have a lot of ROI data. We just completed a study in Europe on our safety assessment, which measures safe practices at work, and we determined that this assessment alone saved companies in Europe millions and millions of euros; the ROI on that one alone is about 500%. So these are the common objections to assessments that we get, but they really work, and the data are really clear.
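[Note: a rough, hypothetical illustration of the ROI arithmetic Ryne mentions. The formula is the standard one; the numbers below are made up for illustration and are not Hogan’s actual figures.]

```latex
% Hypothetical numbers, for illustration only
\text{ROI} \;=\; \frac{\text{benefit} - \text{cost}}{\text{cost}} \times 100\%
\qquad\Rightarrow\qquad
\frac{\text{€}1{,}200{,}000 - \text{€}200{,}000}{\text{€}200{,}000} \times 100\% \;=\; 500\%
```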
Erik: That is very interesting. I think the ROI research, the utility analysis, was already quite well-established 20 or 30 years ago, showing that assessments actually work. We’re in the field of HR analytics, of course, and in any HR population there’s always roughly a third that’s very enthusiastic and says, hey, I’ve always worked with Excel and I’m happy with this data-driven approach; a third that’s a bit on the fence and needs some convincing; and a third that says, you know, you can’t reduce people to numbers, those two worlds don’t go together. That sounds similar to what you’re describing, and to what we see when it comes to implementing HR analytics.
Ryne: Yeah, I think that’s exactly right, and that’s another part of it: “You can’t just reduce me to a number.” That’s an objection I’ve gotten a lot. I used to ask people: you’re going to be evaluated for a job, how do you want to be evaluated? How many of you would like to be evaluated based on a series of assessments? Inevitably, some people say yes and some say no, and every time, the people who say no include some really young, very good-looking person in the front who says, no, I think it should be an interview. And it’s like, well, of course you think it should be an interview. You’re probably socially skilled and charming and good-looking, and of course you’ll do really well in an interview. So I think that’s part of it. When people say, I want an interview, you can’t reduce me to a number, part of the reason they reject that notion is that they know they have advantages in the system when they don’t have to take the same assessment that everybody else takes, when they can use their social skill to their advantage. And to some degree they’re not wrong, because social skill does matter in a lot of workplace contexts. This is also why I would never rely on assessments alone in making personnel decisions. But assessments at least level the playing field and give everybody a fair shot. So, from my point of view, it’s not about reducing you to a number. It’s about trying to get a whole picture of who you are, and assessments are just part of that picture.
Erik: Yeah, it’s perceived as measuring specific aspects of a person and reducing those aspects to a number, but in the end it’s the whole picture that you look at.
Ryne: Yes.
Erik: Very, very interesting. I want to move to gamification after this, because gamification, I think, is one of the things people are most excited about when it comes to assessments. But first, one question: what would you consider best practice for organizations that say, hey, we’re not using any assessments now, but maybe there’s some tangible benefit in it? What would the first step be?
Ryne: Yeah, I think the first step is to find a well-validated assessment that you can use, and that points to one of the big problems with the assessment world. Look, I don’t know how it is in European countries, but in the United States we have the FDA, the Food and Drug Administration. If I wanted to bring a new medical drug to market, I would have to go through a rigorous testing process to get approval by the FDA. I would have to show them all of the data, all of the studies I did, to get approval to sell that prescription medicine in the United States. For assessments, there’s no such body. There are no regulations whatsoever. Anybody can build an assessment tomorrow and just start selling it, and in fact that’s exactly what happens in the assessment world: lots of people come up with their own assessments and just start selling them, with no mind towards validity whatsoever. So as a first step in the process, you’ve got to talk about validity, and you’ve got to talk about validity in a deep way. Unfortunately, a lot of these, I would describe them as shyster-like, companies know that validity matters, so what they do is simply say, “We’re valid.” You ask, are you valid, and they say, yes, we’re validated. The only way to really evaluate those claims is to be an I/O psychologist, or to have some background in personality psychology or industrial-organizational psychology, so you can critically evaluate and say: send me your technical manual, send me your technical materials. Because the first question for any assessment is: does it actually predict the outcome you care about? And the only way to know that is to see their results. To me, if a company was not willing to share its technical materials and validity evidence up front, I would just say I’m not interested, because chances are they’re not valid. Companies that can actually predict the outcomes they say they can predict would be happy to show you. So to me, that’s the first step: getting a real psychometric analysis of the tool you’re considering.
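[Note: for readers wondering what a basic criterion-related validity check looks like in practice, here is a minimal sketch. It assumes a hypothetical CSV of past candidates with one assessment score and one later job-performance rating per person; the file name and column names are made up for illustration.]

```python
# Minimal criterion-related validity check (hypothetical data).
# "candidates.csv", "assessment_score" and "performance_rating" are
# made-up names for illustration, not a real vendor data set.
import pandas as pd
from scipy import stats

df = pd.read_csv("candidates.csv")

# Criterion-related validity: the correlation between assessment scores
# and the job outcome you actually care about.
r, p = stats.pearsonr(df["assessment_score"], df["performance_rating"])
print(f"Validity coefficient r = {r:.2f} (p = {p:.3f}, n = {len(df)})")
```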
Erik: Yeah, that’s very interesting. We’re a small company, and you have to practice what you preach, so when we started hiring, from the second hire onward we implemented a personality assessment and an intelligence test, an IQ test, because those two, I think, are among the more predictive assessments you can have. In the buying process, I have an I/O psychology background and I’m very interested in personality measurement, but with the models used by the commercial parties, and these are respectable parties here in the Netherlands, international, or I would say European, parties, you end up with a personality model that is definitely not the Big Five. It’s not even the questionable MBTI; it’s something semi-based on Jung and semi-based on something else I’ve never heard of. So, how do you handle that? It was really hard for me to find a validated Big Five personality assessment from one of the commercial parties.
Ryne: Yeah, well, that’s interesting, and that is another thing I would recommend if somebody was evaluating assessments: first, I would want to make sure it was Big Five-oriented or Big Five-based. It doesn’t have to measure exactly the Big Five, but it’s got to be roughly similar to, or to some degree map onto, the Big Five. Different commercial parties do it differently, but the legitimate ones all have some Big Five-based measure, even if it’s not exactly the Big Five. And yeah, your experience is very consistent with what I see. If you start Googling personality assessments or personality tests, you’ll find all sorts of commercial vendors, many of whom are not Big Five-based. There are all kinds of armchair theories out there about personality and what matters, and, as I said, anybody can make an assessment and start selling it. So those would be the two criteria I would look for: one, it’s got to be Big Five-based; two, they’ve got to share technical materials showing predictive validity for the assessment.
Erik: Very interesting. One of the trends we’re seeing now is gamification. A lot of the tech vendors are selling gamified assessment tools, selection tools, et cetera. Let’s first talk about some of the benefits of gamification. Where do you see this going? You know the space better than almost anyone. What are the benefits of this gamification approach?
Ryne: Sure, yeah. There’s a fair amount of research, it’s really just getting started from a technical research standpoint, but there’s already a fair amount showing that gamified assessments do improve user experience, in the sense that people say they like it more. People complete a game-based assessment and say, yeah, I enjoyed that more than a classic paper-and-pencil type of assessment. Now, I’m not saying they enjoy the assessment process itself, and I would caution that this doesn’t necessarily mean a gamified assessment is enjoyable. To me, for something to be really enjoyable, people would have to volunteer their time; they’d say, I would like to go spend my time doing that thing, and it’s not clear that anybody’s doing that, saying, you know, I would really like to go do those assessments again, because that was a lot of fun. But they do seem to be more fun than the classic paper-and-pencil assessment.
Erik: So, they’re more fun to take. Are there any other tangible benefits that we know of?
Ryne: Well, the idea there is that if the assessments are more fun, if people report a more positive experience with them, then two things follow. One is that they’ll be more engaged with the assessment process and less likely to drop out. This goes back to the question we talked about before, where people worry about candidates dropping out during the assessments; if the assessment is engaging, people will be less likely to drop out, so it helps with that. The other thing we see when clients talk about gamification, and this is usually our really big multinational clients, is that they’re not only concerned about the people they’re assessing as potential employees, they’re also concerned about them as customers. Imagine you also want to sell this person something down the road, whether it’s clothing or a beverage, and they’re coming in to take your assessment. These companies tell us they want their applicants to have a positive experience, because even if they never become an employee, they still want them to buy products. Amazon is a good example: if you have a bad experience applying for a job at Amazon, maybe you don’t want to shop on Amazon anymore. So you can see why a company like that would have a vested interest in making sure people have a good experience, and I think that’s part of it: they want to make sure the people who apply have a positive experience, even if they never hire them.
Erik: Yeah, surely the candidate experience would potentially improve. Is there data on that already?
Ryne: There is data showing that the experience is more enjoyable. As far as I know, there’s no data showing whether good or bad experiences result in future purchases or not. We sort of assume that’s the case, but I don’t actually know. Quite frankly, it’s pretty hard to avoid buying products on Amazon, even if you’ve had a bad experience there.
Erik: I think we can all see the benefits of gamified assessments, for candidates and for companies. What are the drawbacks? You spoke earlier about there not being an FDA-like approach to validating assessments. How do you look at the validity and reliability of gamified assessments? Can you elaborate on that?
Ryne: Yeah, sure, so there are a few problems with gamified assessments, at least with what we’ve seen so far. One is that you have to start with the definition of what counts as a game, or what counts as gamified. You can make an assessment gamified by just adding points and things like that, but that doesn’t necessarily make it a game, and it doesn’t necessarily make it enjoyable. So a big part of it is: what actually counts as a game? We can compare with real games: there are very popular video games in the world right now that cost tens of millions of dollars to make. Of course, they make their companies far more when they sell them, but these are things people would go play on their own; they would say, I’m going to spend my free time playing this. A lot of the gamified assessments we’re seeing today aren’t really that, so it’s unclear to me whether I would actually call them games. To me, for something to be a game, you’d have to say, I would like to go spend my time doing that, and I don’t think that’s the case with these things.

From a more technical and psychometric standpoint, there are a number of other concerns I have with gamified assessments. Often they’re short on actual measurement; they don’t end up measuring very much. I’ll give you an example, I won’t name the company, but I did one that was a simulation-based game where you went through a variety of interviews and meetings with different people on a team you were going to be in charge of. It took about an hour to get through, and by the end of it, and I did it together with a colleague, I estimated that we had answered maybe 30 questions. And I’m sitting there going, wait a minute: an hour of assessment time, yes, it was interactive in some regards, but it felt very forced at times. And again, this is the point: real games take millions and millions of dollars to make, huge investments up front, and they might not sell. If I invest millions of dollars in a game and it doesn’t sell, I’m out. Same thing for an assessment: if I invest millions of dollars in a game-based assessment, it might not work, and then I’ve got nothing to show for it, but I still had to build all the programming for the game first. That’s exactly what happened with this one: it just didn’t feel that comfortable, and at the end of an hour I had answered 30 questions and thought, wow, is that really the best use of a candidate’s time? If you have an hour with an applicant, how would you spend their time? This goes back to the earlier point about assessments taking too long: these game-based assessments often take just as long or longer, and you get less information from them. So that’s one of my concerns, and it’s a difficult question to answer in some regards, because each game-based assessment I’ve seen is different. Other game-based assessments I think about have problems, from a psychometric standpoint, with reliability.
There’s a related issue, a little bit different but connected to gamification: a lot of companies say they offer neuroscience-based games, and they’re not really neuroscience-based. For a game to be neuroscience-based, we would have to actually measure your brain; I would have to put you in an fMRI scanner or a CT scanner, or put something on your scalp that measures brain activity. Most of these so-called neuroscience games are just pushing buttons on a keyboard, seeing how fast you react, a reaction-time measure. In psychological parlance, if I were in a psychology laboratory, people would call these cognitive tasks; not cognitive as in cognitive ability, but cognitive as in cognitive psychology, focusing on things like memory, interpretation, and decision-making. What’s really interesting about those paradigms, which have been used in cognitive psychology forever, is that they’re designed to test the limits of human capacity: how long can you remember things, how quickly can you react? They’re designed to test limits, and by their very design they try to eliminate individual differences. So it’s a very funny thing to me to use these tasks to measure individual differences, when they were designed to eliminate individual differences. And what ends up happening, and there’s quite a large academic literature on using these tasks for individual differences, is that their reliability is very low, in particular the test-retest reliability. Of course it’s low, because they’re not designed to measure individual differences; they’re designed to eliminate them. Test-retest reliability is a really important marker, because a measure can only predict some outcome as well as it can predict itself. If you think about it, how well does this thing predict itself? That’s the test-retest reliability. There was a paper just a couple of months ago showing that these kinds of cognitive tasks have really low test-retest reliability. That’s a real problem, because if the test-retest reliability is low, the measure can’t predict anything else better than it predicts itself, and that’s a real limiting feature of these cognitive tasks or cognitive games. Along those same lines, a paper was sent to me just last week, it’s under review, so I can’t talk about it in much detail, looking at these cognitive task games and how they predict outcomes: how much income you make, all kinds of outcomes we typically associate with personality. And the thing is, Erik, these cognitive-based games are supposed to be measuring personality. That’s what the vendors tell us: it measures impulse control, decision-making, self-control, those kinds of things. But when we look at outcomes historically associated with self-control, like overspending and overeating, these games don’t correlate with them.
That’s what this latest paper shows: they don’t predict these things. So for me, there are a lot of concerns about game-based assessment from the classic psychometric standpoint: is the reliability good, is the validity good? And in terms of predicting individual differences in the workplace, I just don’t see the evidence there yet. That doesn’t mean it can’t get there, and it doesn’t mean there’s no hope for game-based assessments. In fact, I think you could measure personality via games, but we have to realize it’s not that simple. If you really wanted to measure personality via a game-based assessment, it would take quite a bit of time. There is, in fact, some research on people playing immersive video games like World of Warcraft over long periods, showing that how people behave and interact in those games does correlate quite well with personality. But that’s measuring people over months and years at a time, which really doesn’t seem like an efficient way to go about measuring personality.
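[Note: the point that a measure “can only predict some outcome as well as it can predict itself” is the classic psychometric attenuation bound. A minimal statement of it, with purely illustrative numbers:]

```latex
% Observed validity is capped by the reliabilities of predictor and criterion
r_{xy} \;\le\; \sqrt{r_{xx}\, r_{yy}}
% Illustrative only: with test-retest reliability r_{xx} = 0.30 and a
% perfectly reliable criterion (r_{yy} = 1), observed validity cannot
% exceed \sqrt{0.30} \approx 0.55, and in practice will be far lower.
```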
Erik: That is very interesting, and if you were playing World of Warcraft, it would be fun, you’d do it in your free time, and it would fit your definition of a proper game, as you put it. I think the story you’re telling about the, let’s say, darker side of gamified assessments is a story we don’t hear enough. As an I/O psychologist myself, I’m also quite skeptical of these games. There are quite a few people saying, hey, this is absolutely the future and candidates are going to love it, and they primarily have the candidate experience in mind without really thinking about why we’re doing these assessments in the first place. What I’m interested in: is it all a dark side with gamified assessments, or are there also possibilities? What would you measure, today or maybe in one or two years, using gamified assessments?
Ryne: Yeah, I think there’s still a lot of potential for gamified assessments; I just think what we’ve got right now is a little more myth than reality. I know of one really well-done gamified assessment in the cognitive ability domain. It’s basically classic cognitive ability tests, getting shapes to line up, matching patterns, moving marbles across a space with roadblocks in the way, classic logical problem-solving, but put into a gamified context. You get points for doing it quickly, you get points for doing it in the most efficient manner possible, and they look like little games. They’re certainly more enjoyable than classic cognitive tasks, but at the end of the day, what they’re really measuring is classic cognitive ability, and I think that has really high potential. In fact, there was a paper on that particular assessment relatively recently, done by a third party, showing it correlated around .9 with classic cognitive ability tests. So I think there’s still plenty of potential for gamification in the IQ, cognitive ability domain; it really can be done, and done well. I think it’s much more difficult for personality. That doesn’t mean it can’t be done, but one of the problems with any game-based personality assessment is that personality is about patterns that show up over time, over lots of instances. It’s not something you do once; it shows up fairly consistently across the workplace or across your lifespan, and it’s hard to pick that up in a one-off, one-time behavioral assessment where you have someone do something and conclude, oh, they’re high in achievement-striving, from that one instance. With cognitive ability, it’s much easier: you look at how people complete tasks and you can say, okay, this is their cognitive ability. It’s a lot harder with personality, because there are lots of reasons we behave the way we do, and it’s hard to pinpoint that the reason someone behaved this way in this particular circumstance is because of this personality dimension. So I’m moderately optimistic that there will be game-based personality assessments that work in the future, but I do think the only way that’s really going to happen is through significant financial investment and significant risk up front.
Erik: So, to wrap it up, and I know you’ve mentioned all the different elements throughout this interview already: if someone watching works for a large company and is looking to optimize the candidate experience, to make the process more enjoyable for candidates, what would be the key criteria for them to check before they implement something that might not work?
Ryne: Yeah, this is a problem we run into all the time: how do we make sure candidates have a good experience? One of the things that works really well, it worked well for us, and I know some respected competitors in our space do something very similar, is that candidates really enjoy getting feedback. You can’t tell me that people don’t like personality tests; I know they do, because they take them all day long on social media. You’re scrolling along and it says, ooh, take this personality quiz, people click on it and find out which Game of Thrones character they are; people love that kind of stuff. So, from my point of view, to create an enjoyable candidate experience, it’s important to give the candidate feedback. I think candidates have a bad experience when they take an assessment and never hear from you again: they don’t know what happened, they don’t know why they didn’t get the job, they don’t know what their results were, and it feels like their privacy’s been invaded a little bit. You got to learn something about me, and you gave me nothing back. So, to me, a really good way to create a good candidate experience with current personality assessments is to give candidates something back, some feedback on what careers might be a good fit for them or what their strengths are. That would be a really nice way to do it, and again, we have data showing that candidates appreciate the experience much more when they get that feedback.
Erik: I think that’s a very good point. So, to wrap it up: as a company, you should look into the validity of the assessment you’re using, really ask for the technical materials, and let someone with an I/O psychology background, or someone who understands what’s behind the assessment, vet it before you implement it. And indeed, I think giving feedback is very helpful in creating a better candidate experience, even if you’re using the slightly more old-fashioned, but safer and more predictive, ways of assessing. Ryne Sherman, thank you very, very much.
Ryne: Yeah, of course, Erik, my pleasure.