
Sylvain Rochon, January 12 2023

Scott Filgo - Selecting the Right Psychometric Assessment

A CykoMetrix Spotlight Production

Every week, the Spotlight shines on an amazing professional with a story to tell and lessons to teach. Welcome to the CykoMetrix Spotlight.

The following is an adapted transcript of the exchange between Sylvain Rochon, CMO at CykoMetrix as host, and Scott Filgo, Talent Assessment Consultant. www.linkedin.com/in/scottkfilgo

Sylvain Rochon: Welcome to the CykoMetrix Spotlight. My name is Sylvain Rochon. I am the Chief Marketing Officer at CykoMetrix, a leading-edge combinatorial psychometric and human data analytics company that brings the employee assessment industry to the cloud, with instant assessments, in-depth analysis, trait measurements, and team-based reporting features that simplify informed decision-making around recruiting, training, and managing today's modern workplace.

Today on the Spotlight we have Scott Filgo. Scott has 20 years of assessment development experience with many product families from several test publishers, consulting in both agile, entrepreneurial and methodical, academic test publishing organizations, big and small. His past experience as a consultant specializing in psychometric assessment includes contracts with Deloitte and Pearson's Talent Assessment Group, some of the big boys in the field. Nice to see you here in the Spotlight, Scott.

Scott Filgo: Thank you. It's good to see you too Sylvain. 

Sylvain: Excellent. Now, today we're going to talk about something I find interesting, which is the selection process around which psychometric assessment you should use. There are arguably thousands, maybe more, out there in multiple languages. I mean, it's a complex landscape; some of them are, of course, better known, others are more niche. You can probably speak to that. How does one go about selecting the right one for their organization?

Scott: I think it's important to use sources of information like, for instance, the American Psychological Association, or the Association of Test Publishers, who can link you to those publishing groups out there. Of course, you can Google search and find lots of options out there that are going to range from big ones like Hogan to tiny little organizations that are doing much smaller projects, which is the bread and butter of the kind of work I do as a consultant, helping small organizations today go through these processes of developing an assessment. 

Sylvain: How does an organization develop an assessment? 

Scott: It takes a long process from, of course, finding out what the need is with what are we trying to measure, and who are we measuring it for. In talent assessment, you're going to try and focus on the qualities that are probably tied to important competencies that are related to a specific job, or to a group of jobs, and you want to be able to demonstrate that whatever it is you're measuring has captured that construct, that concept consistently and with good predictive qualities so that you can say, "This person has scored in this fashion, and we predict that they're going to perform well under these particular criteria." 

Sylvain: I mean, it's such an onerous job to create a valid assessment. Isn't it just easier to... if you want to have the right one, just to pick out one that already exists, that's been created by somebody else and just use that? 

Scott: Absolutely. Yes. They ought to provide a technical manual that gives you their proof that the test has the qualities that are acceptable for use in the workplace. That is, that it has reliability, that the questions ask the right things consistently, and that people respond to them in a consistent manner. You don't always see this, but you would like to see examples of validity that you can generalize to your own employee groups.

You also want to take a look at the adverse impact that the assessment is hopefully lacking: that it does not introduce bias by asking questions that predict your demographic qualities just as much as they predict your job-related qualities, because the former is not job related. You want a test that does not say, "This person looks like a bad fit," when it's not actually speaking to the job qualities in question.

So I'd say those three are the big ones for me with the teams that I work with. But of course, I'm also helping them write the questions in the first place, piloting those questions with samples to see if they're doing the thing that we expect them to do. Tweaking, going back, reiterating, until we get a product that delivers a reliable and valid test.
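The adverse impact check Scott describes is often operationalized in practice with the EEOC's "four-fifths" (80%) rule: if one group's selection rate falls below 80% of the most-selected group's rate, the process is flagged for closer review. A minimal sketch of that arithmetic, with invented hired/applied counts:

```python
# Sketch of the EEOC "four-fifths" (80%) rule for adverse impact.
# All hired/applied counts below are invented for illustration.

def selection_rate(hired: int, applied: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return hired / applied

def adverse_impact_ratio(focal_rate: float, reference_rate: float) -> float:
    """Focal group's selection rate relative to the most-selected group's."""
    return focal_rate / reference_rate

group_a = selection_rate(hired=48, applied=100)  # highest-rate group: 0.48
group_b = selection_rate(hired=30, applied=100)  # focal group: 0.30

ratio = adverse_impact_ratio(group_b, group_a)   # 0.30 / 0.48 = 0.625
flagged = ratio < 0.8                            # below four-fifths: review
print(f"impact ratio = {ratio:.3f}, flagged for review = {flagged}")
```

A flagged ratio is a screening signal, not proof of bias; the function and variable names here are illustrative, not part of any standard library.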

Sylvain: Let's say a director of HR in a company, or maybe some kind of lead or director at a training company or a consulting company that services an HR department, whatever either of those people are. Do they typically have the competence to choose, create, or direct the creation of an assessment that would properly be deployed for their purpose? It sounds a bit academic.

Scott: Yeah, it's certainly scientific, and it does require the gathering and analyzing of data. An HR director might have all those experiences in the past. They may have a degree in the field; same thing with someone in a consultancy, of course, especially if they're talent assessment based, if that's their bread and butter. But just as many times you're going to want to bring in someone to help them on these kinds of projects, to go through all the steps.

Psychometrics and test development is just one small piece of organizational psychology, and your PhDs out there with an IO background are going to have that and many other skills, while someone like me with a master's may focus specifically on psychometrics, which is what I've done.

Sylvain: Let's talk about culture and language. The typical psychometric assessment is a self-assessment on the testee, if that is a word. 

Scott: Not all of them are.

Sylvain: Not all of them are, yeah. I guess those that most people are exposed to are self-assessments, but you're right, they're not all like that. Let's look at the self-assessment ones. In North America, we are multicultural nations, with a lot of immigrants who come from various cultures. Their primary language is not necessarily English or French or Spanish, or one of the main languages, and yet companies will typically deploy an assessment that is generally going to work with the group, perhaps something that was tuned, assessed, and tested in North America, in English. Is it important that the assessment that is deployed is fine-tuned to the person receiving it, who may come from an entirely different culture with a different understanding of what the questions mean, as well as a different language? Can you speak to that?

Scott: Well, first off, if you're going to do an assessment internationally, you want to choose a norm that scores the assessment the same for every candidate, just because that's fair testing, I believe, and if your test is only in English, then you may need to decide not to use the assessment with people who have English as a second language.

That would not be a fair representation, even if it's personality. Just understanding the questions and their nuances can be affected by how adept you are at speaking the language. Many test publishing companies produce their tests in various languages. When I was a profiler, that's what we did across the globe, and each language had its own separate norm group for people in that country, so if you were taking the test in Hungary, your test answers would be compared to others in Hungary, not to others in the United States.

What does that mean? Can you use the two test scores together? I would have to defer to a PhD on the best practices for that, but in developing the product, which is my background, yes, you would definitely use the best translators you have, with an understanding of psychological concepts as well, and, if it's a skills test, of the kinds of questions the test is asking: reading, writing, whatever. Then gather data from that region, so that you can build that norm table for your scoring.
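The idea behind per-country norm groups can be sketched: a raw score only becomes meaningful relative to the norm sample it is compared against, so the same raw score can land at very different percentiles in different countries. A toy illustration (all norm samples and scores below are invented):

```python
# Sketch: the same raw score maps to different percentiles under different
# country norm groups. All norm samples and scores here are invented.
from bisect import bisect_right

def percentile(raw_score: int, norm_sample: list[int]) -> float:
    """Percent of the norm sample scoring at or below raw_score."""
    ranked = sorted(norm_sample)
    return 100 * bisect_right(ranked, raw_score) / len(ranked)

norms_country_x = [35, 40, 42, 45, 47, 50, 52, 55, 58, 60]
norms_country_y = [45, 50, 53, 55, 57, 60, 62, 65, 68, 70]

raw = 55
print(percentile(raw, norms_country_x))  # 80.0: strong relative to country X
print(percentile(raw, norms_country_y))  # 40.0: middling relative to country Y
```

Real norm tables are built from much larger, carefully sampled groups and often use standardized scales (stens, T-scores) rather than raw percentiles, but the comparison-to-a-reference-group principle is the same.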

Sylvain: Yeah, because it's definitely not only North America; every company is dealing with the whole world anyway nowadays through the internet. So even within your own geography --- let's say you're living in Romania, where pretty much everybody speaks Romanian as a first language. I suspect it's not a country that has loads of immigrants.

I don't know, but using that as an example, you're still managing a whole bunch of cultures because your clients are everywhere, and you may have offices in different places, and a small team in Bulgaria, another one in Japan. 

Things like that. So in a practical sense, if you are in the HR department, and you want to run some assessments, and you have varied teams, and you do want to get a sense of the company as a whole, for example, how challenging is that? Is there a way to do it without breaking some of the psychometric norms, or breaking the validity?

Scott: Well, I am going to defer on that one because that's practice, in-use application, which is not what I do, but there are professionals in IO who use assessments and handle how you're going to juggle an international company. If you're trying to fill a role, it's going to have, say, ten different positions across the globe. Do you have ten separate assessments, one for each language, and norm each one on just the locality?

Or do you work out with your practitioner a new norm that includes people from all countries? That's going to be a decision, like I said, that someone who has lots of experience in the selection world as an on-the-ground practitioner should answer. A guy like me sitting back in the lab building a test, I can only say the test should have the functionality for norm development.

You should be able to tweak the norms, or request that that be done by the publisher. You might want to have a special report built that can help the HR executive understand that this guy in this country is or is not like this guy in that country, even though you've seen their normed scores. What is assertive in, say, China, and what is assertive in the US, and if someone is a sten ten out of ten, what does that mean from one culture to the next? That requires expertise. You've got to find the right help.

Sylvain: Yeah. It's very specific, and there are a lot of nuances in those questions. So as part of your work as a consultant, do you affiliate yourself with a network of professionals that you can reach out to and say, "Hey, we need solutions for this specific case"? Do you have these experts and specialists at your fingertips?

Scott: I would definitely query someone like the Society for Industrial and Organizational Psychology, an arm of the American Psychological Association. Or if you have an IO psych practitioner in your business, you should check with their network and see. My network is filled with test developers, so...

Sylvain: Still, it is a community, right? And it's so specific. I know one person we were speaking with in India; they are assessment makers in India. India is an English-speaking nation, it is, but they are also a Hindi nation with something like three hundred different dialects and languages that don't really overlap with each other, because it's such a big country; it's a subcontinent. The culture is quite different across that one country, right?

Scott: And growing at enormous rates when it comes to talent assessment. 

Sylvain: Yeah, so for them, it's like, "Okay, we're building assessments for different parts of India," and they're adapting the language. The differences are similar to those between American English, British English, and Indian English. All different, so it gets very segmented to try to get a better understanding of the whole set.

I mean, that's just India, I'm just talking about that case, and the whole world is like that. So how do organizations navigate this to get good results? Or do most organizations, to your knowledge, because it's not your work, just say, "Let's pick out an assessment in English and just deploy that"? What's the behavior you have seen, knowing how segmented things should be?

Scott: Well, they should be segmented. You should use norms for different countries, but within a country, I've not heard of people using multiple norms for the same job. In the United States, you're not supposed to do that. You need to have a consistent practice going on within this particular selection process, so you choose your norm; hopefully it's one that's based on a group of the same profession, and the same level within an organization, all these kinds of things.

Or you work on building one of your own. You back that up with a validity study that says: this is how we saw people of this level in the organization, in this position, in a group that was diverse in gender and ethnicity, even age groups, and now we know that this relates to these particular criteria, the performance competencies. Now you can build a norm.

You can build expected good scores and bad scores. You can set cut scores. You've got to do all that kind of leg work to know that it's a competent and reliable measure to be a part, not all, of the selection process.

Many organizations just get an assessment off the shelf and use it, which is fine. It's just not best practice. Best practices will help you when you need the support of the work that it takes. For instance, if there's a challenge to the selection process, if someone is saying, "I don't know that this was particularly fair. I think I was eliminated too quickly. I've got all this experience. Did you even look at my resume?" "Well, we looked at your test scores and we said no."

Sylvain: Right. Let me give you an example; maybe you have one from your experience. Let's say the hiring situation is one where emotions tend to be high, because you don't want to be discriminated against, and you want the job. It's important to you, and there's a generic assessment that everybody uses, which is fair, right? It's a norm, like you said. But then candidate A doesn't get the job and happens to have English as a third language, and it's an English assessment.

They may complain and say, "Hey, I didn't get the job because English is my third language. This assessment didn't measure me correctly because of this fact." Whereas the other candidates, maybe English is their first language and they were from the area, whatever. Is there a way to avoid this? In practical terms, companies have limited resources. Even if you have two or three assessments, you don't know if the results can be put together and normalized together anyway, so it's a bit of a problem.

Scott: It is a lot. One thing that dissuades a lot of people from jumping into the talent assessment pool in the first place, is how much work it is. “I don't know that we have the time. Can't I just use it off the shelf?” and someone throws a book on their desk and says, "This is the tech manual. Good luck." It does help when a consultant or a member of your HR team can review the data they have provided and say, "Well, this is our best look at being able to generalize what they have done to what we're doing." 

If the groups that they've piloted with feel and look like it's made up of people with similar jobs, then maybe you can generalize, but if it's not, then you do need to do a local validation. You can do it in house, or you can hire a consultancy to do it for you to gather the data that demonstrates that something like performance ratings that we decided on were most important, are related to test scores. 

Whether that means high test scores, or low test scores in an inverse relationship, are related to high performance, or something else. Some curvilinear results show that the best people selected score in the mid-range. I mean, a lot of people will misuse a test by thinking high scores are the way you win, because that's how academic testing is; we all need to pass with a high score. That is not always true with every scale, especially personality. I can't say this for skills and abilities, but I can say it for personality.
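The local validation Scott describes often comes down to estimating a validity coefficient: the correlation between test scores and a performance criterion. Its sign and size, not the raw height of the scores, are what justify using a scale for selection. A toy sketch with invented data:

```python
# Sketch of a local criterion validation: correlate test scores with
# supervisor performance ratings. All data is invented for illustration.
from statistics import mean, stdev

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    n = len(xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    return cov / (stdev(xs) * stdev(ys))

test_scores = [3, 5, 6, 7, 8, 9]
ratings     = [2, 3, 4, 5, 5, 6]  # hypothetical performance ratings

r = pearson_r(test_scores, ratings)
print(f"validity coefficient r = {r:.2f}")
# A strongly negative r would mean low scorers perform best (an inverse
# relationship); an r near zero means the scale shouldn't drive selection.
```

A real study would use a much larger sample, significance testing, and checks for the curvilinear (mid-range optimum) relationships mentioned above, which a plain Pearson correlation cannot detect.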

The only way to see how personality is related is with validation studies that are specific to your group. You may not have enough people in your company to do a study like that, so talk to a consultancy and see if they've done the work with a similar group of people; you can maybe ask for a meta-analysis where they pull the data from different companies and make all the right professional assumptions about working with that data, so you have the proof where you can say, "Okay, I can use this to determine, of the thousands of people that apply, which twenty are the best candidates for the next step in the process."

Anyone that has been eliminated from candidacy for a job by a score alone... I have issues with using testing to that degree. It is not a magic wand; it is a decision enhancer, not a decision maker. However, lots of people do use assessments in the early stages, to talk about job fit and to cut people out early because of the masses of applicants they have to process, and I understand that.

Maybe at that stage it's okay, because hopefully you've also at least looked at the basic requirements that you can discover from their resumes. There need to be multiple elements involved in the early-stage weeding out of candidates. In the latter stages of talent assessment, or the selection process, you really do need to know that the way you are choosing to use the test results is a valid way to use them. Having a validity study to back you up, to say that these scores are related to performance, is good backup.

Having a gut feeling that my employees all ought to be gung-ho and high in assertiveness? Great. That feels good, but it may not be true.

Don't do the assessment in that way. Use a validation study to prove what is related. 

Sylvain: Right. The assessment itself is just a tool.

Scott: It's just a tool. 

Sylvain: It's predictive. It tells a story, if it's a good one, but the person that's doing the actual hiring and interpreting the results is responsible for whether or not bias is introduced at that point. If the person doesn't really understand personality dynamics, or how to get the best outcomes out of individuals, they'll probably make bad decisions anyway, even with an assessment.

I think the two dimensions that are most often misconstrued and ill-used as part of the hiring process are cognitive ability, or IQ assessment --- people are very sensitive about that, and rightly so. There are plenty of studies that show, "Hey, if you have high intelligence, you can do more." That makes sense on a rational level, so that's a true thing, but it's a very sensitive thing. If the job requires... let's use IQ score because everybody's going to understand. The job requires a 110 IQ minimum because it's very complicated, lots of moving parts, whatever. Well, okay, that's kind of a barrier, but you don't need a 135 IQ to do that job either.

Scott: To my knowledge, in the US at least, you can't say, "I expect a minimum IQ." At all. Throw the IQ test out the door. 

Sylvain: Out the door. 

Scott: Back to the clinicians who work in hospitals, and the people those tests are built for; those tests are not built for talent assessment at all. But if you want a measure of skills, yeah. Say you have skills that involve your ability to attend to detail, or see certain mistakes in printed material, or do mathematical equations. A lot of people would say that's IQ, but it's not, clinically.

Sylvain: No, our assessment is not an actual IQ test. We use IQ test questions, but the output is really workplace intelligence, like visual acuity and different things like that. It's not meant to do an IQ curve at all. It's just to measure different competencies that are related to different parts of the brain, so that different... 

Scott: Yeah. In the best IQ tests, you're maneuvering blocks to make patterns. You're doing things that do measure your acuity and your mental abilities, but that does not measure something job-related. Not going to...

Sylvain: No, exactly, and it's usually proctored. Like you have a person there to ask questions. 

Scott: One thing, if you're trying to save people's time... most people in HR are not trying to spend hours on administration. I can agree: IQ tests are not a solution for job selection.

Sylvain: The other one is just one dimension in the Big 5: neuroticism, which comes from the term neurotic, which is usually viewed as a bad thing, as a derogatory term. "You're neurotic!" But its opposite, emotional stability, sounds way better; it's less scary. If you're neurotic, it doesn't necessarily mean that you're a crazy person either. It is just a parameter of personality, so that's another example where people do balk.

Scott: Right. There are diagnostic tests that are appropriate for use in a clinical setting, to see if one is neurotic. Sure. If you use that in the workplace, in the United States (because I'm only speaking to the rules I know), you've made a gigantic no-no. But in measuring the Big 5, this isn't... let me put it this way.

Personality is just on a continuum that goes all the way into the clinical zone, where things are not as productive or fruitful, so let's just play in this zone over here at the top edge of it and talk about everyday personality. If your stability, your reaction to stress, shows that you're rather stable, or maybe it shows that you may need to take a break from the stress source, that's within the normal realm. Those are both healthy choices. They're just two different ways that people exhibit their stability.

Sylvain: Correct. 

Scott: No personality test dips low enough into the clinical zone. So I'm just saying, when it comes to that, the reason why they don't say neurotic anymore is because we don't want it to be considered a neurotic scale.

Sylvain: That's correct. It's not a clinical evaluation, but that's the point. It reinforces the argument that you have to have a competent consultant or HR person interpreting some of these results, so that you know what they mean in practice.

Scott: Take it within the realm of reasonable deduction. 

Sylvain: That's right. So that's great; thanks a lot, Scott, for clarifying that, because I think that's really important, especially coming from an assessment maker and evaluator. Some people do look at dimensions and say, "Well..." Either they think they're the king of the world because it sounds great, or, as in the case of neuroticism and other personality quirks, "Oh, that's terrible, I'm unemployable," or whatever the perception is.

Now, there has to be an accompanying education about what it actually means, and how it applies to the job market. There are just different ways of behaving. That's what it really means. 

Scott: For every job that has been validated, and I don't mean as a job across the United States, that doesn't exist, I'm talking about at your organization: if it's been validated, this position and these test scores, then there's a range of scores on each personality scale that is the best fit. The question is, if scores are outside that zone, can the person adapt? Well, that's what the interview should be about.

That's why I'm not a fan of putting cutoffs on personality. Let's just see how closely they fit, and then ask questions about how they've adapted to situations. Let's say that you have a position where low scores on stability are actually most common amongst your highest performers. I've never seen that, but it could happen, and your lowest performers score on the high end of stability; maybe they're solid as a rock and they're not responding quickly enough to whatever the thing is in your job that you're doing.

When you go to the interview stage, you might want to ask the person who's very stable, "Tell me about a time where you've done a particular task," one that involved responding very quickly and very strongly and very clearly emotionally, bringing out your emotional responses, to get the job done. That's completely hypothetical; I've never seen a job that required that, but it is possible. Personality can be good on the low end or the high end; it's your validation study that says where the range of scores is that best matches that role.

Sylvain: Yeah. We had a case where we were measuring a team with our tools, a startup team: three academics and one salesperson. A very small team doing something. It was very interesting because if you looked at their graphs, their charts, their personality and emotional intelligence, it looked like a dysfunctional team.

The salesperson's profile was entirely opposite from the academics', which you'd expect on average, of course. In this case it looked like they would have very dysfunctional personality conflicts. The assessment predicted all these possible conflict zones, but as a team measurement, because we also have research on team effectiveness, climate, and things like that, they rated very high together.

That was one of the interesting cases where, from trends and reliable research, the team looked fine, but on the granular scale they looked dysfunctional. We talked with them, and they didn't express any kind of frustration with each other. They laughed at the results. It seemed like it was a very healthy team. The team predictions seemed to be okay, but that salesperson looked really out of place. Yet if you looked at his profile, his characteristics would seem extremely logical and beneficial to his job, to what he needed to do.

Sylvain: His role, yeah, while the academics were extremely suited to their roles, and that's really the power of these assessments. You're doing trends, you're doing predictions; the team may or may not fit. There may be some tweaks needed and some discussions; maybe they need somebody else in there to help with a piece, a gap that they're missing in the team, things like that. But just because it looks dysfunctional doesn't mean that it is.

Scott: Yeah. Did they decide that it was dysfunctional in the first place? Not from a validity study, not from any data, just from our hunches, and that's fine. We make judgments based on our intuition; that can be a strength in certain roles. But a lot of people do look at an assessment, especially if they're candidates, not HR personnel, and the first thing they say is: why do you want a cookie-cutter person?

Why does everyone have to look the same? The only reason we say, "We are looking for a particular type," is because, like I said before, the data shows that there's a clear separation between top and bottom performers on this scale, and the tops are somewhere on that scale. There's a range of scores that reflects them, and the bottoms tend to not be in that range.

I haven't heard too many people say it, but I'm sure it is said in the field; hopefully it is. If your top performers are all over the place, or your bottom performers are all over the place, the validity of this particular scale is going to be a lot lower than that of the rest of the scales, where there is separation. You might not even want to use that scale. A lot of tests have lots of scales.

A Big 5 test is going to have five scales, but there are other tests that have sixteen or more, twenty even, and you don't always need every single scale. Now, they may not be able to give you an à la carte assessment, which is a really nice option, but it needs a validation study behind it. I digress.

When you have an assessment, you may not need to use every single scale. You want to use the scales that show differentiation between top and bottom performers. Otherwise, it's just a nice story about the person, so you get to know them, and that's it. That's where the rest of the scales become measures of individual differences, and that's perfectly fine, but the scales with the validity coefficients behind them are the ones which can predict performance, and you can use them to create some sort of a match result to carry forward the selection process.

I'm just saying that you described a team of people there who were chosen for certain activities on their team. I assume the salesperson had a certain role in that team, and the academics had another role, but they were all working towards one goal. They were a team, but really, to select people for those positions, I would use two different norms, maybe even different scales on the same personality test: this guy has to have the sales profile, and these guys need to have the research profile.

Sylvain: Yeah. That's what makes it really interesting. It's all about the interpretation, and what you're trying to achieve with a company, based on the data. The data in itself is dead. It becomes alive when a consultant or an HR person who knows how to interpret it breathes life into it, and then it becomes useful and can become profitable, because you can reduce a lot of costs. You can choose good teams and good fits, and there are all these really nice uses, but badly used assessments can create chaos inside an organization.

Scott: It’s not only a waste of money, but also a waste of credibility and possibly even legal action, so use it right, use it prudently, and use it as a part of the selection process. 

Sylvain: Excellent. Well, I think that's a great way to end this interview with Scott, on some practical cases, and this thought: it's always in the hands of the people who are using it.

Scott: Yeah. 

Sylvain: I think that's really powerful, and you are obviously a strong expert in this arena. So anybody that's listening or reading this, who needs that kind of expertise to make the right decisions based on data, or to select the proper assessments, which is where this started, contact Scott Filgo. His LinkedIn profile is listed with this article. Just look for him on LinkedIn and he'll help you out; he's got a whole network of individuals around him that can fill in gaps, and you'll be fine. You're perfectly fine with good help.

Scott: This has been a great discussion. I appreciate your time and interest. 

Sylvain: Well, thank you Scott. 

About Scott Filgo - www.linkedin.com/in/scottkfilgo

Scott Filgo, MS Ed Psych (Psychometrics Focus), applies personality theory and learning science to maximize hiring and T&D decisions, backed by objective data from top-of-the-line employee cognitive/personality assessments, employee performance ratings, and job analyses, through the expert application of…

He has collaborated in several global teams, in both corporate publishing and independent contracting projects, developing personality and cognitive assessments in multiple languages, localizing norms, and analyzing test reliability and validation evidence. He collaborates with decision-makers to link their competency models to the most appropriate assessment tools while guiding clients toward EEOC-compliant assessment implementation through the analysis of adverse impact.

He has twenty years of assessment development experience (with many product families from several test publishers) in both agile/entrepreneurial and methodical/hierarchical test publishing organizations. Scott embraces a work model emphasizing telecommuting and has enjoyed the self-reliance of remote work for over eight years. He's happy to see organizations being open to this "new" way of work and encourages testing the effectiveness of their remote teams and discovering what facilitates remote work and what hampers it. 

About CykoMetrix - www.CykoMetrix.com

CykoMetrix is a leading edge combinatorial psychometric and human data analytics company that brings the employee assessment industry to the cloud, with instant assessments, in-depth analysis, trait measurements, and team-based reporting features that simplify informed decision-making around recruiting, training, and managing today’s modern workplace.

 

Written by

Sylvain Rochon
