Career Practitioner Conversations with NCDA

Evolving Cultural Leadership: Tools and Considerations for Recruiting and Retaining Talent with Dr. Bret Anderson

May 28, 2024 NCDA Season 3 Episode 17

In this episode, Dr. Bret Anderson sheds light on the multifaceted shifts that are altering the fabric of the corporate world. Discover the connections between DEI initiatives, artificial intelligence, and legislation as they shape the hiring process for many employers. Bret also shares his expertise in the development and use of employer-fit algorithms and tools that evaluate job applicants beyond resumes, analyzing everything from speech patterns to facial expressions. The discussion includes recent EEOC guidelines to prevent bias and ensure fairness across all candidate backgrounds. Bret shares his insights on maintaining authenticity in the AI hiring era and offers strategic advice for companies seeking to leverage these technologies without sacrificing the human touch. This conversation reaffirms the necessity of access to career guidance for all.

Dr. Bret Anderson is President of h2 Communication, LLC, providing job readiness training, executive coaching, and frontline leadership development.

Resources


Speaker 1:

Welcome to Career Practitioner Conversations. This podcast is presented by the National Career Development Association. Hi everyone, I'm Melissa Venable, NCDA Director of Professional Development, and I am here with Dr. Bret Anderson. Dr. Anderson is President of h2 Communication, LLC, which provides job readiness training, executive coaching, and frontline leadership development. Welcome back to NCDA's podcast, Bret.

Speaker 2:

Thank you, Melissa. I'm glad to be back, glad to be here.

Speaker 1:

Awesome, we're glad to have you. In a previous episode, you and I talked about employer-fit algorithms, some of the research you've done in that area, and how it all relates to corporate culture. So today we're going to take it a little bit further and talk more about cultural leadership. A lot has happened, right? Particularly where diversity, equity, and inclusion initiatives are concerned, since we last talked. So, seeing that the relationship between employers and employees has gotten a little more tense, you might even say adversarial, how did DEI initiatives promise to improve the situation?

Speaker 2:

Oh, that's a great point. You know, employers in the past really thought taking care of their employees was the right thing to do, and then obviously profits became more important. What DEI programs promised to do was bring about a workforce that was much more innovative and much more relatable to the customer base, depending on the company the individual was joining or the customer base the product was associated with. And for the most part, DEI programs contributed to that, because they've lent themselves to stronger financial results and other kinds of financial stability. So for the most part, DEI programs have done that. But of course there's been a backlash. With any social movement you have a backlash, right?

Speaker 1:

Yeah, and these initiatives seem to be starting to decline. Some companies, even campuses, are doing away with the offices and groups they set up, so it seems like DEI is stalling as a corporate-culture improvement initiative. Why do you think that's happening?

Speaker 2:

was no longer necessary for college recruitment.

Speaker 2:

You have state legislatures rolling back certain programs associated with understanding the history of the United States, including slavery and Reconstruction and things like that. So I think the backlash has mainly been related to those kinds of things, and the media has put a big emphasis on them, of course, because that sells.

Speaker 2:

But based on some research I've been doing, and just my personal experience, DEI programs have gone quiet; they haven't necessarily gone silent. There have been many studies, but one of the more recent ones, from 2023, was based on 300 C-suite executives. They're saying that their organizations are expanding their DEI (or EDI, whatever you want to call it) programs because they believe it works for their organization, and they're getting the metrics that show that. So 57% of these C-suite executives in the US said they had grown their diversity commitments over the past 12 months, even though as many as 59% reported opposition to diversity programs. They still said, hey, we're going to continue with this even though we get opposition internally. And then, obviously, you have the SCOTUS and state legislature issues.

Speaker 1:

That's encouraging, and I would invite listeners to listen to a recent episode with our Government Relations Committee, which I think really ties into how we all need to be aware of what's going on and how it affects the workplace as far as laws are concerned. So thanks for tying that in there.

Speaker 2:

No worries. And I'll just add one more piece to that. There is a decline: about 6% of companies are scaling back, but mainly that's due to legal issues, legal liability, and other factors related to cost. Either they didn't find that it works or, most importantly, they found that it wasn't meeting their legal requirements for human resources or talent organizations. We'll talk about that later in this podcast, but I think that's something we need to elevate and address, because it's sort of an elephant in the room for DEI programs.

Speaker 1:

That is an interesting take, and some insight you don't really hear people talking about.

Speaker 2:

Like why?

Speaker 1:

Yeah, there's a lot of assumption about why, but that's not necessarily the real reason things are happening. Let's switch gears just a little bit, Bret, into another area of some controversy, which is artificial intelligence, right? Yes, that's another hot topic we're hearing a lot about in the media, and it's really in the workplace in lots of ways. So how has AI become part of recruitment efforts for many employers, and what benefit do they get from using these kinds of tools?

Speaker 2:

Wow. You know, AI has been a part of recruitment, hiring, and onboarding for gosh, at least 20-plus years, because we've used resume scanners and we've done things through Boolean operators and keyword search engines. It's just taken a more prominent place today because it's really hard to comply with the metrics that the Equal Employment Opportunity Commission and other regulatory or compliance agencies want you to meet, and it's just easier if you have something more automated to help you do that.

Speaker 2:

AI has played a big role, and I think it's playing an increasingly larger role, because now you have things like ChatGPT or Gemini, programs that can mimic an actual person's character through words, through descriptors and adjectives that can be placed on resumes. And if you're good with Boolean operators and the resume scanner being used is highly sophisticated, oftentimes it's better to use artificial intelligence, because your descriptors are much more dynamic and they show that you probably have a bit more capacity for innovation. A lot of people say that's cheating or plagiarizing. But honestly, if it helps you increase your vocabulary, or at least shows that you can, and you can back it up when you meet face to face with the company representatives, then I think you're going to be okay.
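As an aside, the Boolean-operator and keyword screening Bret describes can be sketched in a few lines. This is a minimal illustration of the idea only, not any vendor's actual scanner; the function name and the sample terms are invented for the example.

```python
import re

def matches_query(resume_text, required_terms, excluded_terms=()):
    """Boolean keyword screen: True only if every required term appears
    and no excluded term does (case-insensitive, whole-word match)."""
    text = resume_text.lower()

    def has(term):
        return re.search(r"\b" + re.escape(term.lower()) + r"\b", text) is not None

    return all(has(t) for t in required_terms) and not any(has(t) for t in excluded_terms)

resume = "Senior analyst with Python, SQL, and Tableau experience."
print(matches_query(resume, ["python", "sql"]))        # True: both required terms found
print(matches_query(resume, ["python"], ["tableau"]))  # False: an excluded term is present
```

The whole-word match is the part that matters: a screen like this will not count "Javascript" as a hit for "java", which is roughly why careful keyword choice on a resume makes such a difference.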

Speaker 1:

Yeah, that's a helpful point. There's a convenience to these tools, but we need to be checking them to make sure the results are accurate and that we can back them up once we get on the job. That's a great point, thanks. So, employer-fit algorithms: let's get back into that. Give us a refresher on what that is and how it fits into the topics we're talking about today.

Speaker 2:

You got it. So employer-fit algorithms are complex, obviously, but they're made up of a variety of key components associated with recruiting, hiring, and then ultimately retaining your staff. Resume scanners, for one: the resume is a big component of the hiring process, so you want to have a weight, or at least a percentage, for how you're going to evaluate a person's competence and fit within your company by their resume. Some companies have keystroke monitoring so they can determine how fast a person can input information or data.

Speaker 2:

Oftentimes there are job filters used by recruiters as part of their algorithm, where they filter out applicants based on keywords and information they may have put on LinkedIn, for example, or some other social media site. There's also software that evaluates candidates based on their facial expressions and speech patterns in video interviewing. Now, this was new to me, but I have seen videos of it. It's more of an international practice from what I saw; I'm sure it's being done here in the US, I just didn't see an example of it.

Speaker 2:

But from an international standpoint, yes, they're looking at facial expressions and speech patterns as part of the interviewing process to make assumptions about your ability to be trustworthy and truthful. And then, of course, there's software that measures job fit and culture: things like personality, aptitudes, cognitive skills, and your cultural fit within the actual organization, and a lot of those are based on past performance, quizzes, or some sort of elaborate assessment. So, to give as succinct an answer as I can: an algorithm is the combination of these critical components that an employer believes helps them evaluate the efficacy or viability of a candidate, with a weight placed on each component to see how the person stacks up against the competition.
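To make the "weights on critical components" idea concrete, here is a minimal sketch of a weighted composite score. The component names, the weights, and the 0-100 scale are assumptions invented for illustration; they are not the makeup of any real employer-fit product.

```python
# Hypothetical components and weights, chosen only for illustration.
WEIGHTS = {
    "resume": 0.35,       # resume-scanner score
    "interview": 0.30,    # structured-interview rating
    "culture_fit": 0.25,  # trait/culture assessment
    "skills_test": 0.10,  # aptitude or work-sample quiz
}

def composite_score(component_scores, weights=WEIGHTS):
    """Weighted sum of per-component scores, each on a 0-100 scale.
    Refuses silently mismatched inputs so a missing component is caught."""
    if set(component_scores) != set(weights):
        raise ValueError("component/weight mismatch")
    return sum(weights[k] * component_scores[k] for k in weights)

candidate = {"resume": 80, "interview": 70, "culture_fit": 90, "skills_test": 60}
print(round(composite_score(candidate), 2))  # 77.5
```

The weights are where the employer's judgment (and, per the EEOC discussion later in the episode, the legal exposure) lives: the same candidate scores produce different rankings under different weightings.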

Speaker 1:

And there really are components in that of DEI, artificial intelligence, legislation, all of these things. It's a really complex interaction.

Speaker 2:

It's very complex, and you're right, there's so much downstream that affects this, and honestly it should. In my opinion, we've had the most inefficient and ineffective hiring process forever, for as long as I've been doing this work, and that's been around 30 years. Just a resume and an interview doesn't necessarily guarantee you're going to get the best person. So expanding how you select an individual is really helpful.

Speaker 2:

And obviously, I think people have heard of the cost metrics associated with hiring the wrong person. It could cost you up to 150% of a person's salary to recruit and onboard them, give them a desk, a computer, and a phone. And it usually takes that person about six months to become efficient. If you lose them before that six-month period, you've lost all the money you invested in them, and they haven't been able to give you any return. So I think it's important that we expand how these tools are used. We just have to be, I don't want to say cautious, but strategic in how we use them, and in how career professionals like us can consult with businesses and companies on the usage and efficacy of these types of programs.

Speaker 1:

Absolutely. And how can we, as career development professionals, also convey to job seekers what's happening in this complex process?

Speaker 2:

You got it, and what it's used for. A lot of people say, oh, I can trick this particular employer-fit tool. No, really, you can't, and really the point is not to trick it. I think you and I both know that one of the big components of any kind of vocational self-concept development is self-awareness. You have to know yourself, and you want to be able to express who you are, because companies want to know who they're getting and you want to know what kind of company you're going into. Being honest and authentic on both of those fronts helps you make a better selection as an individual and, of course, it might help the company make a better selection of the applicant overall, too.

Speaker 1:

Absolutely. So with these tools, going back to legislation, what do employers need to understand about the EEOC guidelines as of April 2023?

Speaker 2:

Yeah, so there are four clear guidelines established. One is the selection procedure, because this is how the EEOC will evaluate you. Employers must, and I'm just going to read from the EEOC guidelines, so it's going to sound legalistic, forgive me for that, but employers must ensure that their selection procedures, when using algorithmic decision-making tools, do not result in a disparate or adverse impact under Title VII of the Civil Rights Act. So you have to be able to demonstrate that what you're doing in implementing an employer-fit algorithm process is not causing undue harm to the protected classes under Title VII of the Civil Rights Act of 1964. To do that, you have to comply with what's called the four-fifths rule. Under the four-fifths rule, adverse impact is generally indicated where a, quote, selection rate, end quote, for any protected characteristic is less than 80%, or four-fifths, of the rate of the group with the highest selection rate. And Melissa, I don't know if you can do this, but I have a slide that shows how this works. Describing it is nice, but showing it is much more helpful. I'll do my best to describe it, and if folks want to look at that slide while they're listening to the podcast, we can do that.

Speaker 2:

But let's say, for example, you have four categories of applicants: a group of individuals who classify themselves as white, a group as Black, a group as Hispanic, and a group as Native American. Among the group that classified themselves as white, there were 80 total applicants, and 48 of those 80 were hired. 48 divided by 80 is 60%, and that 60% is the group with the highest selection rate in this particular example. If we hired 20 of the 40 applicants who classified themselves as Black, that's a 50% rate. What you do is take that rate and divide it by the highest selection rate, so in this case 50 divided by 60, which equals 83%. Under the four-fifths rule, that group's hiring percentage passes.

Speaker 2:

And let's say that for the group of individuals who classify themselves as Hispanic, we hire 10 of 30 applicants. 10 divided by 30 is 33%, and 33 divided by 60 equals 55%. So for that population, the algorithm could have led to a disparate impact. Then, of course, for the Native Americans, we had 20 applicants and 10 were hired. Again, that's 50%, so you end up with 83%, and that's what this example shows. That's how you evaluate. Now, the hard part is that the metrics are somewhat challenging, but as a business you want to be able to automate and protect yourself against this particular outcome. So I think the biggest thing is understanding how these algorithms work, so that you can design them, prior to using them, in a way that minimizes or prevents this outcome. That's going to be the hardest part for most companies.
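The worked example above can be checked mechanically. Below is a small sketch of the four-fifths computation using the same numbers Bret describes; it illustrates the rule itself and is not compliance tooling or legal advice.

```python
def four_fifths_check(hired_by_group, applicants_by_group, threshold=0.8):
    """Compute each group's selection rate, compare it to the highest
    rate, and flag groups whose impact ratio falls below 4/5."""
    rates = {g: hired_by_group[g] / applicants_by_group[g] for g in applicants_by_group}
    top = max(rates.values())
    return {g: {"rate": round(rate, 3),
                "impact_ratio": round(rate / top, 3),
                "passes": rate / top >= threshold}
            for g, rate in rates.items()}

# The numbers from the example in the conversation.
applicants = {"White": 80, "Black": 40, "Hispanic": 30, "Native American": 20}
hired      = {"White": 48, "Black": 20, "Hispanic": 10, "Native American": 10}
for group, result in four_fifths_check(hired, applicants).items():
    print(group, result)
```

Run on these inputs, the Black and Native American groups come out at an impact ratio of about 0.83 (the 83% in the transcript) and pass, while the Hispanic group's ratio of about 0.56 (the 55%, with rounding) falls below four-fifths and is flagged.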

Speaker 1:

Gotcha, that's super helpful, and listeners can check the show notes for a link to that graphic, which I think will be really helpful as well. Okay, so are there other things that would make a strong employer-fit tool and help employers avoid sanctions under this new EEOC guidance?

Speaker 2:

Oh, absolutely, yeah. I'm just going to finish up the other two categories according to the EEOC, then I'll jump right into your question. The EEOC doesn't allow you to say that a third party did this, or that a vendor is the one doing this work, and that therefore you're not responsible for disparate impact. No, you're not able to be shielded that way as a company. If it's something you bought, or you have a vendor giving you insight or consultation on it, then it's yours: you own the outcome as well as the process. So as a business, you want to be heavily involved in knowing what your algorithm is and how it's working for you. And the last thing is that you need to have options to address disparate impact. Again, I'm going to read from the EEOC: in the event that an employer discovers that its algorithmic decision-making tool does result in disparate impact, the EEOC suggests that the employer can either discontinue the use of the tool, select an alternative tool that does not have a disparate impact, or modify or redesign the tool using comparatively effective alternative algorithms during the developmental stage. So you get a chance to correct the problem. The EEOC is not just going to come in with a hammer and say you failed, that's it, you're going to be sanctioned. No, you get a chance. This is an emerging technology, an emerging science, and it's not always going to work the way you envisioned or idealistically expected. So you are given an opportunity to fix it. In terms of looking forward, Melissa, I think the notable thing is that more and more employers are beginning to use these particular tools, mainly because they're reducing human resource staff. It's obviously more costly to have staff, and these kinds of metrics are so automated today that it's just easier to go with this approach.

Speaker 2:

So if you're offering software, and there is software offered in this particular area, the thing you need to be aware of is making sure the tool meets certain criteria. Now, you said that I've done research in this area. Yep, I have a patent; I have my own employer-fit algorithm that's been patented. So here are some things about it. I call it a Quality Match Index, and that's just the title for the algorithmic tool we use. The software is cloud-based, so it's not trapped in some local database, which makes it a little more flexible.

Speaker 2:

It's based on a set of comprehensive survey questions designed to compare individual traits with a predetermined set of survey data from the company that reflects its culture. Basically, you get a baseline score for the company. In essence, what we do is take 20 people within the company, evaluate them using this survey, and then take their individual scores and put them on a normative scale. We combine them, use some fancy mathematics, and, based on that, put them on a scale. So you get a fingerprint, an identifier, for your company culture, because the 20 people selected within the company are diverse. They're a variety of folks: some have long-term tenure, others may not; some may be more innovative; some may be more in alignment with where the company wants to go versus where it's been. So there's a tenure difference there, and that's what we recommend. With that cultural fingerprint of the company in hand, part of the algorithm we suggest is that your recruitment process, which includes resumes, interviewing, and whatever else you want to utilize, also has candidates take the survey. Then, based on those traits, we measure how they align with the baseline of the culture. That's what the index measures: from it you get a score, and it tells you whether the person is an optimal fit, a good fit, a fair fit, or a poor fit with that particular culture's normative scale. We've found it to be very effective.
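One way to picture the "cultural fingerprint" plus fit-band idea is the sketch below. To be clear, this is not the patented QMI: the trait names, the z-distance measure, and the band cutoffs are all assumptions chosen purely to illustrate the baseline-then-compare structure Bret describes.

```python
from statistics import mean, stdev

# Hypothetical trait dimensions (loosely Big Five-flavored), for illustration only.
TRAITS = ["openness", "conscientiousness", "extraversion", "agreeableness", "stability"]

def fingerprint(incumbent_profiles):
    """Baseline culture profile: per-trait mean and spread across the
    ~20 incumbents surveyed."""
    return {t: (mean(p[t] for p in incumbent_profiles),
                stdev(p[t] for p in incumbent_profiles)) for t in TRAITS}

def fit_category(candidate, baseline):
    """Average absolute z-distance of the candidate from the baseline,
    mapped onto the four fit bands mentioned in the conversation.
    The 0.5/1.0/1.5 cutoffs are arbitrary illustration values."""
    d = mean(abs(candidate[t] - mu) / sd for t, (mu, sd) in baseline.items())
    if d < 0.5:
        return "optimal fit"
    if d < 1.0:
        return "good fit"
    if d < 1.5:
        return "fair fit"
    return "poor fit"

incumbents = [{t: (45 if i % 2 else 55) for t in TRAITS} for i in range(20)]
baseline = fingerprint(incumbents)
print(fit_category({t: 50 for t in TRAITS}, baseline))  # optimal fit: candidate sits at the baseline mean
```

Note that the candidate is compared against a profile aggregated from many incumbents, never against one person, which is part of what keeps a tool like this from simply cloning a single manager's preferences.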

Speaker 2:

Companies use it as part of the process. It doesn't determine who gets hired, but it certainly is effective in helping them understand, if they're going to bring an individual on because they believe this is the right person, how that individual might operate and function from a trait perspective, and how that might evolve the culture in a positive way. You also want to make sure that if you have an employer-fit algorithmic tool measuring culture, you keep it anonymous. You want numbers assigned to the individuals taking it, not names, because name biases play a big role. They do for resumes, and you can't really avoid that for a resume to a large extent, but you can avoid it for this particular assessment, so that you're not unduly biased when it comes to the data. The results can be used to analyze and pinpoint potential team conflicts, which is sometimes important to know before the person comes in, as well as to score how this person might fit in and evolve with the existing culture.

Speaker 2:

The software that I've patented does not use simulation-based analysis, which fails to account for decision-making bias. Oftentimes you put people in simulations and then think, okay, based on the simulation, this is how they're going to act in the workspace. That's not necessarily true; it hasn't been proven to be a predictive factor. Also, the QMI software is not based on a time quotient. A lot of times people say there's a time element associated with an assessment, and that the time component shows your ability to be more or less conscientious. Conscientiousness is being measured, but there is no time quotient here; individuals have as much time as they need.

Speaker 2:

The software evaluates traits, not characteristics, so let me define a trait. A trait is a personality characteristic that meets three criteria: it must be consistent, it must be stable, and it must vary from person to person. It can't be the same for everyone. If you're a nice person, well, lots of people are nice, but that doesn't mean they'd necessarily be a good fit within your company. And last, the QMI uses the Big Five traits, which have a psychometric reliability coefficient of about 0.88, which is very high in this world in terms of predictive standards. Knowing those elements and incorporating them into your algorithmic process, you're better able to avoid a lot of the EEOC cautions and guidelines. But more importantly, you're going to be able to select a candidate who really does fit your company, so they're not just hired and brought on, they're actually valued, appreciated, and ultimately retained. That's the employer-fit component.

Speaker 1:

That is fascinating.

Speaker 2:

There's a lot to it, right?

Speaker 1:

I feel like it's not for beginners, but for people who want to dive in, there's likely a lot of information out there to get started and build the skills you need to really work with these tools.

Speaker 2:

Yes, you're right, it's hard to put a fence around, and I'm working on that now. Now that I have a tool that's available, out there in the market, with people using it, you have to help people understand the best ways to use it, and putting a fence around it is not easy, as you said. So if you're going to get into this work, it will help you to know where it all fits.

Speaker 1:

Excellent. Are there specific resources or places where people could find more information if they want to get started with this?

Speaker 2:

Yeah, I have a reference list that I'll include with this particular podcast that I think might be helpful. The problem is that so much of this work is proprietary, so it's really hard to get people to open up and talk about the inner workings of their programs and processes. I mean, with what I've revealed to you, you'd probably have to sign an NDA if you were working with me on it. But I think it's for the betterment of our constituents that they understand the complexity, and also understand that it's not undoable; many employers are taking this on. If we don't learn to adapt and take on this particular, what do you want to call it, segment of career development, then we're missing out. We're not going to be able to give our customers the right information, nor are we going to be able to satisfy the needs of the employer.

Speaker 1:

Agreed. Yeah, innovation is happening everywhere, so we've got to stay up to date with the trends and the tools, right?

Speaker 2:

Yeah, we do, and again, that's not an easy thing to do, but once you're doing it, I've found it to be a fascinating area of study. And I get lots of folks in the consultation process who, once they get what you and I just described, really want to do much, much more, and oftentimes I have to hold them back and say no, no, no, because there's a reason we structured the algorithmic process this way, and you want to stay with it until you know more about how it's impacting your hiring and recruitment process.

Speaker 1:

No, that makes a lot of sense. Informed, purposeful improvement.

Speaker 2:

Yes, slow and steady. We're not into instant gratification; we're here for the long haul. All right.

Speaker 1:

Is there anything else you want to add before I do the wrap-up?

Speaker 2:

No, but I do want to say that it's really helpful that we have programs like yours and institutions like NCDA, because getting this information into the hands of the public, without them having to pay for it or go through some sort of dramatic effort, is very, very important. I'm hoping that not only I but others take this on, do more with it, and contribute to this body of work, because it's really valuable and it's helping people overall.

Speaker 1:

Oh gosh, that's great, Bret. Thanks so much for that shout-out to NCDA. We'll add lots of resources in the show notes, so listeners, go ahead and take a look online for more details. Thanks so much for being here, Bret, and sharing your time and expertise in this area with our audience.

Speaker 2:

Thank you, Melissa.

DEI and AI in Corporate Culture
Algorithmic Hiring and EEOC Compliance
Employer Fit Algorithmic Tool Discussion
Importance of Public Education and Resources