Thought Leaders in Artificial Intelligence: Stuart Nisbet, Chief Data Scientist, Cadient Talent (Part 4)

Posted on Thursday, Oct 29th 2020

Sramana Mitra: What else is interesting in your technology?

Stuart Nisbet: If you are researching AI in general, I think this will go well beyond what we’ve talked about today. The trust and explainability of what an AI algorithm does is a trend in the industry and is one of the things that I address the most. It’s quite interesting because it’s one of the strengths of the technology, but it is also regarded as one of the weakest points of the technology.

There are a lot of examples where people feel that yielding a human decision to a machine will only lead to cold, hard decisions. In particular, it's going to codify or strengthen biases from the past. It's a valid question to ask and a valid point to raise.

I believe that the only way we address that concern is through trust and explainability. Can the choice the algorithm made be explained? The way it came to me was in the area of applications for home loans. If you have something as simple as a decision tree that asks how much your income is, how long you have been at your job, whether you have defaulted on a loan, and how much outstanding debt you have, you just punch that in, and it's pretty easy to see how the decision was reached.
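The loan decision tree described above can be sketched as ordinary nested conditions. The feature names and thresholds here are made-up illustrations, not any lender's actual criteria; the point is that every outcome can be traced to a short chain of readable questions.

```python
# Minimal, hypothetical decision tree for a home-loan decision.
# Thresholds are invented for illustration only.

def approve_loan(income, years_at_job, has_defaulted, outstanding_debt):
    """Each branch is a question a loan officer could read aloud,
    so the path to any decision is fully explainable."""
    if has_defaulted:
        return False          # prior default ends the evaluation
    if income < 40_000:
        return False          # income below the illustrative floor
    if years_at_job < 2:
        return False          # insufficient employment history
    # final check: outstanding debt relative to income
    return outstanding_debt < income * 0.5
```

Calling `approve_loan(60_000, 3, False, 10_000)` returns `True`, and the reason is simply the sequence of branches taken; this transparency is exactly what deeper models make harder to recover.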

Some of the deep learning algorithms require you to dig deep to understand and explain them. The idea of being able to explain exactly how these recommendations are being made is critically important. That is one of the areas where we want to lead the way.

All of my colleagues want to be extremely ethical, and the ethical application of AI and machine learning needs to be the first and foremost thing that we do. We need to be able to stand up and say exactly how these models are working. For example, when we have the age group of an applicant, we would never use that in determining their fit for a job.

That variable is not taken into account, and we can demonstrate that clearly. However, it's different for humans. I'm a 56-year-old white male. I could tell you that I don't notice the age, gender, and ethnicity of the person sitting across from me in an interview, but I couldn't say that credibly. As much as I might say that I didn't look at those variables as a human, it's hard not to at least have that information, whether you use it or not.

For an algorithm, it’s easy to not have that information; it’s just not provided. One of the things that is powerful about what we are doing is that for our clients, we can generate models that do not use age, ethnicity, or gender.

We can make recommendations based on their past hiring practices and give them a list of candidates that they would have hired. Only then do we introduce those variables and see what would have been done differently had the algorithm looked at them. The model is trained on your data, and if it comes back with the same recommendations, that shows age, gender, and ethnicity were not taken into account in past hiring decisions, since the recommendations are the same with or without those variables. What I would expect in some industries is that there will be slight differences between models that include those variables and those that don't.
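The comparison described above can be sketched generically: score every candidate twice, once with and once without the protected attributes, and report whose recommendation flips. The feature names, the toy scoring rule, and the deliberately biased age penalty below are all illustrative assumptions, not Cadient Talent's actual model.

```python
# Hypothetical sketch of auditing a model by comparing recommendations
# made with and without protected attributes. All names and rules here
# are invented for illustration.

def recommend(candidates, model, features):
    """Score each candidate using only the given feature names."""
    return {c["id"]: model({f: c[f] for f in features}) for c in candidates}

def flipped(candidates, model, all_features, protected):
    """Return ids of candidates whose recommendation changes when
    protected attributes are included versus excluded."""
    neutral = [f for f in all_features if f not in protected]
    with_p = recommend(candidates, model, all_features)
    without_p = recommend(candidates, model, neutral)
    return [cid for cid in with_p if with_p[cid] != without_p[cid]]

def toy_model(feats):
    """Toy scoring rule with a deliberately biased age penalty,
    so the audit has something to detect."""
    score = feats.get("experience", 0) * 2 + feats.get("skills_match", 0)
    if feats.get("age", 0) > 50:
        score -= 5  # the bias the audit should surface
    return score >= 4  # True means "recommend hiring"

candidates = [
    {"id": 1, "experience": 3, "skills_match": 1, "age": 56},
    {"id": 2, "experience": 3, "skills_match": 1, "age": 30},
]
biased = flipped(candidates, toy_model,
                 ["experience", "skills_match", "age"], ["age"])
```

Here `biased` comes back as `[1]`: the older candidate's recommendation flips once age is withheld, which is exactly the kind of difference that would signal a bias worth studying. An empty list would correspond to the clean case described above, where the recommendations match with or without the protected variables.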

That gives a company a true opportunity to say that not only are they trying to reduce the bias by taking those out but they are also trying to study how they could have done better. We always want to be vigilant and transparent.

We want to have the highest of ethics and the trust of our clients. As such, if you introduce those variables and it makes no difference, it means that no unconscious bias had entered into your hiring practices.

This segment is part 4 in the series : Thought Leaders in Artificial Intelligence: Stuart Nisbet, Chief Data Scientist, Cadient Talent
