Kay Giesecke: Then, there are the individual loans and credits. The individual mortgage loans are not traded. It’s the same problem at the individual loan level. What can we say about this borrower? These are different verticals that we can expand into.
One interesting initiative that we’re focusing on is trying to understand the impact of climate on these markets. If you have a flood in Kentucky, then there’s an impact on the homes. That’s very unfortunate.
Then there are the implications of these types of events in the capital markets. The investors who have backed the loans of these homes take a hit as well. It’s important for them to understand their exposure to these types of events. If you talk about climate change as a general development, then that risk is growing.
We have initiatives to really understand these types of events and to build out tools that allow customers to get a better handle on these types of risks. They could then say, “I have a security that’s heavily exposed in northern California. We’re seeing increased frequency of wildfires.” We want to give people tools to better understand risks and make decisions accordingly.
Sramana Mitra: Very interesting. AI is already a bit of a black box for most people. Here, you’re dealing with a security package that’s itself a black box of bundled loans. Then you’re trying to apply a black box to make another black box more transparent.
Kay Giesecke: I have to jump in. You mentioned several things. Let me untangle them. We talked about the financial crisis and the many bad things that came to light. It’s really important to invest more in understanding the underlying economic and human mechanisms. We bring modern AI into this to gain a deeper and more comprehensive understanding.
This clearly also has a societal component. If we understand better what’s going on, then we can reduce future crises. As for the other point, I wouldn’t categorize these deep learning models as black boxes. We have invested heavily in making these types of tools a lot more transparent.
We need this. Our clients demand this. They don’t want to get the output of black boxes. They want an understanding of how the machine makes its projections and decisions. It’s a key element for us to be able to tell people how it works.
Sramana Mitra: Your customers are asking for explainable AI.
Kay Giesecke: Absolutely. I have started research initiatives on this topic. It was clear that this was going to be a significant obstacle in making these methods more mainstream. We have a range of rigorous statistical tools that allow us to make these complex and rich models a lot more transparent, giving people a concrete sense of how this works. No one wants to follow a black box’s recommendation.
Sramana Mitra: Very interesting conversation. Thank you for your time.
This segment is part 5 in the series : Thought Leaders in Financial Technology: Infima Founder and Chief Scientist Kay Giesecke