You are the Chief Science Officer at RBC and you also oversee its AI research institute. Describe the bank’s interest in this arena.
There are many aspects to our interest in AI. First of all, financial services is a very data-driven business. From the retail side to capital markets, we are making decisions based on data every single day — whenever we estimate risk, work on cases of fraud, or use cyber security models. So for us, it was a natural investment to bring state-of-the-art machine learning to the traditional statistical systems that have been in place in the financial sector forever. But this is about more than upgrading our statistical models: We want to figure out what banking will look like in the future, and how we can use artificial intelligence to enable that. So we created Borealis AI — an AI research institute within RBC where the mandate is to push the boundaries of science and develop intellectual property that will allow us to remain competitive for years to come.
It’s quite common to find groups like Borealis AI within tech companies, but it’s very unusual within the financial services sector. We have about a hundred employees, primarily scientists — PhDs or Master’s in Machine Learning and Computer Science — doing both fundamental and applied research. For us, this was an obvious choice because we’re interested in using AI intellectual property as a competitive advantage for the future of our business.
One of the other reasons we got so involved with AI research was that, a few years ago when we looked at the landscape in Canada, we saw something very troubling: Many of our brightest computer scientists — the inventors of Deep Learning, Reinforcement Learning, and all kinds of algorithms that were making great strides in consumer products — were inventing things here and then heading south of the border. RBC is one of Canada’s largest employers, so we have a vested interest in keeping our best talent at home. A big focus for us has been creating jobs that are interesting enough to attract people who would otherwise leave.
You have said, “We should never assume that data is an objective entity.” Please elaborate.
Data is an entity that is created by real-world functions and operations, and by human beings like you and me. Whether we are accessing our phones, going to the doctor, travelling, or using our credit card, we are generating data that reflect the real world and how we interact with it. As humans, we often make biased decisions, and that bias is reflected in the data that is captured by machine learning systems. Apart from the inherent bias in how we live our lives from day to day, there is also the issue of biased business practices. How is data being collected? Who is collecting it, and how are they going about it? The data collection process itself is something we need to pay close attention to.
How might such bias affect financial services?
Whether you work in financial services or healthcare, you need to ensure that you’re being fair and inclusive of all the different types of people you are serving. Looking specifically at financial services, one of the most critical areas for the application of AI algorithms is in lending, and understanding the risks involved. That is a common application of machine learning in banking. The challenge comes when we use models that are built with historical data that was collected at a time when we weren’t aware of the bias issue. If you can’t be certain that there has never been any bias in your lending decisions in the past, you have to be really careful about using traditional models.
That is actually one of the biggest obstacles right now for AI in financial services: In order to apply it in some sensitive areas, we have to ensure beyond a doubt that there is no bias. That means ensuring that these models are explainable. Broadly speaking, in the financial services sector, and specifically at RBC, our relationship with our clients is built on a foundation of trust, and we take the privacy and security of personal and financial information very seriously. With brand-new technology, we have to recognize that we don’t yet completely understand it. But this is not how AI has been approached so far in the tech sector: many companies just put new tools out there and wait to see if there is any backlash. Because we operate on trust, it would be unacceptable for us not to recognize that there are real risks associated with this technology.
You touched on the concept of ‘explainability’. Talk a bit more about what that means.
The explainability of an algorithm has to do with the degree to which you can explain the context around how the AI makes a particular decision. It’s important to remember that virtually all of the great machine learning models brought to life through the many products we use today are, unfortunately, unexplainable. You have an input and an output, but you don’t really know how the AI got there. For certain sectors — like healthcare and financial services — this is extremely limiting. In areas like lending, which has a serious impact on people’s lives, you simply cannot be extending (or not extending) credit without understanding exactly why the algorithm made the decision. It goes back to the trust that we seek to maintain with our clients. In financial services, if there is no trust, your business won’t last.
In addition to bias, one of the big problems with AI is that it is being developed by homogenous groups of individuals who think in similar ways. How is your institute tackling that issue?
If you look at the state of diversity in AI today, it is extremely sad. At Borealis AI, we are making a concentrated effort to ensure that we bring a variety of voices to the table — people of diverse genders and from varied ethnic and experiential backgrounds. We recognize that you can’t take for granted that a product will be successful if you have a small group of similar-looking people with similar experiences agreeing that it is great. Along the way, we have uncovered some very interesting ideas thanks to all of the different voices involved.
We also have people who are responsible for the ethical use of AI. It is really important to have designated people whose expertise is to proactively evaluate a product from this vantage point. A lot of the pushback on AI has been aimed at groups that are very tech-focused without any consideration for what could go wrong. They just don’t prioritize having an accountable person or process in place to evaluate the risks posed by these technologies.
The papers on your institute’s website have titles like ‘Stochastic Scene Layout Generation from a Label Set’. Clearly, this is very technical research. How can the average consumer expect to be impacted by it?
As indicated, Borealis AI does research as part of our product development. Since we’re dedicated to building intellectual property in this space, we are focused on the state of the art and how it can impact our business down the road. The paper you cited was from the research area of Language Understanding, which is of great interest to RBC. For example, one of the ways we currently apply Natural Language Processing is to sort through financial news articles. Our goal is to understand how markets are evolving from day to day and hopefully to predict how world events might escalate and impact North American markets. To do this effectively, you have to have machine learning that can understand information in a contextual way. It must also have an understanding of historical events that may have led to a particular outcome in the market. We’re doing research in this area to figure out how to build contextual understanding of language.
We also publish this research in scientific venues because we believe in academic freedom. Our goal is to contribute to the AI community with algorithmic advancements that we make while working on RBC products. This is part of our commitment to the Canadian AI ecosystem.
We all heard about that famous Google memo by James Damore, where he complained about the company’s diversity initiatives and said that “women’s brains are different from men’s”. What did you make of that?
It made me angry, but the fact is, this is a notion that is common in the tech sector. I’ve seen various forms of it since I was in school — even grad school — and I’ve also seen it in my career, particularly early on, in start-ups. There is still a belief that people are better at certain things and worse at others because of their gender. While that is disturbing, it is a reflection of our society and of how people understand gender. As a result of this mindset, I truly don’t believe women have come close to reaching their true potential yet.
Dr. Foteini Agrafioti
is the Chief Science Officer at RBC and Head of Borealis AI, the bank’s AI research institute. She is an alumna of the Creative Destruction Lab at the Rotman School of Management. Dr. Agrafioti founded and served as CTO at Nymi, a biometrics security company and maker of the Nymi wristband. She is also an inventor of HeartID, the first biometric technology to authenticate users based on their unique cardiac rhythms.
This article appeared in the Winter 2020 issue. Published by the University of Toronto’s Rotman School of Management, Rotman Management explores themes of interest to leaders, innovators and entrepreneurs.