Thought Leader Interview: Gillian Hadfield

Interview by Karen Christensen

The inaugural Schwartz Reisman Chair in Technology and Society at the University of Toronto discusses the alignment challenge for AI—and how her Institute is tackling it.


As director of the Schwartz Reisman Institute (SRI) for Technology and Society at the University of Toronto, your goal is to ensure that technology produces outcomes that align with society’s values. Can you unpack this mission statement for us?

The key thing to know about artificial intelligence (AI) is that in many cases, it automates human judgment and decision-making, which means it is making decisions that impact the world. You could even say we’ve begun to delegate many decisions to machines: who should get a loan; who should get into an educational program; how a car should steer when a human appears in front of it. We need to make sure that these machines are making decisions that are aligned with what communities and societies want. This is known as ‘the alignment problem’ for AI, and it isn’t that different from the alignment problem for markets and politics. But it does have a very challenging technical piece, because in this case we’re trying to make machines do what we want them to do.

 

You have said that AI is “racing ahead furiously” in some areas that should deeply concern us. Please explain.

Most of these machines are being built by private technology companies whose main goal is to maximize profits. The problem is that all of this is happening outside of an adequate regulatory structure. As we have seen recently, when big social media platforms build recommender engines and systems that deliver targeted advertising, the side effects can include social instability and political polarization.

One thing that has developed shockingly fast is facial recognition, as well as gait recognition (which analyzes how people walk) and sentiment recognition (which analyzes how people speak or interact with their computers). This is mostly coming from companies that want to know how people feel about different touchpoints in their customer experience. For instance, when they call the customer service line, are most people angry? This type of technology can be very invasive of people’s privacy. It can really disrupt society’s ‘privacy bargain’ when we leave our home to go out in public — and it is now finding its way into police departments and immigration offices.

Clearly there is a lot to think about here; but all of this is happening very fast and there aren’t enough people who fully understand all the implications. Part of my research and teaching is about making sure many more people in our society — besides computer scientists — know what’s happening and participate in building AI. We should absolutely be building these powerful systems; but we need to ensure that they are built in a way that makes our world better, not worse.

 

We all need to be concerned that there might be biases against certain groups within AI algorithms.

 

You believe ‘regulatory markets’ are part of the answer here. Can you explain what they are?

This is a new term that my co-author Jack Clark and I coined to address the challenge of regulating these extremely complex, fast-moving, autonomous technologies. We normally think about regulation in terms of detailed requirements for organizations, such as the health and safety rules that are written and implemented by government officials. These regulations ensure that everything from consumer products to restaurants to workplaces is safe. The challenge with AI is that it is moving so rapidly, and is so complex, that our governments can’t keep up.

Frankly, I don’t think our existing approaches to regulation will work with AI. For instance, the Canadian government is exploring laws to prevent the harm caused by automated newsfeeds that present people with fake news and other unsavoury content. But so far, it is approaching this like a problem we’ve solved before: Specify what is not allowed online, create a right to have bad stuff taken down, and set up a tribunal to adjudicate claims and threaten big fines. But that approach won’t work when we’re talking about billions of online posts that a technology company can’t possibly track and control in the way a conventional news publication can.

The challenge of regulating powerful technologies like AI is actually an innovation challenge, and it’s the same innovation challenge we face in the rest of our economy: We need to attract investment and high-quality engineers, entrepreneurs and business people to the problem. That’s the only way, for instance, to ensure that the algorithms created for a particular use aren’t biased against minority groups or women.

My colleagues and I believe we need technologies to help meet this challenge, and that we need markets to help incentivize investment to build those technologies. We want to get to a place where people can make money if they develop a technology that keeps other technology and companies in check. At the same time, these technologies have to be overseen by politically accountable groups who decide what that technology should and shouldn’t do. What kinds of discrimination should we be concerned about? How much of a trade-off should we be willing to incur between being fair and being efficient? These are some of the key questions right now.

In short, a regulatory market is one where we have private regulators (companies or non-profits) who are themselves regulated by government. The government would decide, ‘The accident rate for autonomous vehicles needs to be X’ or ‘Algorithms have to be fair, as defined by this community’. All companies using AI would be told by the government that they must pay for the services of a licensed regulator. Regulatory infrastructure is the most fundamental building block for market economies and democracies. If there were no food safety rules in place, how often would you eat in a restaurant? In much the same way, we need a robust regulatory infrastructure for AI.

 

This sounds like an entirely new sector for innovators to embrace.

It is. Banks are already using algorithms to help them figure out which products to offer to which customers. But we all need to be concerned that there might be biases against certain groups within these algorithms. The AI used to build them might have been trained on a biased dataset, or on data that wasn’t representative of the community the bank serves. To ensure that doesn’t happen, the government would say to that bank, ‘You must purchase regulatory services from one of these licensed regulators’.

Instead of having a bunch of lawyers or compliance officers coming in and checking a company’s records (although we might still have to do some of that), we’d like to be able to have a machine learning system go through the high volume of decisions that are being made by AI in a particular company and identify things like, ‘There is a clear bias here against people in this postal code’. Governments will decide what ‘fair’ looks like, but to track it, we won’t be stuck using the technology that is available to governments.
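To make this concrete, here is a minimal sketch, in Python, of the kind of automated check being described: scan a log of automated loan decisions and flag any group (here, by postal code) whose approval rate trails the overall rate by more than a chosen margin. The column names, the threshold and the data are hypothetical illustrations, not a real regulator’s specification.

```python
# Minimal sketch of an automated fairness scan over a log of AI loan decisions.
# Column names, the 10-point threshold and the sample data are all hypothetical.
import pandas as pd

def flag_approval_gaps(decisions: pd.DataFrame,
                       group_col: str = "postal_code",
                       max_gap: float = 0.10) -> pd.DataFrame:
    """Return groups whose approval rate trails the overall rate by more than max_gap."""
    overall_rate = decisions["approved"].mean()
    by_group = decisions.groupby(group_col)["approved"].agg(["mean", "count"])
    by_group["gap"] = overall_rate - by_group["mean"]
    return by_group[by_group["gap"] > max_gap].sort_values("gap", ascending=False)

# Made-up decision log: 1 = loan approved, 0 = declined.
log = pd.DataFrame({
    "postal_code": ["M5V", "M5V", "M5V", "M4C", "M4C", "M4C", "M4C", "M4C"],
    "approved":    [1,     1,     1,     1,     0,     0,     0,     0],
})
print(flag_approval_gaps(log))  # flags M4C: its approval rate lags the overall rate
```

A licensed regulator could run this kind of scan continuously over a firm’s decision stream, with the groups to monitor and the acceptable gap set by a politically accountable body rather than by the firm itself.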

Currently, all governments can do is write rules down and hire people to check whether companies are complying; we need technology that can help us make sure that the results we want are the results we’re getting. It won’t be a purely technological solution, but technology has to be part of it, and that means we need to figure out how to incentivize people to build this technology in the market.

 

Your Institute has partnered with the Creative Destruction Lab (CDL) to build a pipeline of responsible AI technologies. Please describe this initiative.

To build the kind of regulation that can match the speed and complexity of AI, we need to foster the development of regulatory technologies, which we can also call ‘responsible AI technologies’. As indicated, this is a nascent industry, and together with CDL, we want to help grow these businesses. CDL has proven to be great at helping new technologies translate into profitable business models. What SRI brings to the table is a focus on nurturing entrepreneurs to start thinking about how they could build a technology that supports responsible AI. SRI wants to plant the seeds and share our expertise about what kinds of regulatory technology could find a home in the emerging regulatory landscape. And if we’re successful, there will be more ventures that can then benefit from the CDL magic to grow.

 

You believe that partnerships between public and private organizations are critical to achieving trustworthy AI. How is your Institute embracing this principle?

As indicated, we currently don’t have much regulation around AI. It’s a bit of a Wild West situation: Anyone can go out and build a machine learning model. You don’t have to be licensed as you do to practise law, sell accounting services or build a bridge. We don’t have effective regulation in place to test whether or not these systems are doing what we want them to do. In order to get there, we need government to play its part in helping to build the regulatory structure we need, and we need private companies investing in building regulatory technology.

We are already seeing CDL start-ups going to the banks with exactly the type of product we’ve been talking about — to check their machine learning to determine whether it’s fair and in compliance. There is a market for that, which is great, but if the government created a regulatory structure, it would increase demand and we’d get more investment in these start-ups. That’s an example of the kind of partnerships that are necessary. What we don’t want is self-regulation by companies. It’s not really the business of any bank or technology company to decide the values for our society. That needs to be something that our governments and communities do. And the concept of regulatory markets is a way to achieve that.

 

Do you have a favourite example of ethical AI?

There are actually two, and both came through the CDL last year. The first is a start-up called Private AI, founded by Patricia Thaine, a PhD student in Engineering at U of T. She and her co-founders have developed a technology that strips away personal information from text and images. This means that when Company A wants to share data such as company records with Company B in order to build something together, they can use this technology to make sure the data is privatized. The reason this helps promote ethical AI is that AI feeds on massive amounts of data, and we will get better AI if we have more and better data. Right now, there are many obstacles in place that prevent data sharing that could be extremely useful to society. This technology would mitigate the privacy issues and enable that sharing.
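As a toy illustration of the general idea (this is not Private AI’s actual product, which relies on trained models rather than simple patterns), here is a short Python sketch that replaces a few common identifiers in free text with labelled placeholders before the data is shared.

```python
# Toy illustration of stripping personal identifiers from text before sharing it.
# Real de-identification systems use trained models; these regexes are only a sketch.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SIN":   re.compile(r"\b\d{3}[-\s]\d{3}[-\s]\d{3}\b"),  # Canadian Social Insurance Number
}

def redact(text: str) -> str:
    """Replace each detected identifier with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Reached the client at jane.doe@example.com or 416-555-0199 about the account review."
print(redact(record))
# -> "Reached the client at [EMAIL] or [PHONE] about the account review."
```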

We have also seen companies that are building machine learning systems that can check on other machine learning systems for compliance. Armilla AI is building machine learning models to check the compliance of machine learning systems with regulatory rules in the banking context, but they’re also working on ways to define what fairness means.

At SRI, we’re thinking about how to develop certification systems that will do things like verify the provenance of data in various domains. Where did the data come from? How do we make sure it came from the right places? Because again, promoting ethical AI means promoting ethical, safe and fair data use.

My son, Dylan Hadfield-Menell, is a new Assistant Professor of Computer Science and AI at MIT. He’s someone I work with a lot on these issues, but he’s also the co-founder of a new company that is building technology to ensure that the recommender systems we encounter every day on Netflix and Facebook are safe. His company, Preamble, is building technology that will allow people to establish what values they would like to see reflected in these systems. For instance, someone might say, ‘My values are aligned closely with those of the YWCA and the New York Times’. And maybe organizations will create clear value systems that can feed into this technology. Maybe we can even generate a market for metrics that work to discover and support the values people want to see implemented in their recommendations. Those are just a few examples of the innovations out there.

 

It’s not the business of any bank or technology company to decide the values for our society.

 

Which industries will be transformed the most by AI?

AI is what we call a ‘general purpose technology’, and what it does mostly is make predictions. It’s a ‘prediction machine’ — as my Rotman School colleagues Ajay Agrawal, Joshua Gans and Avi Goldfarb wrote about in their book of the same name. We rely on prediction in just about everything we do in the economy, and increasingly, AI is being used to make judgments. It can recommend how a business should act or, in some cases, automatically implement decisions — like what products to show customers on a website, what prices to offer drivers on a delivery platform, or which job applicants to interview. OpenAI, where I’m a senior policy advisor, has introduced AI that can write computer code. So, AI has the potential to disrupt just about all of our jobs. But that doesn’t mean it’s going to automate us all out of existence. We face some real challenges in terms of transitioning, but in the long run, I think we’ll see a dramatically transformed economy, not a fully automated one.

One new job that I believe will emerge is what we’ve been talking about — translating particular societal values into machine learning algorithms. Another will be validating and overseeing AI systems, detecting errors and shaping directions. When the industrial revolution hit, we invented all kinds of new jobs like professional management, HR, market research and jobs related to regulation and legal oversight. I think AI will generate a whole suite of new things for people to do. That’s why we all need to be involved in shaping this powerful technology.

 

What can leaders do to ensure their organization is part of the solution here, rather than part of the problem?

First of all, they need to learn something about the way AI works and the risks that it presents to businesses and to society. I teach about this in my Responsible AI course for the CDL program and the Rotman School’s Master of Management Analytics. Business leaders have to be on top of this. They need to do more than understand the buzzwords; they need to understand how AI works and what the challenges are.

Leaders also need to recognize that the regulatory infrastructure we’ve been discussing needs to be put in place as soon as possible. If it isn’t, the big banks, tech companies and manufacturers will be the ones deciding what is fair and what is not. We all need to work together to fill in the blanks in our regulatory infrastructure. Once we build this shared infrastructure, we can all focus on building our organizations in a way that benefits society. As we say at SRI, powerful technologies like AI should improve life — for everyone.


Gillian K. Hadfield is Director of the Schwartz Reisman Institute for Technology and Society and the Schwartz Reisman Chair in Technology and Society at the University of Toronto. She is also a Professor of Strategic Management at the Rotman School of Management, Professor of Law at the University, a faculty affiliate at the Vector Institute for Artificial Intelligence, and a senior policy advisor at OpenAI. She is the author of Rules for a Flat World: Why Humans Invented Law and How to Reinvent It for a Complex Global Economy (Oxford University Press, 2016).

