You have said that the future of AI is being built “by a relatively few like-minded people within small, insulated groups.” Why is that such a big problem?
In any field, if you have a homogeneous group of people making decisions that are intended to benefit everyone, you end up with a narrow interpretation of both the future itself and the best way to move forward. When we're talking about a transformational technology like AI — systems that will be making decisions on behalf of everyone — it follows that a lot of people are going to be left out of those decisions, and that the way these systems behave and choose is going to exclude a lot of people.
What is an artificial narrow intelligence system (ANI)?
This is what AI's various 'tribes' are building. ANIs are capable of performing a single task at the same level as, or better than, we humans can. Commercial ANI solutions — and by extension, the tribe — are already making decisions for us in our email inboxes, when we search for things on the Internet, when we take photos with our phones, when we drive our cars and when we apply for credit cards or loans.
They are also building artificial general intelligence (AGI) systems, which will perform broader cognitive tasks because they are machines designed to think like we do. The question is: who exactly is the 'we' that these systems are being modelled on? Whose values, ideals and worldviews are being taught?
In recent years, Google launched unconscious bias training for its employees; yet at the same time, it was rewarding bad behaviour among its leadership ranks. Talk a bit about this paradox.
A single training program cannot solve the bias problem — just as an MBA program offering a mandated ethics class doesn’t stop unethical behaviour. When information gets siloed in that way and is not more deeply integrated throughout a company, people tend to dismiss what they learn as something to ‘tick off’ on a checklist to meet requirements.
I do think the American members of the Big Nine — Amazon, Apple, Google, Facebook, Microsoft and IBM — recognize that there are problems with diversity and inclusivity in their ranks, which is why, in recent years, many have launched unconscious bias training. Hopefully the goal of these programs is not just to deal with employee behaviour — but to recognize that biases are also creeping into the AI systems that they are building.
It's great to have these training programs, but wherever they exist, we should expect to see substantive change throughout the organization — not just changes in personnel, but also efforts to address systems and databases that themselves are riddled with bias. In most cases, these things have to be completely rebuilt, and that's a costly, time-consuming process. I don't see anybody making great strides in this regard.
One key issue is that there is no singular baseline or set of standards to evaluate bias — and no goals for overcoming the bias that currently exists throughout AI. Until we have that, the personal experiences and ideals of a select few will continue to drive decision-making.
The G-MAFIA is deciding the future of AI — and by extension, what the future of humanity will look like.
You have said that the U.S.-based members of the Big Nine function as a mafia in the purest sense. Please explain.
I refer to the U.S. members of the Big Nine as the G-MAFIA, which is an acronym for Google/Microsoft/Apple/Facebook/IBM/Amazon. But it’s more than just an acronym, because for many years these companies, like mafias everywhere, have been working independently of the government — and at the same time, not collaborating with each other.
These companies now control the lion's share of patents, and they control or have access to more data than any other organization — including the government, which is pretty scary. The G-MAFIA have built systems that are marketed as open source, but in effect, once you start using them, you can't untether yourself from them. They're building the hardware; they're making the investments; and they have the partnerships with universities. In a very real sense, these companies are deciding the future of AI — and by extension, what the future of humanity will look like.
Describe how China is approaching the future of AI.
That's a really interesting piece of this. As indicated, in the U.S., the Big Nine function independently of the federal government. As a result, it's the market and the whims of Wall Street that help to determine what these companies are doing. A couple of weeks ago, Google had its investor call, and it announced that it was going to spend a significant sum of money on R&D — which investors responded to negatively, because they saw it as an indication that margins would shrink. Consequently, we're probably going to see some shifts in Google's direction going forward. The fact is, nobody else is funding this. The government, especially under the Trump administration, has actually stripped away funding. We don't have enormous sums of money for basic research. We have basically left it to these companies to build it all for us.
That is definitely not the case with China and the Chinese members of the Big Nine — Baidu, Alibaba and Tencent. There has been not just consolidation within the industry but a consolidation of power. Many years ago, the Chinese Communist Party (CCP) decided that AI would be a central focus for its activities going forward. There was an initial plan announced in 2013, and there have been subsequent strategic plans announced since then.
Some might argue that China has made many announcements over the years that have gone nowhere, but the key difference now is that there is a younger government in place — political leaders who are also trained in technology. There is also a critical mass of people using Baidu, Alibaba and Tencent, and while these are independent companies, they are very much under the control of Beijing.
For example, for a while, Apple and Google were competing in the autonomous driving space. But the Chinese government went in a different direction, decreeing that Baidu would be the company that investigates autonomous driving, while Tencent would investigate health and AI. With that kind of strategic alignment — and with parts of the sovereign wealth fund dedicated to AI — these companies are also shaping the country's education system. This is trickling down into other areas of life, including domestic programs engineered for social control and for restricting the free flow of information.
On top of everything else, China's plans for AI are tethered to its Belt and Road Initiative, which is touted as an infrastructure initiative along what was the old Silk Road trading route. What's interesting is that this is not just about building bridges and making trade easier: there are already 58 pilot countries on the receiving end of things like 5G and small-cell technology, fiber and data collection methods, and smart cameras. All of these systems that collect data on citizens to keep them under some form of control are now being exported. While we're busy dealing with market demands in the U.S., China sees AI as a pathway to a new world order — one in which it is very much in control. China isn't just trying to tweak the trade balance; it is seeking to gain an absolute advantage over the U.S. in economic power, workforce development, geopolitical influence, military might, social clout and environmental stewardship.
With so many people spread out across such a large geographic space, China is about to see upward social mobility at a scale that we have never seen before in modern history. Its leaders are grappling with how to manage the population, give it what it wants and still, in the process, maintain control — and AI is one of the keys to that.
You have developed three broad scenarios for the future of AI: optimistic, pragmatic and catastrophic. Please summarize them for us.
Looking ahead under the optimistic scenario, we have made the best possible decisions about AI. AI’s tribes, universities, the Big Nine, government agencies, investors, researchers and everyday people heeded the early warning signs. We’ve shifted AI’s developmental track, we are collaborating on the future and we’re already seeing positive, durable change.
Under the pragmatic scenario, we have acknowledged AI’s problems, but along the way decided to make only minor tweaks in its developmental track, because AI’s stakeholders aren’t willing to sacrifice financial gains, make politically unpopular choices and curb wild expectations in the short term — even if it means improving our long-term odds of living and thriving alongside AI. Worse yet, we have ignored China and its plans for the future.
In the catastrophic scenario, we have completely closed our eyes to AI's developmental track. We missed all the signals, ignored the warning signs and failed to actively plan for the future. We helped the Big Nine compete against itself as we indulged our consumerist desires, buying the latest gadgets and devices, celebrating every new opportunity to record our voices and faces, and submitting to an open pipeline that continually siphoned off our data. Rather than seeing a wide, colourful spectrum of people and their worldviews entering the field of AI via tenure track positions, top jobs on research teams and managerial roles, we instead see no change.
At this point, which scenario is most likely to occur?
‘At this point’ is key, because it changes with the whims of the president in my country, and whatever might be happening in DC at a particular moment in time. I always make determinations based on the data that I have and what we know to be true today. Given these things, I would say that there is probably a 10 per cent chance of the optimistic scenario happening; a 50 per cent chance for the pragmatic scenario; and a 40 per cent chance for the catastrophic scenario.
That doesn't mean we can't reshape things to change these odds; that's exactly why I wrote the book. I am hopeful about the optimistic scenario, but the key to achieving it will be widespread collaboration — not just amongst the G-MAFIA but throughout society, and that is no small feat. The future of AI is not just a technological question or a business question; it is also a geopolitical and geo-economic question.
One tangible way forward would be to turn AI on itself and evaluate all of the training data that is currently in use. I believe this can be done, and here's why: As a side project, IBM's India Research Lab analyzed entries shortlisted for the Man Booker Prize for literature between 1969 and 2017. The analysis revealed the pervasiveness of gender bias and stereotypes in the books themselves regarding basic features like the occupations and behaviours associated with the characters. For example, male characters were more likely to have high-level jobs, while female characters were likely to be described as 'teacher' or 'whore'. If it's possible to use natural language processing, graph algorithms and other basic machine learning techniques to ferret out biases in literary awards, those can also be used to find biases in popular training data sets. Once problems are discovered, they should be published and then fixed. The Big Nine — or the G-MAFIA at the very least — could share the costs of creating new training sets.
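To make this concrete: a bare-bones version of this kind of audit can be sketched in a few lines of Python. This is a toy illustration, not IBM's actual pipeline — the occupation list, pronoun lists, context window and corpus below are all assumptions chosen for the example. It simply counts how often gendered pronouns appear near occupation words, which is one naive signal of the occupation-stereotype patterns described above.

```python
# Toy sketch of a gender-occupation bias audit (illustrative only).
# It counts gendered pronouns within a fixed token window around
# occupation words; real audits use richer NLP (coreference, parsing).
import re
from collections import Counter

OCCUPATIONS = {"doctor", "teacher", "engineer", "nurse"}  # assumed list
MALE = {"he", "him", "his"}
FEMALE = {"she", "her", "hers"}
WINDOW = 5  # tokens of context on each side (assumed)

def gender_occupation_counts(corpus):
    """For each occupation word, count nearby male/female pronouns."""
    counts = {occ: Counter() for occ in OCCUPATIONS}
    for sentence in corpus:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        for i, tok in enumerate(tokens):
            if tok in OCCUPATIONS:
                context = tokens[max(0, i - WINDOW): i + WINDOW + 1]
                counts[tok]["male"] += sum(t in MALE for t in context)
                counts[tok]["female"] += sum(t in FEMALE for t in context)
    return counts

# Two made-up sentences standing in for a real training corpus:
corpus = [
    "He worked as an engineer before she became his doctor.",
    "She was a teacher, and her sister was a nurse.",
]
print(gender_occupation_counts(corpus))
```

Run over a real training corpus instead of two toy sentences, skews in these counts (e.g. 'engineer' co-occurring mostly with male pronouns) are exactly the kind of finding that, as suggested above, should be published and then fixed in the data.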
In addition, universities could redouble their efforts by collaborating with public and private organizations, taking some chances on their curriculum and making sure that there is a broader representation of people involved — not just among students, but among people who are being promoted through the tenure system.
What I’m talking about here entails structural, systems-level change over many years. But the longer we wait to get started on it, the worse off we’re all going to be — and the closer we will come to the catastrophic scenario. Also, leaders must recognize that China is not backing down. It has strategic alignment and a top-down management plan in place, as well as access to the data of 1.3 billion citizens. And it’s about to get data from other countries. In short, China is moving full steam ahead, while we risk getting lost trying to figure this all out.
You note in the book that we, as individuals, also need to change. In what way?
As more and more people begin to understand exactly what AI is, what it isn't and why it matters, by default they become members of AI's tribes — and that means they have no more excuses. From that day forward, you should learn how your data is being mined and refined by the Big Nine. You can do this by digging into the settings of all the tools and services you use: your email and social media, the location services on your phone, the permission settings on all of your connected devices. The next time you see a cool app that compares something about you (your face, body or gestures) with a big set of data, stop for a moment to investigate whether you are helping to train a machine learning system. When you allow yourself to be recognized, ask where your information is being stored and for what purpose. Read the terms of service agreements; if something seems off, show restraint and don't use the system.
Also, in the workplace, we must ask a difficult but important question: How are our own biases affecting those around us? Have we unwittingly supported or promoted only those who look like us and reflect our worldviews? Are we unintentionally excluding certain groups? It is time for all of us to open our eyes and take part in shaping the future.
Amy Webb is an American futurist, founder of the Future Today Institute and Professor of Strategic Foresight at New York University's Stern School of Business. She is the author of The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity.
This article appeared in the Spring 2019 issue. Published by the University of Toronto’s Rotman School of Management, Rotman Management explores themes of interest to leaders, innovators and entrepreneurs.