
How AI Amplifies Human Competencies

Questions for Ken Goldberg, Professor, UC Berkeley and CEO, Ambidextrous Robotics | Interview by Karen Christensen

A machine learning veteran describes the quest for ‘inclusive intelligence’.

Not everyone is on board with AI and machine learning. None other than Elon Musk has suggested that it might lead to World War III. What is your take on the situation?

You’re absolutely right that there is a huge amount of alarmism and ‘automation anxiety’ right now. Elon Musk, the late Stephen Hawking and Bill Gates have all made frightening predictions about super-intelligent machines as an existential threat to humans and jobs. These predictions are exaggerated and counterproductive. The fact is, we are very far from achieving artificial general intelligence (AGI). It’s important to be thoughtful about how we use AI, and in particular, to consider ethics, fairness and how any technology might be abused. But the idea of superintelligence coming to dominate humans is science fiction. Rest assured: Humans have many good years left.

Instead of embracing the notion that robots will eventually surpass and replace us (‘singularity’), you have introduced the concept of ‘complementarity’. Please define it.

In contrast to the fear of robots taking over and becoming superior to humans, complementarity emphasizes the positive potential for AI to complement human abilities by reducing drudgery — giving us more time to do what we do best.


Humans are far superior to machines in terms of general intelligence and dexterity.


One of the things we do best is empathize. Understanding how someone is feeling is a uniquely human quality. We are nowhere close to having a machine be capable of that, because to feel empathy, by definition, you have to be able to put yourself into the position of a human being. AI and robots will never be able to do that.

Another uniquely human trait is creativity. I have yet to see any evidence that AI or robots can do anything truly creative. They may be able to assist us in creative endeavours — for instance, by helping us quickly visualize designs or look up facts, which is extremely valuable — but that doesn’t translate into a computer replacing a creative person in their job. I know that some people believe journalists, doctors, and lawyers can eventually be replaced by AI, but in my view that is not even remotely possible. The nuances of communication that these jobs require are far beyond the capabilities of AI.

In addition to job loss, many people are concerned about algorithmic bias — whereby AI algorithms cause harm to under-represented groups in society. Can you touch on that, along with your idea of ‘inclusive intelligence’?

At UC Berkeley, we are trying to build technology that is inclusive in the sense that it is inherently thoughtful about people who may be vulnerable or disadvantaged. For instance, AI is currently being considered for things like mortgage decisions and healthcare diagnostics. In both cases, there are vulnerable populations involved, so it is very important to be aware of the potential for inherent bias in the data used to train AI systems. These systems can easily be misused, because they are often treated as a ‘black box’ that generates outcomes without explanations. If you just blindly follow these systems, you can end up with very biased and unfair outcomes that can have severe consequences.

Another thing you’ve said is that ‘complementarity can also enhance diversity’. Please explain.

There is a technique in AI known as a ‘random forest’, which was developed at UC Berkeley in 2001. Random forests are still widely used and are one of the leading methods for AI and machine learning. In essence, they extend the concept of a decision tree to classify data, but instead of one tree, the idea is to generate many trees. The developers proved that a random forest is always superior to a single decision tree, as long as the trees are sufficiently diverse. This is formal proof of something that we are also seeing evidence of in the realm of human interactions: That diverse teams perform better than homogeneous teams.

The rationale in that arena is that homogeneous teams — even if they are made up of the smartest people in the world — have similar backgrounds and experiences, and as a result, an ‘echo chamber’ is created: Group members don’t question their assumptions, which often leads to conclusions, designs, or directions that are not as creative or innovative as they could be. As with random forests, diversity in groups is extremely valuable. It is a fundamental property of both collaboration and complementarity.
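
To make the tree-versus-forest comparison concrete, here is a minimal sketch in Python using scikit-learn. It is an illustration added to this article, not code from Goldberg's lab; the dataset is synthetic and exact numbers will vary, but a sufficiently diverse ensemble of trees typically outperforms any single tree.

```python
# Minimal sketch: a single decision tree versus a random forest on a
# synthetic classification task (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data stands in for any tabular classification problem.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One tree: a single set of split decisions.
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Many trees: each is trained on a bootstrap sample and considers a random
# subset of features at each split, which keeps the ensemble diverse.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print(f"single decision tree accuracy: {tree.score(X_test, y_test):.3f}")
print(f"random forest accuracy:        {forest.score(X_test, y_test):.3f}")
```

The diversity here comes from randomness in the data each tree sees and in the features it may split on; averaging many imperfect but decorrelated trees is the formal counterpart of the diverse-team argument above.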

For more than 30 years, you have been working on a very particular problem. Please describe it.

For many years now, my students and I have been studying the problem of robot grasping. This is an extremely difficult problem — even though grasping an object is trivially easy for humans. You may be holding a pen or a cell phone in your hand right now. We do these things effortlessly. We pick things up, we can hold multiple items at once, we can twirl a pen in our fingers. People don’t appreciate how difficult this is for robots.

Robots are still incredibly klutzy, and the fundamental problem here is uncertainty. This uncertainty arises from a variety of sources, but in particular, it is due to errors in sensing, errors in control, and uncertainty in the physics of grasping itself. All of this combines to make contact between a robot and an object in the environment very difficult to predict exactly. Even minuscule errors can make all the difference between a robot holding something securely and dropping it.
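
As a rough illustration of why those minuscule errors matter, here is a toy Monte Carlo sketch in Python. It is not the lab's model; the grasp-success rule, tolerances, and noise levels are all invented for illustration. The point is how quickly the success rate of a nominally perfect grasp collapses as sensing and control noise grow.

```python
# Toy illustration: how sensing and control noise erode grasp reliability.
# The success rule and all numbers below are invented for this sketch.
import random

POSITION_TOLERANCE_MM = 2.0  # assumed: grasp fails if the gripper is off-centre by more
ANGLE_TOLERANCE_DEG = 5.0    # assumed: or misaligned by more than this angle

def grasp_succeeds(sensing_noise_mm: float, control_noise_deg: float) -> bool:
    """One simulated attempt: perturb a nominally perfect grasp with Gaussian
    sensing and control errors and check whether it stays within tolerance."""
    position_error = abs(random.gauss(0.0, sensing_noise_mm))
    angle_error = abs(random.gauss(0.0, control_noise_deg))
    return position_error < POSITION_TOLERANCE_MM and angle_error < ANGLE_TOLERANCE_DEG

def success_rate(sensing_noise_mm: float, control_noise_deg: float,
                 trials: int = 100_000) -> float:
    wins = sum(grasp_succeeds(sensing_noise_mm, control_noise_deg) for _ in range(trials))
    return wins / trials

# Even modest noise sharply lowers the fraction of attempts that hold the object.
for noise in (0.5, 1.0, 2.0):
    print(f"noise std {noise} mm / {noise} deg -> success rate {success_rate(noise, noise):.1%}")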


Anyone whose work involves human interaction, creativity, or dexterity is safe.



We have been trying to develop methods that improve robot dexterity, and while we have made some progress in recent years thanks to advances in Deep Learning, don’t expect a dexterous robot in the next five or 10 years.

Broadly speaking, what are machines really good at right now?

Machines are very good at identifying patterns and sifting through vast amounts of data. They’re also really good at doing computations correctly — far better than humans. And they are very systematic and very good at vigilance — which means you can have a system with a camera that just watches a door continuously, 24/7. And obviously, machines are far stronger than humans.

As a result of these strengths, it’s not surprising that we have machines that surpass humans in certain respects. But humans are far superior in terms of the general intelligence that allows us to reason in complex, novel situations and the dexterity that allows us to manipulate objects we haven’t encountered before. I am convinced that this will remain the case for quite some time into the future — for at least 20 and maybe 50 to 100 years.

Going forward, do you think most people will embrace AI and machine learning?

I do think we’re going to see real advances and benefits in many industries, from healthcare to transportation. In terms of being able to filter data, we are already enjoying the benefits of very fast algorithms for performing search. Most people don’t realize that this is a form of artificial intelligence. We’ve also readily embraced things like Waze and Google Maps, which can route traffic extremely efficiently because they have access to so much data. Those are just two examples of tools that we are already using every day.

I do not think we’re going to see a self-driving taxi in our lifetime, because it is extremely difficult to solve the engineering challenges involved in driving a car in an urban environment. I do think we’ll get better and better tools for driving on highways, which will be very valuable. But we’re not going to suddenly replace human drivers.

There has been a huge increase in e-commerce and online shopping, and there are many advantages to that. For one, people in rural areas can now access a huge selection of products at reasonable prices. The challenge is how to manage the delivery of all those orders, and as indicated, we’re working on developing robots that can assist humans in warehouses by grasping items and packaging them for delivery. Again, this isn’t going to wipe out all the warehouse jobs; in fact, I think we’re going to have a shortage of human workers.

Overall, I’m not worried about robots or AI as a threat to humans. Anyone whose work involves providing human interaction, being creative, or doing anything that requires dexterity is safe.   


Ken Goldberg is the William S. Floyd Jr. Distinguished Chair in Engineering at UC Berkeley and holds secondary appointments in Electrical Engineering and Computer Science, Art Practice, and the School of Information. He also holds an appointment in UC San Francisco’s Department of Radiation Oncology and is CEO of Ambidextrous Robotics.

This article appeared in the Winter 2020 issue. Published by the University of Toronto’s Rotman School of Management, Rotman Management explores themes of interest to leaders, innovators and entrepreneurs.
