
An Equity Lens on Artificial Intelligence

By Carmina Ravanera and Sarah Kaplan

Business leaders, policymakers and researchers must work together to prevent the reinforcement of inequality through technology.


Artificial Intelligence (AI) describes machines that can simulate some forms of human intelligence. Some conceptualizations of AI refer to machines that act indistinguishably from humans, while others focus more on ‘machine learning’: systems that identify patterns, find optimized solutions to given problems, and/or make predictions and decisions based on prior information.

To achieve these goals, AI uses algorithms that ‘learn’ from large data sets and adjust and improve over time based on new data. While not a new concept, AI is increasingly embedded in people’s lives and will only become more pervasive.

Organizations across many sectors use AI for a variety of purposes: hiring employees, performing surgeries, tutoring school subjects, making decisions about criminal sentencing, making lending decisions, automating driving, and predicting where crime will occur, to name a few. AI is also used to make recommendations for what people watch on television or the music they listen to; to select which advertisements to show users on social media; and to display results on online search engines.

It would be difficult to find a field today where AI is not involved in some respect. It has become so ubiquitous that some researchers have suggested it is a new type of infrastructure. Rather than being physical and visible like roads, AI is often invisible, but it is nevertheless a moderator of social relations and organizational practices and actions — including the distribution of power.

Social relations and values have long been reflected and reproduced in technology, and AI is no exception. But this also means that the enduring bias, discrimination and inequality that are deeply rooted in society may also be deeply rooted in this technology.

In 2020 and 2021, the economic impact of the COVID-19 pandemic was felt most acutely by groups who were already marginalized, particularly women, racialized communities and those experiencing low income. Researchers and policy analysts have suggested that recovery policies must be especially attuned to these groups to prevent rising inequality.

Understanding the impacts of AI on the economy and society in Canada — especially in the context of the economic downturn caused by the COVID-19 pandemic — means understanding its impacts on marginalized groups. AI can potentially be used innovatively to generate outcomes that benefit diverse communities. However, research has also shown that a focus on equitable AI for organizations and policymakers is necessary to mitigate harm.

 


The Potential of AI

AI has the potential to improve outcomes for people across all sectors. Ideally, it reduces the impact of human error by making accurate predictions and assisting humans with decision-making. For example, in workplaces, AI used in hiring could reduce human bias in finding the best candidate for a position; in healthcare, it can help diagnose diseases and identify treatments; in financial institutions, it can predict the likelihood of people defaulting on mortgages.

The predictive power of AI is significant, considering that humans’ predictions and decisions are clouded by cognitive and other biases. People often do not fully understand why they make certain predictions, and their intuition can be shaped by their prior experiences or opinions. As researchers have noted, statistical prediction techniques of the kind AI uses tend to outperform prediction by humans with expertise and experience. Human prediction and decision-making are also often opaque: it is difficult to understand and probe the various factors that influence people, and human decision-making is hard to audit. To the extent that algorithms can be audited and changed, AI could be a tool for mitigating discrimination, bias, and other forms of marginalization.

For example, an algorithmic tool used by Allegheny County’s Office of Children, Youth and Families in Pennsylvania aims to predict children’s risk of harm more quickly and accurately than call screeners can, thus better directing resources to high-risk cases. As reported by The New York Times, the tool has increased the rate at which high-risk calls are addressed and reduced the percentage of low-risk cases being needlessly investigated.

But as with all technologies, societal inequality can be and is replicated in AI, and mitigating these impacts can be challenging. For example, the Allegheny County risk assessment tool has been critiqued for disproportionately impacting poor families: the algorithm uses poverty as an indicator of high risk for neglect and abuse, when this is in fact an unfair assumption.

Power relations and inequality embedded in society shape the data that are inputs to algorithms, the algorithms themselves, and the way algorithms are used. This means that the transformative potential of AI comes with significant risks and challenges, many of which researchers and advocates are currently working to address.

 

AI and Inequity

Following are some examples of the ways in which AI systems can reproduce existing biases and marginalization.

 

BIASES AND GAPS IN DATA. Because bias and inequality exist across all levels of society, it follows that the data on which some AI is built contain such biases, which AI may then reproduce. Attention to the reproduction of gender, racial or other forms of discrimination through AI is not new, yet it remains a persistent challenge. In 2015, Amazon developed a now-defunct AI recruiting system that was found to have eliminated some women from candidacy, based on previous hiring patterns in which men dominated. The same issues have occurred with racial gaps in data: in healthcare, an AI system used for detecting cancerous skin lesions was trained on a database containing mostly light-skinned populations, rendering it less likely to screen accurately for those with darker skin.
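To make this mechanism concrete, here is a minimal sketch in Python using entirely synthetic data. It is an illustrative toy, not a reconstruction of Amazon’s recruiting system or any real product: a model trained on historical hiring decisions that penalized women learns that penalty and would apply it to new applicants.

```python
# Illustrative sketch only: synthetic data, hypothetical features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Two applicant features: a skill score and a binary gender flag (1 = woman).
skill = rng.normal(size=n)
is_woman = rng.integers(0, 2, size=n)

# Historical labels: past recruiters hired largely on skill but systematically
# penalized women. The bias lives in the labels, not in the skill scores.
hired = (skill - 1.0 * is_woman + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, is_woman]), hired)

# The learned coefficient on the gender flag is strongly negative: the model
# has absorbed the historical penalty and will reproduce it for new applicants.
print("coefficient on 'is_woman':", round(float(model.coef_[0][1]), 2))
```

The point of the toy is that nothing in the code is malicious: the bias enters entirely through the training labels, which is why examining and rebalancing data is often an early step in mitigation.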

Recently, researchers identified how AI facial recognition software from IBM, Microsoft and Face++ is less accurate for darker-skinned subjects and especially darker-skinned women, leading to a higher likelihood of their misclassification when compared to white men. Again, this came about because the data on which they were trained did not have diverse racial and gender representation. Depending on what purposes facial recognition software is used for, this error could reinforce the surveillance and mistaken identification of racialized people, and especially racialized women.

AI is also used by the public sector in areas such as policing. A recent study showed that several police jurisdictions in the U.S. are using racially-biased data for predictive policing systems. The data are biased because of historical over-policing of minority communities, and this bias in turn led to biased predictions about who will commit crimes and where they will be committed. This could thus reinforce the targeting of these minority communities. This type of algorithmic policing is also being developed or used by police forces across Canada as well as in airports, alongside surveillance technology that collects and monitors people’s data online or from images.

 

REINFORCEMENT OF STEREOTYPES AND MARGINALIZATION. Harms from AI come about not only through problematic datasets but also through the ways organizations have, often unintentionally, designed and used it to reinforce stereotypes, marginalization and the erasure of certain groups. For instance, AI-powered facial analysis software has been used to propagate the false idea that people with certain facial features are prone to criminality, and that the software can identify these people. This opens dangerous possibilities for racialized communities who are already stereotyped. Researchers have further pointed to how common portrayals of AI — such as stock images and other representations of robots and robotics — tend to be racialized as white, with Eurocentric appearances and voices. This reproduces conceptions of intelligence, professionalism and power as being associated with whiteness.

Another example: AI-powered digital assistants such as Amazon’s Alexa, Apple’s Siri and Microsoft’s Cortana are named and gendered as women. Researchers have discussed how the gendering of this technology reaffirms the gendered division of labour, in which women are placed in caregiving and service roles. These feminized digital assistants act as both assistants and companions, ensuring users’ well-being in a friendly and empathetic manner, further entrenching stereotypes that position women in subordinate roles.

In some cases, the reinforcement of stereotypes through AI is explicitly tied to profits. A recent independent audit of Facebook’s algorithmic delivery of job ads found that it perpetuates gendered job segregation based on current gender distributions in different job categories (e.g., a job ad for car sales associates was shown to more men than women, while the opposite was true for an ad for jewelry sales associates). In theory, developers could adjust the ad delivery algorithm to compensate for data biases, or could remove algorithmic delivery from job ads altogether. However, this would conflict with technology companies’ short-term profit motives, which are based on clicks on ads. Thus, addressing these biases will require leadership commitment to making change.

 

AI and Values

The questions are: Which social values should be written into machines? Who decides? How should it be done? And how can makers and users of AI be held accountable?

A lack of transparency for the public and a lack of accountability among those developing and implementing AI can pose troubling scenarios for equity. Compounding the problem, AI is often created by companies that are not representative of marginalized groups and do not have their needs in mind.

‘Teaching’ fairness and equity to algorithms is challenging because these are highly complex concepts that must be somehow articulated to a machine. How can algorithms be trained to adhere to social norms and values, many of which involve intricate structures such as law or culture? As researchers have asked, can we program AI such that taking certain wrongful actions would impose costs on it — just as humans avoid certain actions to avoid social penalties such as shame or guilt? Others have suggested that ‘supervising algorithms’ can act as ‘moral compasses’, monitoring for bias and changing algorithms accordingly. But which values should be prioritized, and in which cases?
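As a hedged illustration of what one such ‘supervising’ check might look like in practice, the sketch below flags a model whose positive decisions are distributed very unevenly across two groups. The metric (a demographic parity gap) and the tolerance value are illustrative choices, not a standard; as noted above, which values to encode, and when, remains an open question.

```python
# A minimal sketch of one possible 'supervising' check: after a model makes
# binary decisions, compare positive-decision rates across two groups.
# The 0.05 tolerance and the metric itself are illustrative, not a standard.
import numpy as np

def demographic_parity_gap(decisions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-decision rates between two groups."""
    return abs(decisions[group == 0].mean() - decisions[group == 1].mean())

def supervise(decisions: np.ndarray, group: np.ndarray, tolerance: float = 0.05) -> bool:
    """Flag the model for human review if the gap exceeds the tolerance."""
    gap = demographic_parity_gap(decisions, group)
    if gap > tolerance:
        print(f"Bias alert: decision-rate gap of {gap:.2f} exceeds {tolerance}")
        return False
    return True

# Toy usage: decisions skewed against group 1 trigger the alert.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1_000)
decisions = (rng.random(1_000) < np.where(group == 0, 0.5, 0.3)).astype(int)
supervise(decisions, group)
```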

There may also be trade-offs with accuracy when programming such values into AI. For instance, an algorithm predicting who will default on a loan would have to be explicitly programmed to reduce racial disproportionality, but this could result in less accurate predictions. Yet not doing so would reinforce inequity, considering the histories of disenfranchisement and oppression that have led to increased rates of poverty and financial insecurity for racialized communities.
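The trade-off can be made tangible with a small synthetic example. In the sketch below, approving loans with a single score threshold maximizes raw accuracy but approves one group far less often; choosing group-specific thresholds that roughly equalize approval rates costs a little accuracy. The data, thresholds and effect sizes are invented purely for illustration.

```python
# Illustrative sketch of a fairness/accuracy trade-off on synthetic loan data.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
group = rng.integers(0, 2, size=n)
# Historical disadvantage shows up as a lower average score for group 1.
score = rng.normal(loc=np.where(group == 0, 0.2, -0.2), scale=1.0)
repaid = (score + rng.normal(scale=0.8, size=n)) > 0   # ground truth

def accuracy(approve):
    return (approve == repaid).mean()

# Single threshold: higher accuracy, but group 1 is approved far less often.
approve_single = score > 0
# Group-specific thresholds chosen so approval rates are roughly equal.
approve_adjusted = np.where(group == 0, score > 0.2, score > -0.2)

for name, approve in [("single threshold", approve_single),
                      ("equalized approval", approve_adjusted)]:
    rates = [approve[group == g].mean() for g in (0, 1)]
    print(f"{name}: accuracy={accuracy(approve):.3f}, "
          f"approval rates={rates[0]:.2f}/{rates[1]:.2f}")
```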

This raises the question of whether and where AI use should be limited, as well as whether humans making the same decisions and predictions would be more or less likely to perpetuate bias. Engaging with these problems and connecting AI to social and historical contexts is essential as AI becomes ever more ubiquitous.

 

AI and Diversity

Increased representation of marginalized groups in AI development could lead to more equitable outcomes. While there are few studies specifically on whether diverse teams create more equitable products, across a variety of sectors there are examples of teams led by women, racialized peoples and other marginalized groups that have created purposely inclusive products and services.

For instance, Fenty Beauty, a cosmetics company founded by Barbadian pop star Rihanna, creates makeup shades for those with darker skin tones who have often been excluded from cosmetic lines; and AccessNow, an app created by Maayan Ziv, a founder with a disability, indicates to users the accessibility of different locations in a central information source. In the case of AI, products created and tested by a homogeneous group logically may not take others’ needs or perspectives into account. If a racially diverse team had been working on the facial recognition software described above, one can imagine it would have been more likely to notice the potential for race-based misclassification.

Around the world, women and especially racialized women continue to be under-represented in computer science and computer engineering fields. Globally, only 22 per cent of AI professionals are women. In Canada — despite a relatively high concentration of AI professionals compared to other countries — just 24 per cent are women. Just 15 per cent of AI research staff at Facebook are women, and only 10 per cent at Google. Further, only 2.5 per cent of Google’s full-time workers are Black, as are four per cent of Microsoft’s.

In December 2020, Dr. Timnit Gebru, who was a leading AI research scientist at Google, made media headlines when she was fired over a paper she wrote on the risks and harms of language models (i.e., AI trained on text data). Some have argued that her firing revealed abusive tactics, including gaslighting, dismissal and discrediting — tactics that are commonly used against Black women who aim to advance justice, not only in technology but across society.

 


Addressing Biased Data

AI functions by ‘learning’ from data sets: algorithms are created to mine data, analyze it, identify patterns and make predictions. Datasets may come from any number of sources, including books, photos, health data, government agency data or Facebook profiles. Societal biases and inequality are often embedded in such data, and AI will not promote social values such as fairness unless it is directly programmed to do so. Thus, if an AI hiring system is based on previous hiring data in which few women were hired, the algorithm will perpetuate this pattern.

Data may also be biased due to omissions. Datasets may leave out entire populations who do not have internet histories or social media presence, credit card histories, or electronic health records, leading to skewed results. Those omitted are often racialized communities, people with low socioeconomic status, and others on the margins.

Ensuring fairness in the data used for AI is a complex problem, considering how inequality and inequity influence people’s lives in complex and overlapping ways. Simply removing variables such as gender and race does not prevent discrimination by algorithms, because proxy variables may end up creating the same impacts.

Recently, Apple’s credit card was in the news because its algorithm appeared to give lower credit limits to women than to men, even in cases where a woman and a man were married and shared assets. Initially, Apple and its banking partners said the results could not be biased because gender was not a variable in the algorithm and the credit scoring was ‘gender blind’. Ultimately, while an investigation into the Apple Card concluded that it did not discriminate against women, experts noted that a gender-blind algorithm would not prevent gender discrimination from happening inadvertently.
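A minimal synthetic sketch can show why ‘blindness’ to a protected attribute is not enough: even with gender removed from the model, a correlated proxy feature (here, a hypothetical occupation code) carries the same signal, and outcomes still differ by gender. Variables and effect sizes are invented for illustration and do not represent the Apple Card or any real lender.

```python
# Illustrative sketch of the proxy-variable problem: gender is excluded from
# the model, but a correlated feature carries the same signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5_000
is_woman = rng.integers(0, 2, size=n)

# A proxy feature strongly correlated with gender (e.g., occupational segregation).
proxy = is_woman + rng.normal(scale=0.3, size=n)
income = rng.normal(size=n)

# Historical approvals were biased against women.
approved = (income - 0.8 * is_woman + rng.normal(scale=0.5, size=n)) > 0

# 'Gender-blind' model: gender is excluded, but the proxy is included.
X = np.column_stack([income, proxy])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

print("approval rate, men:  ", round(pred[is_woman == 0].mean(), 2))
print("approval rate, women:", round(pred[is_woman == 1].mean(), 2))
# The gap persists because the model reconstructs gender from the proxy.
```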

 

Ensuring Accountability

Research suggests that, in general, there is a lack of accountability to people who are harmed by AI systems. That is, the scope of AI’s impacts, as well as who is responsible for creating and mitigating them, is often unclear. This suggests the need for more assessments and audits of what AI-driven products and services mean for people, including evaluations of how fair they are.

The first challenge for accountability is transparency. There is often a lack of transparency around AI systems’ purposes, their algorithms, and the data they use. This is sometimes called the ‘black box’ problem, where the inscrutability of these systems can prevent public understanding of risks and impacts. If people are not aware of how algorithms are being used on them, it is not possible to question or change their predictions and decisions. Even when AI is used by the public sector for processes as wide-ranging as surveillance and immigration decisions, the public may not have knowledge or ownership of it. As such, some researchers have proposed complete transparency of AI, where algorithms and/or data as well as the results they are aiming to achieve are made available for public scrutiny. Helping the public understand how algorithms and AI are influencing their lives can be a step towards mitigating potentially harmful outcomes.

At the same time, there are debates around how transparent AI systems can feasibly be. Some researchers suggest that requiring such transparency would stifle companies’ ability to innovate because intellectual property would not be protected. Further, since algorithmic code is generally inscrutable to the average person, transparency may not necessarily increase people’s trust in AI or decrease its harms. Software is also proprietary, and transparency may not be possible for security, safety or legal reasons.

Finally, governments or companies using algorithms may not want to share them for fear that people will figure out how to circumvent them or manipulate outcomes. Thus, transparency to benefit the public will need to be balanced with the benefits of intellectual property and innovation.

The second challenge for accountability is a lack of appropriate governance structures. Researchers are currently working on designing governance structures and auditing procedures that can be put in place within technology companies to explicitly evaluate an AI system in terms of social benefits and values. Even though many companies may already conduct audits of their AI, these audits are unregulated and unstandardized, making it hard for users to be confident that the results are actually used to improve algorithms.

Further, third-party researchers conducting audits tend to face many challenges: companies such as Google and Facebook create barriers to outside audits by prohibiting the creation of fake profiles for research purposes, and they often do not make necessary data available. Providing such access involves balancing privacy and auditability. Third-party auditing is also costly and involves substantial time and effort.

Some researchers have proposed regulatory mechanisms to ensure that companies and governments are held accountable for unfair and unjust impacts. If a law or policy were in place such that those who create and own algorithms were held directly responsible for their outcomes, this might help ensure that AI is developed with ex ante consideration of its social impacts, rather than through ex post efforts to address harms after they occur.

Following are three areas where progress can be made towards ensuring equitable AI.

 

REGULATION AND POLICY. It is widely recognized that governments have some catching up to do to ensure AI is developed and used in a way that reduces harm for marginalized groups. To establish greater accountability, new policies or laws could ensure that it is clear who created, owns and controls AI, thus attributing responsibility where there currently is little. Others have suggested that audits and impact assessments of AI should be mandatory and undertaken before and during AI implementation. Further, although there are debates around transparency, standard processes of ‘explainability’ can still be put in place so that organizations provide justifications for decisions made about and by AI (including its purpose, design, and datasets) and disclose risks, such as through public records and published reports.

Some advocates have recommended a comprehensive global framework that would broadly govern AI use, similar to the United Nations’ universal human rights frameworks. In the U.S., the Algorithmic Accountability Act was introduced in 2019, proposing that large companies must evaluate their algorithms for bias and the risks they pose to users. In 2021, the EU released a proposal for an Artificial Intelligence Act, the first-ever legal framework on AI.

Canada is in the process of developing its own policies and frameworks. Following a $125 million investment in a Pan-Canadian Artificial Intelligence Strategy in 2017, the federal government developed a Directive on Automated Decision-Making and a public Algorithmic Impact Assessment. It further created an Advisory Council on Artificial Intelligence in 2019, although this council has been critiqued for a lack of representation of racialized and other marginalized groups. The Office of the Privacy Commissioner of Canada has also recently made recommendations for updating the Personal Information Protection and Electronic Documents Act to better regulate AI, and in Ontario, at the time of writing, public consultations are underway to create a provincial Trustworthy Artificial Intelligence Framework. The effects of these policy efforts on fairness, transparency and accountability are yet to be seen.

 

INDUSTRY STANDARDS. Beyond regulation, advocates are working towards AI that prioritizes social considerations from the outset, rather than discovering and addressing problems after the fact. Indeed, AI can be developed to align with the goals of reducing systemic inequality and inequity, but as mentioned earlier, it is not an easy task to program into AI the complex norms and values that humans understand when making predictions and decisions.

Another question arises from this challenge: if such AI has not yet been robustly developed, under what circumstances should AI not be used, and what are the best alternatives? This, too, is a moral question involving trade-offs and values. Purposely aligning AI with social values means organizations may have to prioritize equity and other social considerations over profit or efficiency. Such a shift may require significant time and money, such as the costs of conducting research on social impacts or the potential revenue losses from not implementing new AI for ethical reasons. Thus, there is a need for industry cooperation and collective action to establish standards, so that safe and responsible AI becomes accepted as a norm.

 

REPRESENTATION. The lack of representation of marginalized communities in the development of technology could be mitigated through more equitable hiring and promotion. There have been many studies on solutions for making workplaces more inclusive for women and racialized groups. These include offering more flexibility to workers who need to prioritize caregiving (usually women); transforming non-inclusive hiring and recruiting practices that favour certain candidates (such as young men or people who trained at elite schools); and working towards anti-racist and anti-sexist policies and culture. Schools teaching STEM can also usefully transform their cultures, as studies have shown that young women may be treated poorly by their male classmates in Engineering and made to feel that they don’t belong.

 

In Closing

While AI has the potential to better many lives, it can also lead to significant harms. This is an important moment for leaders, policymakers and researchers to prevent the reinforcement of inequality and inequity through technology. Solutions will involve a combination of regulation and policy, new research towards fairer AI, shifting norms around who develops and makes decisions about AI, and ensuring accountability towards those who are most impacted.

One thing is certain: without concerted efforts, systemic bias and discrimination will continue to be perpetuated through technology systems that are quickly becoming ubiquitous.


Carmina Ravanera is a Research Associate at the Institute for Gender and the Economy (GATE) at the Rotman School of Management. Sarah Kaplan is Founding Director of GATE, Distinguished Professor of Gender & the Economy and Professor of Strategic Management at the Rotman School. They are co-authors of “An Equity Lens on Artificial Intelligence”, the GATE report from which this article has been adapted. The complete report is available online.

