
Charlotte Trueman
Senior Writer

Q&A: Why does Google care so much about hiring diverse AI teams?

feature
Aug 10, 2023 | 6 mins
Artificial Intelligence | Diversity and Inclusion | Generative AI

As generative AI becomes entrenched in our everyday lives, concerns about the potential consequences of biased models on minority groups need to be addressed, says Google Cloud executive Helen Kelisky.


While concerns that advances in generative AI could pose an existential threat to the human race have been grabbing headlines of late, there is a more immediate and arguably more real concern: discrimination.

Major players in the AI market have affirmed their commitment to diversity and inclusion in the workplace, but with women and people of color still underrepresented in the technology industry, there is a fear that the training AI models receive will be inherently biased.

It’s a concern shared by industry professionals and political bodies alike. In June of this year, European Commissioner for Competition Margrethe Vestager argued that AI-fueled discrimination poses a greater risk to society than the prospect of human extinction.

Elsewhere, the UK’s Equality and Human Rights Commission (EHRC) has warned that the country’s current proposals for regulating AI are inadequate to protect human rights, noting that while responsible and ethical use of AI can bring many benefits, “we recognize that with increased use of AI comes an increased risk of existing discrimination being exacerbated by algorithmic biases.”

Helen Kelisky, the managing director of Google Cloud UK and Ireland, believes that attracting and retaining a diverse workforce is the key to addressing this challenge. Teams made up of talent from different backgrounds and perspectives, she argues, are vital to training these systems in ways that safeguard models against problems such as replicating social biases.

Computerworld talked to Kelisky about the importance of having diverse AI teams. The following are excerpts from the interview.

Why is it so important for AI companies to ensure they have a diverse workforce — particularly when it comes to their technical teams? 

Helen Kelisky.

As optimistic as I am about the potential of AI, we have to recognize that it must be developed responsibly. If AI technologies are to be truly successful, they cannot leave certain groups behind or perpetuate any existing biases. However, an AI system can only be as good as the data it is trained on, and with humans controlling the data and criteria behind every AI-enhanced solution, more diverse human input means better results.

The outputs of any AI system are shaped by the demographic makeup of its creators, and are therefore subject to the unintentional biases that team might have. If an AI tool is only able to recognize one accent, tone, or language, the number of people able to benefit from that tool is significantly reduced.

For example, if a technical team is made up predominantly of white men, facial recognition systems could inadvertently be trained to recognize this demographic more easily than anyone else.
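
The mechanism is easy to demonstrate. The sketch below is purely illustrative, using synthetic data rather than any real recognition system: a simple scikit-learn classifier is trained on data in which one group supplies 95% of the examples, and its accuracy on the underrepresented group suffers even though both groups are equally learnable.

    # Toy illustration of training-data imbalance -- synthetic data only,
    # not a real facial-recognition system. Each group's class signal sits
    # on a different feature axis; the model mostly learns the majority's.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, axis):
        """Synthetic 8-dim 'embeddings': class 1 is shifted along one axis."""
        X = rng.normal(size=(n, 8))
        y = rng.integers(0, 2, size=n)
        X[:, axis] += 2.0 * y
        return X, y

    # Group A supplies 95% of the training data, group B only 5%
    Xa, ya = make_group(950, axis=0)
    Xb, yb = make_group(50, axis=1)
    model = LogisticRegression(max_iter=1000).fit(
        np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

    # Balanced held-out sets expose the per-group accuracy gap
    for name, axis in [("A", 0), ("B", 1)]:
        Xt, yt = make_group(2000, axis=axis)
        print(f"group {name} accuracy: {model.score(Xt, yt):.2f}")

More representative training data narrows the gap, but someone on the team has to notice the gap in the first place, which is Kelisky’s point about who is in the room.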

What are the consequences of not having diverse teams?

Strong representation means stronger products. AI algorithms and data sets have the power to reflect and reinforce unfair biases related to characteristics including race, ethnicity, gender, ability, and more. Unfortunately, we’re already seeing this reinforcement happen in the real world, with some image recognition software identifying photographs of Asian people as blinking, and one study reporting that Black people encounter almost twice as many errors as white people when using Automated Speech Recognition (ASR) technologies in the US.

An AI tool that cannot recognize the face, accent, or language of demographic groups that have traditionally faced discrimination only adds to that discrimination. It can heighten barriers to diversity, equity, and inclusion across the countless areas where AI should be a force for good, such as recruitment, healthcare provision, and security.
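
Disparities like the ASR error gap Kelisky cites are straightforward to measure. A minimal audit sketch, assuming the open-source jiwer library and a hypothetical, hand-labeled set of transcripts tagged by speaker group, computes word error rate (WER) per group:

    # Per-group ASR audit sketch -- the transcripts below are hypothetical
    # placeholders; a real audit would use a corpus with speaker metadata.
    from jiwer import wer  # pip install jiwer

    # (reference transcript, ASR output, speaker group)
    samples = [
        ("turn off the kitchen lights",  "turn off the kitchen lights",   "group_1"),
        ("set a timer for ten minutes",  "set a time for ten minutes",    "group_1"),
        ("call my sister after lunch",   "fall my sister after launch",   "group_2"),
        ("what is the weather tomorrow", "what is the weather to borrow", "group_2"),
    ]

    by_group: dict[str, tuple[list[str], list[str]]] = {}
    for ref, hyp, group in samples:
        refs, hyps = by_group.setdefault(group, ([], []))
        refs.append(ref)
        hyps.append(hyp)

    for group, (refs, hyps) in sorted(by_group.items()):
        # jiwer aggregates the error rate over all utterances in the lists
        print(f"{group}: WER = {wer(refs, hyps):.0%}")

If one group’s error rate runs consistently higher, the bias shows up as a number before any user has to report it.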

Are AI vendors taking this into consideration when building their teams?

At Google Cloud, this drive for diversity applies to our approach to AI development, and as part of our AI principles, we seek to avoid creating or reinforcing unfair bias. One way we are delivering on this is the AI Principles Ethics Fellowship, through which we have trained a diverse set of employees from across 17 global offices in responsible AI.

Additionally, we created an updated version of the program tailored to managers and leaders, embedding Google’s AI principles across 10 product areas, including Cloud.

We also have a number of career development and promotion programs in place and achieved our racial equity commitment goal of increasing leadership representation of Black, Latino, and Native American Googlers by 30%. We are also proud to have the highest-ever representation of women in tech, non-tech, and leadership roles globally.

Discrimination is obviously a complex issue. How can AI vendors work to mitigate it? Are diversity, equity, and inclusion (DE&I) initiatives the solution?

Mitigating discrimination is a cause that touches every level of an organization, but it starts with hiring: prioritizing ways to actively tackle unconscious bias in recruitment in order to attract diverse talent. It’s easy to hire in your own image, but there’s nothing more dangerous than a homogenous leadership team, or indeed a homogenous AI development team.

Of course, the work doesn’t stop at the point of entry. Fighting against discrimination is an ongoing battle, and must be a focus for all employees, all of the time. At Google, we ensure our managers are educated and knowledgeable about diversity so they can better support every member of their teams. Whilst increased education and representation can’t guarantee the complete removal of discrimination, it’s a good place to start. 

The skills gap in the technology sector is continuing to grow. Is enough being done to encourage diverse candidates to enter the industry?

An underlying factor contributing to the skills gap is the lack of access underrepresented groups have to careers in tech. According to the Alan Turing Institute, only 22% of data and AI professionals in the UK are women. With more people using AI every day, plugging the skills gap and diversifying new talent is a vitally important issue, and one the industry needs to do more to solve.

The sector can also improve the talent pipeline through better collaboration. In May 2022, we launched Project Katalyst in collaboration with our partner Generation, reaching out to underrepresented groups in the UK who want to gain experience and improve their technical skills. As part of the project, we train cohorts of talented young people and are then able to offer them job opportunities through our partners and customers.

For some years now, research has shown that the more diverse a workforce, a leadership team, or a company board is, the better the decision-making, and the better the financial performance that follows. The same applies to AI models: the more diverse the input, the more relevant the output.

Charlotte Trueman
Senior Writer

Charlotte Trueman is a staff writer at Computerworld. She joined IDG in 2016 after graduating with a degree in English and American Literature from the University of Kent. Trueman covers collaboration, focusing on videoconferencing, productivity software, future of work and issues around diversity and inclusion in the tech sector.
