This article is published in the May 2024 issue.

Expanding the Pipeline: Will ChatGPT Expand Diversity in Computing? We don’t think so. Reasons for Concern and Paths Forward


By Lamia Youseff, Ph.D., with the help of Claude 3 Sonnet

Since the launch of ChatGPT, there has been immense public interest in how AI may disrupt industries, replace some human workers, and even create new jobs. Like any transformative technology, the recent rise of large language models (LLMs) and generative AI has sparked both hope and fear about the future. A key question is whether these AI advances will improve diversity and inclusion for underrepresented groups and persons with disabilities in the computing field.

While some are optimistic that AI could help level the playing field, several concerning factors suggest recent AI developments may widen participation gaps for women, minorities, and individuals with disabilities in computing unless proactive steps are taken.

Systemic Bias Baked Into the Data 

A major issue is that large language models like ChatGPT inherit societal biases from the data they are trained on. In a recent article titled “Gender Bias in Automated Decision Making Systems (ADS)” [https://www.acm.org/binaries/content/assets/public-policy/aigenderbiaspaper.pdf], the authors argue that “machine-learned ADS systems may discriminate against certain groups or individuals by reflecting or reinforcing human or society structural bias, or by even introducing new bias.” The report examines how biases in training data and algorithm design can lead to unfair, gender-biased outcomes, reviews several case studies, and presents recommendations for mitigating gender bias in AI, including improving data collection, increasing diversity in AI development teams, and implementing fairness metrics during model development.
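To make the report’s recommendation of “fairness metrics during model development” concrete, the sketch below computes two commonly used group-fairness metrics by hand. The metric definitions are standard, but the toy data and function names are illustrative placeholders, not drawn from the cited report.

```python
# Illustrative sketch: two common group-fairness metrics, assuming binary
# predictions (0/1) and a binary protected attribute. Toy data only.

def demographic_parity_diff(y_pred, group):
    """Absolute gap in positive-prediction rates between the two groups."""
    def rate(g):
        preds = [p for p, gr in zip(y_pred, group) if gr == g]
        return sum(preds) / len(preds)
    return abs(rate(0) - rate(1))

def equal_opportunity_diff(y_true, y_pred, group):
    """Absolute gap in true-positive rates (recall) between the two groups."""
    def tpr(g):
        preds = [p for t, p, gr in zip(y_true, y_pred, group)
                 if gr == g and t == 1]
        return sum(preds) / len(preds)
    return abs(tpr(0) - tpr(1))

# Toy example: group 0 receives positive predictions at rate 3/4,
# group 1 at rate 1/4.
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_diff(y_pred, group))           # 0.5
print(equal_opportunity_diff(y_true, y_pred, group))    # 1/6 ≈ 0.167
```

In practice, teams track metrics like these across protected groups throughout training and release, rather than computing them once; libraries such as Fairlearn package similar calculations.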

Widening Confidence Gaps 

Another risk is that widely available AI capabilities like ChatGPT could worsen the confidence gaps holding back women and minorities in computing fields. Numerous studies have shown that women and underrepresented minority students tend to underrate their own abilities compared to men from majority groups at the same skill level. The sudden emergence of highly capable AI assistants could exacerbate these effects if students come to doubt their skills further relative to AI.

Additionally, LLM systems like ChatGPT can give a user superficial, high-level coverage of almost any topic with a few prompts. Coupled with the recognized confidence gap among women and minority groups, and the “imposter syndrome” commonly reported within these communities, this risks creating a vicious cycle in which women and minorities feel less qualified, further widening the confidence gap in the computing field.

Disproportionate Job Displacement 

There are also concerns that, as AI automates certain computing tasks, the associated job displacement may disproportionately impact women and minorities first, due to existing inequalities in hiring, retention, and career progression. Vulnerable groups often have a less secure footing, making them more exposed to workforce shifts.

Paths Forward 

To realize AI’s potential for expanding diversity rather than hindering it, concerted efforts from industry, educators, and policymakers are needed. Some of these efforts are specific to AI; others continue activities already underway to increase the participation of women and underrepresented groups in computing. Specific efforts include:

1. Debiasing training data and models 

New technical approaches to detect, measure, and mitigate systemic biases in AI training data and models will be vital. Increased transparency around datasets and model guardrails is also key, whether for closed-source models such as GPT-4 and ChatGPT or open-source models such as Llama 3.
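As a small illustration of what “detecting and measuring” bias in training data can mean in practice, the sketch below audits a toy corpus for how often occupation words co-occur with gendered terms. The word lists and corpus are invented placeholders; real audits use far larger lexicons and more sophisticated methods, such as embedding association tests.

```python
# Hedged sketch: a crude first-pass audit of gendered-term / occupation
# co-occurrence in a text corpus. All word lists and sentences are toy
# placeholders for demonstration.
from collections import Counter

MALE = {"he", "him", "his", "man", "men"}
FEMALE = {"she", "her", "hers", "woman", "women"}
OCCUPATIONS = {"engineer", "nurse", "doctor", "programmer", "teacher"}

def cooccurrence_counts(sentences):
    """Count, per occupation word, sentences that also contain gendered terms."""
    counts = {occ: Counter() for occ in OCCUPATIONS}
    for s in sentences:
        tokens = set(s.lower().split())
        for occ in OCCUPATIONS & tokens:
            if tokens & MALE:
                counts[occ]["male"] += 1
            if tokens & FEMALE:
                counts[occ]["female"] += 1
    return counts

corpus = [
    "He is an engineer at the lab",
    "She works as a nurse",
    "The engineer said he would fix it",
    "She is a talented programmer",
]
for occ, c in sorted(cooccurrence_counts(corpus).items()):
    if c:
        print(occ, dict(c))
# engineer appears only with male terms; nurse and programmer only with female
```

Skewed counts like these in training corpora are one mechanism by which models come to associate occupations with genders, which is why dataset transparency matters.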

2. Developing inclusive AI governance and guardrails

Multi-stakeholder efforts to establish guidelines and governance around inclusive and ethical AI development, deployment, and impact monitoring will be crucial.

Some examples of other activities to promote women and underrepresented groups in computing in general, which are even more critical in the AI era include:

3. Promoting STEM participation and mentorship

Organizations like Women in Machine Learning (WiML, http://wimlds.org/) and AI4ALL (https://ai-4-all.org/), which provide support, mentorship, and education programs for groups underrepresented in AI, should be expanded.

4. Building confidence and skills 

Proactive steps such as workshops on public speaking, interviewing, and other confidence-building activities for women and minority students will help counter AI-driven self-doubt. Efforts such as the CRA-WP workshops (e.g., https://cra.org/cra-wp/grad-cohort-for-women/, https://cra.org/cra-wp/grad-cohort-ideals/, https://cra.org/cra-wp/mentoring-workshop/) have been instrumental in increasing diversity in computing, closing the pipeline leak into senior roles, and reducing the impact of the revolving door for senior women in leadership. It is recommended that mentors and department heads encourage and sponsor participation in such workshops.

5. Tracking representation metrics 

Consistent data collection and public reporting on the participation of women, minorities, and disabled individuals at all levels of AI education and careers is imperative to gauge progress.

6. Elevating role models and personal stories 

Highlighting diverse role models and sharing personal stories of overcoming challenges can motivate and inspire people to persist in AI and computing paths despite obstacles.

Rather than leaving diversity in computing to chance, deliberate actions toward inclusive AI development today can shape a more equitable future. Rapidly evolving AI capabilities present underrepresented groups with both risks and opportunities; seizing the latter will require sustained commitment and vigilance.


About the Author: 

Dr. Lamia Youseff is a tech executive and AI/ML expert with over two decades of experience in academia (MIT, Stanford, UCSB) and industry (Google, Microsoft, Facebook, Apple). A founding engineer of Google Cloud, she has led multi-billion-dollar AI/cloud businesses. Holder of several patents and author of 20+ papers, she advises startups in generative AI. Currently, she is Chief Executive at JazzComputing.com, a visiting researcher at Stanford, and an MIT research affiliate. Youseff teaches business management, AI strategy, and leadership at Stanford GSB, where she earned her Master’s in Management. She also holds a Ph.D. and M.Sc. in Computer Science.