Baked-In Bias: The Need for Ethical AI

Published on Feb 1, 2024

Written by Layla O'Kane

It’s been just over a year since ChatGPT was launched, a year with substantial excitement and innovation as well as concern and regulation. Governments, companies, and education providers have spent the year scrambling to figure out how to successfully integrate new AI technologies into their policies and processes while AI capability continues to advance.

Recent research by the Organisation for Economic Co-operation and Development (OECD) shows societal ambivalence towards AI. On average across OECD countries, 35% of adults reported worrying that AI would “mostly harm” over the next two decades, whereas 42% believed it would “mostly help”. These attitudes vary by demographics: women, those without more than a high-school degree, and those who have faced discrimination based on factors such as the color of their skin or their national or ethnic group were more likely to express concern that AI would have a harmful long-term impact.

The fact that those who have faced discrimination express more concern about AI being overall harmful is rational: biases in AI have serious real-world consequences, including in criminal sentencing, hiring decisions, and bank lending. Despite these risks being well-documented and widely known, there is still a substantial skew in the way AI is talked about. In December 2023, the New York Times published an article titled “Who’s Who Behind the Dawn of the Modern Artificial Intelligence Movement.” Of the twelve people profiled, all are men and almost all are white. Why aren’t the godmothers of AI, like Fei-Fei Li, also included?

Perhaps one of the reasons is that, for all the hand-wringing, few employers are actually asking the creators and developers of AI to have ethical AI skills. Recent research published jointly by my team at Lightcast and the OECD looked at job postings for AI workers in 14 OECD countries. Within those postings, we looked for mentions of ethical AI, including keywords such as “AI ethics”, “responsible AI”, “ethical AI”, and others across the languages used in these countries. A tiny fraction (less than 2%) of AI job postings in every country studied requested these skills. In the United States, where new generative AI technologies are developing rapidly, the proportion is only 0.5%.
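To make that measurement concrete, here is a minimal sketch of how such a share could be computed, assuming each posting is available as plain text. The keyword list, helper names, and sample postings below are hypothetical illustrations, not the actual Lightcast/OECD keyword set or pipeline.

```python
# Illustrative sketch only: flag job postings that mention ethical-AI keywords
# and compute the share that request them. Keywords and postings are examples.

ETHICAL_AI_KEYWORDS = [
    "ai ethics",
    "responsible ai",
    "ethical ai",
    "trustworthy ai",  # assumed extra keyword for illustration
]

def mentions_ethical_ai(posting_text: str) -> bool:
    """Return True if the posting mentions any ethical-AI keyword (case-insensitive)."""
    text = posting_text.lower()
    return any(keyword in text for keyword in ETHICAL_AI_KEYWORDS)

def share_requesting_ethical_ai(postings: list[str]) -> float:
    """Fraction of AI job postings that request ethical-AI skills."""
    if not postings:
        return 0.0
    flagged = sum(mentions_ethical_ai(p) for p in postings)
    return flagged / len(postings)

if __name__ == "__main__":
    sample_postings = [
        "Machine learning engineer: build and deploy LLM-powered products.",
        "AI researcher with a focus on responsible AI and model fairness.",
        "Data scientist to train recommendation models at scale.",
    ]
    print(f"Share requesting ethical-AI skills: "
          f"{share_requesting_ethical_ai(sample_postings):.1%}")
```

In practice, a study across 14 countries would also need translated keyword lists and deduplication of postings, but the core calculation is this simple proportion.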

There is also good news for ethics in AI. The Biden administration issued an executive order in October 2023 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. As part of this strategy, federal agencies are encouraged to hire Chief AI Officers to oversee the use of AI technologies. While there are currently very few postings for Chief AI Officer jobs across the public and private sectors, the skills they call for are encouraging: almost all contain at least one mention of ethical considerations in AI.

It is also worth emphasizing that there are many ways AI, and specifically generative AI, can be used to further innovation for the greater good. Generative AI can help close achievement gaps for low-income students by providing low-cost tutoring support. Earlier this month, Reshma Saujani, the founder of Girls Who Code, announced a new AI tool that helps working parents and caregivers access paid leave benefits in New York.

AI and impact work are not inherently at odds with each other. But without requiring a baseline understanding of ethical AI in the workers who support AI creation and development, there is a higher likelihood of baked-in bias. Consideration of ethics in AI should start at the data collection and development stage and be embedded at every level of an organization, rather than being taken up only through post-development regulation or mandates.