Before the drama surrounding his dismissal and reappointment as CEO of OpenAI, Sam Altman said in an interview earlier this month: “We are heading toward the best world ever.” As a specialist in gender parity in journalism, I found myself wondering: whose world, exactly, was getting better?
It turns out that there aren’t many women in the world Altman’s team is creating. Amid the controversy over his termination, I conducted an analysis that produced some startling findings. Of the 702 (out of 750) employees who signed the letter calling for Altman’s reinstatement, for instance, more than 75% were men, a gender disparity consistent with what McKinsey’s The State of AI in 2022 report found across AI teams.
With Altman’s return, OpenAI’s newly formed board of directors is composed entirely of white men, a situation compounded by the predominance of men among its executives. Where are the voices of women AI leaders and experts in the coverage of this most dramatic of Silicon Valley tales?
I have long been concerned about how women will shape our AI-infused future and the news coverage of generative AI. Through data analysis and expert interviews, I have concluded that women are underrepresented at every level of the field, whether as developers, news editors, or AI experts.
Generative artificial intelligence (GAI) depends on processing large text, image, and video datasets, all of which have historically featured far more men than women.
This inherited male bias, reflected in the news and compounded by the structural disparities women face in modern society, produces a narrative about GAI’s risks, limitations, opportunities, and direction that is shaped primarily by men.
According to AKAS’s pronoun analysis of the GDELT Project’s global online news database, men have been quoted in stories about AI in English-speaking nations 3.7 times more frequently than women this year. The most recent Global Media Monitoring Project found that women were the focus of just 4% of news stories on science, technology, funding, discoveries, and developments.
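For readers curious how a pronoun analysis of this kind can work in practice, here is a minimal Python sketch of a pronoun tally over article text. The pronoun lists, the ratio calculation, and the pronoun_ratio function are illustrative assumptions for this column, not AKAS’s actual methodology.

```python
# A minimal sketch of a pronoun tally over news text, assuming plain-text
# articles as input. The pronoun sets and the ratio are illustrative
# assumptions, not AKAS's actual method.
import re
from collections import Counter

MALE_PRONOUNS = {"he", "him", "his"}
FEMALE_PRONOUNS = {"she", "her", "hers"}

def pronoun_ratio(articles: list[str]) -> float:
    """Return how many times more often male pronouns appear than female ones."""
    counts = Counter()
    for text in articles:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token in MALE_PRONOUNS:
                counts["male"] += 1
            elif token in FEMALE_PRONOUNS:
                counts["female"] += 1
    # Avoid division by zero when no female pronouns are found.
    return counts["male"] / max(counts["female"], 1)

if __name__ == "__main__":
    sample = ["He said the model was safe. She disagreed, and he dismissed her concerns."]
    print(f"male-to-female pronoun ratio: {pronoun_ratio(sample):.1f}")
```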
An AKAS evaluation in April found that just 18% of tech news editors in the US, and 23% in the UK, were women.
Although the long-term risks AI poses to humanity have drawn plenty of attention, what are the immediate risks of a world built around men’s perspectives? We must move quickly to reverse women’s harmful absence from the AI workforce and to better understand their needs, concerns, and experiences with AI.
Pew Research Center data from 2022 shows that US women are between 8% and 16% more concerned than men about a range of AI developments, from diagnosing illnesses to performing repetitive tasks.
Leslie McIntosh, vice-president of research integrity at Digital Science, puts it starkly: “You are not in the story if your perspective is not reported.” GAI constructs and projects our future from those historical texts. Women’s voices are disappearing: where there were once cracks, there are now wide gaps.
“Disparities in representation of race, gender, or different occupations [in generative AI models] are important, since if the media uses these kinds of models uncritically to illustrate stories, they could easily perpetuate biases embedded in the training data of the models,” says Nicholas Diakopoulos, a professor of communication studies at Northwestern University in Chicago.
Laura Ellis, head of technology forecasting at the BBC, says that “we simply don’t know what datasets have been used to train these models”, which makes it difficult to determine which pre-existing biases AI-generated content amplifies most. Yet these questions go largely unasked.
Ellis asks: “Where are the ‘godmothers’ of AI?”
Who, then, are the experts speaking most loudly about GAI’s development? Essentially a handful of white Western men, primarily from the United States.
What can be done to ensure that the concerns raised by women on the periphery of the AI industry, such as Tasha McCauley and Helen Toner, both recently removed from OpenAI’s board, do not go unanswered? The usefulness of “guardrails”, code intended to correct data biases, is hotly debated, but among the experts I spoke with there was broad agreement that AI alone cannot close the diversity gap.
Lars Damgaard Nielsen, CEO of Mediacatch.io, advocates using AI to monitor gender and ethnic bias in the media: “What gets measured, gets managed.”
He and other experts contend that using AI to gauge the proportion of women in the discourse would be a useful first step in addressing male bias, as the sketch below illustrates. Ultimately, though, it falls to us as humans to seek out the perspectives of all genders, groups, and cultures on one of the century’s most significant stories.
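To make “what gets measured, gets managed” concrete, a newsroom dashboard could track the share of women among quoted sources. The sketch below assumes quotes have already been extracted and gender-labelled; the QuotedSource record and female_share function are hypothetical illustrations, not Mediacatch.io’s actual product.

```python
# A minimal sketch of tracking women's share of quoted sources, assuming a
# newsroom already attributes each quote to a source with an inferred or
# self-reported gender. QuotedSource and female_share are hypothetical.
from dataclasses import dataclass

@dataclass
class QuotedSource:
    name: str
    gender: str  # "female", "male", or "unknown"

def female_share(sources: list[QuotedSource]) -> float:
    """Share of quoted sources identified as women, excluding unknowns."""
    known = [s for s in sources if s.gender in ("female", "male")]
    if not known:
        return 0.0
    return sum(s.gender == "female" for s in known) / len(known)

if __name__ == "__main__":
    week = [QuotedSource("A", "male"), QuotedSource("B", "female"),
            QuotedSource("C", "male"), QuotedSource("D", "unknown")]
    print(f"women quoted this week: {female_share(week):.0%}")  # 33%
```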