AI and Gender

We’re starting to learn more about the biases buried deep within our data. Many of the knowledge bases that we have long thought of as objective turn out to be systematically skewed. I first became aware of this reading Marek Glezerman’s Gender Medicine, which brought to light how much medical evidence derives from research conducted on men only. This can have literally fatal consequences, for example in the way women’s strokes go unrecognised and untreated. Glezerman’s book was followed by the better-known Invisible Women by Caroline Criado-Perez.

Awareness of this has been boosted recently by the way AI systems use existing data to bake in biases. If ‘preferred candidates’ in the historical record mainly conform to one type (eg male, or white), then AI will naturally (or rather unnaturally) reinforce that tendency – burying the bias far deeper, beneath a semblance of objectivity.
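To make the mechanism concrete, here is a minimal sketch in Python (entirely synthetic data, scikit-learn for the model; it represents no real hiring system): a classifier trained on past decisions that penalised women learns to reproduce that penalty even for candidates with identical qualifications.

```python
# Illustrative sketch only: a model trained on biased hiring history
# re-encodes the bias as an apparently 'objective' score.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Qualification scores are identically distributed for men (gender=0)
# and women (gender=1)...
qualification = rng.normal(0, 1, n)
gender = rng.integers(0, 2, n)

# ...but the historical decisions applied a hidden penalty to women.
hire_prob = 1 / (1 + np.exp(-(qualification - 1.5 * gender)))
hired = rng.random(n) < hire_prob

# Train a classifier on the biased history.
X = np.column_stack([qualification, gender])
model = LogisticRegression().fit(X, hired)

# Two candidates, identical qualifications, differing only in gender.
candidates = np.array([[1.0, 0], [1.0, 1]])
print(model.predict_proba(candidates)[:, 1])
```

Run on this synthetic history, the model assigns the equally qualified female candidate a markedly lower ‘hire’ probability: the old bias, now laundered through an algorithm.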

So it’s good to see the Alan Turing Institute getting into the issue, with a recent policy briefing on Women in Data Science and AI. The initiative covers why so few women enter AI, and why so many of those who do enter leave prematurely. Much of this may be common to several other areas of science and technology. The heart of the initiative lies in finding which interventions work to increase the number of women in AI, and in exploring the ways in which the gender deficit shapes both the research agenda and the applications of digital technologies.

The relevance of the Paula Principle is obvious. It’s not only a question of how to enable more women to enter and progress in the field as it’s currently operating; it’s also a matter of redefining the norms and success criteria of the sector.

I look forward to hearing more about the Turing project. It’s completely right, if poignant, that an institute named after a man who suffered so cruelly on gender grounds should now be hosting this initiative.
