Recently, Hotwire Australia's F in Fintech event explored the issue of gender diversity in fintech. One of the topics discussed during the panel session was the role technologies such as AI can play in helping organisations remove gender bias in the selection process and within the workplace.
Indeed, there are many exciting AI-based tools now available for companies to create a more diverse workplace. These include resume anonymiser tools, resources to ensure gender-neutral job ads and unconscious bias assessments.
However, it’s important to point out that AI also has the potential to achieve the opposite. AI can impede the selection of certain candidates when human bias is built into the system, a phenomenon sometimes called “discrimination by design.”
A recent article by two University of Melbourne professors, published by Smart Company, argues that bias is often inherent in AI technologies, such as those used to sift through CVs and shortlist candidates.
According to the professors, this is due to a number of factors, including the way in which data is collected and scoped (Amazon’s failed recruitment tool is cited as an example). The article’s authors recommend incorporating universal design principles to overcome this human-generated discrimination.
Ultimately, AI can either help or hinder the creation of a diverse workforce. As with any AI application, it delivers value only if the data fed into it is accurate and unbiased.
AI ‘learns’ by referring to the data that humans ‘feed’ it. If certain groups of people are left out of the data set, the automated process won’t capture their characteristics. Amazon’s now-abandoned recruitment tool is just one example of this. The data used to train the model to select the ‘right’ person for the job was drawn from the resumes of employees who had previously been hired, and most of them were men.
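To make this concrete, here is a minimal, hypothetical sketch in Python of how that failure mode arises. All of the data and keywords are invented for illustration; real screening systems are far more complex, but the underlying mechanism is the same: a model that scores candidates by patterns in past hires will reward whatever signals happen to correlate with those hires, including gendered ones.

```python
from collections import Counter

# Invented toy data: keyword lists standing in for the resumes of
# previously hired employees. As in Amazon's case, the historical
# hires skew heavily male.
past_hires = [
    ["python", "finance", "mens_chess_club"],
    ["java", "trading", "mens_chess_club"],
    ["python", "risk", "mens_rugby"],
    ["sql", "finance", "womens_coding_society"],
]

# "Training": count how often each keyword appears among past hires.
# Keywords common in the (male-dominated) history get higher weight.
keyword_weights = Counter(kw for resume in past_hires for kw in resume)

def score(resume):
    """Score a candidate by summing the learned keyword weights."""
    return sum(keyword_weights.get(kw, 0) for kw in resume)

# Two candidates with identical skills; only a gendered keyword differs.
candidate_a = ["python", "finance", "mens_chess_club"]
candidate_b = ["python", "finance", "womens_coding_society"]

# The model ranks candidate A higher purely because the male-coded
# keyword was more frequent in the historical data.
print(score(candidate_a), score(candidate_b))  # prints "6 5"
```

Nothing in the code mentions gender explicitly; the bias enters entirely through the skewed training set, which is why anonymising names alone is rarely enough.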