Pymetrics’ open source tool can detect bias in AI algorithms



New York-based AI startup Pymetrics has open-sourced its much-needed tool for detecting bias in algorithms.

Earlier this month, I posted an editorial titled ‘Stopping AI’s discrimination will be difficult, but vital’ on our sister site AI News. I stand by that argument, and Pymetrics is now making it much easier to prevent unintentional discrimination.

Now available on GitHub, the company’s tool, dubbed Audit AI, will scan algorithms for any group that is being favoured or disadvantaged.
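The underlying idea is statistical: compare a model’s outcome rates across demographic groups and flag any group that fares noticeably worse. As a rough, hypothetical sketch (not Audit AI’s actual API), here is how the ‘four-fifths rule’ commonly applied to hiring decisions could be checked in a few lines of Python:

```python
import pandas as pd

def four_fifths_check(df, group_col, outcome_col, threshold=0.8):
    """Compare selection rates across groups and flag any group whose
    rate falls below `threshold` (80%) of the best-off group's rate.
    Illustrative only -- not Audit AI's actual API.
    """
    rates = df.groupby(group_col)[outcome_col].mean()   # pass rate per group
    ratios = rates / rates.max()                         # impact ratio vs. top group
    return pd.DataFrame({
        "selection_rate": rates,
        "impact_ratio": ratios,
        "flagged": ratios < threshold,                   # potential adverse impact
    })

# Hypothetical screening-model output: 1 = candidate recommended, 0 = rejected
decisions = pd.DataFrame({
    "group": ["A"] * 4 + ["B"] * 4,
    "recommended": [1, 1, 1, 0, 1, 0, 0, 0],
})
print(four_fifths_check(decisions, "group", "recommended"))
```

A production tool will go well beyond a simple ratio check like this, but the principle is the same: measure per-group outcomes and surface disparities for developers to investigate.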

Pymetrics’ background is “matching talent to opportunity, bias-free” to help ensure businesses are recruiting the best candidates for the job.

Machine learning technologies can vastly and rapidly change or reinforce power structures

There’s an under-representation of certain groups across society, but forcing businesses to recruit them even if they’re not the right fit for the job is not the ideal solution. Instead, Pymetrics wants to level the playing field so everyone has the same opportunity.

Unfortunately, the current under-representation problem is causing unintentional bias. Here in the West, technologies are still mostly developed by white males and can often unintentionally perform better for this group.

A 2010 study by researchers at NIST and the University of Texas at Dallas found that facial recognition algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western countries are more accurate at recognising Caucasians.

Audit AI can detect this kind of bias and make developers aware of it, but it will be up to them to correct it.

“From policing, to welfare systems, online discourse, and healthcare – to name a few examples – systems employing machine learning technologies can vastly and rapidly change or reinforce power structures or inequalities on an unprecedented scale and with significant harm to human rights,” wrote digital rights campaigners Access Now recently in a post.

Of course, some are making their algorithms discriminate on purpose.

What are your thoughts on Pymetrics’ bias-detecting tool? Let us know in the comments.

 

