A new study shows that speech recognition software has higher error rates with black people
Unacceptable, but sadly, not that surprising.
Joining the ranks of facial recognition, which has higher error rates when identifying African Americans, speech recognition exhibits the same type of bias.
According to a new study published in PNAS (Proceedings of the National Academy of Sciences of the United States of America), voice recognition software from Amazon, Apple, Google, IBM, and Microsoft all displays racial disparities in the accuracy of its speech transcription.
The study reports that across its sample – 73 black speakers and 42 white speakers – the average word error rate (WER) was 0.35 for black speakers and 0.19 for white speakers. The study drew speakers from five different US cities and noted that black men had the highest error rate (0.41) compared to black women (0.30). For white speakers, the error rates were more uniform (0.21 for men and 0.17 for women).
I’ll do the math for you. This means that the error rate for black people was almost double that of white people in the test.
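For context on what those numbers measure, WER is conventionally the word-level edit distance (substitutions, insertions, deletions) between what was said and what the system transcribed, divided by the number of words actually spoken. A minimal sketch of that calculation (illustrative only, not the study's actual code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words (standard Levenshtein DP)
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # delete a reference word
                dp[i][j - 1] + 1,        # insert a hypothesis word
                dp[i - 1][j - 1] + sub,  # substitute (or match)
            )
    return dp[len(ref)][len(hyp)] / len(ref)
```

So a WER of 0.35 means roughly one word in three was garbled, dropped, or invented: for a black speaker, more than a third of a sentence can come back wrong.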
So, what can be done about this? The answer is clear: companies like Google and Amazon need to do a better job of diversifying their data sets and improving their acoustic models to account for different varieties of speech.
With more products shipping with speech recognition abilities seemingly every day, these disparities are unacceptable. According to one report, at least 60 million people in America own one or more smart speakers. It’s time for these tech companies to get it together.