It’s important to note here that women are responsible for between 70% and 80% of consumer purchase decisions. It stands to reason that if you want women to buy your product, it should be developed from their perspective, not solely from the perspective of male engineers.
The influence of voice on our world
Frantz Fanon writes in his book Black Skin, White Masks, “To speak is to exist absolutely for the other.” Speaking is an integral part of each person’s identity: it carries the past and the present, and reflects where someone has lived and who they’ve interacted with. Every time someone speaks, they share a little part of themselves.
In the context of voice assistants, this statement takes on a slightly different meaning. As voice AI technology improves and the role of the voice assistant becomes ever more personal, the biases currently built into these technologies continue to shape who we are and how we are heard and acknowledged.
For voice AI developers, choosing between labeled and unlabeled data, or using a combination of the two, can have a direct impact on the biases your voice assistant carries. While labeled data from an existing dataset may be easier to use, those labels are sometimes biased or simply don’t hold all the information. For example, Beyoncé could be labeled as a woman, a Black woman, a singing Black woman, or a Texan singer and actress. These are all acceptable labels.
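One way to picture how label granularity shapes a dataset is as a projection: the schema a developer chooses decides which attributes survive into training. The sketch below is purely illustrative; the speaker record, its field names, and the `project` helper are all hypothetical, not part of any real dataset.

```python
# Hypothetical speaker record carrying several valid labels at once.
speaker = {
    "name": "Beyonce",
    "gender": "woman",
    "race": "Black",
    "accent": "Texan",
}

def project(record, schema):
    """Keep only the label fields named in the schema (a hypothetical helper)."""
    return {field: record[field] for field in schema if field in record}

# A coarse schema discards race and accent entirely...
coarse = project(speaker, ["gender"])
# ...while a richer schema preserves all three bias axes the text names.
rich = project(speaker, ["gender", "race", "accent"])

print(coarse)  # {'gender': 'woman'}
print(rich)
```

The point of the sketch: any bias analysis downstream can only see the fields the schema kept, so a coarse labeling choice silently removes the ability to measure racial or accent bias later.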
While not all of those labels are relevant to developing an unbiased voice assistant, developers should focus on three main areas of bias: race, gender, and accent. Still, narrowing the fields may not solve every challenge, since the differences between white men and white women, and between white women and mixed-race women, have all been shown to affect accuracy rates, according to Joan Palmiter Bajorek.
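Checking for the subgroup differences described above usually comes down to breaking an evaluation set out by demographic group and comparing accuracy per group rather than in aggregate. The sketch below shows that pattern with invented data; the group names and correctness flags are assumptions for illustration, not measured results.

```python
from collections import defaultdict

# Hypothetical per-utterance evaluation results: (subgroup, recognized correctly?).
# All values are invented for illustration only.
results = [
    ("white_man", True), ("white_man", True), ("white_man", False),
    ("white_woman", True), ("white_woman", False), ("white_woman", False),
    ("mixed_race_woman", True), ("mixed_race_woman", False),
]

def accuracy_by_group(results):
    """Compute recognition accuracy separately for each subgroup."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        hits[group] += int(correct)
    return {group: hits[group] / totals[group] for group in totals}

for group, acc in sorted(accuracy_by_group(results).items()):
    print(f"{group}: {acc:.0%}")
```

An aggregate accuracy number would hide exactly the gaps the text warns about; only the per-group breakdown makes a disparity between, say, white men and mixed-race women visible.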