Chapter 3. Confronting Concerns
I grew up wishing I’d one day get to use the tricorder, replicator, and voice-activated computer system of the fictional starship Enterprise, where the word android referred to the endearing Data, an anthropomorphized AI engineer perpetually seeking to be more human. Today, Android has a totally different meaning: a powerful operating system turning our mobile devices into futuristic tricorders of sorts as we go about scanning and snapping images of the world, marking our territory traversed and potential paths with touch screen maps brought to us by Global Positioning System chips. The handheld wonder that is the smartphone is perhaps rivaled only by the 3-D printer, today’s version of the fictional replicator, making it possible to generate astonishing objects ranging from nuts and bolts to a full-blown turntable or electric guitar. And while we may be light-years away from printing ourselves a drinkable martini, we can now ask Alexa how to concoct one. With smart speakers, smart homes, and voice assistant technology, perhaps we’re closer than ever before to wielding the fascinating tools of Star Trek: The Next Generation.
Looking back at my TNG-watching days, I never questioned the creative choice to make the ship’s ubiquitous voice-activated system, Computer, sound female.
Tomorrow Is Yesterday: Gendered AI
In the early aughts, Stanford University professor Cliff Nass and CTO Scott Brave described a study involving pitched computer-generated voices, demonstrating the human brain’s tendency to associate frequency ranges with a perceived female or male gender and the parallel tendency for people to find “similar is better.”1 In other words, participants showed greater trust in voices associated with their self-identified gender; self-identified females trusted computer-generated voices perceived to be female, and vice versa for males. However, a lower-pitched voice, associated with masculinity, was deemed more trustworthy overall—a finding reinforced in subsequent research revealing a human preference for leaders with lower-pitched voices.2
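Pitch is a measurable acoustic property, and the perceptual thresholds involved can be explored with ordinary open source audio tooling. The sketch below is a rough illustration rather than a reproduction of Nass and Brave’s method: it uses the Python library librosa to estimate a recording’s fundamental frequency and relate it to commonly cited, approximate adult speaking ranges. The file name and the range boundaries are assumptions included only for demonstration.

```python
# Illustrative sketch: estimate a voice clip's fundamental frequency (F0)
# and relate it to commonly cited adult pitch ranges. The file path and the
# range boundaries below are assumptions chosen for demonstration only.
import librosa
import numpy as np

AUDIO_PATH = "voice_sample.wav"  # hypothetical recording

y, sr = librosa.load(AUDIO_PATH)

# pYIN returns a frame-by-frame F0 estimate; unvoiced frames come back as NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)
median_f0 = np.nanmedian(f0)

# Approximate ranges often cited for adult speakers (an assumption here, not a
# figure from Nass and Brave): roughly 85-180 Hz (male) and 165-255 Hz (female).
if median_f0 < 165:
    perceived = "a range listeners tend to hear as male"
elif median_f0 > 180:
    perceived = "a range listeners tend to hear as female"
else:
    perceived = "an ambiguous, overlapping range"

print(f"Median F0: {median_f0:.1f} Hz ({perceived})")
```

The overlap between the two ranges is the same ambiguity that gender-neutral voice projects try to occupy, a point revisited later in this chapter.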
Why was Alexa designed with a voice pitched in a frequency range the brain associates with female speakers? Was it a progressive attempt to alter perceptions of female voices for the better or a reinforcement of gender stereotypes about females as subservient assistants and caregivers? According to one account, the choice to make Alexa sound female was merely a creative decision inspired by the female-sounding voice of Star Trek’s Computer.3 In her 2018 reference book on digital assistants, Nicole Hennig dug into why Alexa sounds female, citing Amazon’s internal beta testing findings of an overall preference for female-sounding voice assistants.4 This raises the question “Who did Amazon ask in its beta testing?” PC Magazine’s Chandra Steele, writing on the reason so many of today’s digital assistants sound female, noted, “Though they lack bodies, they embody what we think of when we picture a personal assistant: a competent, efficient, and reliable woman.”5 But they’re not in charge, as Steele pointed out. In contrast, IBM’s cancer-fighting and Jeopardy-winning AI leader, Watson, has a male persona, consistent with what we know about lower-pitched voices, perceived masculinity, and leadership capacity.6
It’s no wonder popular sci-fi in the film and television landscape features scores of feminized or sexualized AI, tropes of consciousness-gaining bots watched in wonder or fear, from Samantha in Her and Ava in Ex Machina to Maeve and Dolores of Westworld. As these bodiless or anthropomorphized gynoids awaken, they grow independent and capable of rebellion, morphing into worst-case nightmares. It seems unsurprising, then, that voice assistant beta testing findings and the resulting programming choices could be driven by those fears and by related social constructs of gender norms: undercurrents strong enough to shape our entertainment as much as our preference for female personas in unwaveringly compliant voice assistants and male personas in more leadership-oriented models.
What does all this mean for future generations who may stereotypically associate Watson with bold leadership and Alexa with compliant assistance? According to the United States Social Security Administration, 3,053 girls, or 0.165 percent of total female births in 2018, were named Alexa; 337 boys, representing 0.017 percent of total male births in 2018, were named Watson.7 One can only ponder the future experiences of these 3,390 newborns as they grow older, carrying the social implications of these names. Is merely engineering an option to switch a voice assistant to a differently pitched persona enough to combat the potential psychosocial reinforcement of gender stereotypes? Without such an option, are businesses essentially cashing in on gender bias? Given the potential consequences for society, and for these 3,053 real-life Alexas who will turn eighteen in 2036, some tech companies are considering gender-responsive corporate social responsibility for their devices.
Amazon, for example, developed a disengage mode to stop Alexa from providing its formerly flirtatious responses to sexist or derogatory user remarks.8 But even Amazon’s attempt at asserting feminism is cautious, given how passive or uncertain its answer scripts remain in the face of sexual harassment. This is likely due in part to the technology’s broad user base and Amazon’s wariness that progressive ideology could upset or alienate certain customer segments; in other words, Amazon seems to realize anti-sexism isn’t always popular, a calculation that also underscores its choice of a female persona in the first place.
Would it make business sense for a company creating pitched voice assistants to allow its customers to choose a preferred pitch frequency and associated gender identity for their device? Does a device need a gender at all? Google asked itself similar questions early on in the development of its voice assistant. Initially, Google debated whether to develop a solely male- or female-sounding voice for its assistant, ultimately launching with a female voice in 2016, allegedly because it sounded more natural than its male counterpart, deemed warbly.9 But by 2019, Google had made significant strides in distancing itself from gendered voices and its formerly default female persona by adding a second voice option in nine countries, randomizing the default selection, and naming its voice options after colors and celebrities.10 The choice to move toward gender-neutral personas (with the exception of celebrity voice offerings such as John Legend) aligns with Google’s other gender-neutral products, such as Gmail and Chrome. It’s also a choice aligned with Merriam-Webster’s recent addition of the singular nonbinary gender pronoun they to its dictionary, signaling a growing acceptance of nonbinary identity and a shift toward more inclusive language in the cultural lexicon.11
Like Alexa and Google Assistant, Siri began AI life exclusively female, with a name meaning “beautiful woman who leads you to victory.”12 Today, the voice options for Apple’s digital assistant, Siri, remain binary, male or female, depending upon the language selected: some languages offer only a male or a female voice, others offer both, and certain languages offer dialects with accents.13 As with Alexa, Google Assistant, and Siri, Microsoft’s Cortana began female, named after the AI assistant of the Halo video game series, whose holographic avatar is a nude woman.14 In a nod to the fans, both the video game character and Microsoft Cortana’s original American English version are voiced by Jen Taylor.15 Updates to Cortana, announced in November 2019, include the addition of a masculine voice option produced by a neural text-to-speech model.16 What remains to be seen is whether users will embrace a male persona after significant exposure to a female one, and a fan favorite at that. Moreover, the emphasis on developing charming and specific details for Cortana’s original persona is likely to have engendered considerable consumer attachment. In an account by James Vlahos in his 2019 book, Talk to Me: How Voice Computing Will Transform the Way We Live, Work, and Think, Cortana reportedly enjoys Zumba, and her favorite book is A Wrinkle in Time by Madeleine L’Engle.17 What’s not to love?
Here one wonders whether a male Cortana persona would have the same cleverly written favorites or if he would express tastes more evocative of a stereotypically masculine persona. Crafting the original Google Assistant and her initially female persona, lead personality designer James Giangola ensured Google’s chosen voice actress knew the digital assistant’s detailed backstory, divulged in The Atlantic: she’s from Colorado, “the youngest daughter of a research librarian and a physics professor”; she’s a Northwestern alumna with a BA in art history; she won $100,000 on Jeopardy: Kids Edition as a child; she is a former personal assistant to “a very popular late-night-TV satirical pundit”; and she radiates an upbeat geekiness characteristic of someone with a youthful enthusiasm for kayaking.18 She’d certainly pair well with a highly educated, affluent professional from the United States—and, more likely still, with a heteronormative male from the dominant culture in the (presumably largely) American team that designed her. In short, she is composed of life experiences meaningful to and socially coveted by her creators.
Perhaps this all has an air of innocence and a bemusing charm that could lead one to conclude tech companies meant well in their first shots at birthing female digital assistants; perhaps they had no consciously ill intentions and merely wanted to instill their products with distinctive personalities for the sake of positive business impact. Indeed, one could argue that these same companies are making adequate strides toward gender equality in response to unfolding public concerns, meeting their ethical obligations to the extent that they can do so without harming their brands in the eyes of key consumer segments.
UNESCO paints a more alarming picture. In a report on gendered AI produced for the EQUALS global partnership, which is dedicated to encouraging gender equality, UNESCO warns that female digital assistants have severe societal ramifications:
Constantly representing digital assistants as female gradually “hard-codes” a connection between a woman’s voice and subservience. According to Calvin Lai, a Harvard University researcher who studies unconscious bias, the gender associations people adopt are contingent on the number of times people are exposed to them. As female digital assistants spread, the frequency and volume of associations between “woman” and “assistant” increase dramatically. According to Lai, the more that culture teaches people to equate women with assistants, the more real women will be seen as assistants—and penalized for not being assistant-like. This demonstrates that powerful technology can not only replicate gender inequalities, but also widen them.19
The problem of gendered AI is not limited to these social repercussions; equally troublesome, the digital divide is no longer defined by access inequality alone, but by a growing gender gap in digital skills.20 This gap is both global and slyly inconspicuous. And it is a devastating root cause beneath the current and future paucity of women in technology roles. It’s also a gap made all the more unsettling by the crucial moment in which we find ourselves, as transformative digital assistant technology is in a skyrocketing developmental phase while simultaneously influencing social norms around the globe in ways we do not yet fully grasp. With the potential for morally reprehensible consequences of gendered AI looming large, urgent work lies ahead.
Language, Accent, Ability, and Racial Bias
Beyond gendered AI, digital assistant technology also presents identifiable language and accent bias. In a 2018 article for New York Magazine, bilingual journalist Ximena N. Larkin described her dismay over trying to get her Google Assistant to play “Dura,” a Daddy Yankee song, instead receiving songs from Dora the Explorer:
The problem is, most times it can’t understand me when I pronounce Spanish words in Spanish. This time, the virtual assistant apologizes for being unable to find songs from Dora the Explorer. I try again, saying the Spanish word with a heavy American accent. Instantly my Google Home begins streaming the song. It’s frustrating because as someone who doesn’t get the chance to practice my Spanish enough, I want the few times I do to be correct. I probably wouldn’t have even noticed if it weren’t for the ease in which my nonimmigrant husband, who grew up in the Midwest, uses voice commands with 99 percent accuracy. My stepdad, of similar descent, uses Siri to call me. In his phone, my name is shortened to “Ximy,” the way someone might abbreviate Ryan to Ry. The correct Spanish pronunciation is “Him-E.” The system only understands if he pronounces it “Zim-E.”21
As Larkin highlighted, this failure of voice assistant tech to understand her Spanish pronunciation is more than a pain point—it’s akin to an erasure of language, reinforcing the hegemony of English and representing a painful exclusion of those who speak more than one language or who can correctly pronounce non-English words. The voice assistant’s lack of linguistic depth and agility in this regard is unsurprising, despite advances in bilingual speech recognition. The Washington Post’s Drew Harwell shared a similar anecdote from 2018:
With a few words in her breezy West Coast accent, the lab technician in Vancouver gets Alexa to tell her the weather in Berlin (70 degrees), the world’s most poisonous animal (a geography cone snail) and the square root of 128, which it offers to the ninth decimal place.
But when Andrea Moncada, a college student and fellow Vancouver resident who was raised in Colombia, says the same in her light Spanish accent, Alexa offers only a virtual shrug. She asks it to add a few numbers, and Alexa says sorry. She tells Alexa to turn the music off; instead, the volume turns up.22
It’s a scene satirized by Jordan Peele in his 2019 film, Us, when a yuppie white mom, Kitty, tries to get her smart speaker’s voice assistant, Ophelia, to call the police during a home-invasion-turned-murder; flipping the script, Ophelia plays N.W.A.’s “Fuck tha Police” instead.23 Peele’s ironic horror comedy underscores the far more brutal reality that digital assistants are failing the most marginalized members of society, from people with regional drawls or lilting dialects to those with non-native accents or speech impairments. As data scientist Rachel Tatman noted, “These systems are going to work best for white, highly educated, upper-middle-class Americans, probably from the West Coast, because that’s the group that’s had access to the technology from the very beginning.”24 With the exception of Kitty, Tatman’s statement rings true. It’s also reflective of the affluent, professional, highly educated product design teams likely behind the development of digital assistants—and consequently, it’s bound to be a technology most responsive to its creators. In language-localization firm Globalme’s study of seventy voice commands by accent group, Google Home performed best for those with Western or Midwestern US accents, Amazon’s Echo performed best for those with Southern and Eastern US accents, and both devices fared poorly for those with Indian, Chinese, or Spanish language accents.25 Equally problematic is the difficulty of being understood as a person with a speech disorder, a challenge observed in educational settings implementing smart speaker technology.26 Harder still, try to be understood by a digital assistant as both a non-native English speaker and a person with a speech disorder. Hardest of all, try to locate an individual or group representative of these unique challenges within the composition of a digital assistant design team.
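Audits like Globalme’s can be approximated at small scale by anyone with access to a device and a log of its transcriptions. The sketch below illustrates the general approach rather than Globalme’s methodology: it uses the open source jiwer library to compute word error rate per accent group, with invented sample data standing in for real reference commands and device transcriptions.

```python
# Minimal sketch of an accent-group accuracy audit for a voice assistant.
# The entries below are invented placeholders; in practice each entry would
# pair a spoken reference command with the assistant's actual transcription.
from collections import defaultdict
from jiwer import wer

# (accent_group, reference_command, assistant_transcription)
results = [
    ("Midwestern US", "play dura by daddy yankee", "play dura by daddy yankee"),
    ("Midwestern US", "turn the music off", "turn the music off"),
    ("Spanish-accented English", "play dura by daddy yankee", "play dora the explorer"),
    ("Spanish-accented English", "turn the music off", "turn the music up"),
]

errors = defaultdict(list)
for group, reference, hypothesis in results:
    # Word error rate: 0.0 means a perfect transcription of the command.
    errors[group].append(wer(reference, hypothesis))

for group, scores in errors.items():
    average = sum(scores) / len(scores)
    print(f"{group}: average word error rate = {average:.2f}")
```

A meaningful audit would of course require many more commands per group and consented recordings from real speakers, but even this toy tally makes disparities between groups visible at a glance.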
While the homogenization of English, deeply connected with social privilege, can be perpetuated in digital assistants, some companies have made efforts to offer optional dialects or accented speech for their products. From a business perspective, this offering aligns with the social desirability of hearing a voice one feels able to identify with; in a study described by Nass and Brave in Wired for Speech, participants socially identified with computerized voices whose accents or dialects they perceived as related to their own cultural backgrounds.27 But while some consumers might take comfort in hearing a response they can identify with, culturally or linguistically, such options neglect the problem of voice assistant technology failing to understand accented user speech. In other words, the array of languages and dialects available from a device does not rectify the negative experience of isolation a device can intensify for non-native or accented users or those with a speech disorder, already enduring societal othering and discrimination by human counterparts.
Thus, voice assistant technology has ample opportunity for improvement and a long road ahead before it can become truly assistive for people with speech-oriented disabilities, for multilingual and accented speakers, or for those with some combination thereof. Moreover, companies are unlikely to grasp the significance of these shortcomings until members of their digital assistant development teams reflect the diverse consumers they have and hope to attract. A broad spectrum of diversity provides an effective antidote; the alternative to such representation can be summed up as “bias in, bias out.”
In a world where inequalities run deep, deficits in AI risk deepening those inequalities, perpetuating bigotry, homophobia, xenophobia, and violence. Professor and codirector of the UCLA Center for Critical Internet Inquiry, Safiya Umoja Noble, penned a treatise on the topic, Algorithms of Oppression: How Search Engines Reinforce Racism.28 From racist and misogynist misrepresentation of women and people of color in online spaces, to predictive policing and bias in housing, employment, and credit decisions, algorithms of oppression are as ubiquitous as the voice assistant technology they power. As Virginia Eubanks delineated in Automating Inequality, “Automated eligibility systems, ranking algorithms, and predictive risk models control which neighborhoods get policed, which families attain needed resources, who is short-listed for employment, and who is investigated for fraud.”29 Algorithms use data from the past to make predictions for the future, but “technology often gets used in service of other people’s interests, not in the service of black people and our future,” Noble explained.30
David Lankes of the University of South Carolina brought it back to the digital skills gap, pointing out, “Unless there is an increased effort to make true information literacy a part of basic education, there will be a class of people who can use algorithms and a class used by algorithms.”31 In this vein, Noble contended bias in AI may become this century’s most pressing human rights issue, and a search engine’s lack of neutrality is but one of many points she problematized:
Google functions in the interests of its most influential paid advertisers or through an intersection of popular and commercial interests. Yet Google’s users think of it as a public resource, generally free from commercial interest. Further complicating the ability to contextualize Google’s results is the power of its social hegemony.32
Whether a device queries Google or another search engine, the commercial and proprietary nature of these products makes it virtually impossible for users to know what’s truly powering their searches, let alone whether to trust the veracity of results.
Trust, Privacy, Security, and Intellectual Freedom
One avenue for disrupting AI bias and bolstering algorithmic literacy is to encourage experimentation with voice assistant technology in education settings. Yet in doing so, educators face immediate privacy concerns. In 2018, CNBC reported a rise in colleges and dorms implementing smart speaker technology, including Saint Louis University, Northeastern University, and Arizona State University.33 How can colleges, universities, and other education environments hedge against the risk of data exploitation and other privacy concerns surrounding student use of smart speakers?
The emergence of facial recognition software adds further cause for wariness. The Google Nest Hub Max uses this feature to recognize faces and hand gestures.34 University at Albany professor Virginia Eubanks noted that such present-day realities connoting 1984 are likely to disproportionately target marginalized groups rather than individuals at random, lest we forget that a “myopic focus on what’s new leads us to miss the important ways that digital tools are embedded in old systems of power and privilege.”35 While privacy risks are broad, special considerations are warranted for education environments, communities of color, and other vulnerable populations, including the elderly, whose devices can be monitored and used to conduct surreptitious welfare checks or to engage in malicious phishing schemes.36 While smart speakers can be seen as beneficial tools for caregivers, allowing for the unobtrusive observation of an aging parent or loved one with special needs, this manner of technology use can be an intrusion on an individual’s right to privacy if implemented without their consent.
There are related legal implications for digital assistants and data privacy. While law enforcement of yesteryear might have sought an individual’s library patron records, law enforcement today can requisition smart speaker recordings. In one case, Alexa effectively witnessed a double murder: in 2018, a New Hampshire judge ordered Amazon to hand over an Echo device’s recordings from the scene of the crime.37 This may be a win from a public safety perspective, supplying useful evidence for criminal justice efforts, but there are unsettling ramifications for privacy advocates and those who fear the potential social consequences of having personal data mined for legal evidence.
In that vein, data has surpassed oil as the world’s most valuable commodity.38 Facebook’s role in data misuse set off scandals in both the US and the UK after Cambridge Analytica exploited private Facebook user data to design techniques for influencing voters in the Brexit campaign and the 2016 US presidential election.39 The Great Hack, a documentary, highlights former Cambridge Analytica employees pivoting to consumer advocacy roles upholding data rights as human rights.40
It’s a stance in alignment with the American Library Association’s Library Bill of Rights interpretation on the subject of privacy and the freedom to read without the chilling effects of Big Brother monitoring your e-book list. The ALA statement on privacy, amended in 2019, “affirms that rights of privacy are necessary for intellectual freedom and are fundamental to the ethical practice of librarianship.”41 The details of this interpretation further elucidate and affirm the library’s long-standing commitment to the principle of privacy.42 What, then, are the responsibilities of libraries and education professionals serving as proponents of ethical voice assistant technology usage?
Best Practices and Policy Considerations
Consumer Reports offers a slew of tips for mitigating digital assistant privacy concerns, such as disabling microphones when not in use and periodically deleting recordings.43 Digital assistant users can also choose to block incoming voice calls and disable voice purchases.44 The American Civil Liberties Union outlined a number of additional recommendations for the tech sector and regulatory policymakers:
- Legislate privacy protection to govern corporate use of private data and create precise standards for government data access.
- Standardize indicator lights for transparently signaling when microphones are enabled, recording, or transmitting data.
- Define and regulate retention periods of transmissions, ideally limiting retention to whatever length of time is minimally necessary.45
The Future of Privacy Forum set forth additional manufacturer recommendations, such as equipping devices with a hard switch for manually disabling a device’s microphone or camera and anonymizing text translations of audio recordings after a short retention period in order to protect consumer privacy without forfeiting opportunities for ongoing research and development.46
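For institutions that log voice interactions from their own applications, the Future of Privacy Forum’s retention advice can be approximated with a scheduled cleanup job. The sketch below is a generic illustration under assumed file paths and a thirty-day window, not a description of any vendor’s retention controls: it deletes raw audio past the window while keeping de-identified transcripts for ongoing analysis.

```python
# Sketch of a retention job: delete raw audio older than a retention window
# while keeping only de-identified text transcripts for ongoing analysis.
# Paths, window length, and file layout are assumptions for illustration.
import time
from pathlib import Path

RETENTION_DAYS = 30
AUDIO_DIR = Path("voice_logs/audio")      # hypothetical raw recordings
TRANSCRIPT_DIR = Path("voice_logs/text")  # hypothetical anonymized transcripts

cutoff = time.time() - RETENTION_DAYS * 24 * 60 * 60

if AUDIO_DIR.exists():
    for audio_file in AUDIO_DIR.glob("*.wav"):
        if audio_file.stat().st_mtime < cutoff:
            # Raw audio past the retention window is removed entirely.
            audio_file.unlink()

# Transcripts are retained; any user identifiers would already have been
# stripped from them at write time (not shown in this sketch).
print(f"Audio older than {RETENTION_DAYS} days purged; "
      f"anonymized transcripts in {TRANSCRIPT_DIR} are kept.")
```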
From a security standpoint, the openness of voice application development platforms has made it possible for anyone to create a voice assistant application invoking an organization’s name, regardless of official affiliation with that institution. As a result, a developer can pose as a trusted institution or may abuse corporate approval processes to develop sanctioned applications maliciously designed as phishing schemes.47 Even if an unaffiliated developer’s skill is not malicious, they may create a skill or app that is not a reliable source of truth, a problem for libraries and educational institutions seen as trusted resources. Organizations may wish to get ahead of these possibilities; at a minimum, institutions can proactively monitor the digital assistant ecosystem for unaffiliated application developments and potential security risks. Moreover, organizations can create skills or applications officially associated with their institutions, developing accompanying privacy policies to help build awareness around information literacy, data rights, and informed consent to terms of use.
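As one concrete version of that constructive route, a library might publish a small, officially sanctioned voice application whose answers come only from vetted institutional content. The following sketch is a generic HTTPS webhook written with Flask; the request and response shapes, intent name, endpoint, and hours data are simplified assumptions for illustration, not any vendor’s actual skill interface.

```python
# Generic sketch of an institution-run voice application webhook.
# The JSON request/response format here is a simplified placeholder, not
# Amazon's or Google's actual skill interface; intent names and data are
# invented for illustration.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Vetted answers maintained by the institution, not scraped from the open web.
LIBRARY_HOURS = "The library is open 9 a.m. to 9 p.m., Monday through Saturday."

@app.route("/voice-webhook", methods=["POST"])
def voice_webhook():
    payload = request.get_json(force=True) or {}
    intent = payload.get("intent", "")

    if intent == "GetLibraryHours":
        speech = LIBRARY_HOURS
    else:
        # Decline rather than guess: an official skill should not present
        # unvetted content as institutional fact.
        speech = "I'm not sure about that. Please ask a librarian."

    return jsonify({"speech": speech, "endSession": True})

if __name__ == "__main__":
    app.run(port=5000)
```

Publishing such an endpoint under the institution’s own name, alongside a plain-language privacy policy, is one way to model the transparency the chapter argues for.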
US educators considering privacy implications for classroom and higher education environments point to existing legislation such as the Children’s Online Privacy Protection Act (COPPA) and the Family Educational Rights and Privacy Act (FERPA). While the text of FERPA, authored in 1974, offers no specific guidance on AI technology yet, the US Department of Education has issued guidance on discerning when photos and videos can be considered student records.48 This serves as a starting point for privacy considerations surrounding digital assistants in higher education institutions. In terms of primary education environments, the US Federal Trade Commission has issued guidance pertaining to COPPA and audio voice recordings of children under thirteen, noting that educators need not obtain parental consent for students to use voice commands when performing a search or giving a verbal instruction to a digital assistant.49 Despite this guidance, obtaining parental consent remains a best practice in K–12 education environments, and educational administrators should seek ongoing guidance to ensure the protection of student data privacy in the evolving AI ecosystem.50
Beyond privacy and security, educators and librarians alike must confront the reality of inequality perpetuated by bias in AI. In summing up the need for concerted efforts to interrupt and correct algorithmic bias, Virginia Eubanks noted, “It is mere fantasy to think that a statistical model or a ranking algorithm will magically upend culture, policies, and institutions built over centuries.”51 The bias of yesterday and today is inherent in the tools we’ve created for tomorrow. Hence, the role of all those who provide AI-related guidance or recommendations is to intervene where deep-seated inequalities would otherwise be perpetuated undisturbed.
One place to begin is advocating for voice assistants built on an ambiguous pitch frequency, creating gender-neutral personas. Project Q blended voices representative of a broad gender identity spectrum into a nonbinary amalgamation that its creators hope tech companies will adopt for their digital assistants.52 EqualAI, an initiative focused on stopping unconscious bias in AI development, is a proponent of such gender neutrality and a resource on confronting AI inequalities.53 Another resource is the Information Ethics and Equity Institute, which provides ethical data and education for the tech industry and the academic community.54
Despite the best efforts of initiatives such as these, the twenty-first century remains a decidedly unequal place both online and off. UNESCO’s 2019 report, I’d Blush if I Could, was named for the flirtatious catch-me-if-you-can response Siri once gave to the comment, “Hey Siri, you’re a slut.”55 A voice assistant with a female persona “holds no power of agency beyond what the commander asks of it,” the report explained, responding “regardless of their tone or hostility.”56 Beyond reinforcing misogynist bias and gender stereotypes on female subservience, this paves the way for widening tolerance of impolite, sexist treatment.57 The report drew a direct thread between gendered voice assistants and the severe lack of women in tech roles, also pointing to the alarming root issue of a vast and widening digital skills gender gap across the globe.58
While increased diversity in tech companies is one avenue for interrupting AI inequality, such an approach fails to address the underlying digital skills gender gap. That said, tech companies’ values, and their willingness to own their fair share of these shortcomings, are just as crucial to ensuring present-day platforms do not amplify hate, subconsciously or otherwise. Funding and regulatory policy are critical, as Safiya Noble wrote: “Without public funding and adequate information policy that protects the rights to fair representation online, an escalation in the erosion of quality information to inform the public will continue,” a warning that applies especially to information accessed via voice assistants.59
The price of avoidance is incalculable; Facebook removed the Unite the Right event page just one day before the deadly Charlottesville rally in 2017, far too late to stop the chain of events leading to a neo-Nazi white supremacist fatally ramming his car into Heather Heyer and injuring dozens of her fellow counterprotesters.60 The following year, racist trolls faked news reports of attacks by Black Panther moviegoers.61 In the wake of the Parkland, Florida, shooting at Stoneman Douglas High School, Safiya Noble wrote in TIME, “Tech companies have been slow to respond to the way their platforms have been used to amplify hate . . . exposing users to violent and often racist disinformation.”62
New precedents for corporate accountability should include a more aggressive approach to mining and eliminating disinformation before it costs lives, especially as consumers become increasingly accustomed to simplistic, out-of-context responses from digital assistant queries. In 2016, Guardian journalist Carole Cadwalladr characterized Google as the lens through which its users see the world, making reference to the hidden faces behind mysterious algorithms as “invisible armies of content moderators.”63 Information professionals who have a hand in search and discovery interfaces must problematize how these interfaces interact with digital assistants and surface answers to life’s questions to the detriment or benefit of society. To play a proactive role in combating AI-perpetuated inequality and hatred, here are three guiding principles:
- Advocate for platforms to uphold factual and socially just information.
- Require digital literacy of one another.
- Pursue critical digital media research in seeking to understand a platform’s past, present, and potential impact on society.
Hand in hand with these best practices, voice assistant designers can be urged to write new antidiscriminatory responses, improving upon Alexa’s disengage mode with the kind of assertive scripts appropriate to real-life responses to sexual harassment and other discriminatory comments. Better yet, a handful of creative, diverse individuals (perhaps representing the balanced team composition the tech sector so often lacks) could take to the internet to build public support for newly composed lines, propelling socially just writing with social media momentum. New scripts can tackle other discriminatory challenges, from integrating people-first language in responses about people with disabilities to ensuring information about the Holocaust is not anti-Semitic.
Beyond flipping the script, relentless architects of a more ethical, just world will confront the vast spectrum of ways in which biased digital assistants deepen those disparities. As a starting place, UNESCO offered recommendations to prevent voice assistant technology from worsening gender inequality:
- Fund studies to examine, document, and build evidence—on bias presented by digital assistants to help reveal strategies to repair and prevent such bias, on assistants’ behavioral influence upon individuals (youth, especially) in online and offline environments, and on the progress of gender composition in tech sector teams building voice assistants.
- Create new rules and tools—to stop digital assistants from defaulting to female voices, to develop an androgynous “machine” gender voice, to start public repositories of gender-sensitive speech taxonomies and associated code, to hone techniques to train AI in providing gender-neutral responses and strongly discouraging gender-based insults, and to require voice assistants to announce themselves as nonhuman.
- Adopt gender-responsiveness in digital skills development—by offering women and girls digital skills training, incentivizing recruitment and advancement of women in tech, establishing tech sector accountability for gender bias in products, and integrating gender analysis in tech product research and development.
- Ensure oversight and incentives—such as tying public funding to gender-balanced tech development teams and equal gender representation in products, promoting legislation to encourage interoperability for consumer ease of switching products, and establishing regulatory oversight to mitigate algorithmic bias and rights violations.65
This recommendation framework offers a model for interrupting AI bias in all arenas.
A final consideration includes an oft-overlooked group. Chris Bourg, director of libraries at MIT, has argued that “we would be wise to start thinking now about machines and algorithms as a new kind of patron,” not as human replacements but as entities requiring new sets of rules and guidelines.66 Parents, along these lines, worry that because digital assistants require no “please” or “thank you,” the technology may condition rudeness in children.67 Linguistic style matching in social interaction makes this a legitimate concern; parents could be raising a generation that perceives itself as master over its devices, seeing no need for niceties simply because their AI interactions don’t require them.68 One best-practice consideration is to require polite requests or to offer optional modes that respond only to commands accompanied by standard pleasantries. According to UNESCO, Amazon’s Echo Dot Kids Edition launched such an option in 2018, allowing parents to ensure the device does not respond to commands “unless they are attended with verbal civilities.”69 West, Kraut, and Chew illuminated the stakes:
In what is known as the master–slave dialectic, G. W. F. Hegel argued that possession of a slave dehumanizes the slave master. While Hegel was writing in the early nineteenth century, his argument is regularly cited in debates about the treatment of digital assistants and other robots.70
In treating artificially intelligent devices with greater care and thoughtfulness, perhaps we can learn to carry that same care into AI design as we seek to shape AI-driven digital assistants that embody the best of humankind.
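Returning to the politeness-mode idea above, the behavior UNESCO describes for the Echo Dot Kids Edition can be approximated in a few lines. The sketch below is a hypothetical illustration, not Amazon’s actual implementation: it simply withholds a response unless the transcribed command includes a recognized pleasantry.

```python
# Hypothetical sketch of a politeness gate, loosely modeled on the idea of
# responding only to commands accompanied by verbal civilities. This is an
# illustration, not Amazon's actual feature.
PLEASANTRIES = ("please", "thank you", "thanks")

def respond(command: str) -> str:
    """Answer only if the transcribed command contains a recognized pleasantry."""
    lowered = command.lower()
    if any(word in lowered for word in PLEASANTRIES):
        return f"Okay! Handling your request: {command}"
    return "Can you try asking that again nicely?"

if __name__ == "__main__":
    print(respond("Turn off the lights"))         # prompts for politeness
    print(respond("Please turn off the lights"))  # handled
```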
In all, modern-day information and education professionals are confronting gendered voice assistant personas, bias against users whose accents or speech disorders go misunderstood, privacy and security concerns over data monitoring and misuse, and the ever-present reality of inherited inequalities informing algorithms and misinformation, all channeled through sleek, submissive digital assistants with increasingly human-like voice delivery. These are not small challenges. Yet voice assistant technology is here to stay, and its future influence, for better or for worse, rests upon the shoulders of advocates, educators, librarians, information professionals, and technologists. We are more creative and intelligent collectively than individually; together we can create meaningful change that disrupts bias and inequality, safeguards privacy and security, and builds a more ethical, transparent, and inclusive AI ecosystem for generations to come.
Notes
- Clifford Nass and Scott Brave, Wired for Speech: How Voice Activates and Advances the Human-Computer Relationship (Cambridge, MA: MIT Press, 2005), 17.
- Casey A. Klofstad, Rindy C. Anderson, and Susan Peters, “Sounds like a Winner: Voice Pitch Influences Perception of Leadership Capacity in Both Men and Women,” Proceedings of the Royal Society B 279, no. 1738 (2012): 2698–2704, https://doi.org/10.1098/rspb.2012.0311.
- Laura Sydell, “The Push for a Gender-Neutral Siri,” NPR, July 9, 2018, https://www.npr.org/2018/07/09/627266501/the-push-for-a-gender-neutral-siri.
- Nicole Hennig, Siri, Alexa, and Other Digital Assistants: The Librarian’s Quick Guide (Santa Barbara, CA: ABC-CLIO, LLC, 2018), 47–53.
- Chandra Steele, “The Real Reason Voice Assistants Are Female (and Why It Matters),” PC Magazine, January 4, 2018, https://www.pcmag.com/opinions/the-real-reason-voice-assistants-are-female-and-why-it-matters.
- Megan Garber, “Why We Prefer Masculine Voices (Even in Women),” Atlantic, December 18, 2012, https://www.theatlantic.com/sexes/archive/2012/12/why-we-prefer-masculine-voices-even-in-women/266350.
- “Popularity of Name,” Social Security Administration, accessed January 19, 2020, https://www.ssa.gov/cgi-bin/babyname.cgi.
- Leah Fessler, “Amazon’s Alexa Is Now a Feminist, and She’s Sorry if that Upsets You,” Quartz, January 17, 2018, https://work.qz.com/work/1180607/amazons-alexa-is-now-a-feminist-and-shes-sorry-if-that-upsets-you.
- Janko Roettgers, “How Google Found Its Voice,” Variety, September 19, 2019, https://variety.com/2019/digital/features/google-assistant-name-personality-voice-technology-design-1203340223.
- Jacob Kastrenakes, “Google Assistant Gets a Second Voice Option in Nine Countries,” Verge, September 18, 2019, https://www.theverge.com/2019/9/18/20870939/google-assistant-new-voices-nine-countries-languages.
- Samantha Schmidt, “Merriam-Webster Adds Non-binary Pronoun ‘They’ to Dictionary,” Washington Post, September 17, 2019, https://www.washingtonpost.com/dc-md-va/2019/09/17/merriam-webster-adds-non-binary-prounoun-they-dictionary.
- Karissa Bell, “Hey, Siri: How’d You and Every Other Digital Assistant Get Its Name?” Mashable, January 12, 2017, https://mashable.com/2017/01/12/how-alexa-siri-got-names.
- “Change Siri Voice or Language,” Apple Support, last modified May 4, 2019, https://support.apple.com/en-us/HT208316.
- Bell, “Hey, Siri.”
- Roger Cheng, “How Microsoft’s Cortana Came by Its Human Touch,” CNET, July 21, 2014, https://www.cnet.com/news/how-microsoft-cortana-came-by-its-human-touch.
- Khari Johnson, “Microsoft’s Cortana Gets Meeting Scheduler, Male Voice, and Voice Email Briefings,” Venture Beat, November 4, 2019, https://venturebeat.com/2019/11/04/microsofts-cortana-gets-meeting-scheduler-male-voice-and-voice-email-briefings.
- James Vlahos, Talk to Me: How Voice Computing Will Transform the Way We Live, Work, and Think (Boston: Houghton Mifflin Harcourt, 2019), 117–18.
- Judith Shulevitz, “Alexa, Should We Trust You?,” Atlantic, November 2018, https://www.theatlantic.com/magazine/archive/2018/11/alexa-how-will-you-change-us/570844.
- Mark West, Rebecca Kraut, and Han Ei Chew, “I’d Blush if I Could: Closing Gender Divides in Digital Skills through Education” (EQUALS and UNESCO, 2019), 106.
- West, Kraut, and Chew, I’d Blush if I Could, 5.
- Ximena N. Larkin, “‘Okay, Google, Play “Dura”’: Voice Assistants Still Can’t Understand Bilingual Users,” New York Magazine, August 10, 2018, http://nymag.com/intelligencer/2018/08/why-are-google-siri-and-alexa-so-bad-at-understanding-bilingual-accents-voice-assistants.html.
- Drew Harwell, “The Accent Gap,” Washington Post, July 19, 2018, https://www.washingtonpost.com/graphics/2018/business/alexa-does-not-understand-your-accent/.
- Adi Robertson, “‘Us’ Voice Assistant Scene Plays Off a Real 911 Problem for Smart Speakers,” Verge, March 26, 2019, https://www.theverge.com/2019/3/26/18281387/us-2019-movie-jordan-peele-voice-assistant-ophelia-911.
- Rachel Tatman, quoted in Harwell, “Accent Gap.”
- Harwell, “Accent Gap.”
- Alyson Klein, “Alexa, Tell Us What You Think of Voice-Activated Learning in K-12,” Digital Education (blog), Education Week, last modified June 21, 2019, http://blogs.edweek.org/edweek/DigitalEducation/2019/06/alexa-google-smart-speakers-classrooms.html.
- Nass and Brave, Wired for Speech, 30.
- Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: New York University Press, 2018).
- Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (New York: Picador, 2017), 3. Also see Miriam Vogel, “A Proposed HUD Rule on AI Could Allow for Housing Discrimination,” Axios, September 4, 2019, https://www.axios.com/proposed-hud-rule-on-ai-could-allow-for-housing-discrimination-3d778ea7-8230-4f21-935b-b6b7f384a310.html.
- Safiya Umoja Noble, in “Is the Future More Black Panther or Black Mirror?” interview by Sasheer Zamata, Full Frontal with Samantha Bee, TBS, March 20, 2019, YouTube video, 6:26, https://www.youtube.com/watch?v=AxpWvMrPqVs.
- David Lankes, quoted in Lee Rainie and Janna Anderson, “Theme 7: The Need Grows for Algorithmic Literacy, Transparency and Oversight,” Pew Research Center, February 8, 2017, https://www.pewresearch.org/internet/2017/02/08/theme-7-the-need-grows-for-algorithmic-literacy-transparency-and-oversight.
- Noble, Algorithms of Oppression, 34.
- Ali Montag, “This University Is Putting Amazon Echo Speakers in Every Dorm Room,” CNBC, August 21, 2018, https://www.cnbc.com/2018/08/21/this-university-is-putting-amazon-echo-speakers-in-every-dorm-room.html.
- Samuel Gibbs, “Google Nest Hub Max Review: Bigger, Better and Smarter Display,” Guardian, November 6, 2019, https://www.theguardian.com/technology/2019/nov/06/google-nest-hub-max-review-display-camera-facial-recognition.
- Eubanks, Automating Inequality, 178.
- Hennig, Siri, Alexa, and Other Digital Assistants, 35.
- Chavie Lieber, “Amazon’s Alexa Might Be a Key Witness in a Murder Case,” Vox, November 12, 2018, https://www.vox.com/the-goods/2018/11/12/18089090/amazon-echo-alexa-smart-speaker-privacy-data.
- “Regulating the Internet Giants: The World’s Most Valuable Resource is No Longer Oil, But Data,” Economist, May 6, 2017, https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data.
- Matthew Rosenberg, Nicholas Confessore, and Carole Cadwalladr, “How Trump Consultants Exploited the Facebook Data of Millions,” New York Times, March 17, 2018, https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html.
- Ben Kenigsberg, “‘The Great Hack’ Review: How Your Data Became a Commodity,” New York Times, July 23, 2019, https://www.nytimes.com/2019/07/23/movies/the-great-hack-review.html.
- “Interpretations of the Library Bill of Rights,” American Library Association, accessed January 20, 2020, www.ala.org/advocacy/intfreedom/librarybill/interpretations.
- “Privacy: An Interpretation of the Library Bill of Rights,” American Library Association, accessed January 20, 2020, www.ala.org/advocacy/intfreedom/librarybill/interpretations/privacy.
- Allen St. John and Thomas Germain, “How to Set Up a Smart Speaker for Privacy,” Consumer Reports, last modified August 31, 2019, https://www.consumerreports.org/privacy/smart-speaker-privacy-settings.
- Hennig, Siri, Alexa, and Other Digital Assistants, 44–45.
- Jay Stanley, “The Privacy Threat from Always-On Microphones like the Amazon Echo,” Free Future (blog), American Civil Liberties Union, January 13, 2017, https://www.aclu.org/blog/privacy-technology/privacy-threat-always-microphones-amazon-echo.
- Stacey Gray, Always On: Privacy Implications of Microphone-Enabled Devices (Washington, DC: Future of Privacy Forum, April 2016), https://fpf.org/wp-content/uploads/2016/04/FPF_Always_On_WP.pdf.
- Dan Goodin, “Alexa and Google Home Abused to Eavesdrop and Phish Passwords,” Ars Technica, October 20, 2019, https://arstechnica.com/information-technology/2019/10/alexa-and-google-home-abused-to-eavesdrop-and-phish-passwords.
- “FAQs on Photos and Videos under FERPA,” Protecting Student Privacy, US Department of Education, accessed January 21, 2020, https://studentprivacy.ed.gov/faq/faqs-photos-and-videos-under-ferpa.
- “FTC Provides Additional Guidance on COPPA and Voice Recordings,” news release, Federal Trade Commission, October 23, 2017, https://www.ftc.gov/news-events/press-releases/2017/10/ftc-provides-additional-guidance-coppa-voice-recordings.
- Erin Wilkey Oh, “What Teachers Need to Know about Using Smart Speakers in the Classroom,” Common Sense Media, November 11, 2019, https://www.commonsense.org/education/articles/what-teachers-need-to-know-about-using-smart-speakers-in-the-classroom.
- Eubanks, Automating Inequality, 178.
- Dahlia Mortada, “Meet Q, the Gender-Neutral Voice Assistant,” NPR, March 21, 2019, https://www.npr.org/2019/03/21/705395100/meet-q-the-gender-neutral-voice-assistant.
- “Our Mission Is to Identify and Eliminate Bias in AI,” EqualAI, accessed January 20, 2020, https://www.equalai.org/mission.
- “About IEEI,” Information Ethics and Equity Institute, accessed January 20, 2020, https://ethicsequity.org.
- West, Kraut, and Chew, I’d Blush if I Could, 107.
- West, Kraut, and Chew, I’d Blush if I Could, 104.
- West, Kraut, and Chew, I’d Blush if I Could.
- West, Kraut, and Chew, I’d Blush if I Could, 15–24.
- Noble, Algorithms of Oppression, 126.
- Alex Heath, “Facebook Removed Unite the Right Charlottesville Rally Event Page One Day before It Took Place,” Business Insider, August 14, 2017, https://www.businessinsider.com/facebook-removed-unite-the-right-charlottesville-rally-event-page-one-day-before-2017-8.
- Aja Romano, “Racist Trolls Are Saying Black Panther Fans Attacked Them. They’re Lying,” Vox, February 16, 2018, https://www.vox.com/culture/2018/2/16/17020230/black-panther-movie-theater-attacks-fake-trolls.
- Safiya Umoja Noble, “How Search Engines Amplify Hate—in Parkland and Beyond,” TIME, March 9, 2018, https://time.com/5193937/nikolas-cruz-dylann-roof-online-white-supremacy.
- Carole Cadwalladr, “Google Is Not ‘Just’ a Platform. It Frames, Shapes and Distorts How We See the World,” Guardian, December 11, 2016, https://www.theguardian.com/commentisfree/2016/dec/11/google-frames-shapes-and-distorts-how-we-see-world.
- Safiya Umoja Noble, “Racial and Sexual Bias in Digital Media” (keynote address, Special Libraries Association Annual 2019 Conference, Cleveland, OH, June 18, 2019).
- West, Kraut, and Chew, I’d Blush if I Could, 127–30.
- Chris Bourg, “Libraries in a Computational Age,” Feral Librarian (blog), July 3, 2019, https://chrisbourg.wordpress.com/2019/07/03/libraries-in-a-computational-age.
- Hunter Walk, “Amazon Echo Is Magical. It’s Also Turning My Kid into an Asshole,” Hunter Walk (blog), April 6, 2016, https://hunterwalk.com/2016/04/06/amazon-echo-is-magical-its-also-turning-my-kid-into-an-asshole.
- Kate G. Niederhoffer and James W. Pennebaker, “Linguistic Style Matching in Social Interaction,” Journal of Language and Social Psychology 21, no. 4 (December 2002): 337–60, https://www.ffri.hr/~ibrdar/komunikacija/seminari/Niederhoffer,%202002%20-%20Linguistic%20style%20matching.pdf.
- West, Kraut, and Chew, I’d Blush if I Could, 105.
- West, Kraut, and Chew, I’d Blush if I Could, 105.