At one time, AI-powered voice assistants from the big Silicon Valley companies looked like the next big thing in consumer tech. In hindsight, however, these assistants have proven to be plagued by a wide range of privacy problems. In some cases, the devices listen to conversations even when the wake word or trigger word has not been used. Moreover, the process of “teaching” a voice assistant to recognize certain words, dialects or accents often relies on a human-powered manual review process. Under growing legal and regulatory pressure, the biggest tech companies in the United States – Amazon, Google and Apple – are now stopping human review of recordings from their voice assistants.

The German privacy case at the center of the debate over voice assistants

Until recently, the big Silicon Valley tech giants only had to deal with embarrassing glitches and small-scale PR disasters involving their AI-powered voice assistants. But a recent regulatory action out of Europe has the potential to fundamentally change the game for voice assistants such as Amazon Alexa, Apple’s Siri and Google Assistant. After more than one thousand voice recordings from the Google Assistant service were leaked to the Belgian news outlet VRT, the Hamburg Data Protection Authority (a prominent German privacy watchdog) went into action. The Hamburg DPA invoked the urgency powers under Article 66 of the General Data Protection Regulation (GDPR) to shut down human review of voice recordings from Google.

The leaked audio snippets appeared to capture a wide range of embarrassing and disturbing incidents – people having sex, people discussing sensitive medical conditions, women in distress as a result of physical violence, and criminals discussing their drug deals. All of these were being listened to and analyzed by Google contractors around the world. Even worse, it was possible to identify people from the audio recordings, including their physical addresses and contact information. Needless to say, if that information fell into the wrong hands, it could be a privacy nightmare. And for the human contractors asked to listen to these clips, one can only imagine the shock and revulsion of hearing, say, a sexual assault in progress captured by a smart speaker, while being powerless to do anything about it.



What is particularly notable about this German privacy case is that it marks the first use of Article 66 powers since the GDPR went into effect in May 2018. To invoke Article 66, a data protection authority must show an urgent need to act to protect the privacy rights of European citizens; if that need can be shown, the authority can order the data processing in question to stop. That is exactly what happened here, thanks to the sensational exposé in the Belgian media. Google will now halt all manual review of voice assistant recordings for a period of at least three months.

The case for human review of voice assistant audio recordings

The big question now, of course, is whether Google Assistant (or any other voice assistant) complies with EU data protection law. The case could be made that any device that uses this technology – whether it is the smart watch on your wrist or the smart speaker in your home – represents a serious violation of the fundamental privacy principles of the GDPR. After all, who consented to having their voice recordings shared with contractors all over the world?

Google, in its defense, says that manual review of voice recordings is very limited, covering only about 0.2% of all audio clips generated by Google Assistant. Moreover, according to Google technologists, it is essential that some voice recordings are made available to help train the company’s AI algorithms. Without an ever-growing supply of different voices, accents and dialects, how is the technology ever going to progress past a certain point? Perhaps the algorithms should not be trained on recordings of domestic violence incidents, but surely there is a case to be made that some voice recordings should be checked, analyzed or monitored by humans?


For now, the Hamburg DPA has only said that there is “significant doubt” that voice assistant technology complies with the GDPR. Moreover, legal experts are now opining that the road has been cleared for future Article 66 challenges. This Google voice assistant case is just the beginning, they warn.

Damage control in Silicon Valley

Perhaps not surprisingly, the big U.S. tech giants at the heart of the debate over voice assistant technology – Amazon, Apple and Google – are now rushing into damage control mode. Google, for its part, says that it had already halted manual review of voice recordings before the Hamburg DPA ordered that all such activity stop immediately. Apple was next, announcing that it was suspending manual review of voice recordings worldwide; from now on, human contractors will not be listening to snippets of audio from Siri, the company’s voice assistant.

However, Apple was careful to defend its use of the technology, saying that users need to accept a compromise between “a great Siri experience” and “user privacy.” Going forward, users will be able to opt out of human review of audio snippets, and Apple admitted that its global grading system for voice recordings was not working as intended and needed adjustment.


Then, following the example of Google and Apple, Amazon grudgingly got into the act as well, curbing its manual review of voice recordings in the name of user privacy. Back in April, there had been a mini-scandal involving contractors listening to Alexa audio recordings, yet Amazon did not act. It was the larger exposé in July, when more than one thousand Google Assistant voice recordings were leaked to the Belgian media, that finally convinced Amazon to move. Amazon is now updating its privacy policy so that users of the Alexa app can opt out of human review of their voice recordings entirely.


The privacy trade-off

More than anything else, this latest brouhaha over AI-powered voice assistants showcases the fundamental tension between user experience and user privacy. The big tech companies will continue to claim that users need to give up some privacy if they want the technology to work accurately and efficiently. But this is where tech consumers need to stand up to the Silicon Valley giants and tell them that they are no longer willing to accept this forced compromise. Users should be able to expect privacy by default, without having to navigate a confusing set of settings to protect themselves from invasive data collection. Human review of voice recordings, in short, should be opt-in rather than opt-out.
