
Amazon wants to make Alexa sound like your dead relatives

The idea that someone could grab a one-minute snippet of my voice and permanently turn me into a digital assistant is horrifying.

[Image: Amazon Echo on a shelf with Alexa — KnowTechie]


Amazon is developing technology that would allow Alexa to mimic the voice of anyone it hears, based on a one-minute recording.

The company announced the feature on Wednesday at its Re:MARS conference, currently taking place in Las Vegas. Re:MARS is a showcase for Amazon’s AI work; MARS stands for Machine Learning, Automation, Robotics, and Space.

Rohit Prasad, Amazon’s head scientist for Alexa, said the feature could be used to replicate the voices of deceased relatives. In one demonstration, the synthesized voice of an older woman reads a bedtime story to what is presumably her grandson. Watch it below.

Computers have long enjoyed the ability to mimic human voices. In fact, the technology is well established and increasingly commoditized.

In addition to commercial tools like Resemble AI and Lyrebird, you can find several free, open-source packages offering the same functionality, many of them built on neural text-to-speech models such as Tacotron.

This Alexa update builds on that existing work, but it dramatically lowers the barrier to entry, making it possible for anyone to create a faithful rendition of a loved one’s voice. And it’s not without its ethical questions.
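To give a sense of how commoditized voice cloning already is, here is a minimal sketch using the open-source Coqui TTS package and its YourTTS voice-cloning model. This is purely illustrative: the file names and sample text are placeholders, and it has nothing to do with Amazon’s own implementation.

```python
# Rough sketch of one-shot voice cloning with the open-source Coqui TTS library
# (https://github.com/coqui-ai/TTS). The reference clip, output path, and text
# below are placeholders for illustration only.
from TTS.api import TTS

# Load the multilingual YourTTS model, which supports zero-shot voice cloning
# from a short reference recording.
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False)

# Generate speech in the voice captured in the reference clip (roughly a minute of audio).
tts.tts_to_file(
    text="Once upon a time, in a land far away...",
    speaker_wav="grandma_sample.wav",  # reference recording to clone
    language="en",
    file_path="bedtime_story.wav",
)
```

A short reference clip and a handful of lines of Python are all it takes, which is exactly why the questions below deserve attention.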

The Thorny Ethical Questions

First, there’s the thorny issue of consent. I’m not discounting the possibility that people will gain a sense of comfort from being able to hear their loved ones’ voices. But would you want to be turned into a voice assistant after you die?

It feels almost like a discarded plot line from Black Mirror. You die and suddenly you’re encased in a small plastic sphere, dutifully performing any task barked at you.

Dead people can’t consent. Users have no way of knowing whether this feature goes against the wishes of their relatives. Additionally, how will Amazon determine whether a voice belongs to a dead person or a living person?

Again, the idea that someone could grab a one-minute snippet of my voice and permanently turn me into a subservient digital assistant is horrifying.

Beyond that, there are other, more serious (and less philosophical) concerns.

The Problem of Vishing

https://youtu.be/Wgc8EEKtpK4

As mentioned, the ability to make a computer sound like a person is nothing new. This Alexa update would simply lower the barrier to entry. With this in mind, it’s not hard to see how a malicious third party could weaponize this.

I’m talking, of course, about “vishing,” short for “voice phishing.” It’s a relatively new take on a scam most of us have witnessed, if not fallen victim to directly.

The premise is simple: the scammer mimics a trusted person’s voice to convince their target to do something, like transfer a sum of money to an offshore bank account or hand over their login credentials.

This approach can be devastating. In 2019, for example, scammers tricked a UK energy firm into transferring €220,000 (around $243,000) to a foreign bank account under their control after successfully impersonating an executive at the firm’s German parent company.

Speaking to the Washington Post, the company’s insurer described the terrifying accuracy of the deepfake used. “The software was able to imitate the voice, and not only the voice: the tonality, the punctuation, the German accent,” they said.

It’s reasonable to worry that, with this feature, Amazon is opening Pandora’s box with serious ethical and security ramifications. Without any guardrails, the consequences could prove dire.

Have any thoughts on this? Let us know down below in the comments or carry the discussion over to our Twitter or Facebook.



Matthew Hughes is a journalist from Liverpool, England. His interests include security, startups, food, and storytelling. Past work can be found on The Register, Reason, The Next Web, and Wired.
