
Researchers warn of growing voice AI vulnerabilities

ANI
Published : May 11, 2018, 3:42 pm IST
Updated : May 11, 2018, 4:09 pm IST

Researchers warn that the rise of voice AI has brought with it a rise in security vulnerabilities.

The latest research claims attackers could exploit these vulnerabilities with inaudible commands to send messages, make purchases and more, all without the user realizing it. (Photo: Pixabay)

Tech giants across the globe are working relentlessly on voice-controlled AI systems to make everyday life easier. However, researchers warn that as voice AI proliferates, so do its vulnerabilities.

The latest research claims that attackers could exploit these vulnerabilities by issuing commands at frequencies beyond the range of human hearing, triggering actions such as sending messages or making purchases without the user ever realizing it.

UC Berkeley researchers Nicholas Carlini and David Wagner claim they were able to fool Mozilla's open-source DeepSpeech speech-to-text engine by hiding a secret, inaudible command inside audio of a completely different phrase, CNET reported.

The researchers also claim they were able to hide the rogue command within brief snippets of music, with the same result.
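How can a command hide inside unrelated audio? At a high level, such attacks solve an optimization problem: nudge the waveform by an amount too small for a listener to notice until the transcription engine outputs the attacker's phrase. The sketch below illustrates only that general shape; the stand-in model, loss function, learning rate and perturbation bound are assumptions for illustration, not the researchers' actual setup (which targeted DeepSpeech using a CTC loss over full phrases).

```python
# Illustrative sketch of a targeted audio adversarial example, in the spirit of
# the Carlini & Wagner work. The model is a tiny random stand-in, NOT DeepSpeech;
# the loss, learning rate and epsilon are assumptions chosen for readability.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "speech-to-text" model: maps a 1-second waveform to character logits.
model = nn.Sequential(nn.Linear(16000, 256), nn.ReLU(), nn.Linear(256, 29))

original_audio = torch.randn(1, 16000)     # placeholder for 1 s of 16 kHz audio
target_label = torch.tensor([7])           # the attacker's hidden target, as a class id
loss_fn = nn.CrossEntropyLoss()            # real attacks use CTC loss over whole phrases

delta = torch.zeros_like(original_audio, requires_grad=True)
optimizer = torch.optim.Adam([delta], lr=1e-3)
epsilon = 0.01                             # keep the perturbation quiet enough to go unnoticed

for step in range(500):
    optimizer.zero_grad()
    logits = model(original_audio + delta)
    loss = loss_fn(logits, target_label)   # push the model toward the attacker's phrase
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        delta.clamp_(-epsilon, epsilon)    # bound the tweak so humans hear the original phrase

adversarial_audio = (original_audio + delta).detach()
```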

This is not the first instance of voice AI proving vulnerable. Last year, researchers in China used inaudible, ultrasonic transmissions to trigger popular voice assistants such as Siri, Alexa, Cortana, and Google Assistant. The technique, dubbed 'DolphinAttack', requires the attacker to be within a short distance of the phone or smart speaker.

However, more recent studies claim the attack can be amplified and carried out from as far as 25 feet away.
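The core signal-processing trick behind a DolphinAttack-style transmission is amplitude modulation: a voice command is shifted onto an ultrasonic carrier, so the broadcast contains no audible energy, yet the nonlinear response of the target device's microphone demodulates the command back into the audible band. The snippet below is a rough illustration of that modulation step only; the carrier frequency, sample rate and the synthetic "command" tone are assumptions, and a real attack additionally requires ultrasonic-capable transmission hardware.

```python
# Rough sketch of ultrasonic amplitude modulation, as used in DolphinAttack-style
# attacks. Values here are illustrative assumptions, not the researchers' parameters.
import numpy as np

sample_rate = 192_000                      # high enough to represent an ultrasonic carrier
duration = 1.0
t = np.arange(0, duration, 1 / sample_rate)

# Placeholder for a recorded voice command ("OK Google ..."): here just a 1 kHz tone.
command = np.sin(2 * np.pi * 1_000 * t)

carrier_freq = 25_000                      # above ~20 kHz, so inaudible to humans
carrier = np.sin(2 * np.pi * carrier_freq * t)

# Standard AM: the audible command rides on the ultrasonic carrier, so the
# emitted signal has no energy in the audible band. A microphone's nonlinear
# response can recover the low-frequency command after demodulation.
modulation_depth = 0.8
ultrasonic_signal = (1 + modulation_depth * command) * carrier
```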

Tags: artificial intelligence, vulnerability