AI capable of stealing passwords by analyzing keyboard sounds


Be careful when entering passwords within earshot of others: cybercriminals can deduce which keys are being pressed by feeding recordings of typing sounds to a trained AI model. The technique, known as an acoustic side-channel attack (ASCA), even works over phone calls and virtual meetings on platforms like Zoom.

A recent publication details the technique, which involves recording the audio of someone typing using a nearby smartphone or a remote conference session. Each key produces a subtly distinct sound; the differences are imperceptible to the human ear, but a trained AI model can detect them once the audio is captured and analyzed.

Researchers performed tests using a standard iPhone 13 to capture the sound of typing on an Apple MacBook Pro 16-inch laptop keyboard at the standard 44.1 kHz sampling rate. The captured audio was then converted into visual mel-spectrograms, which were fed into a deep learning classifier that matched them against training images representing the sounds of known keypresses.
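To make the pipeline concrete, here is a minimal, self-contained sketch of the idea: convert each keypress recording into a spectrogram, then classify an unknown sound by matching it against labeled templates. Everything here is illustrative, not the paper's code: the keypress sounds are synthetic decaying tones, a plain short-time FFT stands in for the mel-spectrograms, and a nearest-template match stands in for the deep learning classifier.

```python
import numpy as np

SR = 44_100  # sampling rate used in the study

def spectrogram(signal, frame=1024, hop=256):
    """Magnitude spectrogram via a short-time FFT (simplified stand-in
    for the mel-spectrograms used in the paper)."""
    frames = [signal[i:i + frame] * np.hanning(frame)
              for i in range(0, len(signal) - frame, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

def fake_keypress(freq, n=4096, seed=0):
    """Hypothetical keypress: a decaying tone at a key-specific frequency
    plus a little noise (real keys differ subtly in the same way)."""
    rng = np.random.default_rng(seed)
    t = np.arange(n) / SR
    return np.sin(2 * np.pi * freq * t) * np.exp(-t * 200) \
        + 0.01 * rng.standard_normal(n)

# Training data: one labeled spectrogram template per key.
keys = {"a": 900.0, "s": 1100.0, "d": 1300.0}
templates = {k: spectrogram(fake_keypress(f)) for k, f in keys.items()}

def classify(sound):
    """Return the key whose template spectrogram is closest
    (stand-in for the trained classifier)."""
    spec = spectrogram(sound)
    return min(templates, key=lambda k: np.linalg.norm(spec - templates[k]))

# A noisy re-recording of the "s" keypress is matched back to "s".
print(classify(fake_keypress(1100.0, seed=1)))
```

The real attack replaces the synthetic tones with recorded keystrokes and the nearest-template match with a deep network, but the structure (audio in, spectrogram features, label out) is the same.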

Remarkably, the AI-driven technique achieved a 95% accuracy rate in identifying pressed keys when the typing sounds were captured using a smartphone. When the typing sounds were recorded via Zoom’s built-in recording option during online meetings, the accuracy rate was 93%. This indicates that passwords and sensitive data typed by non-muted participants during virtual meetings could potentially be intercepted.

Notably, the method required no access to the victim's surroundings and no compromise of their devices or connections. Laptops are more vulnerable to ASCAs than desktops, since they are carried into public environments where keyboard sounds can easily be overheard.

The researchers highlighted the need for vigilance against ASCAs. Possible defenses include touch typing, and using mixed-case passwords to confuse the model by masking the sound of Shift key presses. Playing background sounds, or using software that mixes white noise and fake keystrokes into transmitted audio, can also help.
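The noise-mixing defense can be sketched in a few lines: before an audio buffer leaves the machine, add white noise and, optionally, decoy keystroke sounds at random offsets so a listening model cannot isolate the real presses. The helper below is a hypothetical illustration of that idea, not a function from the paper or any real conferencing software.

```python
import numpy as np

def mask_keystrokes(audio, noise_level=0.05, fake_presses=None, seed=0):
    """Hypothetical ASCA countermeasure: return a copy of an outgoing
    audio buffer with white noise added, plus optional pre-recorded
    fake keystroke sounds spliced in at random positions."""
    rng = np.random.default_rng(seed)
    masked = audio + noise_level * rng.standard_normal(len(audio))
    for press in (fake_presses or []):
        # Overlay each decoy keystroke at a random offset.
        start = rng.integers(0, max(1, len(audio) - len(press)))
        masked[start:start + len(press)] += press
    return masked

# Usage: mask one second of outgoing audio with noise and two decoys.
outgoing = np.zeros(44_100)
decoy = np.hanning(2_000) * 0.3  # stand-in for a recorded fake keystroke
protected = mask_keystrokes(outgoing, fake_presses=[decoy, decoy])
```

The trade-off is audio quality: the noise level has to be high enough to drown out the acoustic signature of real keypresses without making speech unintelligible, which is why the researchers suggest injecting fake keystrokes rather than noise alone.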

As microphones are increasingly embedded in devices like smartphones, smartwatches, laptops, webcams, and smart speakers, the threat posed by ASCAs is growing, prompting researchers to explore both attacks and countermeasures in this area. The rise of powerful microphones and deep learning models is making acoustic attacks on keyboards more feasible than ever before.
