Rampant misuse of AI voice generation is stirring fear


With the continuous advancement of technology, misuse of Artificial Intelligence (AI) voice generation is already ringing alarm bells across multiple industries, as the rapidly developing technology fuels a growing number of cases of impersonation, political deepfakes and even security breaches. Despite such growing concerns, a large number of countries, including Bangladesh, do not currently have a specific law regulating artificial intelligence or the rampant misuse of deepfakes.

Deepfake tools can clone anyone's voice, which can in some cases be devastating, particularly for politicians and other famous people. Five years have passed since the now-infamous PSA clip showing a deepfake of former US President Barack Obama warning of the dangers of misinformation enabled by burgeoning artificial intelligence technologies. Since then, AI technology has vastly improved at producing fraudulent images, voice and video content, and it is widely accessible to anybody with a computer and a modest budget.

In 2023, AI voice generation has seen widespread adoption, with the technology being used rampantly to create synthesized voices that sound exactly like natural human speech.

According to Nishan Mills, principal architect at the Centre for Data Analytics and Cognition, La Trobe University, “Voice synthesis is not new – think Wavenet and most recently Vall-E, Deep Voice – but what has changed is the access to the technology and the ease of use”.

Ms. Mills said, “We see more widespread applications by lay users”.

One of the biggest social media trends of April 2023, particularly on TikTok, has been AI-generated clips of prominent politicians, including US President Joe Biden and former US President Donald Trump, making uncharacteristic announcements about video games and pop culture.

According to experts, the advent of public-facing AI tools has given way to countless mock clips of public figures in dubious circumstances – whether it is an AI-generated Biden signing an executive order on the brilliance of Minecraft or Pope Francis sporting a fashionable Balenciaga jacket. Although such creations have so far been a source of amusement, the technology has already been adopted for far more despicable uses.

In March 2023, AI-generated images fooled countless Twitter users into thinking former US President Donald Trump had been arrested, and conservative commentator Jack Posobiec circulated a fairly believable fake video of Joe Biden announcing the return of the US military draft in preparation for war.

While urging tech companies to “proceed responsibly”, US President Joe Biden, in a meeting with science and technology advisers, said it remains to be seen whether artificial intelligence is dangerous.

President Biden said, “Tech companies have a responsibility, in my view, to make sure their products are safe before making them public”.

The US president also said social media has already illustrated the harm that powerful technologies can do without the right safeguards.

Australian journalist Nick Evershed said in March that he was able to access his own Centrelink self-service account using an AI-generated version of his voice, exposing a serious security flaw in the voice identification system. This should be a matter of serious concern for financial institutions as well as high-security areas in government offices, since AI can generate fake voices, fake images and even fake biometric data.

Amidst growing concerns over AI’s threat to voice-authentication systems, Evershed’s investigation suggested that a clone of his own voice, combined with his customer reference number, was enough to gain access to his Centrelink self-service account.

Despite criticism, artificial intelligence will continue to improve over time. Politicians, financial institutions and high-security government operations, including intelligence agencies, therefore need to adopt effective methods of detecting fake voices, videos, photographs and biometric data in order to prevent security breaches as well as embarrassment to politicians and other prominent individuals.
