Artificial intelligence (AI) has been hailed for its seemingly endless applications across virtually every human field: science, education, environment, agriculture, healthcare, religion, fashion, arts, music, politics, government, business, and even relationships. Yet its sinister side, as a tool for crime and manipulation, remains a terrifying reality. For good or for ill, the development of AI voice cloning has been revolutionary, opening up possibilities in daily life that were previously unimaginable.
The two faces of deepfakes
Two cases involving voice cloning, connected to two AI technologies, played out in the US last week and showed both the good and the bad sides of artificial intelligence. The events also underlined the reality that authorities still struggle to provide adequate rules to safeguard individuals, and that the line separating what is real from what is fake keeps getting blurrier. Deepfakes are not new, to be sure. In 2017, for example, a Reddit user uploaded doctored videos placing celebrities in compromising positions. But thanks to advances in AI that can create shockingly lifelike images and audio, easy accessibility, and the absence of strong laws, deepfakes have lately been on an alarming rise.
On Thursday, 25 July, Virginia Congresswoman Jennifer Wexton addressed the US House of Representatives using artificial intelligence. Wexton suffers from progressive supranuclear palsy, an illness that impairs her speech. But the congresswoman regained "...the ability to speak as [she] once did," thanks to an AI voice-cloning tool created by ElevenLabs. ElevenLabs says it aims to use the technology to give everyone a voice. A day later, on 26 July, Elon Musk posted a deepfake video of Vice President Kamala Harris on his X account without indicating, as X's own rules advise, that it was AI-generated.
The advertisement, a supposed parody mocking Harris and President Joe Biden, replicated Kamala Harris's voice using AI voice-cloning technology. Musk only acknowledged that it was a deepfake two days later, on Sunday, 28 July. In the interval, Friday through Sunday, millions of people saw the video, and many of them may have believed it was real. Musk is a well-known supporter of Donald Trump, so releasing such a video without the necessary disclaimer is manipulative and was most likely meant to inflame Americans who support Biden. With the American general elections about 90 days away, in which Harris will probably face Trump for the presidency, the post had an unmistakable political undercurrent.
Growing use of artificial intelligence voice cloning for crime
Criminals are increasingly using AI voice cloning to con unsuspecting victims. Adeola Fayehun, a popular US-based Nigerian journalist and content producer, recounted how she was nearly duped twice this year through AI voice cloning. First, someone used a deepfake of her uncle's voice to attempt to defraud her; another time, scammers used a clone of her own voice to try to con a family member.
Early this year, fraudsters used footage of Aliko Dangote's appearance on a Channels TV program to advertise a fake investment scheme. An unsuspecting viewer watching a video in which Dangote appears to discuss the scheme on a reputable platform like Channels TV would believe it right away. Fortunately, Channels TV caught the deepfake early and released the original video.
Fraudsters are becoming more imaginative and aggressive, deploying deepfake AI voice cloning for investment scams, cyberbullying, phone fraud, and political smear campaigns against well-known personalities. The 2023 general elections in Nigeria, for instance, were defined by the widespread deployment of deepfakes by politicians and their supporters to gain an advantage over opponents.
Reasons behind the increase
One reason AI voice cloning is becoming more popular for criminal activity is accessibility. Excellent AI tools are freely available online. Moreover, the tools are so simple that anyone with a rudimentary understanding of AI can create a convincing deepfake. The underlying manipulation technique is the Generative Adversarial Network (GAN); once a sample of a voice is fed in, the AI can clone it within minutes.
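To make the GAN idea concrete, here is a minimal, hypothetical PyTorch sketch of the adversarial setup: a generator learns to produce audio-like samples that a discriminator cannot distinguish from real recordings. Real voice-cloning systems are far more sophisticated (speaker encoders, neural vocoders, large training corpora), and the shapes, sizes, and training details below are illustrative assumptions, not any actual product's implementation.

```python
# Conceptual GAN sketch: the adversarial training loop behind deepfake audio.
# All dimensions and data are placeholders for illustration only.
import torch
import torch.nn as nn

SAMPLE_LEN = 1024   # assumed length of an audio snippet (e.g., a spectrogram slice)
NOISE_DIM = 100     # assumed latent-noise dimension

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 512), nn.ReLU(),
    nn.Linear(512, SAMPLE_LEN), nn.Tanh(),      # fake "audio" scaled to [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(SAMPLE_LEN, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),            # probability the input is real
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

# Stand-in for a batch of real voice samples from the target speaker.
real_batch = torch.rand(16, SAMPLE_LEN) * 2 - 1

for step in range(100):
    # 1) Train the discriminator to separate real samples from generated fakes.
    fake_batch = generator(torch.randn(16, NOISE_DIM)).detach()
    d_loss = loss_fn(discriminator(real_batch), torch.ones(16, 1)) + \
             loss_fn(discriminator(fake_batch), torch.zeros(16, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator into scoring fakes as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(16, NOISE_DIM))),
                     torch.ones(16, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The two networks improve in lockstep: as the discriminator gets better at spotting fakes, the generator is pushed to produce ever more convincing ones, which is exactly why the resulting clones sound so lifelike.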
All of us are at risk
Whatever your social standing, everyone is at risk from these deepfakes. An envious or aggrieved coworker, for example, could use AI voice cloning to make you appear to say unprintable things about your superiors, thereby undermining you. Or picture a jealous partner using AI to clone their partner's voice uttering nasty words about someone they hold in high regard. Someone could even clone the voice of your father, children, brother, or sister to fool you. The scariest aspect of these manipulated videos and audio clips is that the scammers are usually quite thorough, which lends them credibility. They research the particular circumstances of their target and exploit them.
Imagine your son is away at school and you receive a deepfake call in exactly his voice saying he has just run into trouble; you would most likely slip into panic right away and drop whatever you are doing. In Adeola Fayehun's case, the con artists knew her uncle ran a WhatsApp group of relatives and friends where he routinely shared links to important information or virtual meetings. Riding on that, they used her uncle's cloned voice to get her to click on a dangerous link. Companies are victims as well as offenders, with some running potentially misleading deepfake advertising. Predictably, deepfakes are also increasingly used in warfare: Russia recently cloned a Ukrainian TV broadcast and used it to spread disinformation.
What legislation guards you against deepfakes?
Nigeria has no strong legislation yet against the unethical use of deepfakes, so legal recourse is limited for an aggrieved party. Still, there are provisions that might be piggybacked on in the Cybercrime Act of 2015 and the Nigeria Data Protection Act of 2023. The lack of adequate legal defense against deepfake attacks is not unique to Nigeria, though. The US, the UK, and other wealthy countries are still working to craft comprehensive policies against deepfakes, particularly in politics.
Among the measures being considered are laws requiring watermarking, disclaimers, and source attribution for digital content. Social media platforms have also sought to shield users by building some defenses against deceptive deepfakes. X's policy, for instance, states: "You may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm." But Musk's recent posting of the Kamala Harris video amply illustrates how easily such safeguards can be ignored.
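To see what the watermarking and source-attribution proposals could look like in practice, here is a minimal, hypothetical Python sketch of a provenance label: a disclosure flag plus a cryptographic fingerprint attached to a media file, so anyone can later check whether the file was declared AI-generated and whether it has been altered. The field names and scheme are assumptions for illustration, not any actual standard; real-world efforts such as C2PA are far more elaborate.

```python
# Illustrative provenance-label scheme: disclosure plus tamper-evident hash.
import hashlib
import json

def label_content(media_bytes: bytes, source: str, ai_generated: bool) -> dict:
    """Build a provenance record for a piece of digital content."""
    return {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # fingerprint of the file
        "source": source,                                   # who published it
        "ai_generated": ai_generated,                       # the mandated disclosure
    }

def verify_content(media_bytes: bytes, record: dict) -> bool:
    """Check that the file still matches the fingerprint in its record."""
    return hashlib.sha256(media_bytes).hexdigest() == record["sha256"]

# Hypothetical usage: a campaign labels its AI-generated video at publication.
video = b"...raw bytes of a campaign video..."
record = label_content(video, source="example-campaign.org", ai_generated=True)
print(json.dumps(record, indent=2))
print("Unaltered:", verify_content(video, record))
```

The point of such schemes is that the disclosure travels with the content: a platform or viewer who strips or edits the file breaks the fingerprint, making the tampering detectable.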
How can one guard against deepfakes?
There is not much one can do to protect oneself completely, but several steps can reduce your vulnerability to attack. Be very careful about what you share online concerning yourself or your family. Turn down friend requests from strangers or people you barely know. Steer clear of dubious-looking websites and refrain from sharing personal information on obscure ones. Agree on a secret code or phrase, known only to you and your family, to use whenever you call: a unique identifier among members of your family. And if in doubt, phone the person directly to confirm.
As AI voice cloning develops, dishonest people will keep using it to deceive or manipulate unsuspecting targets. Sadly, legislation may offer victims little consolation. As consumers of digital content, we are the first gatekeepers; it is our responsibility to examine and verify the validity and source of material before we accept it. We must also behave ethically in the digital environment.