Deepfakes: a new weapon of crass destruction


Alarmingly, deepfake videos online are spiking at an annual rate of 900 per cent, according to the World Economic Forum.

Dubai - Synthetic media content is proving to be a gaping challenge as it exposes how shallow user recognition can be

by Alvin Cabral


Published: Fri 9 Jul 2021, 9:04 PM

Last updated: Fri 9 Jul 2021, 9:07 PM

Whether we like it or not, advancements in innovation present an equal opportunity to the yin and yang of society. Photoshop, for example, made Microsoft Paint look like child’s play, and the same notion applies to how new tech can turn anyone into an impostor.

The UAE’s latest move to crack down on deepfakes is a stark reminder that nobody is safe from this ingenious method of duping people, whose uses range from illicit financial gain to political swaying, and from cyberbullying to plain misinformation.


Deepfakes, or synthetic media, are, in layman’s terms, audio or video content doctored with digitally altered likenesses to make them look genuine. The technique isn’t new, but powerful new tech such as artificial intelligence and machine learning has made such content, more often than not, extremely difficult to detect.

When Photoshop grew to become the holy grail of digital alteration, it also became a playground for fakers. Adobe, the software’s creator, did roll out a metadata programme last year to help identify fakes.


With videos, it gets trickier. The masses, thanks to the influence of social media, can easily be reeled in, unwittingly baited into the no-no of not verifying sources before sharing content. For the corporate world, it gets even dicier: the bigger the stakes, the more sophisticated the weapons bad actors deploy.

More alarmingly, deepfake videos online are spiking at an annual rate of 900 per cent, according to the World Economic Forum.

A Carnegie Endowment for International Peace study showed that in 2019, US residents lost $667 million to fake phone calls impersonating the government or relatives in distress, among other ploys, pressuring victims to transfer funds. Companies, realistically, face billions in damages, not to mention the harm to their brands.

AI, despite its promise of progress, is also playing a huge role in spreading fake content. Its democratisation and increasing ease of use, while well-intentioned, inevitably come with unintended consequences. The sources of these deepfakes can, in principle, be tracked down, but the tech-for-all ideology makes it all the easier to get in on the action.

“Even unskilled operators could download the requisite software tools and, using publicly-available data, create increasingly convincing counterfeit content,” analyst Laurie Harris wrote in a study for the Congressional Research Service, the US Congress’ think-tank.

Organisations and watchdogs themselves must also be equipped to keep pace. Research firm Gartner, in a recent report, predicted that by 2023, 20 per cent of successful account takeover attacks will use deepfakes to “socially engineer users to turn over sensitive data or move money into criminal accounts”.

“Countering synthetic media in the financial system will require new technologies, institutional practices and education in the financial sector,” Carnegie fellow Jon Bateman said.

Last year’s hotly-contested US elections dragged deepfakes further into the spotlight. Facebook banned deepfake videos ahead of the polls, well over a year after a doctored video of a ‘drunk’ House Speaker Nancy Pelosi surfaced, which was tweeted out by then-president Donald Trump. The social media network took down that video but, curiously, didn’t do the same for a deepfake involving its own CEO, Mark Zuckerberg.

Even world leaders have fallen victim: Queen Elizabeth in 1995, former British prime minister Tony Blair in 1998 and former Cuban leader Fidel Castro in 2003. Those were technically prank calls, but the principle holds: deepfakes are the next step in disinformation and economic damage.

“Financially-motivated extortion and social engineering, and influence operations aimed at destabilising democracies, are just the start,” Trend Micro security research vice-president Rik Ferguson said in a report aptly titled Weaponised deepfakes are getting closer to reality.

The question is, as Ferguson pointed out in closing, “what can we do about it?”

— alvin@khaleejtimes.com

