iProov: 70% of organizations will be greatly impacted by generation of AI deepfakes


In the hugely popular and award-winning HBO series “Game of Thrones,” a common warning was that “the White Walkers are coming” – referring to a race of ice creatures that posed a grave threat to humanity.

We must think of deepfakes in much the same way, argues Ajay Amlani, president and head of the Americas at biometric authentication company iProov.

“There’s been widespread concern about deepfakes over the last few years,” he told VentureBeat. “What we’re seeing now is that winter is here.”

About half of organizations (47%) recently surveyed by iProov say they have encountered a deepfake. The company's new survey, published today, also revealed that 70% of organizations believe generative AI-created deepfakes will have a major impact on their business. At the same time, only 62% say their company takes the threat seriously.

“This is going to be a real concern,” Amlani said. “You can literally create a completely fictional person, make them look the way you want, sound the way you want, and have them react in real time.”

Deepfakes are on par with social engineering, ransomware and password breaches

Deepfakes – false, fabricated avatars, images, voices and other media delivered via photos, videos, phone calls and Zoom meetings, usually with malicious intent – have quickly become incredibly sophisticated and often undetectable.

This has posed a major threat to organizations and governments. For example, a financial officer at a multinational paid out $25 million after being scammed by a deepfake video call with their company's “chief financial officer.” In another high-profile example, cybersecurity firm KnowBe4 discovered that a new employee was in fact a North Korean hacker who made it through the recruitment process using deepfake technology.

“We can now create fictionalized worlds that go completely unnoticed,” said Amlani, who added that the findings of iProov's research were “quite astonishing.”

Interestingly, there are regional differences when it comes to deepfakes. For example, organizations in Asia-Pacific (51%), Europe (53%) and Latin America (53%) are significantly more likely to have encountered a deepfake than organizations in North America (34%).

Amlani pointed out that many malicious actors are based internationally and attack local areas first. “That is growing globally, especially because the Internet is not geographically bound,” he said.

The study also found that deepfakes now rank among organizations' top security concerns: password breaches scored highest (64%), closely followed by ransomware (63%), with phishing/social engineering attacks and deepfakes tied at 61%.

“It’s very hard to trust anything digital,” Amlani said. “We have to question everything we see online. The call to action here is that people really have to start building defenses to prove that the person is the right person.”

Threat actors are getting better at creating deepfakes thanks to increased processing speeds and bandwidth, a greater and faster ability to share information and code via social media and other channels — and, of course, generative AI, Amlani noted.

While there are some simplistic measures to address threats — such as embedded software on video platforms that attempts to flag AI-altered content — “that’s just one step into a very deep pool,” Amlani said. On the other hand, there are “crazy systems” like captchas that are becoming increasingly challenging.

“The concept is an arbitrary challenge to prove that you are a living human being,” he said. But it is becoming increasingly difficult for people to verify themselves, especially older people and those with cognitive, visual or other disabilities (or people who, for example, cannot identify a seaplane when challenged because they have never seen one).

“Biometrics is a simple way to solve these kinds of problems,” said Amlani.

iProov found that three-quarters of organizations are using facial biometrics as their primary defense against deepfakes, followed by multi-factor authentication and device-based biometric tools (67%). Enterprises are also educating employees on how to recognize deepfakes and the potential risks associated with them (63%), regularly auditing security measures (57%), and regularly updating systems (54%) to address deepfake threats.

iProov also assessed the effectiveness of different biometric methods in the fight against deepfakes. Their ranking:

  • Fingerprint: 81%
  • Iris: 68%
  • Facial: 67%
  • Advanced behavioral: 65%
  • Palm: 63%
  • Basic behavioral: 50%
  • Voice: 48%

But not all authentication tools are created equal, Amlani noted. Some are cumbersome and not that comprehensive — requiring users to move their head left and right, for example, or raise and lower their eyebrows. But threat actors using deepfakes can easily bypass these, he pointed out.

iProov’s AI-powered tool, on the other hand, uses light from the device’s screen to reflect 10 random colors onto the human face. This scientific approach analyzes the skin, lips, eyes, nose, pores, sweat glands, hair follicles and other details of a real human face. If the result doesn’t come back as expected, Amlani explained, it could be a threat actor holding up a physical photo or a picture on a mobile phone, or they could be wearing a mask, which can’t reflect light like human skin does.
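The flash-sequence idea described above can be sketched as a simple challenge-response check. This is a minimal, hypothetical illustration, not iProov's actual method: the names `COLORS`, `make_challenge` and `verify_reflections` are invented, and a real system would compare measured skin-reflectance signals from camera frames rather than color labels.

```python
import secrets

# Hypothetical palette of screen colors the device could flash.
COLORS = ["red", "green", "blue", "yellow", "cyan",
          "magenta", "white", "orange", "purple", "teal"]

def make_challenge(n: int = 10) -> list[str]:
    """Pick n random colors to flash on the device screen.

    Using a cryptographic RNG means an attacker cannot predict or
    pre-record the sequence."""
    return [secrets.choice(COLORS) for _ in range(n)]

def verify_reflections(challenge: list[str], observed: list[str]) -> bool:
    """Accept only if every observed reflection matches the flashed color,
    in order. A photo, phone-screen replay, or mask cannot reproduce a
    fresh random sequence of reflections off real skin."""
    return len(observed) == len(challenge) and all(
        o == c for o, c in zip(observed, challenge)
    )

challenge = make_challenge()
genuine = verify_reflections(challenge, challenge)       # True: live capture matches
stale = verify_reflections(challenge, challenge[:-1])    # False: replayed capture is incomplete
```

The security of this pattern comes from the freshness of the challenge: because the color sequence is chosen at verification time, any pre-recorded media necessarily fails the match.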

The company is deploying its tool in both the commercial and government sectors, he noted, calling it easy and fast, yet “very secure.” It has what he called an “extremely high success rate” (over 98%).

All things considered, “there is a global understanding that this is a huge problem,” Amlani said. “There needs to be a global effort to combat deepfakes, because the bad actors are global. It’s time to arm ourselves and fight against this threat.”