Friendly faces mean a lot in commerce, from high-stakes executive meetings to calls with trusted customers. Facial recognition underpins plenty of our human relationships, as well as our commercial interactions. Unfortunately, the significance of a familiar face also makes it a top target for scams and cyberattacks. Here, we’ll explore the role familiar faces play in impersonation scams, before discussing how businesses can protect themselves against AI-powered threats.
Cybercriminals increasingly exploit the human face, launching sophisticated scams and AI-driven deceptions. With reputations, careers, and millions of dollars on the line, businesses must account for the risks and solutions around facial recognition.
Facial Recognition and the Rise of AI Impersonation Attacks
Today, a business leader’s face represents more than just an image—it’s a symbol of trust and authority. Scammers leverage these symbols with fake websites, misleading advertisements, fraudulent videos, and even live deepfake video calls. Attackers steal clips and photos from company websites and private social media accounts, then doctor them using deepfakes and advanced photo editing. Deepfakes, or AI-enhanced facial fakes, convincingly replicate a person’s likeness, making it appear as though they’re doing or saying something they never did. Scammers weaponize these doctored images and videos, using their targets’ faces to launch cyberattacks that appear authentic and trustworthy.
Impersonation scams have evolved into sophisticated threats, using stolen facial IP to create convincing deceptions. Scammers don’t just rely on deepfakes; they plaster business leaders’ and colleagues’ faces across fake websites, misleading advertisements, fraudulent emails, and fake company pages. These scams combine various tactics to create seamless illusions, leading colleagues, customers, and prospects into dangerous traps. When these attacks succeed, team members transfer hard-earned funds, clients unknowingly download malware, and confidential data leaks online. Scammers associate trusted faces with digital chaos, striking at both the heart of companies and their reputations in a single blow.
Recent Attacks Using Facial Impersonations
One recent example involves Mugur Isarescu of Romania, the world’s longest-serving central bank governor. In a recent deepfake video, Isarescu appeared to promote fraudulent investments, causing confusion and concern. This video, along with a similar one targeting Romania’s prime minister, used AI to manipulate both the image and voice of these leaders. These scams leverage the trust and authority of familiar faces to direct viewers towards fraudulent platforms. The National Bank of Romania issued a public warning in the wake of the attack, emphasizing that neither Isarescu nor the bank endorses any of the investment recommendations promoted in the clips.
Another striking example occurred when scammers tricked a financial company by deepfaking its Chief Financial Officer (CFO). Despite initial suspicions, a cybercriminal gang manipulated their contact into making multiple bank transfers after other “colleagues” joined the call, all of whom were deepfakes. Gangs of scammers, all impersonating different team members, often target vulnerable or newer employees, posing problems for businesses even when they prepare for cyberattacks. These kinds of incidents highlight the growing sophistication of deepfake technology and its potential for high-stakes commercial fraud.
Facial Recognition and Impersonations in Advertisements
Beyond deepfake calls, business leaders also face impersonations in still images. Recognizable faces help cyber scammers build fraudulent marketing campaigns to sell their wares online. For example, Elon Musk recently seemed to promote a cryptocurrency scam on a popular social media channel. The AI-generated CEO directed viewers to a site promising doubled crypto returns. Drawn in by the familiar face, over 30,000 viewers engaged with the ad before the platform took it down. This case exemplifies just one of many similar scams. CEO images and voices appear across the internet, as criminals disseminate scams and lure targets to malicious pages.
As impersonation technology develops, the frequency and complexity of these attacks only multiply. With scams so prevalent, business leaders, brands, and consumers must increase their defenses.
Fighting Back: Protecting Your Business
How can businesses protect themselves from these sophisticated attacks? They must take a proactive approach to facial impersonation scams. Proactive risk protection means patrolling digital channels to detect and eliminate fakes before they spread. It’s no longer enough to monitor text and infrastructure alone. Businesses must now include images, videos, and faces as critical elements of their intellectual property (IP) protection.
A future-proof Digital Risk Protection (DRP) strategy starts with establishing a brand’s identity. This identity encompasses assets like domain names, brand colors, and logos. But to truly safeguard against facial impersonation, brands need to integrate faces into their protection portfolio. This means enrolling verified executive and colleague headshots so that facial recognition tools can match them against suspicious content found online. With these extra assets in hand, DRP platforms can effectively detect and counter social media scams and impersonation attacks. Enhancing your anti-scam arsenal with facial recognition, alongside technical takedowns and legal actions, helps ensure your business remains one step ahead of cybercriminals.
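To make the matching step above concrete, here is a minimal sketch of how enrolled headshots might be compared against faces found in scraped content. It assumes face embeddings have already been extracted by a separate face-recognition model; the `KNOWN_EXECUTIVES` vectors, the similarity threshold, and the function names are all illustrative assumptions, not any particular DRP platform’s API.

```python
import math

# Hypothetical embeddings: in practice, these vectors would come from a
# face-recognition model applied to verified executive headshots.
KNOWN_EXECUTIVES = {
    "ceo": [0.12, 0.87, 0.45, 0.33],
    "cfo": [0.91, 0.05, 0.22, 0.64],
}

MATCH_THRESHOLD = 0.95  # illustrative value; tuned against real data in practice


def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def flag_impersonations(detected_embedding):
    """Return the executives whose enrolled headshots match a face detected
    in scraped web content (ads, fake pages, video frames)."""
    return [
        name
        for name, reference in KNOWN_EXECUTIVES.items()
        if cosine_similarity(detected_embedding, reference) >= MATCH_THRESHOLD
    ]
```

A monitoring pipeline would feed each face embedding extracted from suspicious ads or pages through `flag_impersonations`; any hit triggers the usual takedown workflow.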
To learn more about how to implement your next facial recognition strategy, or for a free risk audit, reach out here.