    March 11, 2025

    Security in Communications: Dampening the Deepfake Disaster

The rise of AI is something many people approach with fear, for both rational and irrational reasons. While many are simply anxious about changes in the labor force and legal system, others see the portents of Terminator becoming a reality.

    Regardless of your perspective, there’s one particularly scary thing that’s already emerged from the primordial ooze that birthed AI: deepfakes. Advanced technology allows bad actors to create fake calls and messages to gather sensitive data and other resources from their victims.

    For organizations, these are particularly dangerous. No matter how large your user-base or complex your environment, a deepfaker only has to pull the wool over an employee’s eyes one time to get in. It’s more important than ever that organizations take steps to protect their communication systems from this ever-present threat. 

    Deepfakes on the Hunt for Civilians and Businesses 

In January 2025, news broke that a French woman had divorced her husband and paid $850,000 to scammers using AI-generated images of actor Brad Pitt. Given how widespread this story was across both mainstream and alternative media, it’s likely most people reading this article have both heard the story and seen the pictures.

    Why do I bring this up? 

    The woman in question received mass ridicule for falling for this scam. The pictures she was scammed with absolutely do not hold up to scrutiny, and the story of a hospitalized Brad Pitt hoping to start a love affair with this random woman was full of holes (to put it generously). 

    But she’s not alone. While her story is highly sensational, perfect for news broadcasts, clickable articles, and YouTube videos, most ordinary victims of deepfaking don’t go viral.  

    Civilian Deepfakes 

According to a survey commissioned by the call-blocking provider Hiya, 31% of its 12,000 respondents reported receiving deepfake calls in the past year, and 45% of that group were scammed in some capacity. Of those scammed, 34% lost money, while 32% had personal information stolen.

The average monetary loss for these victims was $539 each. For a person living paycheck to paycheck, that could be financial game over.

    Much harder to calculate is the emotional toll. A survey from McAfee gives us possibly the closest approximation we can get, reporting that 35% of deepfake scam victims suffer significant stress. 

Who can blame them? Deepfakers imitate everything from Amazon’s customer service department to the IRS in hopes of either tricking or terrifying victims. Perhaps the most evil and terrifying of all are deepfakes that imitate a victim’s friends or loved ones.

    It’s no wonder that people, overcome with fear, would fall for this sort of thing. For a business owner, this is important to know, because these flawed, emotional, occasionally irrational human beings are the same ones who work for your company.  

    Business Deepfakes 

In May 2024, the British engineering company Arup suffered a serious deepfake attack. An employee was invited to a mysterious video conference, wherein cybercriminals used convincing replicas of the CFO and other staff members to carry out their scheme.

The deepfakers compelled this employee to make 15 money transfers to five separate bank accounts in Hong Kong. All told, the fiasco cost Arup $25 million and led the company to realize just how much it needed to change its security strategy.

    Arup’s not alone. According to new data from Regula, 92% of businesses have experienced some degree of financial loss from deepfakes. For at least 28% of them, their losses add up to more than half a million dollars. 

    A combination of newly accessible tools and age-old psychological manipulation makes deepfakes a more effective form of cyberattack than ever before. Luckily, modern unified communications (UC) and customer experience (CX) platforms have taken steps to make deepfake protection easier and more effective. 

    Deepfake Protection in Business Communications 

    Security features in cloud-based UC and CX platforms have grown more powerful and sophisticated in recent years, keeping up in an arms race with cybercriminals. There are many features and strategies businesses can leverage to ward off deepfakes. 

    AI-Powered Authentication 

    Fight fire with fire. Fight AI with AI. If modern technology can be used to make more effective deepfakes, it can also be used to make an effective defense. 

AI can analyze voice and video material for signs of deepfakes. With a baseline of vocal and facial data, it can spot deviations from the norm and warn users of potential risks.

As deepfakes grow harder and harder to distinguish from the real thing, AI detection tools may become a necessity. With real-time analysis and fraud detection, they can quickly eclipse the detection skills of an unaided human.
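To make the idea concrete, here’s a minimal sketch in Python of the comparison step, assuming speaker embeddings have already been extracted by some speaker-verification model. The vector size, similarity threshold, and random stand-in embeddings are invented for the example, not any particular platform’s API.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_possible_deepfake(baseline: list[np.ndarray],
                           candidate: np.ndarray,
                           threshold: float = 0.75) -> bool:
    """Compare a caller's voice embedding against enrolled baseline samples.

    If the candidate is too far from every known-good sample, warn the
    user that the voice may be synthetic or spoofed.
    """
    best_match = max(cosine_similarity(b, candidate) for b in baseline)
    return best_match < threshold

# Hypothetical usage: real embeddings would come from a pretrained
# speaker-verification model; random vectors stand in here.
rng = np.random.default_rng(0)
enrolled = [rng.normal(size=192) for _ in range(3)]  # known-good voice samples
incoming = rng.normal(size=192)                      # embedding of the live call
if flag_possible_deepfake(enrolled, incoming):
    print("Warning: caller's voice deviates from the enrolled baseline.")
```

A production detector would layer many more signals on top of this (lip-sync consistency, audio artifacts, metadata), but the core pattern is the same: compare the live signal against a trusted baseline and escalate when it drifts.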

    Access Control Mechanisms 

Modern communication platforms allow robust access control, ensuring users in your organization have only as much access to sensitive information as their roles require. If users don’t meet the prerequisites to access certain accounts or channels, they’ll have to go through an approval process to be allowed in.

As annoying as red tape may be, features like this could prevent something like the Arup incident. With other users or multi-step processes standing between befuddled employees and precious assets, there are more opportunities for someone to say, “Did the CFO really just ask me to wire thousands of pounds to Hong Kong?”
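As a rough illustration, here’s a sketch of a threshold-based approval gate in Python. The `WireRequest` structure, the 10,000-pound limit, and the single-independent-approver rule are all invented for the example, not any real platform’s policy.

```python
from dataclasses import dataclass, field

APPROVAL_THRESHOLD_GBP = 10_000  # illustrative limit, not a real policy

@dataclass
class WireRequest:
    requester: str
    amount_gbp: float
    destination: str
    approvals: list[str] = field(default_factory=list)

def can_execute(req: WireRequest) -> bool:
    """Require a second, independent approver for large transfers.

    A deepfaked 'CFO' on a video call can pressure one employee, but it
    can't silently satisfy an out-of-band approval by someone else.
    """
    if req.amount_gbp < APPROVAL_THRESHOLD_GBP:
        return True
    independent = [a for a in req.approvals if a != req.requester]
    return len(independent) >= 1

req = WireRequest("employee_42", 200_000, "HK-account-1")
print(can_execute(req))            # False: blocked until someone else signs off
req.approvals.append("treasury_lead")
print(can_execute(req))            # True: an independent approver has reviewed it
```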

This, of course, requires credentials to remain uncompromised. If an attacker manages to compromise a leader's credentials, the system won't know the difference. Hardware tokens change that equation: hardware-based multi-factor authentication makes it far harder for attackers to enter the system in the first place.
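Hardware tokens typically use FIDO2/WebAuthn or generate one-time codes. As a rough stand-in for the one-time-code flavor, here’s what verifying a time-based one-time password (TOTP) looks like with the open-source pyotp library; the enrollment step and names are simplified for the example.

```python
import pyotp  # pip install pyotp

# Each user enrolls a shared secret once; a hardware OATH token (or an
# authenticator app, standing in here) generates codes from that secret.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

def verify_second_factor(submitted_code: str) -> bool:
    """Reject logins whose one-time code doesn't match the token.

    Even with a stolen password, an attacker who doesn't hold the
    physical token can't produce a valid current code.
    """
    return totp.verify(submitted_code)

print(verify_second_factor(totp.now()))  # True: code from the real token
print(verify_second_factor("000000"))    # almost certainly False
```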

    Human-in-the-Loop Security Measures 

AI and automation are necessary upgrades to an organization’s deepfake defenses. Pairing them with human-in-the-loop security measures makes for an even more devastating combo.

    Trusted human security experts will be able to easily spot when an AI mislabels a real video call or email as an AI-generated fake. Conversely, AI, unburdened by blind trust in authority or love for celebrities, will see past emotional manipulation. Like a true symbiotic relationship, humans and AI work together to cover each other’s weaknesses and execute brilliant security strategies. 
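One common way to wire this up (the confidence thresholds below are made up for the example) is to let the detector handle the calls it’s confident about and route everything ambiguous to a human review queue:

```python
from dataclasses import dataclass

@dataclass
class DetectionResult:
    call_id: str
    deepfake_score: float  # 0.0 = clearly real, 1.0 = clearly fake

def route(result: DetectionResult) -> str:
    """Let the model handle obvious cases; send the gray zone to people.

    Thresholds are illustrative; real deployments tune them against
    their own false-positive / false-negative tradeoffs.
    """
    if result.deepfake_score >= 0.9:
        return "block_and_alert"     # AI is confident: stop the call
    if result.deepfake_score <= 0.2:
        return "allow"               # AI is confident: let it through
    return "human_review_queue"      # uncertain: a person decides

print(route(DetectionResult("call-1", 0.95)))  # block_and_alert
print(route(DetectionResult("call-2", 0.55)))  # human_review_queue
```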

    Higher Deepfake Security with Continuant 

For almost 30 years, Continuant has helped organizations of every kind keep their communications secure, be it on legacy systems or modern cloud platforms.

We provide preventative measures to minimize risk, running services like patch and configuration management. Closing known vulnerabilities makes security attacks such as deepfakes much harder to pull off. One of the ways we do this is by monitoring for failed login attempts to stop perpetrators from forcing their way into your systems.
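At its simplest (the five-failures-in-five-minutes threshold here is invented for the example, not Continuant’s actual policy), failed-login monitoring is a sliding-window counter per account that triggers a lockout and an alert when the count spikes:

```python
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 300   # illustrative: 5-minute sliding window
MAX_FAILURES = 5       # illustrative lockout threshold

failures: dict[str, deque] = defaultdict(deque)

def record_failed_login(username: str, now: float | None = None) -> bool:
    """Track failed logins per account within a sliding time window.

    Returns True when the account should be locked and an alert raised.
    """
    now = time.time() if now is None else now
    q = failures[username]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:  # drop attempts outside the window
        q.popleft()
    return len(q) >= MAX_FAILURES

locked = False
for attempt in range(6):  # simulate a burst of bad passwords
    locked = record_failed_login("cfo@example.com", now=1000.0 + attempt)
print("lock account and alert security" if locked else "keep watching")
```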

    As security threats evolve, so do we. Our solutions and services are always developing new ways to keep our customers’ data secure. We can all hope that soon, these crazy deepfake stories become a thing of the past. 

    Want to learn more about Continuant’s security services? Call us today. 


Tag(s): Security, AI

    David Shelby

    David Shelby graduated from George Fox University in 2018 with a bachelor's degree in English and began writing for Continuant soon after. With the help of Continuant's world-class engineers and subject matter experts, he's dedicated himself to understanding all things business communications. When it comes to UC, AV,...
