
"For decades, trust in business hinged on simple human instincts. It used to be that when we saw a familiar face or heard a trusted voice, we instinctively believed we were dealing with the real person. That assumption is now dangerous. In the past 18 months, deepfakes have moved from novelty to weapon. What started as clumsy internet pranks has become a mature cybercriminal toolset. Finance teams have been duped into wiring millions after video calls with "executives" who never logged on."
"What's changed is not intent; fraudsters have always been inventive. What's changed is accessibility. Generative AI (GenAI) has democratised deception. What once required specialist labs and heavy computing power can now be done with an app and a laptop. A single audio clip scraped from a webinar, or a handful of selfies on social media, is enough to create a credible voice or face."
Trust in business historically relied on human instincts such as recognising familiar faces and voices. Deepfakes have evolved from novelties into weapons, enabling fraudsters to impersonate executives and public officials and to trick finance teams into wiring millions. The impact goes beyond financial loss to an erosion of confidence in digital interactions. Generative AI has democratised deception by lowering the technical barrier: a short audio clip or a handful of selfies is enough to generate a convincing forgery. In a Gartner survey, 43% of security leaders reported encountering deepfake audio calls and 37% deepfake video calls. Vendors are responding by embedding detection into their products: neural-network voice scoring, liveness checks, metadata inspection and device telemetry.
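The detection signals listed above are typically combined rather than used in isolation. A minimal sketch of that layering, assuming hypothetical signal inputs and purely illustrative weights and thresholds (the article names the signal types but not any vendor's scoring logic):

```python
from dataclasses import dataclass


@dataclass
class CallSignals:
    voice_authenticity: float  # 0.0-1.0 from a (hypothetical) neural voice-scoring model
    liveness_passed: bool      # did the caller pass a challenge-response liveness check?
    metadata_anomalies: int    # count of suspicious stream/file metadata flags
    known_device: bool         # device telemetry matches a previously enrolled device


def deepfake_risk(s: CallSignals) -> float:
    """Combine independent signals into a single 0-1 risk score.

    Weights are illustrative only; a real system would calibrate them
    against labelled fraud data.
    """
    risk = 0.0
    risk += (1.0 - s.voice_authenticity) * 0.4   # weakest voice score adds up to 0.4
    if not s.liveness_passed:
        risk += 0.3                              # failed liveness is a strong signal
    risk += min(s.metadata_anomalies, 3) * 0.1   # cap metadata contribution at 0.3
    if not s.known_device:
        risk += 0.1                              # unfamiliar device adds mild risk
    return min(risk, 1.0)
```

A caller passing every check scores 0.0; a call failing all of them saturates at 1.0. In practice such a score would gate a policy step, for example requiring an out-of-band callback before approving any wire transfer above a risk threshold.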
Read at ComputerWeekly.com