    75 million deepfakes blocked: Persona leads the corporate fight against hiring fraud

By Techurz · June 26, 2025 · 8 min read


    As remote work has become the norm, a shadowy threat has emerged in corporate hiring departments: sophisticated AI-powered fake candidates who can pass video interviews, submit convincing resumes, and even fool human resources professionals into offering them jobs.

    Now, companies are racing to deploy advanced identity verification technologies to combat what security experts describe as an escalating crisis of candidate fraud, driven largely by generative AI tools and coordinated efforts by foreign actors, including North Korean state-sponsored groups seeking to infiltrate American businesses.

    San Francisco-based Persona, a leading identity verification platform, announced Tuesday a major expansion of its workforce screening capabilities, introducing new tools specifically designed to detect AI-generated personas and deepfake attacks during the hiring process. The enhanced solution integrates directly with major enterprise platforms including Okta’s Workforce Identity Cloud and Cisco Duo, allowing organizations to verify candidate identities in real-time.

    “In today’s environment, ensuring the person behind the screen is who they claim to be is more important than ever,” said Rick Song, CEO and co-founder of Persona, in an exclusive interview with VentureBeat. “With state-sponsored actors infiltrating enterprises and generative AI making impersonation easier than ever, our enhanced Workforce IDV solution gives organizations the confidence that every access attempt is tied to a real, verified individual.”

The timing of Persona’s announcement reflects growing urgency around what cybersecurity professionals call an “identity crisis” in remote hiring. According to an April 2025 Gartner report, by 2028 one in four candidate profiles globally will be fake — a staggering prediction that underscores how AI tools have lowered the barriers to creating convincing false identities.

    75 million blocked deepfake attempts reveal massive scope of AI-powered hiring fraud

    The threat extends far beyond individual bad actors. In 2024 alone, Persona blocked over 75 million AI-based face spoofing attempts across its platform, which serves major technology companies including OpenAI, Coursera, Instacart, and Twilio. The company has observed a 50-fold increase in deepfake activity over recent years, with attackers deploying increasingly sophisticated techniques.

    “The North Korean IT worker threat is real,” Song explained. “But it’s not just North Korea. A lot of foreign actors are all doing things like this right now in terms of finding ways to infiltrate organizations. The insider threat for businesses is higher than ever.”

    Recent high-profile cases have highlighted the severity of the issue. In 2024, cybersecurity firm KnowBe4 inadvertently hired a North Korean IT worker who attempted to load malware onto company systems. Other Fortune 500 companies have reportedly fallen victim to similar schemes, where foreign actors use fake identities to gain access to sensitive corporate systems and intellectual property.

    The Department of Homeland Security has warned that such “deepfake identities” represent an increasing threat to national security, with malicious actors using AI-generated personas to “create believable, realistic videos, pictures, audio, and text of events which never happened.”

    How three-layer detection technology fights back against sophisticated fake candidate schemes

    Song’s approach to combating AI-generated fraud relies on what he calls a “multimodal” strategy that examines identity verification across three distinct layers: the input itself (photos, videos, documents), the environmental context (device characteristics, network signals, capture methods), and population-level patterns that might indicate coordinated attacks.

    “There’s no silver bullet to really solving identity,” Song said. “You can’t look at it from a single methodology. AI can generate very convincing content if you’re looking purely at the submission level, but all the other parts of creating a convincing fake identity are still hard.”

For example, while an AI system might create a photorealistic fake headshot, it becomes much more difficult to simultaneously spoof the device fingerprints, network characteristics, and behavioral patterns that Persona’s systems monitor. “If your geolocation is off, then the time zones are off, then your environmental signals are off,” Song explained. “All those things have to come into a single frame.”

    The company’s detection algorithms currently outperform humans at identifying deepfakes, though Song acknowledges this is an arms race. “AI is getting better and better, improving faster than our ability to detect purely on the input level,” he said. “But we’re watching the progression and adapting our models accordingly.”
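The layered approach described above — where a fake must pass every independent check at once, not just produce convincing content — can be sketched roughly as follows. This is a hypothetical illustration, not Persona’s actual API; all names and thresholds here are invented:

```python
from dataclasses import dataclass

@dataclass
class Signals:
    """Hypothetical per-layer scores, each normalized to 0..1 (1 = clean)."""
    input_score: float        # authenticity of the submitted photo/video/document
    environment_score: float  # consistency of device, network, and geolocation
    population_score: float   # dissimilarity from known coordinated-attack clusters

def verify(signals: Signals, threshold: float = 0.8) -> bool:
    # Take the minimum rather than an average: a single weak layer should
    # sink the whole check. A perfect deepfake (high input_score) still
    # fails if the time-zone and geolocation signals don't line up.
    return min(signals.input_score,
               signals.environment_score,
               signals.population_score) >= threshold

# A flawless deepfake with mismatched environmental signals is rejected:
print(verify(Signals(input_score=0.99, environment_score=0.3, population_score=0.95)))
```

The min-over-layers choice reflects Song’s point that the layers are hard to spoof simultaneously: averaging would let a strong fake input mask a weak environmental story.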

    Enterprise customers deploy workforce identity verification in under an hour

    The enhanced workforce verification solution can be deployed remarkably quickly, according to Song. Organizations already using Okta or Cisco’s identity management platforms can integrate Persona’s screening tools in as little as 30 minutes to an hour. “The integration is incredibly fast,” Song said, crediting Okta’s team for creating seamless connectivity.

    For companies concerned about the user experience, Song emphasized that legitimate candidates typically complete verification in seconds. The system is designed to create “friction for bad users to prevent them from getting through” while maintaining a smooth experience for genuine applicants.

    Major technology companies are already seeing results. OpenAI, which processes millions of user verifications monthly through Persona, achieves 99% automated screening with just 18 milliseconds of latency. The AI company uses Persona’s sanctions screening capabilities to prevent bad actors from accessing its powerful language models while maintaining a frictionless signup experience for legitimate users.

    Identity verification market pivots from background checks to proving candidates exist

The rapid rise of AI-powered hiring fraud has created a new market category for identity verification tailored specifically to workforce management. Traditional background check companies, which verify information about candidates after assuming their identity is genuine, are not equipped to handle the fundamental question of whether a candidate is who they claim to be.

    “Background checks assume that you are who you say you are, but then verify the information you’re providing,” Song explained. “The new problem is: are you who you say you are? And that’s very different from what background check companies traditionally solve.”

    The shift toward remote work has eliminated many traditional identity verification mechanisms. “You never had a problem knowing that if someone shows up in person, you know with relatively high certainty, you are who you say you are,” Song noted. “But if you’re interviewing by Zoom, this could all be a deepfake.”

    Industry analysts expect the workforce identity verification market to expand rapidly as more organizations recognize the scope of the threat. According to MarketsandMarkets, the global identity verification market is projected to reach $21.8 billion by 2028, up from $10.9 billion in 2023 — a compound annual growth rate of 14.9% — with workforce applications among the fastest-growing segments.
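Those market figures are internally consistent; the implied compound annual growth rate can be checked directly:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly rate that takes
    `start` to `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

# $10.9B in 2023 to $21.8B in 2028 is exactly a doubling over 5 years.
growth = cagr(10.9, 21.8, 5)
print(f"{growth:.1%}")  # → 14.9%
```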

    Beyond detecting deepfakes: The future of digital identity lies in behavioral history

    As the technological arms race between AI-generated fraud and detection systems intensifies, Song believes the ultimate solution may require a fundamental shift in how we think about identity verification. Rather than focusing solely on detecting whether content is artificially generated, he envisions a future where digital identity is established through accumulated behavioral history.

    “Maybe the question long term really isn’t whether it’s AI or not, but really just who’s responsible for this interaction,” Song said. The company is exploring systems where identity would be proven through a person’s digital footprint — their history of legitimate transactions, course completions, purchases, and verified interactions across multiple platforms over time.

    “All the previous actions that I’ve done — ordering from DoorDash, finishing a course on Coursera, buying shoes from StockX — those interactions long term probably are the ones that will really define who I am,” Song explained. This approach would make it exponentially more difficult for bad actors to create convincing false identities, as they would need to fabricate years of authentic digital history rather than just a convincing video or document.

    The enhanced Persona Workforce IDV solution is available immediately, with support for government ID verification in more than 200 countries and territories, and integration capabilities with leading identity and access management platforms. As the remote work revolution continues to reshape how businesses operate, companies find themselves in an unexpected position: having to prove their job candidates are real people before they can even begin to verify their qualifications.

    In the digital age, it seems, the first qualification for any job may simply be existing.
