North Korea Uses AI to Trick Western Firms and Funnel Wages

Microsoft says North Korean agents are using artificial intelligence to pose as remote IT applicants, win jobs at Western firms and funnel wages back to Pyongyang, a scheme that helps state-backed fraudsters raise revenue and maintain access to company systems. In a warning published Wednesday at 11:22 a.m. ET, Microsoft said North Korea-linked operatives are using voice-changing software, face-swapping tools and generative AI prompts to build convincing false identities.

Microsoft Details Jasper Sleet and Coral Sleet AI Tactics

Microsoft’s threat intelligence unit wrote in a blog post that groups tracked as Jasper Sleet and Coral Sleet use AI at multiple stages of the scam, from creating names to sustaining employment. Jasper Sleet “leverages AI across the attack lifecycle to get hired, stay hired, and misuse access at scale.” Microsoft also noted that last year it disrupted 3,000 Microsoft Outlook or Hotmail accounts tied to fake IT workers.

North Korea’s Remote IT Worker Scheme Uses Face Swap and Voice Tools

Microsoft described several concrete AI tools used by the attackers: voice-changing software to mask accents during remote interviews, the Face Swap app to insert North Korean workers’ faces into stolen ID photos, and AI-generated “polished” headshots for CVs. The attackers also used generative prompts to produce culturally appropriate name lists—an example prompt given was “create a list of 100 Greek names”—and to match email-address formats for fraudulent applications.

Upwork and Job Platforms Flagged as Hunting Grounds

Microsoft said the scammers scour job postings on platforms such as Upwork for software and IT roles, then tailor applications to the exact skill lists in those ads. Once hired, fake workers use AI to write emails, translate documents and generate code as part of efforts to avoid detection and retain access. Upwork has said it takes “aggressive action to… remove bad actors from our platform.”

Microsoft also urged companies to use video or in-person interviews for IT hires to head off the threat; interviewers can spot AI-generated images or deepfake video by looking for tells like pixelation around facial edges and inconsistent lighting on faces, eyes, ears and glasses.
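Those visual tells lend themselves to a crude automated first pass. The sketch below is a hypothetical heuristic (not anything from Microsoft’s post): it compares Laplacian sharpness between two regions of a grayscale image, since a blended face-swap region is often noticeably smoother or more pixelated than the surrounding frame. It is stdlib-only and works on a 2D list of 0–255 pixel values.

```python
# Hypothetical sharpness-mismatch check for swap-blend seams.
# A real screening pipeline would use a face detector and compare
# the face region against the background; here we simply split the
# image at a column to illustrate the idea.

def laplacian_energy(img, r0, c0, r1, c1):
    """Mean absolute 4-neighbour Laplacian over a sub-region.
    A crude sharpness score: blurred/smoothed areas score low."""
    total, count = 0, 0
    for r in range(max(r0, 1), min(r1, len(img) - 1)):
        for c in range(max(c0, 1), min(c1, len(img[0]) - 1)):
            lap = (4 * img[r][c] - img[r - 1][c] - img[r + 1][c]
                   - img[r][c - 1] - img[r][c + 1])
            total += abs(lap)
            count += 1
    return total / count if count else 0.0

def sharpness_mismatch(img, split_col):
    """Ratio of sharpness between the two halves of an image.
    A large ratio hints that one region was smoothed or re-rendered,
    as often happens around the edges of a swapped face."""
    h, w = len(img), len(img[0])
    left = laplacian_energy(img, 0, 0, h, split_col)
    right = laplacian_energy(img, 0, split_col, h, w)
    lo, hi = min(left, right), max(left, right)
    return hi / lo if lo else float("inf")
```

A ratio near 1 means both regions have similar texture; a large ratio flags the frame for human review. This is only a triage signal, and live interviews remain the stronger check.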

No follow-up date or next disclosure was provided in Microsoft’s blog post; the company presented these findings as part of its threat intelligence update and recommended that employers begin conducting video or in-person interviews for IT candidates immediately.