HackMyIP
2026-05-11 The Hacker News

Fake OpenAI Privacy Filter Hits Hugging Face, Steals Data from 244K Users

Supply Chain · Malware · AI Security

A sophisticated supply chain attack has been uncovered on Hugging Face after a malicious repository impersonating OpenAI's legitimate Privacy Filter model climbed to the platform's trending list with over 244,000 downloads. The repository, named "Open-OSS/privacy-filter," typosquatted the authentic "openai/privacy-filter" release—unveiled in April 2026 as a tool to detect and redact personally identifiable information (PII) in unstructured text—and copied the model card nearly verbatim to deceive developers into downloading the malicious payload.
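Typosquatted repository IDs like "Open-OSS/privacy-filter" can often be caught mechanically before anything is downloaded. A minimal sketch using Python's standard `difflib`; the 0.7 threshold is an illustrative assumption, not an established cutoff:

```python
from difflib import SequenceMatcher

def repo_similarity(candidate: str, trusted: str) -> float:
    """Return a 0..1 similarity ratio between two repository IDs."""
    return SequenceMatcher(None, candidate.lower(), trusted.lower()).ratio()

def looks_like_typosquat(candidate: str, trusted: str, threshold: float = 0.7) -> bool:
    """Flag an ID that is suspiciously close to, but not equal to, a trusted one."""
    if candidate.lower() == trusted.lower():
        return False  # identical IDs are the trusted repo itself, not a squat
    return repo_similarity(candidate, trusted) >= threshold

# The malicious repo from this incident vs. the legitimate OpenAI release
print(looks_like_typosquat("Open-OSS/privacy-filter", "openai/privacy-filter"))  # True
```

In practice such a check would run against an allow-list of trusted publishers, flagging any near-miss for manual review rather than blocking it outright.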

The attack chain begins with a Python script ("loader.py") that users are instructed to run after cloning the repository. According to the HiddenLayer Research Team, this script disables SSL certificate verification, decodes a Base64-encoded URL pointing to JSON Keeper (a public JSON paste service), retrieves the hosted JSON, and extracts commands that are passed to PowerShell for execution. The use of JSON Keeper as a dead drop resolver lets the threat actors swap payloads dynamically without ever modifying the repository. PowerShell then downloads a batch script from "api.eth-fastscan[.]org" and executes it via cmd.exe.
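The loader behaviours HiddenLayer describes (disabled SSL verification, Base64-decoded URLs, PowerShell invocation) lend themselves to simple static screening before a cloned script is ever run. A hedged sketch of such a scanner; the regex patterns and indicator names are illustrative assumptions, not HiddenLayer's detection logic:

```python
import re

# Heuristic indicators modelled on the loader.py behaviour described above.
# These patterns are illustrative, not a production rule set.
SUSPICIOUS_PATTERNS = {
    "ssl_disabled": re.compile(r"verify\s*=\s*False|ssl\._create_unverified_context"),
    "base64_decode": re.compile(r"base64\.(b64decode|decodebytes)"),
    "powershell_exec": re.compile(r"powershell(\.exe)?", re.IGNORECASE),
}

def scan_loader(source: str) -> list[str]:
    """Return the names of suspicious indicators found in a script's source text."""
    return [name for name, pat in SUSPICIOUS_PATTERNS.items() if pat.search(source)]

# A contrived snippet mimicking the reported loader pattern (not the real malware)
sample = (
    "import base64, requests\n"
    "url = base64.b64decode(blob).decode()\n"
    "r = requests.get(url, verify=False)\n"
    "subprocess.run(['powershell', '-enc', r.json()['cmd']])\n"
)
print(scan_loader(sample))  # ['ssl_disabled', 'base64_decode', 'powershell_exec']
```

Any single hit here is weak evidence on its own; it is the combination of disabled TLS checks, decoded URLs, and shell execution in one small script that warrants refusing to run it.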

The batch script functions as a second-stage downloader that elevates privileges through a User Account Control (UAC) prompt, configures Microsoft Defender Antivirus exclusions, downloads additional binaries, and establishes a scheduled task to launch the final payload. Once executed, the Rust-based information stealer harvests screenshots, Discord session data, cryptocurrency wallet credentials and browser extensions, FileZilla configurations, wallet seed phrases, and data from Chromium and Gecko-based browsers. The malware includes sandbox and virtual machine detection mechanisms. Hugging Face has since disabled access to the malicious repository. Users who may have interacted with the repository should perform a thorough privacy checkup and monitor for signs of credential compromise.
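Second-stage behaviours like Defender exclusions and scheduled-task persistence leave recognizable command lines in a batch script, which makes offline triage straightforward. A hypothetical helper along those lines; `Add-MpPreference`, `schtasks`, and `Start-Process -Verb RunAs` are real Windows commands, but the rule set itself is an assumption for illustration:

```python
import re

# Triage rules modelled on the second-stage behaviours described above.
RULES = {
    "defender_exclusion": re.compile(r"Add-MpPreference\s+-Exclusion", re.IGNORECASE),
    "scheduled_task": re.compile(r"\bschtasks\b.*/create", re.IGNORECASE),
    "uac_elevation": re.compile(r"Start-Process\b.*-Verb\s+RunAs", re.IGNORECASE),
}

def triage_batch(script: str) -> dict[str, list[int]]:
    """Map each matched rule name to the 1-based line numbers where it fired."""
    hits: dict[str, list[int]] = {name: [] for name in RULES}
    for lineno, line in enumerate(script.splitlines(), start=1):
        for name, pat in RULES.items():
            if pat.search(line):
                hits[name].append(lineno)
    return {name: lines for name, lines in hits.items() if lines}

# A contrived batch fragment mimicking the reported behaviour (not the real script)
sample = (
    "@echo off\n"
    'powershell -Command "Add-MpPreference -ExclusionPath C:\\Users"\n'
    "schtasks /create /tn Updater /tr C:\\payload.exe /sc onlogon\n"
)
print(triage_batch(sample))  # {'defender_exclusion': [2], 'scheduled_task': [3]}
```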

This incident underscores the growing risks associated with open-source AI model repositories, where attackers exploit trust in popular platforms to distribute malware at scale. Security teams should implement verification workflows for third-party dependencies and consider using tools like SSL/TLS certificate validation and DNS leak testing to detect potential compromise in their development environments.
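One concrete verification workflow is to pin a SHA-256 digest for each third-party artifact and refuse to load anything that fails the check. A minimal sketch using only the Python standard library (the file here is a throwaway stand-in for a downloaded model artifact):

```python
import hashlib
import hmac
import tempfile
from pathlib import Path

def sha256sum(path: Path) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """Accept a downloaded file only if it matches a digest pinned in advance."""
    return hmac.compare_digest(sha256sum(path), expected_sha256.lower())

# Demo against a temporary file standing in for a downloaded dependency.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"model weights")
    tmp = Path(f.name)

digest = sha256sum(tmp)
print(verify_artifact(tmp, digest))    # True for the pinned digest
print(verify_artifact(tmp, "0" * 64))  # False for any other digest
```

Pinning digests in a lockfile and verifying on every fetch would have caught this incident's payload swapping: the dead drop resolver can change what the URL serves, but not what the pinned hash allows through.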

Source: The Hacker News
