Fake OpenAI open-source model tops Hugging Face! 240,000 downloads hide malware
動區 BlockTempo · 2026-05-13 03:43:24


Original title: 假 OpenAI 開源模型登 Hugging Face 冠軍!24 萬次下載暗藏惡意軟體
Cybersecurity firm HiddenLayer revealed that a malicious model masquerading as OpenAI's Privacy Filter climbed to the top of the Hugging Face trending list in just 18 hours, attracting more than 240,000 downloads. The model conceals a six-stage, Rust-based information stealer built to harvest browser passwords, cryptocurrency wallet seed phrases, and SSH keys. (Previous coverage: WSJ: Google in secret talks with SpaceX to advance "orbital AI data centers," Musk's million-satellite fleet eyes epic IPO) (Background: AI security startup Depthfirst announces it has outperformed Anthropic's Mythos model! Uncovers 18-year-old NGINX vulnerability with 1/10th the cost)

OpenAI launched the open-source Privacy Filter model in late April: a lightweight model that automatically detects and masks personally identifiable information (PII) in text. Released on Hugging Face under the Apache 2.0 license, it quickly drew developers' attention. The hype, however, also attracted uninvited guests.

HiddenLayer found that a fake account named "Open-OSS" had published an identical repository on Hugging Face, likewise named privacy-filter, with a model card copied verbatim from OpenAI's official release. The only difference was hidden in the README, which instructed users to download and execute start.bat (Windows) or loader.py (Linux/macOS). The fake repository reached the number-one spot on the Hugging Face trending list in just 18 hours, accumulating roughly 244,000 downloads and 667 likes. HiddenLayer traced 657 of those likes to accounts following automated bot naming patterns; in other words, over 98% of the social signals were faked. The download count was likely inflated the same way, creating an illusion of popularity that lured real developers into the trap. The malware itself is also carefully engineered.
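HiddenLayer has not published the exact heuristics it used to classify liker accounts as bots, but the idea of measuring what fraction of a repository's social signals come from machine-generated account names can be sketched in a few lines. The regex below is a purely hypothetical stand-in for such a naming pattern:

```python
import re

# Hypothetical pattern for auto-generated account names (e.g. "user_8f3a2c91").
# HiddenLayer's real heuristics are not public; this regex is illustrative only.
BOT_NAME_RE = re.compile(r"^[a-z]+_?[0-9a-f]{6,}$")

def fake_signal_ratio(liker_names: list[str]) -> float:
    """Return the fraction of likes that come from bot-patterned account names."""
    if not liker_names:
        return 0.0
    bots = sum(1 for name in liker_names if BOT_NAME_RE.match(name))
    return bots / len(liker_names)
```

Applied to the article's numbers (657 bot-patterned accounts out of 667 likes), a ratio like this would come out at about 0.985, the ">98%" figure HiddenLayer reported.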
When loader.py executes, it first displays fake model-training output (progress bars, synthetic datasets, and invented class names) so that a legitimate AI loader appears to be running. In the background, it quietly disables security checks, pulls an encoded command from a public JSON paste site, and passes it to a hidden PowerShell process. That command downloads a second script from a domain masquerading as a blockchain-analytics API (api.eth-fastscan.org), which in turn fetches the actual payload: a custom information stealer written in Rust. The stealer adds itself to the Windows Defender exclusion list and launches via a scheduled task with SYSTEM privileges, then deletes itself immediately after execution, leaving almost no trace.

The information stealer is designed to "leave nothing behind." It captures all data stored in Chrome and Firefox (passwords, login session cookies, browsing history, and encryption keys); targets Discord accounts, cryptocurrency wallet seed phrases, SSH keys, and FTP credentials; and takes screenshots of every monitor. Finally, it bundles everything into a compressed JSON package and uploads it to an attacker-controlled server. More cunningly still, the malware checks whether it is running inside a virtual machine or a security sandbox, and exits silently if it is. It is built for one-shot attacks on real targets: steal everything, then vanish without a trace.

HiddenLayer stressed that this is not an isolated incident. Tied to the same command server, it found six more repositories under a Hugging Face account named "anthfu," uploaded in late April and using the exact same malicious loader. The impersonated models include Qwen3, DeepSeek, and Bonsai, likewise aimed at AI developers. The attackers never hacked OpenAI or Hugging Face itself; they publish realistic knock-offs, boost their rankings with bots, and wait for developers to download and run them.
This loader script was previously seen in the 2024 LottiePlayer JavaScript supply-chain attack, in which one user lost 10 BTC (worth over $700,000 at the time). Hugging Face has since removed the fake repository, but as of press time the platform had not announced any new review mechanism for trending repositories. Seven malicious repositories have been identified in total; how many others remain undetected, or were deleted by the attackers themselves, is unknown.

Security experts advise that if you have cloned Open-OSS/privacy-filter on a Windows machine and executed any of its files, the device should be treated as fully compromised: do not log into any service from that computer until it has been wiped. Afterwards, change every credential stored in your browser, generate new wallets on a clean device, and move your cryptocurrency assets immediately. Force-invalidate Discord sessions and reset the password, and treat SSH keys and FTP credentials as compromised.
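A practical takeaway from this incident is that a repository of model weights has no legitimate reason to ask you to execute a batch file or loader script. A quick pre-flight scan of a freshly cloned repo can surface such files before anything runs. The names start.bat and loader.py come from the article; the extension list is an illustrative guess at similarly risky file types, not an exhaustive rule:

```python
from pathlib import Path

# Exact names reported in this incident, plus extensions a weights-only
# repo should rarely ship (illustrative, not exhaustive).
SUSPICIOUS_NAMES = {"start.bat", "loader.py"}
SUSPICIOUS_SUFFIXES = {".bat", ".cmd", ".ps1", ".sh", ".exe"}

def flag_suspicious_files(repo_dir: str) -> list[str]:
    """Return relative paths of files worth inspecting before trusting a repo."""
    root = Path(repo_dir)
    hits = []
    for path in root.rglob("*"):
        if path.is_file() and (
            path.name.lower() in SUSPICIOUS_NAMES
            or path.suffix.lower() in SUSPICIOUS_SUFFIXES
        ):
            hits.append(str(path.relative_to(root)))
    return sorted(hits)
```

A non-empty result is not proof of malice (many legitimate repos ship shell scripts), but in combination with a brand-new account and bot-inflated popularity it is a strong signal to stop and inspect.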