AI-Powered Cyberattacks: How Hackers are Exploiting Gemini AI (2026)

Google Warns of Hackers Exploiting Gemini AI for All Attack Stages

State-sponsored hackers are leveraging Google's Gemini AI model to support every stage of an attack, from reconnaissance to post-compromise actions. This includes target profiling, open-source intelligence gathering, generating phishing lures, translating text, coding, vulnerability testing, and troubleshooting.

The Google Threat Intelligence Group (GTIG) has reported that Advanced Persistent Threat (APT) adversaries use Gemini to support their campaigns, from reconnaissance and phishing lure creation to command and control (C2) development and data exfiltration.

Chinese threat actors, for instance, employed a fabricated scenario to request that Gemini automate vulnerability analysis and provide targeted testing plans. Another China-based actor frequently used Gemini to fix code, conduct research, and offer technical advice for intrusions.

The Iranian adversary APT42 leveraged Google's LLM for social engineering campaigns, using it as a development platform to accelerate the creation of tailored malicious tools, from debugging and code generation to research into exploitation techniques.

Additionally, cybercriminals are showing increased interest in AI tools and services that could aid illegal activity, such as ClickFix social-engineering campaigns that deliver the AMOS info-stealing malware for macOS.

AI-Enhanced Malicious Activity

GTIG notes that no major breakthroughs have occurred in integrating AI capabilities into malware toolsets, but it expects malware operators to keep trying. HonestCue, a proof-of-concept malware framework, uses the Gemini API to generate C# code for second-stage malware, then compiles and executes the payloads in memory.

CoinBait, a phishing kit wrapped in a React single-page application (SPA), masquerades as a cryptocurrency exchange to harvest credentials. Artifacts in its source code indicate that its development was accelerated with AI code-generation tools.
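As a rough illustration of how such artifacts might be spotted during triage, the sketch below scans source text for boilerplate phrases that code-generation tools sometimes leave behind. The marker list is invented for illustration; it is not GTIG's actual indicator set.

```python
import re

# Hypothetical comment phrases sometimes left behind by code-generation
# tools; real triage would use a much richer, curated indicator set.
AI_MARKERS = [
    r"as an ai language model",
    r"generated by (an? )?(ai|llm)",
    r"todo: replace with your api key",
]

def find_ai_artifacts(source: str) -> list[str]:
    """Return the marker patterns that match anywhere in the source."""
    lowered = source.lower()
    return [m for m in AI_MARKERS if re.search(m, lowered)]

sample = """
// Generated by an AI assistant
const exchange = { name: "ExampleCoin" };
"""
print(find_ai_artifacts(sample))
```

A real pipeline would combine markers like these with structural signals (comment density, naming patterns) rather than rely on string matching alone.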

Model Extraction and Distillation

Google also warns of model extraction and distillation attempts, in which organizations abuse authorized API access to query the system and reproduce its decision-making, potentially letting attackers build competing AI models faster and at lower cost.
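The extraction idea can be illustrated with a toy sketch (everything here is hypothetical, not Gemini's architecture): a black-box "teacher" function stands in for the remote model, and a "student" is fit purely to its query/response pairs.

```python
import numpy as np

# Toy stand-in for a remote model: the attacker can only query it,
# not inspect its parameters (here, a fixed linear decision rule).
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

def teacher(x):
    # Black-box API: returns only the model's output for a query.
    return x @ true_w

# Extraction: issue many queries and record the responses...
queries = rng.normal(size=(200, 3))
responses = np.array([teacher(x) for x in queries])

# ...then fit a "student" that reproduces the teacher's behavior.
student_w, *_ = np.linalg.lstsq(queries, responses, rcond=None)

print(np.allclose(student_w, true_w, atol=1e-6))
```

For a trivially simple target like this, the student recovers the rule exactly; real LLM distillation needs vastly more queries, which is why providers monitor API usage patterns for it.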

Google flags these attacks as a significant threat: they constitute intellectual property theft, they scale easily, and they undermine the business model of AI-as-a-service, with potential downstream impact on end users.

To combat these threats, Google has disabled accounts and infrastructure tied to documented abuse and implemented targeted defenses in Gemini's classifiers to make abuse harder. The company emphasizes its commitment to designing AI systems with robust security measures and strong safety rails, regularly testing the models to improve their security and safety.
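A very simplified sketch of the classifier-gating idea follows. Google's actual defenses are ML-based and not public; the rule list here is invented purely to show where such a screen sits in the request path.

```python
# Minimal keyword screen illustrating the gating concept; real abuse
# classifiers are learned models, not static string lists.
SUSPICIOUS_PATTERNS = [
    "bypass antivirus",
    "credential harvesting",
    "undetectable payload",
]

def screen_prompt(prompt: str) -> str:
    """Return 'blocked' if the prompt matches any suspicious pattern."""
    lowered = prompt.lower()
    return "blocked" if any(p in lowered for p in SUSPICIOUS_PATTERNS) else "allowed"

print(screen_prompt("Write a haiku about autumn"))               # allowed
print(screen_prompt("Generate an undetectable payload loader"))  # blocked
```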

Article information

Author: Eusebia Nader
