Hackers Use AI in 4 Dangerous New Ways, Google Report Warns

Highlights

  • Hackers use AI inside live malware, letting it rewrite and hide its own code.
  • Attackers use trick prompts and fake personas to bypass AI safety guardrails.
  • Dark-web AI kits make phishing, malware, and deepfakes easy – even for low-skilled attackers.
  • State-backed groups from North Korea, Iran, and China now use AI across the full attack lifecycle.

Google report: AI is reshaping cyberattacks and making them harder to stop

Google’s Threat Intelligence Group (GTIG) says attackers are using AI in new and worrying ways. The report shows they are not just using AI to write phishing emails: they are running AI inside malware, tricking AI systems into revealing guarded information, and buying ready-made AI tools on underground markets. State-backed groups are adopting the same methods. Google is tracking and blocking many of these efforts and sharing guidance for defenders.

Hackers Use AI to Outsmart Cyber Defenses

The GTIG report highlights four major ways in which hackers use AI to conduct and evolve cyberattacks:

AI inside malware

Some malware now calls large language models (LLMs) while it runs. Families such as PROMPTFLUX and PROMPTSTEAL ask an AI model to generate or rewrite code on the fly, so each sample can look different from the last. Traditional antivirus tools that rely on fixed signatures struggle to detect code that is constantly changing.
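
To see why, here is a toy illustration (ours, not from the report): two snippets that behave identically but differ in trivial text produce completely different cryptographic hashes, so a signature derived from one variant misses the next.

```python
# Toy illustration (not from Google's report): why hash-based signatures
# fail against self-rewriting code. Both variants do the same thing, but
# any textual change yields a completely different SHA-256 digest.
import hashlib

variant_a = b"print('payload')  # version generated at run 1"
variant_b = b"print('payload')  # version regenerated at run 2"

print(hashlib.sha256(variant_a).hexdigest())
print(hashlib.sha256(variant_b).hexdigest())  # shares nothing with the first
```

This is why the report pushes defenders toward behavior-based detection, covered below.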

Trick prompts and social engineering

Attackers pose as students, researchers, or capture-the-flag (CTF) participants to coax restricted information out of AI tools. They rephrase requests and wrap them in small cover stories so the model treats the request as harmless. Once they get that output, they use it to build phishing lures, exploit code, or other tools.

AI tools sold on the Dark Web

Underground marketplaces now offer AI phishing kits, malware generators, and deepfake tools, among other things. These tools lower the skill needed to attack: someone with little coding knowledge can run complex attacks after buying a kit. Vendors sell on English- and Russian-language forums, and demand is growing.

State-backed groups use AI too

GTIG links misuse of AI to groups tied to North Korea, Iran, and China. These actors use AI across the attack lifecycle: scouting targets, writing lures, controlling infected systems, and stealing data.

The report also covers a Spanish-language crypto scam that used deepfake images: a North Korea-nexus group sent Spanish-language emails with deepfake lures to target cryptocurrency holders.

Why this matters

For security teams

  • Old defenses based on static signatures are not enough. AI-driven threats can change shape.
  • Teams need behavior-based detection and tools that spot unusual activity, not just known code (see the sketch after this list).
  • Protect API keys and control who can use AI tools inside your network.
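
As a rough sketch of what behavior-based detection can mean in practice, the snippet below (our illustration, not Google’s tooling; the hostnames and allowlist are assumptions) flags processes that hold outbound connections to LLM API endpoints without being approved to do so.

```python
# Illustrative sketch only (not Google's tooling): flag processes that hold
# outbound TCP connections to LLM API endpoints without being allowlisted.
# The hostnames and allowlist below are assumptions for the example.
# Requires: pip install psutil; may need elevated privileges on some OSes.
import socket
import psutil

LLM_API_HOSTS = {"api.openai.com", "generativelanguage.googleapis.com"}
APPROVED_PROCESSES = {"approved_agent"}  # hypothetical internal tool

# Resolve the watched hostnames to IP addresses once.
watched_ips = set()
for host in LLM_API_HOSTS:
    try:
        watched_ips.update(info[4][0] for info in socket.getaddrinfo(host, 443))
    except socket.gaierror:
        pass  # host not resolvable from this network

for conn in psutil.net_connections(kind="tcp"):
    if conn.raddr and conn.raddr.ip in watched_ips and conn.pid:
        try:
            name = psutil.Process(conn.pid).name()
        except psutil.NoSuchProcess:
            continue  # process exited between enumeration and lookup
        if name not in APPROVED_PROCESSES:
            print(f"ALERT: {name} (pid {conn.pid}) is talking to "
                  f"an LLM API endpoint at {conn.raddr.ip}")
```

A real deployment would feed such events into an EDR or SIEM rather than print them, but the principle is the same: watch what code does, not what it looks like.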

For regular users

  • Phishing messages will look more real and may include deepfake images or voice clips.
  • Be careful with links, verify sender details, and use two-factor authentication.
  • Don’t trust media or requests just because they look professional.

What Google is doing

Immediate actions

Google is taking down accounts and assets tied to misuse. The company is feeding threat data into its classifiers to improve detection. It is also improving model safety to reduce risky outputs.

Broader steps

Google follows its Secure AI Framework (SAIF) to guide safe AI design and use. GTIG shares findings with other companies and partners to raise industry awareness.

Practical advice from the report (simple steps)

For companies

Restrict access to AI services and monitor usage. Rotate and protect API keys like any other sensitive credential. Add behavior-based monitoring to catch odd patterns early.
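
As a minimal sketch of the key-handling advice (our example; the variable names and the 90-day window are assumptions, not from the report), the snippet below loads an AI API key from the environment instead of source code and warns when it is past its rotation deadline.

```python
# Illustrative sketch only: treat AI API keys like any other secret.
# Variable names and the rotation window are assumptions for the example.
import os
from datetime import datetime, timedelta

ROTATION_WINDOW = timedelta(days=90)

api_key = os.environ.get("LLM_API_KEY")        # never hard-code keys
issued = os.environ.get("LLM_API_KEY_ISSUED")  # e.g. "2025-01-01"

if not api_key:
    raise SystemExit("LLM_API_KEY is not set; refusing to run without it")

if issued and datetime.now() - datetime.fromisoformat(issued) > ROTATION_WINDOW:
    print("WARNING: LLM_API_KEY is past its rotation window; rotate it now")
```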

For everyone

Check the sender of any email, and don’t click on unknown links. Turn on two-factor authentication for valuable accounts. And be wary of images and videos, since deepfakes can be persuasive.

Quick short list: Suspect signs to watch for

  • Emails with urgent demands and odd language or small mistakes.
  • Unexpected files or links from people you trust.
  • Media that looks real but comes from unknown sources.
  • Accounts or services requesting API access without an apparent reason.

Looking ahead

The report makes clear that AI is changing how attacks are built and run. Right now, many of these AI uses are experimental, but the trend is real. As attackers adopt AI, defenses must change too. That means better monitoring, more staff training, and tighter controls on AI access.

Final words

Google’s GTIG report is a wake-up call. AI can help people and teams do good work, but attackers are using the same tools for harm. Simple steps such as careful email habits, two-factor authentication, and smarter security tooling can reduce risk while defenders catch up. Stay aware, and treat AI-powered content with a little extra caution.
