
When Machines Turn Malicious: How Cybercriminals Are Weaponizing AI to Deceive the World

It began innocently enough one November afternoon in Bengaluru. A manager at a mid-sized tech startup received an email appearing to come from the CEO. It was polite, well-written, addressed to her by name, referencing a proposal she’d drafted only days earlier — details that suggested someone had intimate access to internal memos. The email asked her to transfer a large sum of money to a “vendor” — the bank account given was new, and overseas. Hesitant, she glanced around, then approved. The funds vanished. Only later did she learn that the “CEO” in the message was an AI-driven impersonation.

That case is not fictional. Across India in 2024, AI tools were involved in an estimated 80% of phishing emails. Criminal groups cloned or spoofed websites, built counterfeit apps designed to look like official government or bank portals, and mimicked trusted communications to extract money or personal data (The Times of India).

One extraordinary case involving a British firm illustrates how deepfake technology has become a weapon. The engineering firm Arup lost £20 million (about US$25-30 million) after an employee was fooled by what the firm described as a video call with a senior official. The voice and face seemed real, but the entire presentation was fraudulent, generated with AI (The Guardian).


In Europe, law enforcement agencies have begun to sound the alarm. Europol has reported that criminal networks are now using AI to automate large parts of their social engineering, execute multilingual phishing campaigns, generate fake identities using synthetic images and audio (“deepfakes”), and even set up entire operations to manipulate, deceive, and defraud on a scale previously unthinkable (AP News, Reuters).

Take “voice cloning” scams. In one earlier case (not strictly recent, but illustrative), criminals cloned a senior executive’s voice on the phone to convince an executive at another company to release funds; several hundred thousand dollars changed hands. The AI-powered mimicry was convincing enough to bypass ordinary checks (digitalresistance.org.uk).


In India, the scale is vast. A report titled “The State of AI-Powered Cybercrime: Threat & Mitigation Report 2025” found that AI-driven fraud cost individuals and companies ₹23,000 crore (about US$2.7-3 billion, depending on the exchange rate) in 2024 alone (The Times of India). Losses soared not simply because more people were targeted, but because AI allowed attacks to be personalized, believable, fast, and adaptive.


AI is also being misused for non-monetary harms. Deepfake pornography and image manipulation are growing threats. In Scotland, a helpline reported that nearly 30,000 women are affected annually by AI-generated “revenge porn” or manipulated intimate images (The Times). In Jharkhand and Chhattisgarh in India, a gang was arrested for using AI-generated or AI-enhanced images of women in fake social media profiles to run a large online “matrimony scam,” extorting payments from people who believed they were corresponding with prospective spouses (The Times of India).


One of the darker trends is the scale at which criminals are now using AI to streamline what once took teams of people and weeks of planning. For instance, Microsoft’s Cyber Signals report said the company blocked roughly US$4 billion worth of AI-assisted fraud attempts on its platforms and detected over 1.6 million bot signup attempts per hour between April 2024 and April 2025 (The Economic Times).

Another paper, “AI versus AI in Financial Crimes and Detection: GenAI Crime Waves to Co-Evolutionary AI” (2024), lays out how generative AI is being used in everything from spear-phishing (highly targeted phishing) to biometric spoofing. It predicts that fraud losses driven by generative AI will quadruple by 2027 if current trends continue (arXiv).


What this all shows:

  1. Authenticity is under threat. Deepfakes (fake video and audio), voice clones, and messages laced with personal detail all make it much harder to tell real from fake.
  2. Scale + automation = more damage. AI tools let attackers generate many variations of phishing emails, fake websites, or scam calls quickly, reaching far more victims.
  3. Personalization increases trust. When scams reference personal details, mimic the speech or style of known people, or use images pulled from social media, victims are more likely to be deceived.
  4. Detection lags reality. Traditional defenses often depend on fixed signatures (e.g. known malware, known phishing templates), while AI-enabled criminals shift tactics and use polymorphism or novelty to evade them (see the sketch after this list).
  5. Societal harm is broad. Beyond monetary loss, there is emotional and psychological harm, erosion of trust, reputational damage, and legal and regulatory challenges.
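
To make point 4 concrete, here is a minimal sketch, in Python, of why an exact-signature filter misses an AI-reworded scam email while even a crude behavioural check can still flag it. Every message, rule, and domain in it is invented for illustration; it is not drawn from any real filter.

```python
import hashlib

# Exact "signatures": hashes of phishing texts seen before (hypothetical samples).
KNOWN_PHISHING_HASHES = {
    hashlib.sha256(b"please transfer funds to the new vendor account today").hexdigest(),
}

def matches_known_signature(body: str) -> bool:
    """Signature-based detection: only catches messages identical to past samples."""
    return hashlib.sha256(body.lower().encode()).hexdigest() in KNOWN_PHISHING_HASHES

def looks_like_payment_fraud(body: str, sender_domain: str, company_domain: str) -> bool:
    """Behavioural heuristic: urgent payment language from a sender outside the company domain."""
    text = body.lower()
    urgent_payment = any(k in text for k in ("wire", "transfer", "urgent", "new account"))
    external_sender = sender_domain != company_domain
    return urgent_payment and external_sender

# An AI-reworded variant of the same scam slips past the exact signature...
variant = "Kindly wire the pending amount to our updated vendor account before 5 pm."
print(matches_known_signature(variant))   # False: no stored hash matches
# ...but still trips the behavioural check.
print(looks_like_payment_fraud(variant, "ceo-mail.example.net", "startup.example.com"))  # True
```

The asymmetry is the point: a generative model can produce endless rewordings for free, so defenses anchored to exact past samples fall behind, while checks based on behaviour and context age more slowly.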

What might be done (and what is being tried):

  • Better regulation of AI generation tools, especially those that produce images or voices or that can be misused for impersonation.
  • Awareness and training: individuals and organizations need to know how to recognize deepfakes, verify requests through an independent channel (a call back on a known number, for instance), and use multi-factor authentication. A sketch of such a verification workflow follows this list.
  • AI-powered detection tools that can spot anomalies in audio, video, and other media and trace manipulation.
  • Cooperation across borders, because many of these threats are transnational. Europol’s report, for example, highlights how AI-enabled criminal gangs operate across jurisdictions (Reuters).
  • Legal recourse and enforcement, to penalize AI-powered deepfakes, fraud, and identity theft.
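
To illustrate the verification point above, here is a minimal sketch of the kind of payment-release check an organization might codify. The class, the threshold, and the policy rules are all hypothetical; the sketch only shows the idea of out-of-band confirmation plus a second approver, not any particular company’s actual controls.

```python
from dataclasses import dataclass
from typing import Optional

CALLBACK_THRESHOLD_INR = 100_000  # hypothetical policy threshold for large transfers

@dataclass
class WireRequest:
    amount_inr: float
    beneficiary_account: str
    requested_by: str               # identity claimed in the email or call
    callback_confirmed: bool        # requester reached on a known, independently held number
    second_approver: Optional[str]  # a different employee who reviewed the request

def may_release(req: WireRequest) -> bool:
    """Release funds only if the independent verification steps have been completed."""
    if req.amount_inr >= CALLBACK_THRESHOLD_INR and not req.callback_confirmed:
        return False  # large transfers require a call back on a known number
    if req.second_approver is None or req.second_approver == req.requested_by:
        return False  # four-eyes principle: a different person must approve
    return True

# An unverified request like the one in the opening story cannot, on its own, trigger a payment:
req = WireRequest(
    amount_inr=2_500_000,           # invented amount for illustration
    beneficiary_account="new-overseas-account",
    requested_by="ceo@startup.example.com",
    callback_confirmed=False,
    second_approver=None,
)
print(may_release(req))  # False
```

The specific rules matter less than the fact that they are scripted in advance: a mandatory checklist takes the decision out of the moment of social pressure that an AI-crafted email or deepfake call is designed to create.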

As for our manager in Bengaluru: after losing the money and enduring the embarrassment, she insisted on process changes at her company. Every wire request now requires verbal confirmation, independent audits of accounts were introduced, and staff were trained to spot AI impersonations. She still wonders how many others didn’t see the signs, or saw them too late.

The story isn’t over. AI is a tool: it can protect, or it can be weaponized. And as criminals grow more sophisticated, so too must our vigilance, defenses, and ethical guardrails.
