Cybersecurity and artificial intelligence share the foundational concept of an ‘adversary’.
In cybersecurity, the adversary is a threat actor (a lone hacker, an organized crime group, or even a nation-state) seeking to exploit vulnerabilities in a system. In AI, an adversary is a mechanism designed to manipulate models into making incorrect decisions.
The two adversaries converge in the ongoing AI revolution, where a silent arms race is underway - attackers leverage advanced AI to craft hyper-personalized scams, poison training data, and fool real-world systems, while defenders scramble to harden algorithms and infrastructure.
To stay protected in this new era, organizations must adopt a holistic and adversarial view of their systems: tightening every link in the chain and turning the attackers’ tools to defensive use. In this article, I discuss novel cybersecurity threats and how to defend against them.
1. Hyper-personalized phishing
In February 2024, a Hong Kong finance worker transferred $25 million to fraudsters, persuaded of the legitimacy of the transaction by a video call in which every other attendee was a deepfake representation of company staff. A similar attack, impersonating WPP’s CEO in an audio message, was foiled in May 2024. Darktrace reports a 135% surge in novel social engineering attacks. Attackers are using AI in increasingly creative ways for phishing, and AI impersonation technology is the weakest today that it will ever be - it is already scarily good. How should organizations guard against this? At a minimum, verify high-stakes requests such as large transfers through a second, independent channel, and train staff to expect convincing AI-generated voices and faces on calls.
2. Attacks beyond the digital world
It is a mistake to assume that AI attacks exist only in the digital world. Researchers have shown that medical images can be altered in imperceptibly small ways to flip the classification of a tumor from malignant to benign, and that innocuous stickers can fool self-driving cars into reading a stop sign as a 45 mph speed-limit sign. These attacks work because AI models operate in a high-dimensional space, where changes that look insignificant to us can push the model into ‘uncharted territory’ in which it makes incorrect decisions.
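To make the mechanism concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one common way such imperceptible perturbations are crafted. The model, inputs, and epsilon value are placeholder assumptions, and PyTorch is used purely for illustration.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.01):
    """Nudge each input feature a tiny step (epsilon) in the direction
    that most increases the loss; the result often looks identical to a
    human but can flip the model's prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.detach()
```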
If you’re building an AI product, it’s important to test it adversarially - your cyber adversaries certainly will. Large companies often deploy strong ‘adversarial models’ that find inputs that break the core model, then feed those inputs back into training to make the target model more resilient, as sketched below.
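A minimal sketch of that feedback loop, reusing the fgsm_perturb helper above; the model, data, and optimizer are assumed placeholders rather than any particular company’s pipeline.

```python
import torch.nn.functional as F

def adversarial_training_step(model, x, y, optimizer, epsilon=0.01):
    """One training step that mixes clean and adversarially perturbed
    examples, so the model learns to classify both correctly."""
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y) +
                  F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```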
Attackers can also target the physical components of your product. If you’re operating in a high-risk product space (such as medicine or autonomous robotics), you cannot assume that the hardware has not been compromised. Trusted Platform Modules (TPMs) provide hardware-level security guarantees and are increasingly deployed in self-driving cars. Think adversarially about what an attacker could do with full access to the product hardware - could they compromise it in a way that is hard to detect but has disastrous consequences?
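As a simple illustration of the kind of check that hardware-rooted trust enables, here is a hypothetical sketch that verifies a firmware image against a reference measurement recorded at provisioning time. The key handling and expected digest are assumptions for illustration, not a real TPM API.

```python
import hashlib
import hmac

def firmware_is_untampered(firmware_bytes: bytes,
                           expected_digest_hex: str,
                           attestation_key: bytes) -> bool:
    """Recompute an HMAC over the firmware image and compare it with the
    reference value recorded when the device was provisioned; a mismatch
    suggests the hardware or its firmware has been tampered with."""
    measured = hmac.new(attestation_key, firmware_bytes, hashlib.sha256).hexdigest()
    # compare_digest runs in constant time, so timing does not leak information.
    return hmac.compare_digest(measured, expected_digest_hex)
```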
3. Data poisoning
Research has shown that contaminating even a small fraction of a training dataset (less than 1%) can profoundly alter the behavior of the final model. In a world where training data is often scraped from the internet, it is easy for attackers to sneak in poisoned samples that effectively build backdoors into the models trained on them.
Some essential preventative measures:

Vet and track the provenance of every data source, especially anything scraped from the public internet.
Filter incoming data for anomalies before it reaches the training pipeline (see the sketch after this list).
Monitor model behavior after each retraining run, so a newly introduced backdoor shows up as a regression.
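As one illustration of the filtering idea, here is a minimal sketch that flags scraped samples whose embeddings sit unusually far from the rest of the dataset. The embeddings array and the threshold are assumptions standing in for whatever representation your pipeline already produces.

```python
import numpy as np

def flag_suspicious_samples(embeddings: np.ndarray, z_threshold: float = 4.0) -> np.ndarray:
    """Return the indices of samples whose distance from the dataset centroid
    is an extreme outlier; such samples deserve manual review before training."""
    centroid = embeddings.mean(axis=0)
    distances = np.linalg.norm(embeddings - centroid, axis=1)
    z_scores = (distances - distances.mean()) / (distances.std() + 1e-8)
    return np.where(z_scores > z_threshold)[0]
```

Screening like this catches crude poisoning but not carefully crafted clean-label attacks, so it complements provenance tracking rather than replacing it.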
4. AI-assisted vulnerability hunting
The coding prowess of LLMs can be put to nefarious use. One emerging attack vector is deploying agents that continually search accessible code bases for vulnerabilities to exploit; the cost of discovering zero-day exploits has never been lower. Implement the following protections for your code base:
Use LLMs to comb over any code for security vulnerabilities before it can be submitted (a sketch of one such check follows this list).
Be careful when taking on any third-party dependencies - your system is only as secure as its weakest link.
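A minimal sketch of wiring such a review into a pre-submit check, assuming the OpenAI Python client, a configured API key, and a hypothetical choice of model name; treat the model’s findings as a prompt for human review, not a verdict.

```python
import subprocess
from openai import OpenAI  # assumes the `openai` package is installed and configured

def staged_diff() -> str:
    """Collect the changes that are about to be submitted."""
    return subprocess.run(["git", "diff", "--cached"],
                          capture_output=True, text=True).stdout

def llm_security_review(diff: str) -> str:
    """Ask the model to flag likely vulnerabilities (injection, auth gaps,
    unsafe deserialization, hard-coded secrets) in the submitted diff."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; substitute whatever your organization uses
        messages=[
            {"role": "system",
             "content": "You are a security reviewer. List likely vulnerabilities "
                        "in the following diff, with file and line references."},
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(llm_security_review(staged_diff()))
```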
Soon enough, the use of LLMs as security reviewers will be commonplace, considered as essential to software developers as hand-washing is to medical professionals.
Think adversarially and holistically
The advent of generative and agentic AI promises a time of great upheaval and change. The internet was an enormous boon to human productivity, but it also gave malicious actors new tools and landscapes with which to harm others.
The AI revolution will be no different. Companies looking to stay ahead of novel security threats should think adversarially, leveraging the same tools as the attackers to make their systems more secure and resilient. They should also think holistically, not limiting themselves to the digital realm, and treating every link in their product chain as a potential target.