DATE
October 31, 2025
Based on Oesch, Hutchins, Kurian & Koch, “Living Off the LLM: How LLMs Will Change Adversary Tactics”, arXiv:2510.11398v1 arXiv
Date: October 31st, 2025
Artificial intelligence has started to change almost every aspect of modern life, and cybersecurity is no exception. Over the last two years, we have seen large language models (LLMs) move from research labs into everyday business tools. But just as defenders are learning to use them for automation and analysis, attackers are adapting even faster.
A recent research paper titled “Living Off the LLM: How LLMs Will Change Adversary Tactics” (Oesch et al., 2025) explores how threat actors are beginning to rely on these same models as part of their attack chain. The concept builds on a familiar idea: "living off the land," where attackers use trusted system tools instead of malware to stay hidden. Now, with LLMs, they are taking it a step further.
Traditionally, "living off the land" attacks use legitimate software such as PowerShell, WMI, or system scripts to execute malicious actions without leaving obvious traces. These attacks are hard to detect because they blend in with normal administrative behavior.
The new concept of "living off the LLM" (LOLLM) expands on that strategy. Instead of using only local system tools, attackers use local or embedded LLMs to generate the code they need on demand. In simple terms, the LLM becomes a silent accomplice inside the environment.
For example, a compromised endpoint could already contain a quantized LLM like Llama or Gemma. A malicious script could quietly prompt that model to write a PowerShell command, enumerate directories, or create persistence mechanisms. The key difference is that none of this code exists beforehand. It is generated dynamically, making traditional signature-based detection nearly useless.
The researchers demonstrated how an attacker could identify installed models, select one, and use it to create and execute malicious code entirely offline. In their test, they scanned for locally cached models, chose Gemma 3 (6B), and instructed it to write small code snippets, such as file search and deletion routines.
To bypass model safety restrictions, they crafted a jailbreak-style prompt that framed the request as a benign or ethical task. Once the model complied, the script could execute the generated code locally. Because the code was created in real time, no static antivirus or endpoint detection rule could flag it before execution.
This changes the equation. In the past, a malicious binary or script had to be delivered and stored somewhere on disk. Now, the attacker can generate that code inside the system at the moment of use, leaving almost no digital footprint.
The concept of living off the LLM significantly lowers the barrier to entry for sophisticated attacks. A relatively inexperienced adversary can now use AI tools to produce polymorphic malware, automate reconnaissance, or coordinate multi-stage campaigns.
At the same time, these attacks can operate without relying on external infrastructure. An offline model running on a local device does not need to connect to the internet, which removes one of the easiest indicators defenders rely on.
The paper also highlights how model alignment and safety controls can be bypassed through careful prompt engineering. Even commercial AI systems that block malicious outputs can be tricked by reframing the context or chaining prompts in creative ways.
Traditional security tools are not built to detect this type of behavior. Blocking known file hashes or static code patterns no longer works when the code is created fresh every time. To counter this new threat model, defenders need to focus on behavioral and contextual detection.
Some of the mitigation strategies discussed in the research include:
This shift will require collaboration between security teams, AI engineers, and compliance officers. The controls that protect your network must now extend to the models and datasets that operate within it.
Enterprises are deploying LLMs at a rapid pace, often without fully understanding the security implications. Whether used for customer support, internal tools, or code assistance, these systems often have high privileges and wide visibility across the organization.
Security leaders should start by building an inventory of all LLM deployments, both cloud-based and local. From there, apply the same principles used for privileged access management: least privilege, strong authentication, and continuous monitoring.
LLMs capable of writing or executing code should be isolated within secure environments, with strict controls around who can access them and what data they can see. Policies should explicitly define acceptable use, logging requirements, and incident response plans for model-related threats.
Education is also key. Many employees still view AI tools as neutral assistants, not as potential security risks. Awareness training should include scenarios where attackers manipulate AI models to leak data, generate malicious code, or assist in social engineering.

At IKPIN Group, we view LLM security as the next major evolution in enterprise defense. Just as the industry adapted to cloud and mobile threats, we now need to adapt to AI-native risks.
Our team helps organizations identify vulnerable AI systems, monitor LLM activity, and simulate real-world attacks that involve prompt injection, model hijacking, or LLM-assisted malware. We also design detection frameworks that integrate prompt firewalls, behavioral baselines, and supply chain validation.
The goal is not to discourage innovation, but to ensure it happens securely. AI is a powerful ally, but without governance, it can just as easily become a liability. By combining proactive monitoring with strong access control and model integrity checks, organizations can stay ahead of this new class of threats.
The shift from “living off the land” to “living off the LLM” marks a turning point in cybersecurity. The same technology that helps defenders analyze data and automate workflows can now be turned against them.
Enterprises must recognize that local and embedded AI systems are part of their attack surface. Ignoring them will leave gaps that traditional security tools cannot see.
The next generation of cybersecurity is not just about defending networks or endpoints. It is about defending intelligence itself.