February 2, 2026

Introduction

Organizations across legal, professional services, and technology sectors are moving quickly to deploy artificial intelligence and large language models into everyday workflows. The drivers are familiar. Teams want efficiency gains, faster turnaround, and the ability to compete in markets where AI adoption is becoming the norm rather than the exception.

From our work at ikPin™, we consistently see this speed outpace the security and governance structures needed to support it. AI tools are being introduced into environments that were never designed to account for new data flows, new access paths, or entirely new abuse models. In many cases, deployment decisions are made long before there is a clear understanding of how these systems are configured or how they might be targeted once exposed beyond internal networks.

Within the span of roughly a month, two independent security findings illustrated this gap from different angles. One documented the first attributed LLMjacking campaign monetized through a commercial marketplace. The other revealed publicly exposed control interfaces for an AI assistant platform. While technically distinct, both incidents highlight the same underlying failure: AI is being operationalized faster than the security programs meant to protect it.

Together, these cases reinforce a reality we are increasingly communicating to clients. AI security cannot be treated as a downstream activity. It has to be embedded from the moment an AI tool is evaluated, carried through implementation, and sustained as part of ongoing operations.

LLMjacking Becomes a Commercial Activity

In early 2025, Pillar Security published research detailing Operation Bizarre Bazaar, the first publicly attributed LLMjacking campaign to demonstrate clear commercial monetization. This was not a controlled test or an edge case. Attackers gained access to poorly secured LLM infrastructure and abused those resources at scale. What stands out is that this activity was enabled by the same foundational security gaps we routinely encounter during AI assessments.

In practice, the abuse runs on the victim's own infrastructure and billing, so organizations may not notice anything is wrong until costs rise, performance degrades, or external researchers identify the abuse first. For teams embedding LLMs into client-facing products or sensitive internal workflows, this creates risk that extends well beyond infrastructure spend. Client trust, regulatory obligations, and contractual assurances can all be impacted by silent misuse of AI systems.
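To make this concrete, the sketch below shows the kind of baseline usage monitoring that can surface this sort of abuse before the bill does: it compares each API key's most recent daily token consumption against its own recent average and flags large spikes. The key names, the shape of the usage data, and the 3x threshold are illustrative assumptions, not any specific provider's API or a recommended tuning.

# Minimal sketch, assuming per-key daily token counts are already collected.
# Key names, data shape, and the 3x spike threshold are illustrative only.
from statistics import mean
from typing import Dict, List

def find_suspicious_keys(
    usage_history: Dict[str, List[int]],  # api_key -> daily token counts, oldest to newest
    spike_factor: float = 3.0,
    baseline_days: int = 7,
) -> List[str]:
    """Flag keys whose most recent day far exceeds their own recent baseline."""
    suspicious = []
    for api_key, daily_tokens in usage_history.items():
        if len(daily_tokens) <= baseline_days:
            continue  # not enough history to establish a baseline
        *history, latest = daily_tokens
        baseline = mean(history[-baseline_days:])
        if baseline > 0 and latest > spike_factor * baseline:
            suspicious.append(api_key)
    return suspicious

# Example: a key that normally consumes ~10k tokens per day suddenly burns 80k.
usage = {
    "key-analytics": [9800, 10200, 9900, 10100, 9700, 10050, 9950, 80000],
    "key-support":   [4900, 5100, 5000, 4950, 5050, 5000, 4980, 5100],
}
print(find_suspicious_keys(usage))  # -> ['key-analytics']

Even a simple check like this shifts detection from "an external researcher told us" to an internal signal that fires within a day of the abuse starting.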

Pillar Security’s research is important because it reframes the conversation. This was not a failure of AI capability. It was a failure to apply basic security principles to AI infrastructure.

Exposed Clawdbot Control Interfaces

Shortly after, security researcher Jamieson O’Reilly identified publicly accessible Clawdbot Control instances exposed on the internet. Clawdbot is positioned as an AI assistant platform, yet the exposed interfaces allowed access to management and control functionality that should have been tightly restricted.

From an operational perspective, this type of exposure is one of the most common issues we see when organizations move AI tools from proof of concept into production. Control planes and administrative interfaces are often treated as secondary concerns, especially when teams are focused on speed of rollout. Once deployed, those interfaces can remain exposed far longer than anyone realizes.
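One lightweight way to catch that drift is to probe control-plane endpoints from outside the network on a schedule and alert when any of them answer without demanding credentials. The sketch below assumes a hypothetical list of admin URLs and the convention that a properly protected endpoint returns 401 or 403; both are assumptions to adapt to the platform in question.

# Minimal sketch: probe control-plane URLs from an external vantage point and
# flag any that respond without an authentication challenge. The URL list and
# the 401/403 convention are illustrative assumptions, not any product's documented behavior.
import urllib.error
import urllib.request

CONTROL_ENDPOINTS = [
    "https://ai-assistant.example.com/admin",       # hypothetical admin console
    "https://ai-assistant.example.com/api/config",  # hypothetical configuration API
]

def check_exposure(url: str, timeout: float = 5.0) -> str:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            # A 2xx response with no credentials supplied is the worst case.
            return f"EXPOSED ({response.status}): reachable without authentication"
    except urllib.error.HTTPError as err:
        if err.code in (401, 403):
            return f"protected ({err.code}): authentication enforced"
        return f"review ({err.code}): unexpected response"
    except (urllib.error.URLError, TimeoutError):
        return "unreachable: not publicly accessible or network error"

for endpoint in CONTROL_ENDPOINTS:
    print(endpoint, "->", check_exposure(endpoint))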

For organizations handling sensitive or regulated data, particularly in legal and professional services, this kind of exposure introduces immediate risk. AI assistants frequently interact with internal documents, client communications, and privileged workflows. An exposed control interface is not simply a technical oversight. It reflects a breakdown in ownership, review, and ongoing governance.

Jamieson O’Reilly’s findings underscore how quickly AI tooling can drift outside an organization’s intended security boundary when there is no structured process to validate configurations and reassess exposure over time.

A Pattern We See Repeatedly

Although these incidents differ technically, they reflect a pattern. AI systems are frequently treated as features rather than as high-risk platforms. They are approved quickly, integrated deeply, and then left to operate with limited oversight.

In many environments, procurement focuses on functionality and vendor assurances while assuming security controls are in place by default. Risk assessments, when performed, are often point-in-time exercises that do not account for how AI tools evolve after deployment. Security configurations are rarely revisited, and usage is not monitored with an eye toward abuse or misuse.

None of these issues is new, but AI magnifies the impact. AI systems can be abused quietly, at scale, and in ways that are difficult to distinguish from legitimate use.

Why Security Has to Start Before AI Is Approved

One of the clearest lessons from these incidents, and from our client work, is that AI security cannot start after deployment. By the time a tool is live, data is already flowing, integrations are established, and exposure may already exist.

Effective AI security begins during evaluation and procurement. Organizations need to understand what data an AI system will touch, how that data will be processed and retained, how access to models and control interfaces is managed, and whether meaningful monitoring is possible. Without that understanding, organizations are effectively inheriting risk without visibility or control.
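One way to operationalize that evaluation is to capture these questions as a structured checklist that gates approval, as sketched below. The field names and the simple pass rule are illustrative placeholders, not a formal assessment framework.

# Minimal sketch: the evaluation questions above expressed as a structured
# pre-approval checklist. Field names and the approval rule are illustrative only.
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class AIToolAssessment:
    tool_name: str
    data_categories_touched: List[str]      # e.g. client documents, internal email
    retention_documented: bool              # processing and retention understood?
    control_plane_access_restricted: bool   # admin and control interfaces locked down?
    usage_monitoring_possible: bool         # can abuse be detected after deployment?

    def approved_for_pilot(self) -> bool:
        """Approve only when every control question is answered affirmatively."""
        return all((
            self.retention_documented,
            self.control_plane_access_restricted,
            self.usage_monitoring_possible,
        ))

assessment = AIToolAssessment(
    tool_name="ExampleAssistant",
    data_categories_touched=["client documents", "internal email"],
    retention_documented=True,
    control_plane_access_restricted=False,  # fails: admin interface reachable publicly
    usage_monitoring_possible=True,
)
print(asdict(assessment))
print("Approved for pilot:", assessment.approved_for_pilot())  # False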

This early-stage analysis is often where the most significant gaps are identified, long before a single alert would ever fire in production.

AI Security Is Not Static

Both the LLMjacking campaign and the Clawdbot exposure demonstrate that AI risk changes over time. Configurations shift, integrations expand, and new capabilities are enabled as tools mature. Threat actors are actively testing these systems, looking for ways to exploit that drift.

For this reason, AI security has to be continuous. It requires ongoing configuration review, exposure management, and monitoring that focuses on how systems are actually being used rather than how they were intended to be used. Treating AI security as a one-time control leaves organizations with a false sense of security.

ikPin™’s Perspective on Secure AI Adoption

From our perspective, secure AI adoption is not about slowing innovation. It is about ensuring organizations can deploy AI without creating unnecessary operational risk.

Our work focuses on helping organizations understand AI risk before tools are approved, validating security and configuration during implementation, and establishing governance and monitoring processes that persist after deployment. This includes AI-specific risk assessments, security configuration reviews, and ongoing oversight designed to prevent the types of failures seen in both of these incidents.

As AI becomes embedded in core business functions, security around these tools must be seen as a foundational pillar of how organizations operate, deliver services, and maintain trust.

The organizations that recognize this early will be better positioned to adopt AI responsibly and securely. Those that do not will continue to rely on external researchers and attackers to surface their exposure, which is not how any organization should want to operate.

The ikPin™ team would like to give kudos to Pillar Security and Jamieson O’Reilly.