TL;DR
AI agents combine two core components: a deterministic ‘Agent Core’ and a non-deterministic language model. This split shapes how such systems can be secured and controlled.
Researchers describe AI agents as built from two distinct components: a deterministic ‘Agent Core’ and a non-deterministic large language model (LLM), a duality characterized as the system’s ‘two souls.’ This framing shapes how developers and security professionals approach AI system design and control.
The first component is a deterministic application, the ‘Agent Core,’ which manages interactions and executes predefined logic. The second is the LLM, which performs probabilistic reasoning and can produce different outputs for the same input. This duality creates a fundamental challenge: the deterministic core can be analyzed, tested, and secured reliably, while the LLM’s behavior remains unpredictable and difficult to fully control.
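The duality can be sketched in a few lines of Python. This is a minimal illustration, not any specific product’s API: `mock_llm`, `AgentCore`, and the action names are hypothetical, with the model’s variability simulated by random choice.

```python
import random

def mock_llm(prompt: str) -> str:
    """Hypothetical stand-in for a model call: the same input can
    yield different outputs (the probabilistic 'soul')."""
    return random.choice(["search_web", "send_email", "noop"])

class AgentCore:
    """The deterministic 'soul': fixed orchestration logic that can
    be analyzed and tested conventionally."""

    ALLOWED_ACTIONS = {"search_web", "noop"}  # predefined policy

    def step(self, prompt: str) -> str:
        action = mock_llm(prompt)               # non-deterministic step
        if action not in self.ALLOWED_ACTIONS:  # deterministic gate
            return "refused"
        return action
```

Whatever the model proposes, the core’s output is always drawn from a fixed, auditable set, which is the property a security review can actually verify.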
This analysis stems from recent technical discussions, which stress that non-deterministic does not mean non-computable: the LLM samples from a probability distribution, so its outputs vary even though the underlying computation is well defined. Consequently, traditional security approaches built on deterministic assumptions are insufficient for AI agents, and new strategies are needed to constrain the probabilistic ‘soul’ through the deterministic one.
Why It Matters
This distinction between the two components of AI agents has significant implications for security, control, and trustworthiness. Developers cannot secure the LLM itself, but they can architect the deterministic core to limit what the LLM’s outputs are allowed to do. This matters increasingly as AI agents are integrated into critical systems, where unpredictable outputs could lead to security vulnerabilities or operational failures.
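One common way the deterministic core limits the model is a strict output gate: the core only acts on replies that parse into an expected shape. A minimal sketch, assuming a hypothetical convention where the model must reply with JSON containing exactly a `tool` and an `args` field:

```python
import json

EXPECTED_KEYS = {"tool", "args"}  # the only shape the core accepts

def accept_llm_output(raw: str):
    """Deterministic gate over probabilistic output: parse the model's
    reply and reject anything that is not well-formed JSON with exactly
    the expected keys. Returns the parsed dict, or None on rejection."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict) or set(data) != EXPECTED_KEYS:
        return None
    return data
```

The model remains free to emit anything, but only outputs that survive this deterministic check can influence the rest of the system.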
Background
The concept builds on recent discussions within the AI development community, where the term ‘agent’ has been loosely defined. Experts, including researchers at Microsoft and security analysts, emphasize that the core of an AI agent is a deterministic software layer orchestrating the non-deterministic LLM. This understanding clarifies ongoing debates about AI safety, security, and the limits of testing AI behavior, especially as generative models grow more complex and autonomous.
“AI agents are fundamentally composed of a deterministic core and a probabilistic language model, which together form two distinct ‘souls’ within the system.”
— AI researcher
“Understanding the deterministic and probabilistic components separately allows us to better design safeguards against unpredictable AI outputs.”
— Security analyst
What Remains Unclear
It remains unclear how widely this architectural perspective will be adopted across different AI platforms and whether new security standards will emerge based on this insight. Additionally, the practical methods for constraining the probabilistic ‘soul’ without limiting AI capabilities are still under development.
What’s Next
Next steps include developing security frameworks that explicitly account for the dual nature of AI agents, creating tools to analyze and test the deterministic core, and establishing best practices to manage the probabilistic outputs. Further research will explore how to effectively constrain or guide the LLM’s behavior within safe operational boundaries.
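Testing the deterministic core separately is already practical if the model is injected as a dependency, so tests can replace it with a stub that returns hostile output on demand. A sketch under that assumption (all names here are illustrative):

```python
class Core:
    """Minimal deterministic core with the model injected as a
    dependency, so tests can substitute a stub for the real LLM."""

    def __init__(self, llm):
        self.llm = llm

    def run(self, prompt: str) -> str:
        reply = self.llm(prompt)
        # Deterministic policy: only single alphabetic tokens pass.
        return reply if reply.isalpha() else "REJECTED"

# Because the LLM is injected, the core's behavior under hostile
# model output can be exercised deterministically:
def test_rejects_shell_injection():
    core = Core(llm=lambda p: "rm -rf /")
    assert core.run("hi") == "REJECTED"

def test_passes_clean_token():
    core = Core(llm=lambda p: "search")
    assert core.run("hi") == "search"
```

The probabilistic component stays outside the test boundary; what gets verified is that the deterministic logic behaves safely for every class of reply the model might produce.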
Key Questions
What does it mean that AI agents have two ‘souls’?
It means they consist of a deterministic core that can be tested and secured, and a non-deterministic language model that generates variable outputs, making control more complex.
Can the probabilistic ‘soul’ of an AI agent be secured?
Not directly. Security strategies focus on constraining the deterministic core to limit what the probabilistic component can influence or produce.
Why does this distinction matter for AI safety?
Because it highlights that traditional security approaches are insufficient for AI agents, which require new methods to manage unpredictability and ensure safe operation.
Will this understanding change how AI systems are built?
Yes, developers will need to design architectures that explicitly separate and secure the deterministic and probabilistic parts to improve control and safety.