TL;DR
Researchers highlight that AI agents comprise two parts: a deterministic core and a probabilistic language model (LLM). Only the deterministic part is fully controllable, which raises security concerns about the LLM's unpredictable behavior. This distinction shapes how AI systems can be secured and managed.
Recent technical discussions describe AI agents as composed of two distinct components: a deterministic core and a probabilistic language model, or LLM. The distinction matters because only the deterministic core can be fully controlled and tested, while the LLM’s behavior remains inherently unpredictable. This insight has significant implications for AI security and management.
Experts in the AI development community, drawing on discussions shared on Hacker News, explain that an AI agent’s architecture consists of a deterministic ‘Agent Core’ and a non-deterministic ‘LLM.’ The Agent Core orchestrates interactions, processes inputs, and executes actions based on fixed code, making its behavior predictable and testable. In contrast, the LLM generates outputs through probabilistic sampling, so its responses can vary even for identical inputs, leading to unpredictable behavior.
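To make the split concrete, here is a minimal sketch of what such an architecture could look like in Python. The names used (AgentCore, call_llm, TOOLS) are illustrative assumptions rather than any established framework: everything in the core is ordinary, testable code, and the single LLM call is the step whose output can differ from run to run.

```python
# Minimal sketch (assumed names, not a real framework) of the two-part
# architecture: a deterministic Agent Core wrapping a probabilistic LLM call.
import json
import random

def call_llm(prompt: str) -> str:
    """Stand-in for the probabilistic soul: the same prompt may yield different plans."""
    plans = [
        '{"tool": "search", "args": {"query": "weather in Paris"}}',
        '{"tool": "clock", "args": {}}',
    ]
    return random.choice(plans)  # non-deterministic by design

# Fixed tool registry: part of the deterministic core.
TOOLS = {
    "search": lambda args: f"search results for {args['query']}",
    "clock": lambda args: "12:00",
}

class AgentCore:
    """Deterministic soul: fixed orchestration code that can be tested exhaustively."""

    def run(self, user_input: str) -> str:
        raw = call_llm(f"Plan a tool call for: {user_input}")  # probabilistic step
        plan = json.loads(raw)                                 # deterministic parsing
        tool = TOOLS[plan["tool"]]                             # deterministic dispatch
        return tool(plan["args"])                              # deterministic execution

print(AgentCore().run("What should I do today?"))
```

Every line except the call_llm step behaves the same on every run, which is exactly why the core can be analyzed and tested while the LLM cannot.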
This duality is described as the ‘two souls’ of an AI agent: the deterministic soul (Agent Core) and the probabilistic soul (LLM). The core can be analyzed, tested, and secured using traditional methods, but the LLM’s inherent variability makes it impossible to fully secure or predict its outputs.
Security concerns arise because traditional software security relies on predictability and complete testing. Since the LLM’s outputs are probabilistic, malicious actors could exploit this unpredictability, making it difficult to enforce strict safety or security boundaries.
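One mitigation discussed in this context is to place a deterministic guardrail inside the core, so that every action the LLM proposes is checked against a fixed policy before anything is executed. The sketch below illustrates that idea under stated assumptions; the policy, function names, and action set are hypothetical, not a standard API.

```python
# Hypothetical sketch of a deterministic guardrail: the core validates every
# LLM-proposed action against a fixed allowlist before executing it.
ALLOWED_ACTIONS = {"read_file", "list_directory"}    # fixed, auditable policy
FORBIDDEN_PREFIXES = ("/etc", "/root")               # deterministic boundary

def is_permitted(action: str, path: str) -> bool:
    """Pure, repeatable policy check; independent of whatever the LLM says."""
    if action not in ALLOWED_ACTIONS:
        return False
    return not path.startswith(FORBIDDEN_PREFIXES)

def execute_plan(llm_plan: dict) -> str:
    """The core decides; the LLM only proposes."""
    action = llm_plan.get("action", "")
    path = llm_plan.get("path", "")
    if not is_permitted(action, path):
        return f"rejected: {action} on {path!r} is outside the allowlist"
    return f"executed: {action} on {path!r}"

# The LLM might propose anything, including something malicious or nonsensical;
# the deterministic check bounds what can actually happen.
print(execute_plan({"action": "delete_file", "path": "/etc/passwd"}))
print(execute_plan({"action": "read_file", "path": "/home/user/notes.txt"}))
```

The point of this design is that is_permitted is small, pure, and exhaustively testable, so the worst a manipulated LLM can do is propose an action that gets rejected.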
Why It Matters
This development matters because it shifts how developers and security professionals must approach AI safety. Relying solely on the deterministic core is insufficient; systems must be designed to constrain the probabilistic LLM’s outputs. Understanding that AI agents have two ‘souls’ helps frame the security challenges and guides the development of more robust, controllable AI systems.

Background
Previous discussions in AI security have focused on controlling model outputs and preventing misuse. This new analysis emphasizes the architectural distinction within AI agents, highlighting the fundamental difference between the predictable software components and the inherently unpredictable language models. The concept aligns with ongoing concerns about AI safety and the difficulty of securing probabilistic models against adversarial manipulation.
“The two true components of an AI agent are a deterministic application and an LLM, which is not deterministic.”
— Hacker News contributor
“You cannot fully secure the probabilistic nature of LLMs, but you can architect the deterministic core to limit what the LLM can reach.”
— AI security researcher

What Remains Unclear
It remains unclear how best to design security frameworks that effectively constrain the probabilistic LLM without compromising functionality. The extent to which current mitigation techniques can prevent misuse or unpredictable outputs is still under investigation. Additionally, the impact of this architecture on large-scale deployment and regulation is not yet fully understood.

What’s Next
Next steps involve developing security architectures that explicitly account for the dual nature of AI agents, including new testing protocols, control mechanisms, and possibly regulatory standards. Further research is needed to quantify the limits of controlling the probabilistic LLMs and to establish best practices for safe deployment.
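As one hedged illustration of what such testing protocols might look like, the deterministic core can be covered by conventional unit tests once the LLM is replaced with a fixed stub, while the LLM itself would still require statistical evaluation over many samples. The class and method names below are assumptions made for the sketch.

```python
# Hypothetical sketch: unit-testing the deterministic core by injecting a fixed
# stub in place of the LLM, so the test is fully repeatable.
import json
import unittest

class AgentCore:
    def __init__(self, llm):
        self.llm = llm                        # injected so tests can swap it out

    def plan_from(self, user_input: str) -> dict:
        raw = self.llm(user_input)            # probabilistic in production, stubbed in tests
        plan = json.loads(raw)                # deterministic parsing and validation
        if plan.get("tool") not in {"search", "clock"}:
            raise ValueError("unknown tool")
        return plan

class AgentCoreTest(unittest.TestCase):
    def test_valid_plan_is_parsed(self):
        stub_llm = lambda prompt: '{"tool": "search", "args": {"query": "x"}}'
        self.assertEqual(AgentCore(stub_llm).plan_from("anything")["tool"], "search")

    def test_unknown_tool_is_rejected(self):
        stub_llm = lambda prompt: '{"tool": "rm_rf", "args": {}}'
        with self.assertRaises(ValueError):
            AgentCore(stub_llm).plan_from("anything")

if __name__ == "__main__":
    unittest.main()
```

Tests like these pin down the core's behavior completely; what they cannot do is guarantee anything about the distribution of plans a real LLM will produce, which is where the open research questions remain.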

Key Questions
What does it mean that AI agents have two souls?
It means that AI agents are composed of a deterministic core that can be fully controlled and a probabilistic language model that generates unpredictable outputs. Only the deterministic part is fully manageable, which raises security considerations.
Why is the probabilistic nature of LLMs a security concern?
The probabilistic behavior of LLMs makes their outputs unpredictable, which can be exploited maliciously or lead to safety issues. This unpredictability cannot be fully tested or secured using traditional methods.
Can we fully secure AI agents given this architecture?
No, the inherent variability of the LLM cannot be fully secured. The best approach is to design the deterministic core to limit the LLM’s reach and influence.
What are the implications for AI safety regulation?
Regulators will need to consider the dual architecture of AI agents, focusing on controlling the deterministic components and establishing standards for managing the probabilistic parts.