Intelligent Agents: The Core Architecture Behind Every LLM System
Overview
Before you can teach someone how prompts work or how LLMs behave, you need a mental model of agents. Not the sci-fi kind—more like “software beings” that live in some environment, take actions, and try to achieve goals.
Modern LLM apps? They are agent systems.
Multi-agent debate? Agent systems.
Retrieval-augmented generation? Still agent systems wrapped in external memory.
Let’s break it down in a way that makes sense for builders.
Concept Explanation
Think of an agent as a loop:
Observe → Decide → Act → Observe again
That’s it. But the quality of this loop (what the agent sees, what it remembers, how it reasons) is what separates a toy bot from a real system.
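Here’s a minimal sketch of that loop in Python. The environment and the `decide` function are toy stand-ins, not a real framework; in a production system `decide` would be an LLM call and `act` would execute tools.

```python
class EchoEnvironment:
    """Toy environment: a fixed queue of user messages."""
    def __init__(self, messages):
        self.messages = list(messages)

    def observe(self):
        return self.messages.pop(0) if self.messages else None

    def act(self, action):
        print(f"agent says: {action}")
        return not self.messages  # done when no messages remain


def run_agent(env, decide, max_steps=10):
    history = []                    # percept history the agent conditions on
    for _ in range(max_steps):
        percept = env.observe()     # Observe
        history.append(percept)
        action = decide(history)    # Decide -- the "agent function"
        if env.act(action):         # Act, then observe again next iteration
            break


# decide() is a stub here; in a real system it's an LLM call.
run_agent(EchoEnvironment(["hi", "what's an agent?"]),
          decide=lambda h: f"you said {h[-1]!r}")
```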
Russell & Norvig push this idea heavily:
AI = building rational agents capable of taking the “right” action given what they know.
Key pieces:
✔ The Environment
This is everything the agent doesn’t fully control. For an LLM:
- The user is part of the environment.
- External tools (APIs, vector DBs, browsers) are part of the environment.
- Even the model’s own memory state is part of the environment.
✔ The Percepts
Agents need inputs—percepts.
For an LLM:
- User messages
- Retrieved documents
- Observed tool results
- Internal chain-of-thought signals (hidden from the user)
✔ The Action Space
This is what the agent can do:
- Generate text
- Call a tool
- Update memory
- Ask clarifying questions
- Spawn other agents (in multi-agent workflows)
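In a real app, the action space usually shows up as a tool schema handed to the model. Here’s a sketch in the JSON-function-calling style many chat APIs accept (simplified, and the names are illustrative, not any specific provider’s format):

```python
# The action space made explicit as tool schemas. Simplified
# function-calling style; names are illustrative.
ACTIONS = [
    {
        "name": "generate_text",
        "description": "Reply to the user directly.",
        "parameters": {"type": "object",
                       "properties": {"text": {"type": "string"}}},
    },
    {
        "name": "search_vector_db",
        "description": "Retrieve documents relevant to a query.",
        "parameters": {"type": "object",
                       "properties": {"query": {"type": "string"}}},
    },
    {
        "name": "update_memory",
        "description": "Persist a fact for later turns.",
        "parameters": {"type": "object",
                       "properties": {"fact": {"type": "string"}}},
    },
]
```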
✔ The Agent Function
This is the brain:
A mathematical mapping from percept history → action.
In LLM land, the model itself is the agent function.
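As a type signature, the idea is compact. A sketch (the `call_model` stub is hypothetical; substitute your provider’s SDK):

```python
from typing import Callable

Percept = str   # in practice: user messages, tool results, retrieved docs
Action = str    # in practice: text, a tool call, a memory write

# The agent function: a mapping from percept history to the next action.
AgentFunction = Callable[[list[Percept]], Action]

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call."""
    return f"(response conditioned on {len(prompt)} chars of context)"

def llm_agent_function(history: list[Percept]) -> Action:
    # The percept history is serialized into the context window;
    # the completion is the chosen action.
    return call_model("\n".join(history))
```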
Why This Matters for Prompt Engineering
Once you see an LLM as an agent, the whole discipline shifts:
- Prompts aren’t “magic words.” They’re state injections into the percept stream.
- System prompts define the policy (the governing rules of the agent).
- Tool instructions expand the action space.
- Memory and RAG define the knowledge base the agent can use.
Everything becomes clearer when you think like an agent designer instead of a prompt magician.
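Those roles map directly onto the request you send. A sketch of a chat-style payload (the shape loosely follows the common messages-plus-tools convention; simplified, not a literal API body):

```python
# How the agent-design pieces land in a chat-style request.
# Simplified for illustration, not a literal provider payload.
request = {
    "messages": [
        # System prompt = the policy: the agent's governing rules.
        {"role": "system",
         "content": "You are a careful research assistant. Cite sources."},
        # Memory / RAG = knowledge injected into the percept stream.
        {"role": "system",
         "content": "Relevant notes: <retrieved documents go here>"},
        # User prompt = the newest percept.
        {"role": "user", "content": "Summarize the attached paper."},
    ],
    # Tool instructions = an expanded action space.
    "tools": [{"name": "web_search",
               "description": "Search the web for a query."}],
}
```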
Examples / Prompts
1. Turning an LLM into a “reactive” agent
Reactive = Responds only to current input.
Prompt:
Whenever the user asks anything, respond only with the most relevant fact from the provided dataset. Ignore past conversation.
This creates a memoryless agent—popular for evals or strict compliance tasks.
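You can enforce the memorylessness in code rather than only in the prompt. A sketch, with a hypothetical `call_model` stub:

```python
def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call."""
    return "(model output)"

REACTIVE_PROMPT = (
    "Whenever the user asks anything, respond only with the most "
    "relevant fact from the provided dataset. Ignore past conversation."
)

def reactive_agent(user_message: str, dataset: str) -> str:
    # No history parameter at all: statelessness is enforced in code,
    # not just requested in the prompt.
    return call_model(f"{REACTIVE_PROMPT}\n\nDataset:\n{dataset}\n\n"
                      f"User: {user_message}")
```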
2. Turning an LLM into a “goal-based” agent
Goal-based = Chooses actions to move toward a target.
Prompt:
Your goal is to help the user complete their machine learning project.
At each step, decide:
1. What information you still need
2. What action moves the user closer to the goal
Now the agent plans, not just reacts.
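The structural difference from the reactive agent is that the percept history survives across turns. A sketch (same hypothetical `call_model` stub):

```python
def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call."""
    return "(model output)"

GOAL_PROMPT = (
    "Your goal is to help the user complete their machine learning project.\n"
    "At each step, decide:\n"
    "1. What information you still need\n"
    "2. What action moves the user closer to the goal"
)

def goal_based_turn(history: list[str], user_message: str) -> str:
    # Unlike the reactive agent, the full history is kept, so the
    # model can plan across turns toward the stated goal.
    history.append(f"User: {user_message}")
    reply = call_model(GOAL_PROMPT + "\n\n" + "\n".join(history))
    history.append(f"Assistant: {reply}")
    return reply
```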
3. Creating a rational, utility-driven agent
Rational = Chooses the action with the highest expected benefit.
Prompt:
When deciding how to respond, evaluate these outcomes:
- Clarity (0–10)
- Usefulness (0–10)
- Safety (0–10)
Pick the response with the highest total score.
You’re making the LLM do a mini decision-theoretic evaluation.
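You can also run that evaluation outside the model: sample a few candidate responses, score each criterion with a judge, and pick the argmax. A sketch with stubbed sampling and judging (any real version would call a model for both):

```python
import random

def sample_candidates(user_message: str, n: int = 3) -> list[str]:
    """Stand-in for sampling n candidate responses from a model."""
    return [f"(candidate {i} for {user_message!r})" for i in range(n)]

def judge(candidate: str, criterion: str) -> int:
    """Stand-in for a judge-model rating on a 0-10 scale."""
    return random.randint(0, 10)

def utility(candidate: str) -> int:
    # Clarity + usefulness + safety: the prompt's mini
    # decision-theoretic evaluation, made explicit in code.
    return sum(judge(candidate, c)
               for c in ("clarity", "usefulness", "safety"))

def best_response(user_message: str) -> str:
    return max(sample_candidates(user_message), key=utility)
```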
Practical Use (for AI app builders)
When you’re designing:
- A content generator
- A multi-agent system
- A retrieval-enhanced chatbot
- An AI-powered workflow
You’re really choosing:
- What the agent sees (context window + RAG docs)
- What the agent can do (tooling)
- How it decides (system prompts + agent loop)
- How persistent it is (memory strategies)
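One way to keep yourself honest is to write those four choices down as an explicit config. A sketch (the field names are illustrative, not a real framework):

```python
from dataclasses import dataclass, field

@dataclass
class AgentDesign:
    # What the agent sees (context window + RAG docs)
    percepts: list[str] = field(
        default_factory=lambda: ["user_messages", "retrieved_docs"])
    # What the agent can do (tooling)
    tools: list[str] = field(
        default_factory=lambda: ["search", "calculator"])
    # How it decides (system prompt + agent loop)
    system_prompt: str = "You are a helpful assistant."
    # How persistent it is (memory strategy)
    memory: str = "last_10_turns"
```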
Once you frame the system as an agent, debugging becomes systematic:
- Wrong outputs → wrong percepts
- Hallucinations → missing/weak knowledge base
- Overthinking → action space too big
- Bad decisions → poorly specified goals or utility model
It’s exactly how we debug distributed systems or microservices—same mental pattern.
Exercises
- Design a reactive AI agent
  - What should its percepts be?
  - What actions does it take?
  - How do you limit its memory?
- Upgrade it to a goal-based agent
  Define a goal and rewrite the agent loop.
- Add tools into the action space
  Choose 2 external tools (search, calculator, vector DB).
  Write a prompt that allows the agent to choose when to call which tool.
- Compare two agent designs
  In your opinion, which type is better for:
  - customer support
  - code generation
  - medical Q&A
  - creative writing
Summary
Intelligent agents provide the mental model behind:
- Prompts
- Tools
- RAG
- Memory
- Multi-agent systems
- LLM-based automation
LLMs are not just “text predictors.”
They are programmable, rational agents whose behavior you shape through:
- Inputs (percepts)
- Goals
- Actions
- Tooling
- Constraints
Mastering this viewpoint gives you much finer control over LLM behavior.