class AIAgent(LLM):

The Anatomy of an AI Agent

Demystifying the "ReAct" pattern, Tools, and LangGraph logic.

1. The Brain in a Box

A standard LLM (like ChatGPT) is just a text prediction engine. It lives in a "glass box": it can't browse the web, run code, or save files on its own. All it can do is generate answers from its training data, and when that data falls short, it hallucinates.


The Shift: When we turn it into an Agent, we aren't changing the brain. We are giving it Tools (functions) and a loop to decide when to use them.

2. Giving it Hands (Tools)

Tools are just Python functions. But how does the LLM know how to use them? It reads the Docstring (the text inside the triple quotes).


from langchain_core.tools import tool
import json

@tool
def write_json(filepath: str, data: dict):
    """Writes a dictionary to a file as JSON."""
    with open(filepath, 'w') as f:
        json.dump(data, f)
Agent's Internal Thought:
"I understand. I will call this function when the user wants to save data."

3. The ReAct Loop (Logic)

This is the "Heart" of the agent (managed by LangGraph). It's a loop called ReAct: Reason, then Act.

[Diagram: Start → LLM (Think) → Tool (Act) → back to LLM (Think), looping until the LLM decides to End]

Note the arrow going from Tool back to LLM. The Agent sees the tool's output and decides what to do next. It loops until the task is done.
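In LangGraph, that loop is literally a small graph. Below is a minimal sketch of the wiring, assuming LangGraph's prebuilt ToolNode and tools_condition helpers and an OpenAI chat model as the brain; the exact imports and model name are illustrative and may vary by version.

from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition
from langchain_openai import ChatOpenAI  # any tool-calling chat model works

tools = [write_json]  # the tool from section 2
llm = ChatOpenAI(model="gpt-4o-mini").bind_tools(tools)

def call_llm(state: MessagesState):
    # Reason: the model reads the conversation so far and either answers
    # directly or emits a tool call.
    return {"messages": [llm.invoke(state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("llm", call_llm)
builder.add_node("tools", ToolNode(tools))             # Act: run the requested tool

builder.add_edge(START, "llm")
builder.add_conditional_edges("llm", tools_condition)  # tool call -> "tools", else end
builder.add_edge("tools", "llm")                       # the arrow back: observe, think again

agent = builder.compile()

The add_edge("tools", "llm") line is that loop-back arrow: every tool result is fed back into the model until it decides the task is done.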

4. The DataGen Agent

Let's put it all together. This is a simulation of the "DataGen" agent from the video. It has two tools: generate_data and save_file.
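The interactive demo isn't reproduced here, but a rough sketch of such an agent might look like this. The tool bodies below are my own placeholders (the video's actual implementation isn't shown), and create_react_agent is LangGraph's prebuilt shortcut for the graph wired up in section 3.

import json
import random
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

@tool
def generate_data(count: int) -> list[dict]:
    """Generate a list of `count` sample user records."""
    return [{"id": i, "name": f"user_{i}", "score": random.randint(0, 100)}
            for i in range(count)]

@tool
def save_file(filepath: str, data: list[dict]) -> str:
    """Write `data` to `filepath` as JSON and return the path."""
    with open(filepath, "w") as f:
        json.dump(data, f, indent=2)
    return filepath

agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"),
                           [generate_data, save_file])

result = agent.invoke({"messages": [
    ("user", "Generate 5 sample users and save them to users.json")]})
print(result["messages"][-1].content)  # the agent's final report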

[Interactive demo: a chat interface for the DataGen agent ("Hello! I can generate sample user data and save it to JSON files."), with a state-graph view, raw logs, a JSON preview, and quick prompts: Generate Users | Save File.]