Tool Use
Structured function calling with schema-validated tool dispatch.
Tool Use (Function Calling) — Overview
Tool use enables an LLM to interact with external systems by requesting structured function calls. The LLM produces a tool name and arguments; your code executes the function and returns the result. This is the foundational capability that makes agents possible.
Evolves from: Prompt Chaining — adds structured function schemas, argument extraction, and result injection.
Architecture
Figure: The LLM receives tool schemas, generates structured tool calls, your code executes them, and results are injected back into context.
How It Works
- Define tools — Provide the LLM with tool schemas: name, description, parameter definitions (JSON Schema). The LLM uses these to understand what's available and how to call it.
- LLM decides — Based on the user's request and available tools, the LLM generates a structured tool call (or a direct text response if no tool is needed).
- Dispatch — Your code routes the tool call to the appropriate function based on the tool name.
- Execute — The function runs with the provided arguments and returns a result.
- Inject — The tool result is added to the conversation context as a tool response.
- Continue — The LLM can make additional tool calls or produce a final text response.
Tool use can be single-shot (one tool call, then respond) or multi-turn (multiple tool calls in sequence, as in the ReAct pattern).
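The steps above can be sketched as a small dispatch loop. Everything here is hypothetical: the `llm` callable, the message shapes, and the `registry` dict are illustrative assumptions, not a specific SDK.

```python
import json

def run_tool_loop(llm, messages, registry, max_turns=10):
    """Sketch of the tool-use loop: call the LLM, dispatch any tool calls,
    inject results, repeat until the LLM answers in plain text.

    `llm(messages)` is assumed to return either {"text": ...} or
    {"tool_calls": [{"id": ..., "name": ..., "arguments": json_string}]}.
    """
    for _ in range(max_turns):
        reply = llm(messages)
        calls = reply.get("tool_calls")
        if not calls:                      # direct text answer: we're done
            return reply["text"]
        messages.append({"role": "assistant", "tool_calls": calls})
        for call in calls:
            fn = registry[call["name"]]    # dispatch by tool name
            result = fn(**json.loads(call["arguments"]))
            messages.append({              # inject result as a tool response
                "role": "tool",
                "tool_call_id": call["id"],
                "content": json.dumps(result),
            })
    raise RuntimeError("tool loop exceeded max_turns")
```

Keeping the loop this small is the point of the pattern: tools are plain functions in a registry, so adding or removing one never touches the loop itself.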
Minimal Example
Find inactive users and send them a re-engagement email — the LLM decides which tools to call and with what arguments.
```python
from patterns.tool_use.code.python.tool_use import ToolUseAgent, Tool

agent = ToolUseAgent(
    llm=your_llm,
    system="You are a data assistant with access to the user database and email system.",
    tools=[
        Tool(
            name="query_db",
            description="Run a read-only SQL query against the user database",
            parameters={
                "type": "object",
                "properties": {"sql": {"type": "string", "description": "SQL query to run"}},
                "required": ["sql"],
            },
            fn=lambda sql: db.execute(sql),
        ),
        Tool(
            name="send_email",
            description="Send an email to a user",
            parameters={
                "type": "object",
                "properties": {
                    "to": {"type": "string"},
                    "subject": {"type": "string"},
                    "body": {"type": "string"},
                },
                "required": ["to", "subject", "body"],
            },
            fn=lambda to, subject, body: email_client.send(to, subject, body),
        ),
    ],
)

result = agent.run(
    "Find all users who haven't logged in for 30+ days and send each a re-engagement email."
)
# result.tool_calls_made → number of tool invocations (one query + N emails)
# result.turns → full conversation and tool call history
# result.final_response → the agent's summary of what it did
```
Full implementation: [`code/python/tool_use.py`](code/python/tool_use.py)
Input / Output
- Input: User message + tool schemas describing available functions
- Output: LLM response (text), potentially after one or more tool calls
- Tool call: Structured request: `{name: string, arguments: object}`
- Tool result: Return value from function execution, injected as context
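Because the tool call is structured data rather than free-form text, it can be checked against the tool's schema before anything executes. A minimal sketch, assuming tools are dicts with `name` and `parameters` keys as in the example above (a real system would use a full JSON Schema validator):

```python
def validate_tool_call(call, tools):
    """Reject unknown tool names and missing required arguments.

    `call` is assumed to look like {"name": str, "arguments": dict};
    returns an error string, or None if the call passes.
    """
    schema = next((t["parameters"] for t in tools if t["name"] == call["name"]), None)
    if schema is None:
        return f"unknown tool: {call['name']}"
    missing = [k for k in schema.get("required", []) if k not in call["arguments"]]
    if missing:
        return f"missing required arguments: {missing}"
    return None  # valid
```

Returning the error string to the LLM as the tool result, instead of raising, gives it a chance to correct the call on the next turn.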
Key Tradeoffs
| Strength | Limitation |
|---|---|
| Bridges LLM reasoning with real-world actions | Tool schema quality directly affects call accuracy |
| Structured, typed interface (not free-form text) | LLM may hallucinate tool names or invalid arguments |
| Modular — add/remove tools without changing core logic | Each tool call adds latency (LLM call + execution) |
| Works with any LLM that supports function calling | Limited by the LLM's ability to understand complex schemas |
| Clear contract between LLM and code | Parallel tool calls require explicit support |
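The hallucination and bad-argument limitations above are usually handled by catching tool failures and feeding them back as results rather than crashing the loop. A hedged sketch (the wrapper name and result shape are illustrative, not from the source implementation):

```python
import json

def safe_execute(fn, arguments_json):
    """Run a tool and convert failures into a readable result for the LLM."""
    try:
        args = json.loads(arguments_json)
    except json.JSONDecodeError as e:
        return {"error": f"arguments were not valid JSON: {e}"}
    try:
        return {"result": fn(**args)}
    except TypeError as e:        # wrong or extra argument names
        return {"error": f"bad arguments: {e}"}
    except Exception as e:        # tool-internal failure
        return {"error": f"tool failed: {e}"}
```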
When to Use
- When the LLM needs to interact with external APIs, databases, or file systems
- When you want structured, validated function calls (not free-form text parsing)
- As a building block for any agent pattern (ReAct, Plan & Execute, etc.)
- When the LLM needs to compute, search, or retrieve information it doesn't have
When NOT to Use
- When the task is purely text-to-text with no external actions needed
- When actions are predetermined — just call the functions directly from code
- When you need complex multi-step reasoning — compose with ReAct or Plan & Execute
Related Patterns
- Evolves from: Prompt Chaining — see evolution.md
- Foundation for: ReAct (tool use + reasoning loop), all other agent patterns
- Combines with: Every agent pattern — tool use is a component, not a standalone system
Deeper Dive
- Design — Schema design, dispatch patterns, error handling, parallel tool calls
- Implementation — Pseudocode, registry patterns, validation, testing tool calls
- Evolution — How tool use evolves from prompt chaining