Lesson 3.4: Multi-turn Conversations with Tools
Duration: 75 minutes
Learning Objectives
By the end of this lesson, you will be able to:
- Implement tool loops for complex tasks
- Manage conversation history with tool calls and results
- Handle sequential tool dependencies
- Control tool usage with tool_choice parameter
- Build robust multi-turn tool conversations
Why Multi-turn Tool Use
A single tool call can answer a simple question, but real tasks often require:
- Multiple data sources: "Compare weather in Tokyo and Paris"
- Sequential operations: "Find the cheapest flight, then book it"
- Iterative refinement: "Search for X, then search for more details on result Y"
- Complex workflows: "Analyze data, create report, send to user"
Multi-turn conversations let the model orchestrate multiple tools until the task is complete.
The Tool Loop Pattern
The tool loop continues until the model returns a final text response without tool calls:
┌──────────────────────────┐
│   Start: user message    │
└────────────┬─────────────┘
             ▼
┌──────────────────────────┐
│  Send to AI with tools   │◄──────────────────────────────────┐
└────────────┬─────────────┘                                   │
             ▼                                                 │
      ┌─────────────┐  Yes  ┌───────────────────────────┐      │
      │ Tool calls  │──────►│ Execute tools, add        │──────┘
      │ in response?│       │ results to messages       │
      └──────┬──────┘       └───────────────────────────┘
             │ No
             ▼
┌──────────────────────────┐
│  Return final response   │
└──────────────────────────┘
Implementing the Tool Loop
Here is a complete implementation for OpenAI:
import OpenAI from 'openai';

const openai = new OpenAI();

interface ToolExecutor {
  (args: string): Promise<string>;
}

const toolExecutors: Record<string, ToolExecutor> = {
  get_weather: async (args) => {
    const { location } = JSON.parse(args);
    return JSON.stringify({ location, temperature: 22, condition: 'sunny' });
  },
  search_web: async (args) => {
    const { query } = JSON.parse(args);
    return JSON.stringify({
      query,
      results: [
        { title: 'Result 1', snippet: 'Information about ' + query },
        { title: 'Result 2', snippet: 'More details on ' + query },
      ],
    });
  },
  get_details: async (args) => {
    const { topic } = JSON.parse(args);
    return JSON.stringify({
      topic,
      details: `Detailed information about ${topic}...`,
    });
  },
};
const tools: OpenAI.ChatCompletionTool[] = [
  {
    type: 'function',
    function: {
      name: 'get_weather',
      description: 'Get weather for a location',
      parameters: {
        type: 'object',
        properties: {
          location: { type: 'string' },
        },
        required: ['location'],
      },
    },
  },
  {
    type: 'function',
    function: {
      name: 'search_web',
      description: 'Search the web',
      parameters: {
        type: 'object',
        properties: {
          query: { type: 'string' },
        },
        required: ['query'],
      },
    },
  },
  {
    type: 'function',
    function: {
      name: 'get_details',
      description: 'Get detailed information about a topic',
      parameters: {
        type: 'object',
        properties: {
          topic: { type: 'string' },
        },
        required: ['topic'],
      },
    },
  },
];
async function runToolLoop(userMessage: string, maxIterations: number = 10): Promise<string> {
  const messages: OpenAI.ChatCompletionMessageParam[] = [{ role: 'user', content: userMessage }];

  for (let i = 0; i < maxIterations; i++) {
    console.log(`\n--- Iteration ${i + 1} ---`);

    const response = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages,
      tools,
    });

    const assistantMessage = response.choices[0].message;

    // Check if done (no tool calls)
    if (!assistantMessage.tool_calls || assistantMessage.tool_calls.length === 0) {
      console.log('No more tool calls, returning response');
      return assistantMessage.content ?? '';
    }

    // Add assistant message to history
    messages.push(assistantMessage);

    // Process each tool call
    for (const toolCall of assistantMessage.tool_calls) {
      const toolName = toolCall.function.name;
      const toolArgs = toolCall.function.arguments;

      console.log(`Calling tool: ${toolName}`);
      console.log(`Arguments: ${toolArgs}`);

      const executor = toolExecutors[toolName];
      const result = executor
        ? await executor(toolArgs)
        : JSON.stringify({ error: `Unknown tool: ${toolName}` });

      console.log(`Result: ${result}`);

      // Add tool result to messages
      messages.push({
        role: 'tool',
        tool_call_id: toolCall.id,
        content: result,
      });
    }
  }

  throw new Error('Max iterations reached without completion');
}
Anthropic Tool Loop Implementation
The pattern is the same for Anthropic; only the message format differs, and tools declare an input_schema instead of parameters:
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic();

const anthropicTools: Anthropic.Tool[] = [
  {
    name: 'get_weather',
    description: 'Get weather for a location',
    input_schema: {
      type: 'object',
      properties: {
        location: { type: 'string' },
      },
      required: ['location'],
    },
  },
  // search_web and get_details are declared the same way
];

async function runAnthropicToolLoop(
  userMessage: string,
  maxIterations: number = 10
): Promise<string> {
  const messages: Anthropic.MessageParam[] = [{ role: 'user', content: userMessage }];
  for (let i = 0; i < maxIterations; i++) {
    console.log(`\n--- Iteration ${i + 1} ---`);

    const response = await anthropic.messages.create({
      model: 'claude-sonnet-4-20250514',
      max_tokens: 4096,
      messages,
      tools: anthropicTools,
    });

    // Check for tool use blocks
    const toolUseBlocks = response.content.filter(
      (block): block is Anthropic.ToolUseBlock => block.type === 'tool_use'
    );

    // If no tool use, extract text and return
    if (toolUseBlocks.length === 0) {
      const textBlock = response.content.find(
        (block): block is Anthropic.TextBlock => block.type === 'text'
      );
      return textBlock?.text ?? '';
    }

    // Add assistant response to messages
    messages.push({
      role: 'assistant',
      content: response.content,
    });

    // Process tool calls and build results
    const toolResults: Anthropic.ToolResultBlockParam[] = [];

    for (const block of toolUseBlocks) {
      console.log(`Calling tool: ${block.name}`);

      const executor = toolExecutors[block.name];
      const result = executor
        ? await executor(JSON.stringify(block.input))
        : JSON.stringify({ error: `Unknown tool: ${block.name}` });

      toolResults.push({
        type: 'tool_result',
        tool_use_id: block.id,
        content: result,
      });
    }

    // Add tool results as user message
    messages.push({
      role: 'user',
      content: toolResults,
    });
  }

  throw new Error('Max iterations reached');
}
Message History Structure
Understanding the message flow is critical. Here is what the history looks like after a multi-turn interaction:
// After asking "What's the weather in Tokyo and should I bring an umbrella?"
const messages = [
  // 1. Original user message
  {
    role: 'user',
    content: "What's the weather in Tokyo and should I bring an umbrella?",
  },
  // 2. Assistant requests weather tool
  {
    role: 'assistant',
    content: null,
    tool_calls: [
      {
        id: 'call_abc123',
        type: 'function',
        function: {
          name: 'get_weather',
          arguments: '{"location":"Tokyo"}',
        },
      },
    ],
  },
  // 3. Tool result
  {
    role: 'tool',
    tool_call_id: 'call_abc123',
    content: '{"location":"Tokyo","temperature":22,"condition":"rainy","precipitation":80}',
  },
  // 4. Final assistant response (no tool_calls)
  {
    role: 'assistant',
    content:
      'The weather in Tokyo is rainy with a temperature of 22°C and 80% chance of precipitation. Yes, you should definitely bring an umbrella!',
  },
];
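A practical safeguard: every tool call the assistant issues must be followed by a matching tool result, or the next API request will fail. A small validator sketch (the HistoryMessage shape and findUnansweredToolCalls helper are illustrative, not SDK types):

```typescript
// Minimal message shape for validation; a simplified view of the OpenAI message types.
interface HistoryMessage {
  role: 'user' | 'assistant' | 'tool';
  content?: string | null;
  tool_calls?: Array<{ id: string }>;
  tool_call_id?: string;
}

// Returns the ids of tool calls that never received a matching tool result.
function findUnansweredToolCalls(messages: HistoryMessage[]): string[] {
  const requested = new Set<string>();
  for (const msg of messages) {
    if (msg.role === 'assistant' && msg.tool_calls) {
      for (const call of msg.tool_calls) requested.add(call.id);
    }
    if (msg.role === 'tool' && msg.tool_call_id) {
      requested.delete(msg.tool_call_id);
    }
  }
  return Array.from(requested);
}
```

Running this before each request turns an opaque API error into an explicit check in your own code.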
Controlling Tool Usage
Use tool_choice to control when tools are used:
OpenAI tool_choice Options
// Auto (default) - model decides
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages,
  tools,
  tool_choice: "auto",
});

// Required - model must use a tool
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages,
  tools,
  tool_choice: "required",
});

// None - model cannot use tools
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages,
  tools,
  tool_choice: "none",
});

// Specific tool - force a particular tool
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages,
  tools,
  tool_choice: {
    type: "function",
    function: { name: "get_weather" },
  },
});
Anthropic tool_choice Options
// Auto (default)
tool_choice: { type: "auto" }
// Any - must use some tool
tool_choice: { type: "any" }
// Specific tool
tool_choice: { type: "tool", name: "get_weather" }
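Put together, an Anthropic request that forces a specific tool looks like the sketch below. The params object is built as a plain literal so the shape is visible; in real code you would pass it to anthropic.messages.create(params):

```typescript
// Request parameters forcing the get_weather tool (shape per the Anthropic Messages API).
const params = {
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  messages: [{ role: 'user' as const, content: "What's the weather in Tokyo?" }],
  tools: [
    {
      name: 'get_weather',
      description: 'Get weather for a location',
      input_schema: {
        type: 'object' as const,
        properties: { location: { type: 'string' } },
        required: ['location'],
      },
    },
  ],
  // With type "tool", the response is guaranteed to contain a tool_use block for get_weather.
  tool_choice: { type: 'tool' as const, name: 'get_weather' },
};
// const response = await anthropic.messages.create(params);
```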
Handling Sequential Dependencies
Sometimes tools must be called in sequence. The model naturally handles this:
// User: "Find restaurants near my hotel and make a reservation at the best one"
// Turn 1: Model calls get_hotel_location
// Result: { hotel: "Tokyo Grand", address: "..." }
// Turn 2: Model calls search_restaurants with location from Turn 1
// Result: [{ name: "Sushi Master", rating: 4.8 }, ...]
// Turn 3: Model calls make_reservation with restaurant from Turn 2
// Result: { confirmation: "RES123", time: "7:00 PM" }
// Turn 4: Model returns final text response
// "I found several restaurants near your hotel. I've made a reservation
// at Sushi Master (rated 4.8) for 7:00 PM. Confirmation: RES123"
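The loop code needs no special handling for this chaining: each result lands in the history, and the model reads it before choosing its next call. The data flow can be simulated with hypothetical executors (all three functions below are illustrative stand-ins, not real APIs):

```typescript
// Hypothetical executors; each consumes the previous step's output.
const getHotelLocation = async () => ({ hotel: 'Tokyo Grand', district: 'Shinjuku' });
const searchRestaurants = async (district: string) => [
  { name: 'Sushi Master', rating: 4.8, district },
  { name: 'Ramen House', rating: 4.5, district },
];
const makeReservation = async (restaurant: string) => ({
  restaurant,
  confirmation: 'RES123',
  time: '7:00 PM',
});

// The model drives this chaining turn by turn; shown linearly here for clarity.
async function simulateSequence(): Promise<string> {
  const hotel = await getHotelLocation(); // Turn 1
  const restaurants = await searchRestaurants(hotel.district); // Turn 2
  const best = [...restaurants].sort((a, b) => b.rating - a.rating)[0];
  const booking = await makeReservation(best.name); // Turn 3
  return `Reserved ${booking.restaurant} at ${booking.time} (${booking.confirmation})`;
}
```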
Preventing Infinite Loops
Always implement safeguards:
1. Maximum Iterations
const MAX_ITERATIONS = 10;

async function safeToolLoop(message: string): Promise<string> {
  let iterations = 0;
  while (iterations < MAX_ITERATIONS) {
    iterations++;
    // ... tool loop logic; return the final response once no tool calls remain
  }
  return 'I was unable to complete the task. Please try a simpler request.';
}
2. Detect Repetition
function detectRepetition(
  toolCalls: Array<{ name: string; args: string }>,
  history: Array<{ name: string; args: string }>
): boolean {
  const lastThree = history.slice(-3);
  for (const call of toolCalls) {
    const matches = lastThree.filter((h) => h.name === call.name && h.args === call.args);
    if (matches.length >= 2) {
      return true; // This call already appears twice recently - it would be the third
    }
  }
  return false;
}
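Wired into the loop, the check records each batch of calls and bails out before executing a repeat. The shouldAbort wrapper is hypothetical, and the helper is repeated here so the sketch runs standalone:

```typescript
interface RecordedCall {
  name: string;
  args: string;
}

// Same logic as the detectRepetition helper above.
function detectRepetition(toolCalls: RecordedCall[], history: RecordedCall[]): boolean {
  const lastThree = history.slice(-3);
  for (const call of toolCalls) {
    const matches = lastThree.filter((h) => h.name === call.name && h.args === call.args);
    if (matches.length >= 2) return true; // this would be the third identical call
  }
  return false;
}

const callHistory: RecordedCall[] = [];

// Returns true when the loop should stop instead of executing the batch.
function shouldAbort(batch: RecordedCall[]): boolean {
  if (detectRepetition(batch, callHistory)) return true;
  callHistory.push(...batch);
  return false;
}
```

In the tool loop, call shouldAbort with the current batch before executing; when it returns true, return a fallback message instead of calling the tools again.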
3. Cost Tracking
interface LoopStats {
  iterations: number;
  toolCalls: number;
  tokensUsed: number;
}

async function trackedToolLoop(message: string): Promise<{
  response: string;
  stats: LoopStats;
}> {
  const stats: LoopStats = { iterations: 0, toolCalls: 0, tokensUsed: 0 };
  let response = '';
  // ... loop logic: bump stats.iterations each turn, stats.toolCalls per executed
  // tool, and add each API response's usage.total_tokens to stats.tokensUsed
  return { response, stats };
}
Streaming with Tool Calls
Combine streaming with tool use for the best UX:
async function streamWithTools(userMessage: string): Promise<void> {
  const messages: OpenAI.ChatCompletionMessageParam[] = [{ role: 'user', content: userMessage }];

  while (true) {
    const stream = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages,
      tools,
      stream: true,
    });

    let assistantContent = '';
    const toolCalls: Map<number, { id: string; name: string; arguments: string }> = new Map();

    for await (const chunk of stream) {
      const delta = chunk.choices[0].delta;

      // Handle text content
      if (delta.content) {
        process.stdout.write(delta.content);
        assistantContent += delta.content;
      }

      // Accumulate tool calls
      if (delta.tool_calls) {
        for (const tc of delta.tool_calls) {
          const existing = toolCalls.get(tc.index) ?? {
            id: '',
            name: '',
            arguments: '',
          };
          if (tc.id) existing.id = tc.id;
          if (tc.function?.name) existing.name = tc.function.name;
          if (tc.function?.arguments) existing.arguments += tc.function.arguments;
          toolCalls.set(tc.index, existing);
        }
      }
    }

    // If no tool calls, we're done
    if (toolCalls.size === 0) {
      break;
    }

    // Process tool calls
    const toolCallsArray = Array.from(toolCalls.values());
    messages.push({
      role: 'assistant',
      content: assistantContent || null,
      tool_calls: toolCallsArray.map((tc) => ({
        id: tc.id,
        type: 'function' as const,
        function: { name: tc.name, arguments: tc.arguments },
      })),
    });

    // Execute and add results
    for (const tc of toolCallsArray) {
      console.log(`\n[Executing ${tc.name}...]`);
      const result = (await toolExecutors[tc.name]?.(tc.arguments)) ?? '{}';
      messages.push({
        role: 'tool',
        tool_call_id: tc.id,
        content: result,
      });
    }
  }
}
Best Practices
1. Clear System Prompts
const systemPrompt = `You are a helpful assistant with access to tools.
Use tools when you need current information or to perform actions.
Always explain what you're doing and summarize tool results for the user.
If a tool fails, explain the issue and suggest alternatives.`;
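For Chat Completions-style APIs, the system prompt is seeded as the first message before the loop starts; Anthropic instead takes it as a separate system parameter on messages.create. A minimal sketch of the seeded history:

```typescript
const systemPrompt = `You are a helpful assistant with access to tools.
Use tools when you need current information or to perform actions.`;

// Seed the history once; the tool loop then appends assistant and tool messages to it.
const seededMessages = [
  { role: 'system' as const, content: systemPrompt },
  { role: 'user' as const, content: "What's the weather in Tokyo?" },
];
```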
2. Graceful Degradation
async function executeToolWithFallback(name: string, args: string): Promise<string> {
  try {
    const result = await toolExecutors[name]?.(args);
    if (result) return result;
  } catch (error) {
    console.error(`Tool ${name} failed:`, error);
  }
  return JSON.stringify({
    error: 'Tool unavailable',
    suggestion: 'Please try rephrasing your request',
  });
}
3. Meaningful Tool Results
// Include context the model needs, not just raw data
function formatSearchResult(originalQuery: string, data: unknown[], totalResults: number): string {
  return JSON.stringify({
    success: true,
    query: originalQuery,
    results: data,
    resultCount: data.length,
    hasMore: totalResults > data.length,
    suggestion: data.length === 0 ? 'Try broader search terms' : null,
  });
}
Key Takeaways
- Tool loops continue until no tool calls are in the response
- Message history must include all tool calls and results in order
- Use tool_choice to control when tools are used
- Implement safeguards against infinite loops
- Combine streaming with tools for responsive UX
- Return helpful error messages so the model can adapt
Resources
| Resource | Type | Level |
|---|---|---|
| OpenAI Multi-turn Conversations | Documentation | Intermediate |
| Anthropic Multi-turn Tool Use | Documentation | Intermediate |
| OpenAI Streaming with Tools | Documentation | Intermediate |
Next Lesson
In the next lesson, you will put everything together by building an AI assistant that interacts with real external APIs.