From Zero to AI

Lesson 3.3: Handling Tool Calls

Duration: 75 minutes

Learning Objectives

By the end of this lesson, you will be able to:

  1. Detect when the model requests tool execution
  2. Parse tool call arguments correctly
  3. Execute tools and format results
  4. Handle errors during tool execution
  5. Implement parallel tool call handling

The Tool Call Response

When the model decides to use a tool, it returns a special response instead of regular text. Your code must detect this and handle it appropriately.

OpenAI Tool Call Response

import OpenAI from 'openai';

const openai = new OpenAI();

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: "What's the weather in Tokyo?" }],
  tools: [weatherTool],
});

const message = response.choices[0].message;

// Check if the model wants to call tools
if (message.tool_calls && message.tool_calls.length > 0) {
  console.log('Tool calls requested:');
  for (const toolCall of message.tool_calls) {
    console.log(`- ${toolCall.function.name}`);
    console.log(`  Arguments: ${toolCall.function.arguments}`);
    console.log(`  ID: ${toolCall.id}`);
  }
}

The response includes:

  • tool_calls: Array of tool call requests
  • function.name: Which tool to call
  • function.arguments: JSON string of arguments
  • id: Unique identifier for this call (needed for the response)
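As a concrete reference, here is an illustrative sketch of the shape such a message takes. All field values below are made up; only the structure mirrors the API:

```typescript
// Illustrative shape of an assistant message that requests a tool call.
// Values are invented for demonstration; only the structure matters.
const message = {
  role: 'assistant',
  content: null, // typically null when the model requests tools
  tool_calls: [
    {
      id: 'call_abc123',
      type: 'function',
      function: {
        name: 'get_weather',
        arguments: '{"location":"Tokyo","units":"celsius"}',
      },
    },
  ],
};

// The same detection check used above
const wantsTools = Array.isArray(message.tool_calls) && message.tool_calls.length > 0;
console.log(wantsTools); // true
```

Note that arguments is a string, not an object; you must parse it before use, which is covered below.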

Anthropic Tool Call Response

import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic();

const response = await anthropic.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  messages: [{ role: 'user', content: "What's the weather in Tokyo?" }],
  tools: [weatherTool],
});

// Check for tool use in content blocks
for (const block of response.content) {
  if (block.type === 'tool_use') {
    console.log(`Tool: ${block.name}`);
    console.log(`Arguments:`, block.input);
    console.log(`ID: ${block.id}`);
  }
}

Anthropic returns tool calls as content blocks with type: "tool_use".
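In practice you typically check response.stop_reason, which is 'tool_use' when the model wants tools, and collect the tool_use blocks with a filter. A self-contained sketch using a mocked response object (values illustrative):

```typescript
// Mocked Anthropic-style response (values are illustrative).
// When the model wants tools, stop_reason is 'tool_use'.
const response = {
  stop_reason: 'tool_use',
  content: [
    { type: 'text', text: 'Let me check the weather.' },
    { type: 'tool_use', id: 'toolu_01', name: 'get_weather', input: { location: 'Tokyo' } },
  ],
};

// Collect only the tool_use blocks for execution
const toolUseBlocks = response.content.filter((block) => block.type === 'tool_use');
console.log(toolUseBlocks.length); // 1
```

Text blocks can appear alongside tool_use blocks in the same response, so filtering by type is safer than assuming the content array holds only one kind of block.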


Parsing Tool Arguments

Tool arguments come as a JSON string (OpenAI) or object (Anthropic). Always validate them:

interface WeatherArgs {
  location: string;
  units?: 'celsius' | 'fahrenheit';
}

function parseWeatherArgs(args: string | object): WeatherArgs {
  const parsed: any = typeof args === 'string' ? JSON.parse(args) : args;

  if (typeof parsed.location !== 'string') {
    throw new Error('location must be a string');
  }

  if (parsed.units !== undefined && parsed.units !== 'celsius' && parsed.units !== 'fahrenheit') {
    throw new Error("units must be 'celsius' or 'fahrenheit'");
  }

  return {
    location: parsed.location,
    units: parsed.units ?? 'celsius',
  };
}

Using Zod for Safe Parsing

import { z } from 'zod';

const weatherArgsSchema = z.object({
  location: z.string(),
  units: z.enum(['celsius', 'fahrenheit']).default('celsius'), // .default() also makes the field optional on input
});

function parseWeatherArgsSafe(args: string | object) {
  const data = typeof args === 'string' ? JSON.parse(args) : args;
  return weatherArgsSchema.parse(data);
}
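One more parsing hazard worth guarding against: models occasionally emit truncated or malformed JSON in the arguments string, and a bare JSON.parse will throw. A small sketch that turns parse failures into a structured error the model can recover from (tryParseArgs is my own helper name, not part of either SDK):

```typescript
// Wrap JSON.parse so malformed argument strings become structured errors
// instead of uncaught exceptions. `tryParseArgs` is a hypothetical helper.
function tryParseArgs(raw: string): { ok: true; value: unknown } | { ok: false; error: string } {
  try {
    return { ok: true, value: JSON.parse(raw) };
  } catch {
    // Echo a prefix of the raw string so the model can see what went wrong
    return { ok: false, error: `Invalid JSON in tool arguments: ${raw.slice(0, 80)}` };
  }
}

console.log(tryParseArgs('{"location":"Tokyo"}').ok); // true
console.log(tryParseArgs('{"location":"Tok').ok); // false
```

Returning the error as data, rather than throwing, fits the error-as-result strategy discussed later in this lesson.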

Executing Tools

Create functions that implement your tool logic:

// Simulated weather API
async function getWeather(location: string, units: 'celsius' | 'fahrenheit'): Promise<object> {
  // In production, call a real weather API
  const temperatures: Record<string, number> = {
    tokyo: 22,
    london: 15,
    'new york': 18,
    paris: 17,
  };

  const temp = temperatures[location.toLowerCase()] ?? 20;
  const displayTemp = units === 'fahrenheit' ? temp * 1.8 + 32 : temp;

  return {
    location,
    temperature: Math.round(displayTemp),
    units,
    condition: 'partly cloudy',
    humidity: 65,
  };
}

// Execute the weather tool
async function executeWeatherTool(argsJson: string): Promise<string> {
  try {
    const args = parseWeatherArgsSafe(argsJson);
    const result = await getWeather(args.location, args.units);
    return JSON.stringify(result);
  } catch (error) {
    return JSON.stringify({
      error: error instanceof Error ? error.message : 'Unknown error',
    });
  }
}

Complete OpenAI Tool Handler

Here is a complete implementation for handling OpenAI tool calls:

import OpenAI from 'openai';

const openai = new OpenAI();

// Tool registry with implementations
const tools: Record<string, (args: string) => Promise<string>> = {
  get_weather: executeWeatherTool,
  calculate: executeCalculatorTool, // implemented along the same lines as executeWeatherTool
};

// Tool definitions for the API
const toolDefinitions: OpenAI.ChatCompletionTool[] = [
  {
    type: 'function',
    function: {
      name: 'get_weather',
      description: 'Get current weather for a location',
      parameters: {
        type: 'object',
        properties: {
          location: { type: 'string', description: 'City name' },
          units: { type: 'string', enum: ['celsius', 'fahrenheit'] },
        },
        required: ['location'],
      },
    },
  },
  {
    type: 'function',
    function: {
      name: 'calculate',
      description: 'Perform mathematical calculations',
      parameters: {
        type: 'object',
        properties: {
          expression: { type: 'string', description: 'Math expression' },
        },
        required: ['expression'],
      },
    },
  },
];

async function handleToolCalls(
  toolCalls: OpenAI.ChatCompletionMessageToolCall[]
): Promise<OpenAI.ChatCompletionToolMessageParam[]> {
  const results: OpenAI.ChatCompletionToolMessageParam[] = [];

  for (const toolCall of toolCalls) {
    const toolName = toolCall.function.name;
    const toolArgs = toolCall.function.arguments;

    console.log(`Executing tool: ${toolName}`);

    const executor = tools[toolName];
    let result: string;

    if (executor) {
      result = await executor(toolArgs);
    } else {
      result = JSON.stringify({ error: `Unknown tool: ${toolName}` });
    }

    results.push({
      role: 'tool',
      tool_call_id: toolCall.id,
      content: result,
    });
  }

  return results;
}

Complete Anthropic Tool Handler

Here is the Anthropic equivalent:

import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic();

async function handleAnthropicToolUse(
  toolUseBlocks: Anthropic.ToolUseBlock[]
): Promise<Anthropic.ToolResultBlockParam[]> {
  const results: Anthropic.ToolResultBlockParam[] = [];

  for (const block of toolUseBlocks) {
    console.log(`Executing tool: ${block.name}`);

    const executor = tools[block.name];
    let result: string;

    if (executor) {
      // Anthropic provides input as an object, convert to string for our handlers
      result = await executor(JSON.stringify(block.input));
    } else {
      result = JSON.stringify({ error: `Unknown tool: ${block.name}` });
    }

    results.push({
      type: 'tool_result',
      tool_use_id: block.id,
      content: result,
    });
  }

  return results;
}
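One difference from OpenAI worth noting: Anthropic has no dedicated tool role. Tool results go back as tool_result content blocks inside a user message, after the assistant turn (including its tool_use blocks) has been echoed into the history. A sketch of how the follow-up messages array is assembled, with mocked values:

```typescript
// Mocked blocks (values are illustrative). The assistant turn, including
// its tool_use blocks, is echoed back, then results go in a *user* message.
const assistantContent = [
  { type: 'tool_use', id: 'toolu_01', name: 'get_weather', input: { location: 'Tokyo' } },
];
const toolResults = [
  { type: 'tool_result', tool_use_id: 'toolu_01', content: '{"temperature":22,"units":"celsius"}' },
];

const followUpMessages = [
  { role: 'user', content: "What's the weather in Tokyo?" },
  { role: 'assistant', content: assistantContent },
  { role: 'user', content: toolResults },
];

console.log(followUpMessages.length); // 3
```

Each tool_result's tool_use_id must match the id of the tool_use block it answers, just as tool_call_id must match on the OpenAI side.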

Handling Parallel Tool Calls

Models can request multiple tools at once. Process them in parallel for efficiency:

async function handleToolCallsParallel(
  toolCalls: OpenAI.ChatCompletionMessageToolCall[]
): Promise<OpenAI.ChatCompletionToolMessageParam[]> {
  // Execute all tools in parallel
  const resultPromises = toolCalls.map(async (toolCall) => {
    const toolName = toolCall.function.name;
    const toolArgs = toolCall.function.arguments;

    const executor = tools[toolName];
    const result = executor
      ? await executor(toolArgs)
      : JSON.stringify({ error: `Unknown tool: ${toolName}` });

    return {
      role: 'tool' as const,
      tool_call_id: toolCall.id,
      content: result,
    };
  });

  return Promise.all(resultPromises);
}
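Because each executor already catches its own errors and returns a JSON error string, one failing tool cannot reject the whole Promise.all. A self-contained version of the same pattern with a stub registry (names and values illustrative), showing that results come back in the original call order:

```typescript
// Stub registry and fake tool calls (all names/values are illustrative).
const tools: Record<string, (args: string) => Promise<string>> = {
  get_weather: async (args) => JSON.stringify({ tool: 'get_weather', args }),
  calculate: async (args) => JSON.stringify({ tool: 'calculate', args }),
};

const toolCalls = [
  { id: 'call_1', function: { name: 'get_weather', arguments: '{"location":"Tokyo"}' } },
  { id: 'call_2', function: { name: 'calculate', arguments: '{"expression":"2+2"}' } },
];

const results = await Promise.all(
  toolCalls.map(async (toolCall) => {
    const executor = tools[toolCall.function.name];
    return {
      role: 'tool' as const,
      tool_call_id: toolCall.id,
      content: executor
        ? await executor(toolCall.function.arguments)
        : JSON.stringify({ error: `Unknown tool: ${toolCall.function.name}` }),
    };
  })
);

// Promise.all preserves input order, so results line up with toolCalls
console.log(results.map((r) => r.tool_call_id)); // [ 'call_1', 'call_2' ]
```

Order preservation matters because each result must carry the tool_call_id of the request it answers; the map-then-Promise.all shape keeps that pairing automatic.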

Error Handling Strategies

Always handle errors gracefully so the model can respond appropriately:

1. Return Error as Result

async function executeToolSafely(name: string, args: string): Promise<string> {
  try {
    const executor = tools[name];
    if (!executor) {
      return JSON.stringify({
        error: `Tool '${name}' not found`,
        available_tools: Object.keys(tools),
      });
    }
    return await executor(args);
  } catch (error) {
    return JSON.stringify({
      error: 'Tool execution failed',
      message: error instanceof Error ? error.message : 'Unknown error',
    });
  }
}
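To see the unknown-tool branch in action, here is a self-contained run of the same pattern against a stub registry (the function body is repeated so the snippet runs on its own; the stub tool names are illustrative):

```typescript
// Stub registry (illustrative).
const tools: Record<string, (args: string) => Promise<string>> = {
  get_weather: async () => JSON.stringify({ temperature: 22 }),
};

// Same error-as-result pattern as executeToolSafely above,
// repeated here so the snippet is self-contained.
async function executeToolSafely(name: string, args: string): Promise<string> {
  try {
    const executor = tools[name];
    if (!executor) {
      return JSON.stringify({
        error: `Tool '${name}' not found`,
        available_tools: Object.keys(tools),
      });
    }
    return await executor(args);
  } catch (error) {
    return JSON.stringify({
      error: 'Tool execution failed',
      message: error instanceof Error ? error.message : 'Unknown error',
    });
  }
}

const out = await executeToolSafely('get_stock_price', '{}');
console.log(out); // {"error":"Tool 'get_stock_price' not found","available_tools":["get_weather"]}
```

Listing available_tools in the error gives the model a chance to retry with a tool that actually exists instead of giving up.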

2. Timeout Protection

async function executeWithTimeout(
  executor: (args: string) => Promise<string>,
  args: string,
  timeoutMs: number = 10000
): Promise<string> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeoutPromise = new Promise<string>((_, reject) => {
    timer = setTimeout(() => reject(new Error('Tool execution timed out')), timeoutMs);
  });

  try {
    return await Promise.race([executor(args), timeoutPromise]);
  } catch (error) {
    return JSON.stringify({
      error: error instanceof Error ? error.message : 'Execution failed',
    });
  } finally {
    // Clear the timer so it does not keep the process alive after the race settles
    clearTimeout(timer);
  }
}

3. Retry Logic

async function executeWithRetry(
  executor: (args: string) => Promise<string>,
  args: string,
  maxRetries: number = 3
): Promise<string> {
  let lastError: Error | null = null;

  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await executor(args);
    } catch (error) {
      lastError = error instanceof Error ? error : new Error('Unknown error');
      console.log(`Attempt ${attempt} failed: ${lastError.message}`);

      if (attempt < maxRetries) {
        await new Promise((resolve) => setTimeout(resolve, 1000 * attempt));
      }
    }
  }

  return JSON.stringify({
    error: 'Tool failed after retries',
    message: lastError?.message,
  });
}

Full Example: Single Turn with Tools

Here is a complete example that makes a request, handles tool calls, and gets the final response:

import OpenAI from 'openai';

const openai = new OpenAI();

async function chatWithTools(userMessage: string): Promise<string> {
  const messages: OpenAI.ChatCompletionMessageParam[] = [{ role: 'user', content: userMessage }];

  // First API call
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages,
    tools: toolDefinitions,
  });

  const assistantMessage = response.choices[0].message;

  // Check if tool calls are needed
  if (!assistantMessage.tool_calls || assistantMessage.tool_calls.length === 0) {
    // No tools needed, return the text response
    return assistantMessage.content ?? '';
  }

  // Handle tool calls
  console.log('Processing tool calls...');
  const toolResults = await handleToolCallsParallel(assistantMessage.tool_calls);

  // Add assistant message and tool results to conversation
  messages.push(assistantMessage);
  messages.push(...toolResults);

  // Second API call with tool results
  const finalResponse = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages,
    tools: toolDefinitions,
  });

  return finalResponse.choices[0].message.content ?? '';
}

// Usage
const answer = await chatWithTools("What's the weather in Tokyo and London?");
console.log(answer);

Tool Result Format

Always return tool results as JSON strings. The model expects structured data:

// Good - structured JSON
return JSON.stringify({
  temperature: 22,
  condition: 'sunny',
  humidity: 65,
});

// Bad - plain text (harder for model to parse)
return 'The temperature is 22 degrees and sunny';

Include relevant context in results:

// Include the query in the result for context
return JSON.stringify({
  query: args.location,
  result: {
    temperature: 22,
    condition: 'sunny',
  },
  timestamp: new Date().toISOString(),
});

Key Takeaways

  1. Check for tool_calls in the response before accessing content
  2. Parse arguments carefully using JSON.parse and validation
  3. Return JSON strings as tool results for consistency
  4. Handle errors gracefully so the model can respond appropriately
  5. Process parallel calls using Promise.all for efficiency
  6. Include tool_call_id in every tool result message

Resources

Resource                  Type           Level
OpenAI Function Calling   Documentation  Beginner
Anthropic Tool Use        Documentation  Beginner
Zod Documentation         Documentation  Intermediate

Next Lesson

In the next lesson, you will learn how to manage multi-turn conversations with tools, handling complex scenarios where multiple rounds of tool use are needed.

Continue to Lesson 3.4: Multi-turn Conversations with Tools