From Zero to AI

Lesson 5.2: Anthropic API - Claude Integration

Duration: 60 minutes

Learning Objectives

By the end of this lesson, you will be able to:

  • Set up the Anthropic SDK in a TypeScript project
  • Configure authentication using API keys
  • Make requests to Claude models
  • Understand differences between Anthropic and OpenAI APIs
  • Work with Claude's unique response format
  • Create a reusable Anthropic client

Introduction

Now that you have experience with OpenAI's API, learning Anthropic's API will be straightforward. The concepts are similar, but there are important differences in how requests are structured and how responses are returned. Claude models are known for their thoughtful, nuanced responses and strong performance on complex tasks.


Setting Up the Anthropic SDK

If you are continuing from the previous lesson, add the Anthropic SDK to your project:

npm install @anthropic-ai/sdk

If you are starting fresh, set up the complete environment:

mkdir ai-practice
cd ai-practice
npm init -y
npm install typescript tsx @types/node @anthropic-ai/sdk dotenv

Configure Your API Key

Add your Anthropic API key to .env:

OPENAI_API_KEY=sk-proj-your-openai-key-here
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key-here

Get your API key from the Anthropic Console.
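Before making any requests, it helps to verify that the key actually loaded. The `requireEnv` helper below is our own sketch, not part of either SDK; it fails fast with a clear message instead of a confusing 401 later:

```typescript
// Our own helper (not part of the SDK): read a required environment
// variable or throw a descriptive error.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} is not set. Add it to your .env file.`);
  }
  return value;
}

// Usage (after `import 'dotenv/config'`):
// const apiKey = requireEnv('ANTHROPIC_API_KEY');
```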


Your First Claude Request

Create src/first-claude-request.ts:

import Anthropic from '@anthropic-ai/sdk';
import 'dotenv/config';

// Create the Anthropic client
// It automatically reads ANTHROPIC_API_KEY from environment
const anthropic = new Anthropic();

async function main() {
  console.log('Sending request to Claude...');

  const response = await anthropic.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 1024,
    messages: [
      {
        role: 'user',
        content: 'What is TypeScript in one sentence?',
      },
    ],
  });

  // Claude returns content as an array of blocks
  const textBlock = response.content[0];

  if (textBlock.type === 'text') {
    console.log('Response:', textBlock.text);
  }
}

main();

Run it:

npx tsx src/first-claude-request.ts

Output:

Sending request to Claude...
Response: TypeScript is a statically typed superset of JavaScript that compiles to plain JavaScript, adding optional type annotations and other features to enhance code quality and developer productivity.

Key Differences from OpenAI

Let us compare the two APIs side by side:

┌─────────────────────────────────────────────────────────────────┐
│                    API Comparison                                │
├────────────────────┬───────────────────┬────────────────────────┤
│  Feature           │  OpenAI           │  Anthropic             │
├────────────────────┼───────────────────┼────────────────────────┤
│  Package           │  openai           │  @anthropic-ai/sdk     │
├────────────────────┼───────────────────┼────────────────────────┤
│  API Method        │  chat.completions │  messages.create       │
│                    │  .create          │                        │
├────────────────────┼───────────────────┼────────────────────────┤
│  System Prompt     │  In messages      │  Separate parameter    │
│                    │  array            │                        │
├────────────────────┼───────────────────┼────────────────────────┤
│  max_tokens        │  Optional         │  Required              │
├────────────────────┼───────────────────┼────────────────────────┤
│  Response Content  │  string           │  Array of blocks       │
├────────────────────┼───────────────────┼────────────────────────┤
│  Model Format      │  gpt-4o-mini      │  claude-sonnet-4-      │
│                    │                   │  20250514              │
└────────────────────┴───────────────────┴────────────────────────┘

Difference 1: System Prompts

OpenAI puts system prompts in the messages array:

// OpenAI
const response = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' },
  ],
});

Anthropic uses a separate parameter:

// Anthropic
const response = await anthropic.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  system: 'You are a helpful assistant.',
  messages: [{ role: 'user', content: 'Hello!' }],
});

Difference 2: Required max_tokens

OpenAI makes max_tokens optional, but Anthropic requires it:

// This is required for Anthropic
const response = await anthropic.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024, // Must be specified
  messages: [{ role: 'user', content: 'Hello!' }],
});

Difference 3: Response Format

OpenAI returns content as a string:

// OpenAI
const content = response.choices[0].message.content; // string | null

Anthropic returns content as an array of blocks:

// Anthropic
const block = response.content[0];
if (block.type === 'text') {
  const content = block.text; // string
}

This array structure allows Claude to return mixed content types (text, tool calls, etc.).
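Until tool use enters the picture, responses usually contain a single text block, but it is safer to concatenate every text block rather than read only `content[0]`. A small helper of our own (the `Block` type mirrors the shape of the SDK's content blocks):

```typescript
// Mirrors the shape of the SDK's content blocks for this example.
type Block =
  | { type: 'text'; text: string }
  | { type: 'tool_use'; id: string; name: string; input: object };

// Join the text of every text block; non-text blocks are skipped.
function extractText(blocks: Block[]): string {
  return blocks
    .filter((b): b is Extract<Block, { type: 'text' }> => b.type === 'text')
    .map((b) => b.text)
    .join('');
}
```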


Understanding Claude's Response

Let us examine the response structure in detail:

import Anthropic from '@anthropic-ai/sdk';
import 'dotenv/config';

const anthropic = new Anthropic();

async function exploreResponse() {
  const response = await anthropic.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 1024,
    messages: [{ role: 'user', content: 'Say hello in three languages.' }],
  });

  // Message metadata
  console.log('Message ID:', response.id);
  console.log('Model:', response.model);
  console.log('Stop reason:', response.stop_reason);

  // Token usage
  console.log('Tokens:', {
    input: response.usage.input_tokens,
    output: response.usage.output_tokens,
  });

  // Content blocks
  console.log('\nContent blocks:', response.content.length);

  for (const block of response.content) {
    console.log(`- Type: ${block.type}`);
    if (block.type === 'text') {
      console.log(`  Text: ${block.text}`);
    }
  }
}

exploreResponse();

Output:

Message ID: msg_01ABC123XYZ
Model: claude-sonnet-4-20250514
Stop reason: end_turn
Tokens: { input: 14, output: 28 }

Content blocks: 1
- Type: text
  Text: Hello! (English)
Bonjour! (French)
Hola! (Spanish)

Response Structure Types

interface Message {
  id: string; // Unique message ID
  type: 'message';
  role: 'assistant';
  content: ContentBlock[]; // Array of content blocks
  model: string; // Model used
  stop_reason: StopReason; // Why generation stopped
  stop_sequence: string | null; // If stopped by stop sequence
  usage: {
    input_tokens: number; // Tokens in your input
    output_tokens: number; // Tokens in response
  };
}

type ContentBlock =
  | { type: 'text'; text: string }
  | { type: 'tool_use'; id: string; name: string; input: object };

type StopReason =
  | 'end_turn' // Natural completion
  | 'max_tokens' // Hit token limit
  | 'stop_sequence' // Hit stop sequence
  | 'tool_use'; // Model wants to use a tool
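The stop reason worth checking in everyday code is max_tokens, which means the reply was truncated. A minimal sketch of turning stop reasons into actionable messages (the wording is ours; the values come from the SDK types above):

```typescript
// Map a stop reason to a human-readable explanation.
// A null reason can appear in the SDK types (e.g. mid-stream).
function describeStop(
  reason: 'end_turn' | 'max_tokens' | 'stop_sequence' | 'tool_use' | null
): string {
  switch (reason) {
    case 'end_turn':
      return 'Model finished naturally.';
    case 'max_tokens':
      return 'Reply was cut off; consider raising max_tokens.';
    case 'stop_sequence':
      return 'A stop sequence was matched.';
    case 'tool_use':
      return 'Model wants to call a tool.';
    default:
      return 'No stop reason reported.';
  }
}
```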

Working with System Prompts

Claude responds well to detailed system prompts:

import Anthropic from '@anthropic-ai/sdk';
import 'dotenv/config';

const anthropic = new Anthropic();

async function codeReviewer(code: string): Promise<string> {
  const response = await anthropic.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 2048,
    system: `You are an expert TypeScript code reviewer.

When reviewing code, analyze:
1. Type safety - Are types used correctly?
2. Error handling - Are errors caught appropriately?
3. Best practices - Does it follow TypeScript conventions?
4. Performance - Any obvious inefficiencies?

Format your review as:
- Summary (1-2 sentences)
- Issues found (bullet list)
- Suggestions (bullet list)
- Improved code (if needed)`,
    messages: [
      {
        role: 'user',
        content: `Review this TypeScript code:\n\n\`\`\`typescript\n${code}\n\`\`\``,
      },
    ],
  });

  const block = response.content[0];
  return block.type === 'text' ? block.text : '';
}

// Test
const code = `
function getUser(id) {
  return fetch('/api/users/' + id)
    .then(r => r.json());
}
`;

codeReviewer(code).then(console.log);

Multi-Turn Conversations

Maintain conversation history for back-and-forth exchanges:

import Anthropic from '@anthropic-ai/sdk';
import 'dotenv/config';

type Message = Anthropic.MessageParam;

class ClaudeConversation {
  private client: Anthropic;
  private messages: Message[];
  private systemPrompt: string;
  private model: string;

  constructor(systemPrompt: string, model: string = 'claude-sonnet-4-20250514') {
    this.client = new Anthropic();
    this.systemPrompt = systemPrompt;
    this.model = model;
    this.messages = [];
  }

  async send(userMessage: string): Promise<string> {
    // Add user message to history
    this.messages.push({
      role: 'user',
      content: userMessage,
    });

    // Make the request
    const response = await this.client.messages.create({
      model: this.model,
      max_tokens: 2048,
      system: this.systemPrompt,
      messages: this.messages,
    });

    // Extract text from response
    const block = response.content[0];
    const assistantMessage = block.type === 'text' ? block.text : '';

    // Add assistant response to history
    this.messages.push({
      role: 'assistant',
      content: assistantMessage,
    });

    return assistantMessage;
  }

  getHistory(): Message[] {
    return [...this.messages];
  }

  clearHistory(): void {
    this.messages = [];
  }
}

// Usage
async function main() {
  const tutor = new ClaudeConversation(
    'You are a patient TypeScript tutor. Explain concepts step by step with examples.'
  );

  console.log('User: What are generics in TypeScript?');
  const response1 = await tutor.send('What are generics in TypeScript?');
  console.log('Claude:', response1);

  console.log('\nUser: Can you show a simple example?');
  const response2 = await tutor.send('Can you show a simple example?');
  console.log('Claude:', response2);
}

main();

Configuring Request Parameters

Claude supports several parameters to control generation:

const response = await anthropic.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Write a haiku about coding' }],

  // Temperature: 0-1 (Claude caps at 1.0)
  temperature: 0.7,

  // Top P: nucleus sampling
  top_p: 0.9,

  // Top K: limit token selection
  top_k: 50,

  // Stop sequences: stop at these strings
  stop_sequences: ['\n\n---'],
});

Parameter Comparison

Parameter     OpenAI Range               Anthropic Range
temperature   0 - 2                      0 - 1
top_p         0 - 1                      0 - 1
top_k         Not supported              Any positive integer
max_tokens    Optional, varies by model  Required, up to model limit
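One practical consequence of these ranges: settings tuned for OpenAI may use temperatures above 1.0, which Anthropic rejects. A tiny helper of our own to clamp them when porting configuration between providers:

```typescript
// Clamp a temperature into Anthropic's accepted 0-1 range.
function clampTemperatureForClaude(temperature: number): number {
  return Math.min(Math.max(temperature, 0), 1);
}
```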

Creating a Reusable Client

Create src/anthropic-client.ts:

import Anthropic from '@anthropic-ai/sdk';
import 'dotenv/config';

// Types for our client
interface ChatOptions {
  systemPrompt?: string;
  temperature?: number;
  maxTokens?: number;
}

interface ChatResponse {
  content: string;
  tokensUsed: {
    input: number;
    output: number;
    total: number;
  };
  model: string;
  stopReason: string | null; // stop_reason is nullable in the SDK types
}

class AnthropicClient {
  private client: Anthropic;
  private model: string;

  constructor(model: string = 'claude-sonnet-4-20250514') {
    this.client = new Anthropic();
    this.model = model;
  }

  async chat(userMessage: string, options: ChatOptions = {}): Promise<ChatResponse> {
    const response = await this.client.messages.create({
      model: this.model,
      max_tokens: options.maxTokens ?? 1024,
      system: options.systemPrompt,
      temperature: options.temperature ?? 0.7,
      messages: [{ role: 'user', content: userMessage }],
    });

    // Extract text from content blocks
    let content = '';
    for (const block of response.content) {
      if (block.type === 'text') {
        content += block.text;
      }
    }

    return {
      content,
      tokensUsed: {
        input: response.usage.input_tokens,
        output: response.usage.output_tokens,
        total: response.usage.input_tokens + response.usage.output_tokens,
      },
      model: response.model,
      stopReason: response.stop_reason,
    };
  }

  async chatWithHistory(
    messages: Anthropic.MessageParam[],
    options: ChatOptions = {}
  ): Promise<ChatResponse> {
    const response = await this.client.messages.create({
      model: this.model,
      max_tokens: options.maxTokens ?? 1024,
      system: options.systemPrompt,
      temperature: options.temperature ?? 0.7,
      messages,
    });

    let content = '';
    for (const block of response.content) {
      if (block.type === 'text') {
        content += block.text;
      }
    }

    return {
      content,
      tokensUsed: {
        input: response.usage.input_tokens,
        output: response.usage.output_tokens,
        total: response.usage.input_tokens + response.usage.output_tokens,
      },
      model: response.model,
      stopReason: response.stop_reason,
    };
  }
}

// Export default instance
export const anthropicClient = new AnthropicClient();

// Also export class for custom configurations
export { AnthropicClient };

Usage:

import { anthropicClient } from './anthropic-client';

async function main() {
  const response = await anthropicClient.chat('Explain async/await in TypeScript', {
    systemPrompt: 'You are a helpful programming assistant.',
    temperature: 0.5,
  });

  console.log('Answer:', response.content);
  console.log('Tokens:', response.tokensUsed);
}

main();

Handling Errors

The Anthropic SDK throws specific errors that you should handle:

import Anthropic from '@anthropic-ai/sdk';
import 'dotenv/config';

const anthropic = new Anthropic();

async function safeChat(message: string): Promise<string | null> {
  try {
    const response = await anthropic.messages.create({
      model: 'claude-sonnet-4-20250514',
      max_tokens: 1024,
      messages: [{ role: 'user', content: message }],
    });

    const block = response.content[0];
    return block.type === 'text' ? block.text : null;
  } catch (error) {
    // Check APIConnectionError first: it extends APIError, so a plain
    // `instanceof Anthropic.APIError` check would match it too.
    if (error instanceof Anthropic.APIConnectionError) {
      console.error('Network error. Check your internet connection.');
    } else if (error instanceof Anthropic.APIError) {
      console.error(`API Error: ${error.status} - ${error.message}`);

      switch (error.status) {
        case 400:
          console.error('Bad request. Check your parameters.');
          break;
        case 401:
          console.error('Invalid API key. Check ANTHROPIC_API_KEY.');
          break;
        case 403:
          console.error('Permission denied.');
          break;
        case 429:
          console.error('Rate limit exceeded. Slow down requests.');
          break;
        case 500:
          console.error('Anthropic server error. Try again later.');
          break;
        case 529:
          console.error('API overloaded. Try again later.');
          break;
      }
    } else {
      console.error('Unexpected error:', error);
    }

    return null;
  }
}

Anthropic-Specific Error Codes

Code  Meaning                Action
400   Invalid request        Check parameters
401   Authentication failed  Check API key
403   Permission denied      Check account permissions
429   Rate limit             Wait and retry
500   Server error           Retry with backoff
529   Overloaded             Wait longer before retry

Note that 529 (Overloaded) is specific to Anthropic. It means their servers are under heavy load.
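Both 429 and 529 are good candidates for retrying with exponential backoff. The sketch below is SDK-agnostic: it reads a `status` property off the error (with the SDK you would check `error instanceof Anthropic.APIError` instead), and the attempt count and delays are illustrative choices of ours, not official guidance:

```typescript
// Statuses we treat as transient and worth retrying.
const RETRYABLE_STATUSES = new Set([429, 500, 529]);

// Pull a numeric status off an unknown error shape, if present.
function statusOf(error: unknown): number | undefined {
  return typeof error === 'object' && error !== null && 'status' in error
    ? (error as { status?: number }).status
    : undefined;
}

// Exponential backoff: 1s, 2s, 4s, ...
function retryDelayMs(attempt: number, baseMs = 1000): number {
  return baseMs * 2 ** (attempt - 1);
}

// Run `fn`, retrying on retryable statuses up to `maxAttempts` times.
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      const status = statusOf(error);
      if (status === undefined || !RETRYABLE_STATUSES.has(status) || attempt >= maxAttempts) {
        throw error;
      }
      await new Promise((resolve) => setTimeout(resolve, retryDelayMs(attempt)));
    }
  }
}
```

Usage: wrap any request, e.g. `await withRetry(() => anthropic.messages.create({ ... }))`.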


Tracking Costs

Anthropic pricing by model:

import Anthropic from '@anthropic-ai/sdk';
import 'dotenv/config';

const anthropic = new Anthropic();

// Pricing per 1 million tokens (always check Anthropic's pricing page for current rates)
const PRICING: Record<string, { input: number; output: number }> = {
  'claude-sonnet-4-20250514': { input: 3.0, output: 15.0 },
  'claude-3-5-sonnet-20241022': { input: 3.0, output: 15.0 },
  'claude-3-5-haiku-20241022': { input: 0.8, output: 4.0 },
};

interface CostTracker {
  totalInputTokens: number;
  totalOutputTokens: number;
  totalCost: number;
}

const tracker: CostTracker = {
  totalInputTokens: 0,
  totalOutputTokens: 0,
  totalCost: 0,
};

async function trackedChat(message: string): Promise<string> {
  const model = 'claude-sonnet-4-20250514';

  const response = await anthropic.messages.create({
    model,
    max_tokens: 1024,
    messages: [{ role: 'user', content: message }],
  });

  const pricing = PRICING[model];
  const inputCost = (response.usage.input_tokens / 1_000_000) * pricing.input;
  const outputCost = (response.usage.output_tokens / 1_000_000) * pricing.output;
  const requestCost = inputCost + outputCost;

  tracker.totalInputTokens += response.usage.input_tokens;
  tracker.totalOutputTokens += response.usage.output_tokens;
  tracker.totalCost += requestCost;

  console.log(`Request cost: $${requestCost.toFixed(6)}`);
  console.log(`Total spent: $${tracker.totalCost.toFixed(6)}`);

  const block = response.content[0];
  return block.type === 'text' ? block.text : '';
}

async function main() {
  await trackedChat('What is TypeScript?');
  await trackedChat('What are interfaces?');

  console.log('\n--- Session Summary ---');
  console.log(`Input tokens: ${tracker.totalInputTokens}`);
  console.log(`Output tokens: ${tracker.totalOutputTokens}`);
  console.log(`Total cost: $${tracker.totalCost.toFixed(6)}`);
}

main();

Using Claude's Unique Features

XML Tags for Structure

Claude responds well to XML-style tags in prompts:

async function analyzeDocument(document: string, question: string): Promise<string> {
  const response = await anthropic.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 2048,
    messages: [
      {
        role: 'user',
        content: `Please analyze this document and answer my question.

<document>
${document}
</document>

<question>
${question}
</question>

Provide your answer based only on the information in the document.`,
      },
    ],
  });

  const block = response.content[0];
  return block.type === 'text' ? block.text : '';
}

Prefilling Assistant Responses

You can guide Claude's response format by prefilling:

async function getJSON(prompt: string): Promise<object> {
  const response = await anthropic.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 1024,
    messages: [
      {
        role: 'user',
        content: `${prompt}\n\nRespond with valid JSON only.`,
      },
      {
        role: 'assistant',
        content: '{', // Prefill to start JSON
      },
    ],
  });

  const block = response.content[0];
  if (block.type === 'text') {
    // Complete the JSON and parse
    const jsonString = '{' + block.text;
    return JSON.parse(jsonString);
  }

  return {};
}

// Usage
const result = await getJSON('List three programming languages with their types');
console.log(result);

Exercises

Exercise 1: Provider Wrapper

Create a function that works with both OpenAI and Anthropic:

// Your implementation here
type Provider = 'openai' | 'anthropic';

async function universalChat(
  provider: Provider,
  message: string,
  systemPrompt?: string
): Promise<string> {
  // TODO:
  // 1. Based on provider, use the appropriate SDK
  // 2. Make the request with the same parameters
  // 3. Return the response text
}

// Test:
// const openaiResponse = await universalChat("openai", "Hello!");
// const claudeResponse = await universalChat("anthropic", "Hello!");

Solution

import Anthropic from '@anthropic-ai/sdk';
import 'dotenv/config';
import OpenAI from 'openai';

type Provider = 'openai' | 'anthropic';

const openai = new OpenAI();
const anthropic = new Anthropic();

async function universalChat(
  provider: Provider,
  message: string,
  systemPrompt?: string
): Promise<string> {
  if (provider === 'openai') {
    const messages: OpenAI.Chat.ChatCompletionMessageParam[] = [];

    if (systemPrompt) {
      messages.push({ role: 'system', content: systemPrompt });
    }
    messages.push({ role: 'user', content: message });

    const response = await openai.chat.completions.create({
      model: 'gpt-4o-mini',
      messages,
    });

    return response.choices[0].message.content || '';
  } else {
    const response = await anthropic.messages.create({
      model: 'claude-sonnet-4-20250514',
      max_tokens: 1024,
      system: systemPrompt,
      messages: [{ role: 'user', content: message }],
    });

    const block = response.content[0];
    return block.type === 'text' ? block.text : '';
  }
}

// Test
async function main() {
  const systemPrompt = 'You are a helpful assistant. Be concise.';

  console.log('OpenAI:');
  const openaiResponse = await universalChat('openai', 'What is TypeScript?', systemPrompt);
  console.log(openaiResponse);

  console.log('\nAnthropic:');
  const claudeResponse = await universalChat('anthropic', 'What is TypeScript?', systemPrompt);
  console.log(claudeResponse);
}

main();

Exercise 2: Claude Conversation Manager

Create a conversation class specifically optimized for Claude:

// Your implementation here
class ClaudeTutor {
  private history: Anthropic.MessageParam[] = [];

  constructor(private topic: string) {}

  async ask(question: string): Promise<string> {
    // TODO:
    // 1. Add question to history
    // 2. Make request with topic-specific system prompt
    // 3. Add response to history
    // 4. Return the response
  }

  async followUp(clarification: string): Promise<string> {
    // TODO: Same as ask, but for follow-up questions
  }

  summarize(): string {
    // TODO: Return a summary of the conversation
  }
}

// Test:
// const tutor = new ClaudeTutor("TypeScript generics");
// await tutor.ask("What are generics?");
// await tutor.followUp("Can you show an example?");

Solution

import Anthropic from '@anthropic-ai/sdk';
import 'dotenv/config';

const anthropic = new Anthropic();

class ClaudeTutor {
  private history: Anthropic.MessageParam[] = [];

  constructor(private topic: string) {}

  private async makeRequest(message: string): Promise<string> {
    this.history.push({ role: 'user', content: message });

    const response = await anthropic.messages.create({
      model: 'claude-sonnet-4-20250514',
      max_tokens: 2048,
      system: `You are an expert tutor on ${this.topic}.

Your teaching style:
- Start with simple explanations
- Use practical examples
- Build on previous questions in the conversation
- Anticipate common misunderstandings
- Be encouraging and patient`,
      messages: this.history,
    });

    const block = response.content[0];
    const responseText = block.type === 'text' ? block.text : '';

    this.history.push({ role: 'assistant', content: responseText });

    return responseText;
  }

  async ask(question: string): Promise<string> {
    return this.makeRequest(question);
  }

  async followUp(clarification: string): Promise<string> {
    return this.makeRequest(clarification);
  }

  summarize(): string {
    const exchanges = this.history.length / 2;
    const questions = this.history
      .filter((m) => m.role === 'user')
      .map((m) => `- ${(m.content as string).slice(0, 50)}...`);

    return `Topic: ${this.topic}
Exchanges: ${exchanges}
Questions asked:
${questions.join('\n')}`;
  }

  clearHistory(): void {
    this.history = [];
  }
}

// Test
async function main() {
  const tutor = new ClaudeTutor('TypeScript generics');

  console.log('Q: What are generics?');
  const answer1 = await tutor.ask('What are generics?');
  console.log('A:', answer1);

  console.log('\nQ: Can you show a simple example?');
  const answer2 = await tutor.followUp('Can you show a simple example?');
  console.log('A:', answer2);

  console.log('\n--- Summary ---');
  console.log(tutor.summarize());
}

main();

Exercise 3: Structured Output Extractor

Create a function that extracts structured data from text using Claude:

// Your implementation here
interface ExtractedPerson {
  name: string;
  age: number | null;
  occupation: string | null;
  skills: string[];
}

async function extractPersonInfo(text: string): Promise<ExtractedPerson> {
  // TODO:
  // 1. Use Claude to extract person information
  // 2. Use XML tags to structure the request
  // 3. Parse the response into the interface
}

// Test:
// const info = await extractPersonInfo(
//   "John is a 28-year-old software engineer who specializes in TypeScript and React."
// );

Solution

import Anthropic from '@anthropic-ai/sdk';
import 'dotenv/config';

const anthropic = new Anthropic();

interface ExtractedPerson {
  name: string;
  age: number | null;
  occupation: string | null;
  skills: string[];
}

async function extractPersonInfo(text: string): Promise<ExtractedPerson> {
  const response = await anthropic.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 500,
    messages: [
      {
        role: 'user',
        content: `Extract person information from the following text.

<text>
${text}
</text>

Return the information in this exact JSON format:
{
  "name": "the person's name or empty string if not found",
  "age": number or null if not found,
  "occupation": "their job/role or null if not found",
  "skills": ["list", "of", "skills"] or empty array if none found
}

Return ONLY valid JSON, no other text.`,
      },
    ],
  });

  const block = response.content[0];
  if (block.type === 'text') {
    try {
      return JSON.parse(block.text) as ExtractedPerson;
    } catch {
      // If parsing fails, return defaults
      return {
        name: '',
        age: null,
        occupation: null,
        skills: [],
      };
    }
  }

  return {
    name: '',
    age: null,
    occupation: null,
    skills: [],
  };
}

// Test
async function main() {
  const text1 = 'John is a 28-year-old software engineer who specializes in TypeScript and React.';
  const info1 = await extractPersonInfo(text1);
  console.log('Extracted info:', info1);

  const text2 =
    "Sarah works as a data scientist. She's great with Python, SQL, and machine learning.";
  const info2 = await extractPersonInfo(text2);
  console.log('Extracted info:', info2);
}

main();

Key Takeaways

  1. SDK Differences: Anthropic uses messages.create vs OpenAI's chat.completions.create
  2. System Prompts: Passed as a separate parameter, not in messages array
  3. max_tokens Required: Always specify max_tokens for Anthropic requests
  4. Content Blocks: Responses are arrays of blocks, not strings
  5. XML Tags: Claude responds well to XML-structured prompts
  6. Prefilling: Guide responses by prefilling assistant messages
  7. Error 529: Unique to Anthropic, means API is overloaded
  8. Temperature Cap: Anthropic caps temperature at 1.0 (vs OpenAI's 2.0)

Resources

Resource                  Type           Description
Anthropic API Reference   Documentation  Complete API documentation
Anthropic TypeScript SDK  Repository     Official SDK with examples
Anthropic Cookbook        Tutorial       Practical recipes and patterns
Claude Prompt Library     Examples       Curated prompt examples

Next Lesson

You have learned how to integrate both OpenAI and Anthropic APIs. In the next lesson, you will learn how to process and transform API responses effectively, including extracting structured data and handling different response formats.

Continue to Lesson 5.3: Processing Responses