Lesson 4.2: Anthropic (Claude)
Duration: 50 minutes
Learning Objectives
By the end of this lesson, you will be able to:
- Understand Anthropic's Claude model family and their strengths
- Set up and authenticate with the Anthropic API
- Make requests using the TypeScript SDK
- Leverage Claude's unique features like extended thinking
- Handle long-context conversations effectively
- Compare Claude's approach to other providers
Introduction
Anthropic is an AI safety company founded by former OpenAI researchers. Their Claude models are known for being helpful, harmless, and honest. Claude excels at nuanced conversations, long document analysis, and tasks requiring careful reasoning. In this lesson, you will learn how to integrate Claude into your applications.
Claude Model Lineup
Anthropic's lineup spans the Claude 3, Claude 3.5, and Claude 4 model families:
┌─────────────────────────────────────────────────────────────────┐
│                      Claude Model Families                      │
├─────────────────┬───────────────────────────────────────────────┤
│ Claude 4        │ Latest generation flagship                    │
│ (Opus)          │ Best reasoning and analysis                   │
│                 │ Best for: Complex tasks, research, coding     │
├─────────────────┼───────────────────────────────────────────────┤
│ Claude 4        │ Balanced performance and speed                │
│ (Sonnet)        │ Great for most production uses                │
│                 │ Best for: General applications, chat          │
├─────────────────┼───────────────────────────────────────────────┤
│ Claude 3.5      │ Previous generation, still excellent          │
│ (Sonnet)        │ Fast and capable                              │
│                 │ Best for: Cost-sensitive applications         │
├─────────────────┼───────────────────────────────────────────────┤
│ Claude 3        │ Fast, lightweight model                       │
│ (Haiku)         │ Best latency and cost                         │
│                 │ Best for: High-volume, simple tasks           │
└─────────────────┴───────────────────────────────────────────────┘
Model Selection Guidelines
| Use Case | Recommended Model | Why |
|---|---|---|
| Complex analysis | claude-opus-4-20250514 | Strongest reasoning and analysis |
| General applications | claude-sonnet-4-20250514 | Balanced performance |
| High-volume chat | claude-3-5-haiku-20241022 | Fast and economical |
| Long documents | claude-sonnet-4-20250514 | 200K context window |
| Code generation | claude-sonnet-4-20250514 | Excellent at coding tasks |
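Model IDs change as new versions ship, so it helps to centralize them instead of scattering string literals through your code. Here is a minimal sketch; the CLAUDE_MODELS object and its key names are our own convention, not part of the SDK:
// Central place for the model IDs recommended above.
// The grouping and key names are illustrative - adjust to your needs.
export const CLAUDE_MODELS = {
  flagship: 'claude-opus-4-20250514', // complex analysis, research, hardest tasks
  general: 'claude-sonnet-4-20250514', // balanced default for most applications
  fast: 'claude-3-5-haiku-20241022', // high-volume, latency-sensitive work
} as const;

export type ClaudeModel = (typeof CLAUDE_MODELS)[keyof typeof CLAUDE_MODELS];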
Setting Up the Anthropic SDK
Installation
npm install @anthropic-ai/sdk
Authentication
Get your API key from the Anthropic Console.
import Anthropic from '@anthropic-ai/sdk';
// The SDK automatically reads ANTHROPIC_API_KEY from the environment
const anthropic = new Anthropic();
// Or explicitly provide the key
const anthropicWithKey = new Anthropic({
apiKey: process.env.ANTHROPIC_API_KEY,
});
Environment setup:
# .env file
ANTHROPIC_API_KEY=sk-ant-your-api-key-here
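Note that Node does not load .env files on its own. One common approach, assuming you use the dotenv package (it is not required by the SDK), is to load the file before constructing the client:
// Minimal sketch: load .env so the SDK can read ANTHROPIC_API_KEY.
// Assumes the dotenv package is installed (npm install dotenv).
import 'dotenv/config';
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic(); // picks up ANTHROPIC_API_KEY from process.env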
Making Your First Request
Anthropic's Messages API is exposed through the messages.create method:
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
async function chat(userMessage: string): Promise<string> {
const response = await anthropic.messages.create({
model: 'claude-sonnet-4-20250514',
max_tokens: 1024,
messages: [{ role: 'user', content: userMessage }],
});
// Claude returns content as an array of blocks
const textBlock = response.content[0];
if (textBlock.type === 'text') {
return textBlock.text;
}
return '';
}
// Usage
const answer = await chat('What makes TypeScript different from JavaScript?');
console.log(answer);
Understanding the Response Structure
interface Message {
id: string; // Unique message ID
type: 'message';
role: 'assistant';
content: ContentBlock[]; // Array of content blocks
model: string; // Model used
stop_reason: 'end_turn' | 'max_tokens' | 'stop_sequence' | 'tool_use';
stop_sequence: string | null;
usage: {
input_tokens: number;
output_tokens: number;
};
}
type ContentBlock =
| { type: 'text'; text: string }
| { type: 'tool_use'; id: string; name: string; input: object };
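In practice you will mostly read content, stop_reason, and usage. A short sketch of how those fields are typically consumed, reusing the anthropic client from the previous example (the truncation warning is our own convention):
const msg = await anthropic.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 256,
  messages: [{ role: 'user', content: 'Summarize the Node.js event loop in one paragraph.' }],
});

// stop_reason tells you why generation ended; 'max_tokens' means the reply was cut off.
if (msg.stop_reason === 'max_tokens') {
  console.warn('Response was truncated - consider raising max_tokens.');
}

// usage reports the billable tokens for this request.
console.log(`Tokens: ${msg.usage.input_tokens} in, ${msg.usage.output_tokens} out`);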
Using System Prompts
Claude takes the system prompt as a dedicated top-level parameter rather than as a message in the messages array:
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
async function codeReviewer(code: string): Promise<string> {
const response = await anthropic.messages.create({
model: 'claude-sonnet-4-20250514',
max_tokens: 2048,
system: `You are an expert TypeScript code reviewer.
Analyze code for:
- Type safety and proper typing
- Potential runtime errors
- Performance issues
- Best practice violations
Provide specific, actionable feedback with code examples when suggesting fixes.`,
messages: [
{
role: 'user',
content: `Please review this code:\n\n\`\`\`typescript\n${code}\n\`\`\``,
},
],
});
const textBlock = response.content[0];
return textBlock.type === 'text' ? textBlock.text : '';
}
Multi-Turn Conversations
Maintain conversation history by passing previous messages:
import Anthropic from '@anthropic-ai/sdk';
type Message = Anthropic.MessageParam;
class ClaudeConversation {
private anthropic: Anthropic;
private messages: Message[];
private systemPrompt: string;
private model: string;
constructor(systemPrompt: string, model: string = 'claude-sonnet-4-20250514') {
this.anthropic = new Anthropic();
this.systemPrompt = systemPrompt;
this.model = model;
this.messages = [];
}
async send(userMessage: string): Promise<string> {
this.messages.push({ role: 'user', content: userMessage });
const response = await this.anthropic.messages.create({
model: this.model,
max_tokens: 2048,
system: this.systemPrompt,
messages: this.messages,
});
const textBlock = response.content[0];
const assistantMessage = textBlock.type === 'text' ? textBlock.text : '';
this.messages.push({ role: 'assistant', content: assistantMessage });
return assistantMessage;
}
getHistory(): Message[] {
return [...this.messages];
}
clearHistory(): void {
this.messages = [];
}
}
// Usage
const conversation = new ClaudeConversation(
'You are a patient programming tutor. Explain concepts step by step.'
);
const response1 = await conversation.send('What are promises in JavaScript?');
console.log(response1);
const response2 = await conversation.send('How do I handle errors with them?');
console.log(response2);
Working with Long Contexts
Claude supports up to 200K tokens of context, making it excellent for document analysis:
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
async function analyzeDocument(document: string, question: string): Promise<string> {
const response = await anthropic.messages.create({
model: 'claude-sonnet-4-20250514',
max_tokens: 4096,
messages: [
{
role: 'user',
content: `Here is a document to analyze:
<document>
${document}
</document>
Based on this document, please answer the following question:
${question}`,
},
],
});
const textBlock = response.content[0];
return textBlock.type === 'text' ? textBlock.text : '';
}
// Usage with a large document
const longDocument = '...'; // Your document content
const answer = await analyzeDocument(
longDocument,
'What are the main themes discussed in this document?'
);
Using XML Tags for Structure
Claude responds well to XML-style tags for organizing input:
async function compareDocuments(doc1: string, doc2: string): Promise<string> {
const response = await anthropic.messages.create({
model: 'claude-sonnet-4-20250514',
max_tokens: 2048,
messages: [
{
role: 'user',
content: `Compare these two documents and identify key differences:
<document_a>
${doc1}
</document_a>
<document_b>
${doc2}
</document_b>
Provide your analysis in the following format:
1. Summary of Document A
2. Summary of Document B
3. Key Differences
4. Similarities`,
},
],
});
const textBlock = response.content[0];
return textBlock.type === 'text' ? textBlock.text : '';
}
Vision Capabilities
Claude can analyze images when you include them in messages:
import Anthropic from '@anthropic-ai/sdk';
import fs from 'fs';
const anthropic = new Anthropic();
async function analyzeImage(imagePath: string, question: string): Promise<string> {
const imageBuffer = fs.readFileSync(imagePath);
const base64Image = imageBuffer.toString('base64');
const mediaType = 'image/png'; // or image/jpeg, image/gif, image/webp
const response = await anthropic.messages.create({
model: 'claude-sonnet-4-20250514',
max_tokens: 1024,
messages: [
{
role: 'user',
content: [
{
type: 'image',
source: {
type: 'base64',
media_type: mediaType,
data: base64Image,
},
},
{
type: 'text',
text: question,
},
],
},
],
});
const textBlock = response.content[0];
return textBlock.type === 'text' ? textBlock.text : '';
}
// Usage
const analysis = await analyzeImage(
'./screenshot.png',
'Describe this UI and suggest improvements.'
);
Using Image URLs
async function analyzeImageUrl(imageUrl: string, question: string): Promise<string> {
const response = await anthropic.messages.create({
model: 'claude-sonnet-4-20250514',
max_tokens: 1024,
messages: [
{
role: 'user',
content: [
{
type: 'image',
source: {
type: 'url',
url: imageUrl,
},
},
{
type: 'text',
text: question,
},
],
},
],
});
const textBlock = response.content[0];
return textBlock.type === 'text' ? textBlock.text : '';
}
Extended Thinking
Claude supports extended thinking for complex reasoning tasks:
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
async function solveComplexProblem(problem: string): Promise<{
thinking: string;
answer: string;
}> {
const response = await anthropic.messages.create({
model: 'claude-sonnet-4-20250514',
max_tokens: 16000,
thinking: {
type: 'enabled',
budget_tokens: 10000, // Allow up to 10K tokens for thinking
},
messages: [{ role: 'user', content: problem }],
});
let thinking = '';
let answer = '';
for (const block of response.content) {
if (block.type === 'thinking') {
thinking = block.thinking;
} else if (block.type === 'text') {
answer = block.text;
}
}
return { thinking, answer };
}
// Usage
const result = await solveComplexProblem(`
Analyze this code and identify all potential security vulnerabilities:
\`\`\`typescript
app.get('/user/:id', async (req, res) => {
const query = \`SELECT * FROM users WHERE id = \${req.params.id}\`;
const user = await db.query(query);
res.json(user);
});
\`\`\`
`);
console.log("Claude's thinking process:", result.thinking);
console.log('\nFinal answer:', result.answer);
Error Handling
Handle Anthropic-specific errors:
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
async function safeChat(message: string): Promise<string> {
try {
const response = await anthropic.messages.create({
model: 'claude-sonnet-4-20250514',
max_tokens: 1024,
messages: [{ role: 'user', content: message }],
});
const textBlock = response.content[0];
return textBlock.type === 'text' ? textBlock.text : '';
} catch (error) {
if (error instanceof Anthropic.APIError) {
switch (error.status) {
case 400:
console.error('Bad request:', error.message);
break;
case 401:
console.error('Invalid API key');
break;
case 403:
console.error('Permission denied');
break;
case 429:
console.error('Rate limit exceeded');
break;
case 500:
console.error('Anthropic server error');
break;
case 529:
console.error('API overloaded - try again later');
break;
default:
console.error(`API error ${error.status}:`, error.message);
}
} else if (error instanceof Anthropic.APIConnectionError) {
console.error('Connection error:', error.message);
} else {
throw error;
}
return '';
}
}
Retry with Exponential Backoff
async function chatWithRetry(message: string, maxRetries: number = 3): Promise<string> {
let lastError: Error | null = null;
for (let attempt = 1; attempt <= maxRetries; attempt++) {
try {
const response = await anthropic.messages.create({
model: 'claude-sonnet-4-20250514',
max_tokens: 1024,
messages: [{ role: 'user', content: message }],
});
const textBlock = response.content[0];
return textBlock.type === 'text' ? textBlock.text : '';
} catch (error) {
lastError = error as Error;
if (error instanceof Anthropic.RateLimitError) {
const delay = Math.pow(2, attempt) * 1000;
console.log(`Rate limited. Waiting ${delay}ms before retry...`);
await new Promise((resolve) => setTimeout(resolve, delay));
} else if (
error instanceof Anthropic.APIError &&
error.status !== undefined &&
error.status >= 500 // 5xx server errors, including 529 (overloaded)
) {
const delay = attempt * 2000;
console.log(`Server error. Waiting ${delay}ms before retry...`);
await new Promise((resolve) => setTimeout(resolve, delay));
} else {
throw error;
}
}
}
throw lastError;
}
Understanding Pricing
Anthropic charges based on input and output tokens:
┌─────────────────────────────────────────────────────────────────┐
│                 Anthropic Pricing (as of 2025)                  │
├───────────────────┬──────────────────┬──────────────────────────┤
│ Model             │ Input (per 1M)   │ Output (per 1M)          │
├───────────────────┼──────────────────┼──────────────────────────┤
│ Claude Opus 4     │ $15.00           │ $75.00                   │
│ Claude Sonnet 4   │ $3.00            │ $15.00                   │
│ Claude Sonnet 3.5 │ $3.00            │ $15.00                   │
│ Claude Haiku 3.5  │ $0.25            │ $1.25                    │
└───────────────────┴──────────────────┴──────────────────────────┘
Tracking Usage
interface UsageReport {
inputTokens: number;
outputTokens: number;
cost: number;
model: string;
}
const CLAUDE_PRICING: Record<string, { input: number; output: number }> = {
'claude-sonnet-4-20250514': { input: 3, output: 15 },
'claude-3-5-sonnet-20241022': { input: 3, output: 15 },
'claude-3-5-haiku-20241022': { input: 0.25, output: 1.25 },
};
async function chatWithUsage(message: string): Promise<{
response: string;
usage: UsageReport;
}> {
const model = 'claude-sonnet-4-20250514';
const response = await anthropic.messages.create({
model,
max_tokens: 1024,
messages: [{ role: 'user', content: message }],
});
const pricing = CLAUDE_PRICING[model];
const inputCost = (response.usage.input_tokens / 1_000_000) * pricing.input;
const outputCost = (response.usage.output_tokens / 1_000_000) * pricing.output;
const textBlock = response.content[0];
return {
response: textBlock.type === 'text' ? textBlock.text : '',
usage: {
inputTokens: response.usage.input_tokens,
outputTokens: response.usage.output_tokens,
cost: inputCost + outputCost,
model,
},
};
}
Claude's Unique Strengths
1. Nuanced Understanding
Claude excels at understanding subtle context and nuance:
async function analyzeNuance(text: string): Promise<string> {
const response = await anthropic.messages.create({
model: 'claude-sonnet-4-20250514',
max_tokens: 1024,
system:
'You are skilled at detecting subtle implications, tone, and unstated assumptions in text.',
messages: [
{
role: 'user',
content: `Analyze the following message for:
1. Literal meaning
2. Implied meaning or subtext
3. Emotional tone
4. Any assumptions being made
Text: "${text}"`,
},
],
});
const textBlock = response.content[0];
return textBlock.type === 'text' ? textBlock.text : '';
}
2. Following Complex Instructions
Claude is particularly good at following detailed, multi-part instructions:
async function generateStructuredContent(topic: string): Promise<string> {
const response = await anthropic.messages.create({
model: 'claude-sonnet-4-20250514',
max_tokens: 2048,
messages: [
{
role: 'user',
content: `Create educational content about "${topic}" following these exact requirements:
<format>
1. Start with a 2-sentence hook that poses an interesting question
2. Provide a clear definition in simple terms (max 50 words)
3. Give 3 practical examples, each with:
- A title
- A code snippet (TypeScript)
- A 1-sentence explanation
4. End with a "Common Mistakes" section listing 2 mistakes
5. Include a "Quick Reference" box with key syntax
</format>
<style>
- Use second person ("you")
- Keep paragraphs under 4 sentences
- No jargon without explanation
</style>
<constraints>
- Total length: 400-600 words
- Code examples must be runnable
- Do not include information about topics outside ${topic}
</constraints>`,
},
],
});
const textBlock = response.content[0];
return textBlock.type === 'text' ? textBlock.text : '';
}
3. Honest Uncertainty
Claude acknowledges when it is uncertain:
async function factCheck(claim: string): Promise<string> {
const response = await anthropic.messages.create({
model: 'claude-sonnet-4-20250514',
max_tokens: 1024,
system: `When fact-checking claims:
- Be explicit about your confidence level
- Distinguish between facts and opinions
- Say "I'm not certain" when you don't know
- Suggest verification sources`,
messages: [
{
role: 'user',
content: `Fact-check this claim: "${claim}"`,
},
],
});
const textBlock = response.content[0];
return textBlock.type === 'text' ? textBlock.text : '';
}
Exercises
Exercise 1: Document Summarizer
Create a function that summarizes long documents with Claude:
// Your implementation here
async function summarizeDocument(
document: string,
options: {
maxLength: number;
style: 'bullet' | 'paragraph' | 'executive';
}
): Promise<string> {
// TODO: Implement document summarization
}
Solution
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
async function summarizeDocument(
document: string,
options: {
maxLength: number;
style: 'bullet' | 'paragraph' | 'executive';
}
): Promise<string> {
const styleInstructions = {
bullet: 'Use bullet points for each key point',
paragraph: 'Write in flowing paragraphs',
executive: 'Write an executive summary with: Overview, Key Findings, Recommendations',
};
const response = await anthropic.messages.create({
model: 'claude-sonnet-4-20250514',
max_tokens: options.maxLength * 2,
messages: [
{
role: 'user',
content: `Summarize the following document.
<document>
${document}
</document>
<requirements>
- Maximum length: ${options.maxLength} words
- Style: ${styleInstructions[options.style]}
- Focus on the most important information
- Maintain accuracy - do not add information not in the original
</requirements>`,
},
],
});
const textBlock = response.content[0];
return textBlock.type === 'text' ? textBlock.text : '';
}
// Test
const summary = await summarizeDocument('Your long document here...', {
maxLength: 200,
style: 'executive',
});
console.log(summary);
Exercise 2: Multi-Turn Technical Support
Build a technical support assistant that maintains conversation context:
// Your implementation here
class TechSupportBot {
// TODO: Implement a multi-turn conversation bot
// - Maintains conversation history
// - Specializes in TypeScript and Node.js
// - Asks clarifying questions when needed
}
Solution
import Anthropic from '@anthropic-ai/sdk';
type Message = Anthropic.MessageParam;
class TechSupportBot {
private anthropic: Anthropic;
private messages: Message[];
constructor() {
this.anthropic = new Anthropic();
this.messages = [];
}
async chat(userMessage: string): Promise<string> {
this.messages.push({ role: 'user', content: userMessage });
const response = await this.anthropic.messages.create({
model: 'claude-sonnet-4-20250514',
max_tokens: 2048,
system: `You are a technical support specialist for TypeScript and Node.js.
Your approach:
1. First, ensure you understand the problem fully
2. Ask clarifying questions if the issue is unclear
3. Provide step-by-step solutions
4. Include code examples when helpful
5. Explain why the solution works
Be patient, thorough, and assume the user might be a beginner.
If you need more information to help, ask specific questions.`,
messages: this.messages,
});
const textBlock = response.content[0];
const assistantMessage = textBlock.type === 'text' ? textBlock.text : '';
this.messages.push({ role: 'assistant', content: assistantMessage });
return assistantMessage;
}
resetConversation(): void {
this.messages = [];
}
}
// Test
const bot = new TechSupportBot();
const response1 = await bot.chat("I'm getting a type error in my code");
console.log('Bot:', response1);
const response2 = await bot.chat("It says 'Property x does not exist on type y'");
console.log('Bot:', response2);
Exercise 3: Code Reviewer with Extended Thinking
Create a code reviewer that shows its reasoning process:
// Your implementation here
async function reviewCodeWithReasoning(code: string): Promise<{
thinking: string;
review: string;
score: number;
}> {
// TODO: Use extended thinking to review code
}
Solution
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
async function reviewCodeWithReasoning(code: string): Promise<{
thinking: string;
review: string;
score: number;
}> {
const response = await anthropic.messages.create({
model: 'claude-sonnet-4-20250514',
max_tokens: 8000,
thinking: {
type: 'enabled',
budget_tokens: 4000,
},
messages: [
{
role: 'user',
content: `Review this TypeScript code and provide:
1. A detailed review with specific issues and suggestions
2. A score from 1-10
\`\`\`typescript
${code}
\`\`\`
Format your response as:
REVIEW:
[Your detailed review]
SCORE: [1-10]`,
},
],
});
let thinking = '';
let review = '';
let score = 0;
for (const block of response.content) {
if (block.type === 'thinking') {
thinking = block.thinking;
} else if (block.type === 'text') {
review = block.text;
// Extract score from response
const scoreMatch = review.match(/SCORE:\s*(\d+)/);
if (scoreMatch) {
score = parseInt(scoreMatch[1], 10);
}
}
}
return { thinking, review, score };
}
// Test
const result = await reviewCodeWithReasoning(`
function fetchData(url) {
return fetch(url).then(r => r.json());
}
`);
console.log('Thinking process:', result.thinking);
console.log('\nReview:', result.review);
console.log('Score:', result.score);
Key Takeaways
- Model Selection: Use Claude Sonnet for most tasks, Opus for the most demanding reasoning, and Haiku for high-volume simple tasks
- System Prompts: Pass as a separate parameter, not in messages
- Long Context: Claude handles up to 200K tokens, great for document analysis
- XML Tags: Claude responds well to XML-style structure in prompts
- Extended Thinking: Enable for complex reasoning tasks that benefit from step-by-step analysis
- Error Handling: Watch for 529 (overloaded) errors specific to Anthropic
- Honest AI: Claude acknowledges uncertainty, making it reliable for factual tasks
Resources
| Resource | Type | Description |
|---|---|---|
| Anthropic API Reference | Documentation | Complete API reference |
| Claude Prompt Library | Examples | Curated prompt examples |
| Anthropic Cookbook | Tutorial | Practical recipes and examples |
| Anthropic TypeScript SDK | Repository | Official SDK source |
Next Lesson
You have learned how to work with Anthropic's Claude. In the next lesson, you will explore Google's Gemini models, which offer unique multimodal capabilities and tight integration with Google's ecosystem.