From Zero to AI

Lesson 5.4: LangChain.js Basics

Duration: 60 minutes

Learning Objectives

By the end of this lesson, you will be able to:

  1. Understand the LangChain.js architecture and components
  2. Create agents using LangChain.js
  3. Define and use custom tools
  4. Build chains and agents with memory

What is LangChain.js?

LangChain.js is a framework for building applications with language models. Instead of writing all the boilerplate code yourself, LangChain provides pre-built components that handle common patterns.

Key benefits:

  • Abstraction: Consistent interface across different LLM providers
  • Composability: Chain components together easily
  • Pre-built Tools: Common tools ready to use
  • Agent Patterns: ReAct and other patterns implemented
  • Memory: Built-in conversation memory management

Core Concepts

LangChain.js is built around these concepts:

Models

Wrappers around LLM providers (OpenAI, Anthropic, etc.)

Prompts

Templates for generating prompts with variables

Chains

Sequences of operations that process input to output

Tools

Functions that agents can call

Agents

Systems that use models to decide which tools to call

Memory

Storage for conversation history and context


Installation and Setup

Install the required packages:

npm install langchain @langchain/core @langchain/openai @langchain/anthropic @langchain/community zod dotenv

Basic setup:

import { ChatAnthropic } from '@langchain/anthropic';
import { ChatOpenAI } from '@langchain/openai';
import * as dotenv from 'dotenv';

dotenv.config();

// OpenAI model
const openaiModel = new ChatOpenAI({
  modelName: 'gpt-4o',
  temperature: 0,
});

// Anthropic model
const anthropicModel = new ChatAnthropic({
  modelName: 'claude-sonnet-4-20250514',
  temperature: 0,
});

Working with Models

LangChain provides a unified interface for different providers:

import { HumanMessage, SystemMessage } from '@langchain/core/messages';
import { ChatOpenAI } from '@langchain/openai';

const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  temperature: 0.7,
});

// Simple invocation
const response = await model.invoke('What is TypeScript?');
console.log(response.content);

// With message types
const messages = [
  new SystemMessage('You are a helpful coding assistant.'),
  new HumanMessage('Explain async/await in TypeScript'),
];

const result = await model.invoke(messages);
console.log(result.content);

Prompt Templates

Create reusable prompts with variables:

import { ChatPromptTemplate } from '@langchain/core/prompts';

// Simple template
const simpleTemplate = ChatPromptTemplate.fromMessages([
  ['system', 'You are an expert in {topic}.'],
  ['human', '{question}'],
]);

const formatted = await simpleTemplate.formatMessages({
  topic: 'TypeScript',
  question: 'What are generics?',
});

const response = await model.invoke(formatted);

Advanced templates:

import { AIMessage, HumanMessage } from '@langchain/core/messages';
import { ChatPromptTemplate, MessagesPlaceholder } from '@langchain/core/prompts';

// Template with message history placeholder
const templateWithHistory = ChatPromptTemplate.fromMessages([
  ['system', 'You are a helpful assistant. Be concise.'],
  new MessagesPlaceholder('history'),
  ['human', '{input}'],
]);

const formattedWithHistory = await templateWithHistory.formatMessages({
  history: [
    new HumanMessage("Hi, I'm learning TypeScript"),
    new AIMessage('Great! TypeScript is a typed superset of JavaScript.'),
  ],
  input: 'What should I learn first?',
});

Creating Chains

Chains connect components together:

import { StringOutputParser } from '@langchain/core/output_parsers';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { ChatOpenAI } from '@langchain/openai';

const model = new ChatOpenAI({ modelName: 'gpt-4o' });

const prompt = ChatPromptTemplate.fromMessages([
  ['system', 'You are a code reviewer. Provide brief, actionable feedback.'],
  ['human', 'Review this code:\n\n{code}'],
]);

const outputParser = new StringOutputParser();

// Create a chain using pipe
const chain = prompt.pipe(model).pipe(outputParser);

// Run the chain
const review = await chain.invoke({
  code: `
function add(a, b) {
  return a + b;
}
`,
});

console.log(review);

Structured Output

Get structured data from the model:

import { ChatOpenAI } from '@langchain/openai';
import { z } from 'zod';

const model = new ChatOpenAI({ modelName: 'gpt-4o' });

// Define the output schema
const ReviewSchema = z.object({
  score: z.number().min(1).max(10).describe('Quality score from 1-10'),
  issues: z.array(z.string()).describe('List of issues found'),
  suggestions: z.array(z.string()).describe('Improvement suggestions'),
  summary: z.string().describe('Brief summary of the review'),
});

// Create structured output model
const structuredModel = model.withStructuredOutput(ReviewSchema);

const result = await structuredModel.invoke(
  'Review this TypeScript code: function add(a: any, b: any) { return a + b; }'
);

console.log(result.score); // e.g. 4
console.log(result.issues); // e.g. ["Using 'any' type defeats TypeScript's purpose", ...]
console.log(result.suggestions); // e.g. ["Use specific types like number", ...]

Defining Tools

Tools extend what agents can do:

import { tool } from '@langchain/core/tools';
import { z } from 'zod';

// Simple tool
const calculatorTool = tool(
  async ({ expression }) => {
    try {
      // Note: Use a proper math parser in production
      const result = Function(`"use strict"; return (${expression})`)();
      return `Result: ${result}`;
    } catch {
      return `Error: Could not evaluate "${expression}"`;
    }
  },
  {
    name: 'calculator',
    description: 'Performs mathematical calculations',
    schema: z.object({
      expression: z.string().describe('The math expression to evaluate'),
    }),
  }
);

// Tool that fetches data
const weatherTool = tool(
  async ({ location }) => {
    // Simulated weather data
    const weather = {
      temperature: Math.floor(Math.random() * 30) + 10,
      condition: ['sunny', 'cloudy', 'rainy'][Math.floor(Math.random() * 3)],
    };
    return `Weather in ${location}: ${weather.temperature}°C, ${weather.condition}`;
  },
  {
    name: 'get_weather',
    description: 'Gets current weather for a location',
    schema: z.object({
      location: z.string().describe('The city to get weather for'),
    }),
  }
);

More complex tools:

import { tool } from '@langchain/core/tools';
import { z } from 'zod';

// Tool with multiple parameters
const searchTool = tool(
  async ({ query, maxResults, category }) => {
    // Simulated search; a real implementation would use `category` to filter results
    const results = [
      { title: `Result 1 for "${query}"`, url: 'https://example.com/1' },
      { title: `Result 2 for "${query}"`, url: 'https://example.com/2' },
    ].slice(0, maxResults);

    return JSON.stringify(results, null, 2);
  },
  {
    name: 'web_search',
    description: 'Searches the web for information',
    schema: z.object({
      query: z.string().describe('The search query'),
      maxResults: z.number().default(5).describe('Maximum results to return'),
      category: z
        .enum(['general', 'news', 'academic'])
        .optional()
        .describe('Category to search in'),
    }),
  }
);

// Tool that takes notes
const notes: string[] = [];

const noteTool = tool(
  async ({ action, content }) => {
    if (action === 'add') {
      notes.push(content || '');
      return `Note added. Total notes: ${notes.length}`;
    } else if (action === 'list') {
      return notes.length > 0
        ? `Notes:\n${notes.map((n, i) => `${i + 1}. ${n}`).join('\n')}`
        : 'No notes yet.';
    } else if (action === 'clear') {
      notes.length = 0;
      return 'Notes cleared.';
    }
    return 'Unknown action';
  },
  {
    name: 'notes',
    description: 'Manages a list of notes',
    schema: z.object({
      action: z.enum(['add', 'list', 'clear']).describe('Action to perform'),
      content: z.string().optional().describe('Note content (for add action)'),
    }),
  }
);

Creating Agents

Agents use tools to accomplish tasks:

import { ChatPromptTemplate } from '@langchain/core/prompts';
import { ChatOpenAI } from '@langchain/openai';
import { AgentExecutor, createToolCallingAgent } from 'langchain/agents';

const model = new ChatOpenAI({ modelName: 'gpt-4o', temperature: 0 });

// Define tools
const tools = [calculatorTool, weatherTool, noteTool];

// Create agent prompt
const prompt = ChatPromptTemplate.fromMessages([
  [
    'system',
    `You are a helpful assistant with access to tools.

Use tools when needed to answer questions. Think step by step.
After using tools, provide a clear final answer.`,
  ],
  ['placeholder', '{chat_history}'],
  ['human', '{input}'],
  ['placeholder', '{agent_scratchpad}'],
]);

// Create the agent
const agent = createToolCallingAgent({
  llm: model,
  tools,
  prompt,
});

// Create executor
const agentExecutor = new AgentExecutor({
  agent,
  tools,
  verbose: true, // Shows agent reasoning
});

// Run the agent
const result = await agentExecutor.invoke({
  input: "What's 15% of 850? Then tell me the weather in Paris and make a note of both.",
  chat_history: [],
});

console.log(result.output);

Agent with Memory

Add conversation memory to agents:

import { ChatPromptTemplate, MessagesPlaceholder } from '@langchain/core/prompts';
import { ChatOpenAI } from '@langchain/openai';
import { AgentExecutor, createToolCallingAgent } from 'langchain/agents';
import { BufferMemory } from 'langchain/memory';
import { ChatMessageHistory } from 'langchain/stores/message/in_memory';

const model = new ChatOpenAI({ modelName: 'gpt-4o', temperature: 0 });

// Create memory
const messageHistory = new ChatMessageHistory();
const memory = new BufferMemory({
  chatHistory: messageHistory,
  returnMessages: true,
  memoryKey: 'chat_history',
});

// Prompt with history placeholder
const prompt = ChatPromptTemplate.fromMessages([
  ['system', 'You are a helpful assistant. Remember what the user tells you.'],
  new MessagesPlaceholder('chat_history'),
  ['human', '{input}'],
  ['placeholder', '{agent_scratchpad}'],
]);

const agent = createToolCallingAgent({
  llm: model,
  tools: [calculatorTool],
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools: [calculatorTool],
  memory,
});

// First interaction
await agentExecutor.invoke({
  input: "My name is Alex and I'm learning TypeScript",
});

// Second interaction - agent remembers
const result = await agentExecutor.invoke({
  input: "What's my name and what am I learning?",
});

console.log(result.output);
// e.g. Your name is Alex and you're learning TypeScript.

ReAct Agent Pattern

LangChain implements the ReAct (Reasoning + Acting) pattern:

import type { PromptTemplate } from '@langchain/core/prompts';
import { ChatOpenAI } from '@langchain/openai';
import { AgentExecutor, createReactAgent } from 'langchain/agents';
import { pull } from 'langchain/hub';

const model = new ChatOpenAI({ modelName: 'gpt-4o', temperature: 0 });

// Pull the standard ReAct prompt from the LangChain Hub
const prompt = await pull<PromptTemplate>('hwchase17/react');

const tools = [calculatorTool, searchTool];

// Create ReAct agent
const agent = await createReactAgent({
  llm: model,
  tools,
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools,
  verbose: true,
  maxIterations: 5,
});

const result = await agentExecutor.invoke({
  input: 'Research the current population of Tokyo and calculate what 5% of that would be',
});

console.log(result.output);

The agent will show its reasoning:

Thought: I need to find Tokyo's population first
Action: web_search
Action Input: {"query": "Tokyo population 2024"}

Observation: [Search results...]

Thought: Tokyo's population is about 14 million. Now I need to calculate 5%
Action: calculator
Action Input: {"expression": "14000000 * 0.05"}

Observation: Result: 700000

Thought: I have all the information needed
Final Answer: Tokyo's population is approximately 14 million, and 5% of that is 700,000 people.

Custom Agent Logic

For more control, build custom agent logic:

import { HumanMessage, SystemMessage, ToolMessage } from '@langchain/core/messages';
import type { BaseMessage } from '@langchain/core/messages';
import { ChatOpenAI } from '@langchain/openai';

type ToolFn = (args: Record<string, unknown>) => Promise<string>;

interface AgentState {
  messages: BaseMessage[];
  currentStep: number;
  maxSteps: number;
}

class CustomLangChainAgent {
  private model: ReturnType<ChatOpenAI['bindTools']>;
  private tools: Record<string, ToolFn>;

  constructor(model: ChatOpenAI, tools: Record<string, ToolFn>) {
    // bindTools attaches the tool definitions to every model call
    this.model = model.bindTools(
      Object.keys(tools).map((name) => ({
        type: 'function' as const,
        function: {
          name,
          description: `Tool: ${name}`,
          parameters: { type: 'object', properties: {} },
        },
      }))
    );
    this.tools = tools;
  }

  async run(input: string): Promise<string> {
    const state: AgentState = {
      messages: [
        new SystemMessage('You are a helpful assistant. Use tools when needed.'),
        new HumanMessage(input),
      ],
      currentStep: 0,
      maxSteps: 10,
    };

    while (state.currentStep < state.maxSteps) {
      const response = await this.model.invoke(state.messages);
      state.messages.push(response);
      state.currentStep++;

      // Check if the model requested any tool calls
      const toolCalls = response.tool_calls;

      if (!toolCalls || toolCalls.length === 0) {
        // No tool calls: the agent is done
        return response.content as string;
      }

      // Execute each requested tool and report the result back
      for (const toolCall of toolCalls) {
        const toolFn = this.tools[toolCall.name];
        const result = toolFn
          ? await toolFn(toolCall.args)
          : `Unknown tool: ${toolCall.name}`;

        // Tool results must be ToolMessages tied to the call that produced them
        state.messages.push(
          new ToolMessage({
            content: result,
            tool_call_id: toolCall.id ?? '',
          })
        );
      }
    }

    return 'Max steps reached';
  }
}

Streaming Responses

Stream agent responses in real-time:

import { ChatPromptTemplate } from '@langchain/core/prompts';
import { ChatOpenAI } from '@langchain/openai';
import { AgentExecutor, createToolCallingAgent } from 'langchain/agents';

const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  temperature: 0,
  streaming: true,
});

const prompt = ChatPromptTemplate.fromMessages([
  ['system', 'You are a helpful assistant.'],
  ['human', '{input}'],
  ['placeholder', '{agent_scratchpad}'],
]);

const agent = createToolCallingAgent({
  llm: model,
  tools: [calculatorTool],
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools: [calculatorTool],
});

// Stream the response
const stream = await agentExecutor.stream({
  input: 'Calculate 123 * 456 and explain the result',
});

for await (const chunk of stream) {
  if (chunk.output) {
    process.stdout.write(chunk.output);
  }
}

Error Handling

Handle errors gracefully in agents:

import { tool } from '@langchain/core/tools';
import { AgentExecutor } from 'langchain/agents';
import { z } from 'zod';

// Tool that might fail
const riskyTool = tool(
  async ({ value }) => {
    if (value < 0) {
      throw new Error('Value cannot be negative');
    }
    return `Processed: ${value}`;
  },
  {
    name: 'risky_operation',
    description: 'Processes a value (must be non-negative)',
    schema: z.object({
      value: z.number().describe('The value to process'),
    }),
  }
);

// Reuse the tool-calling agent created in the previous sections
const agentExecutor = new AgentExecutor({
  agent,
  tools: [riskyTool],
  handleParsingErrors: true, // Handle parsing errors
  maxIterations: 3, // Limit retries
});

try {
  const result = await agentExecutor.invoke({
    input: 'Process the value -5',
  });
  console.log(result.output);
} catch (error) {
  console.error('Agent failed:', error);
}

Best Practices

1. Use Descriptive Tool Names

Clear names help the model choose the right tool.

2. Write Good Tool Descriptions

The description is what the model uses to decide when to use a tool.

3. Set Reasonable Limits

Always set maxIterations to prevent runaway agents.

4. Enable Verbose Mode for Debugging

Use verbose: true during development to see agent reasoning.

5. Handle Errors

Tools can fail. Design for graceful error handling.

6. Test Tools Independently

Verify each tool works correctly before using it in an agent.


Key Takeaways

  1. LangChain.js provides abstractions for building LLM applications
  2. Tools extend agent capabilities with custom functions
  3. Agents use models to decide which tools to call and when
  4. Memory enables multi-turn conversations with context
  5. Chains compose components into reusable pipelines

Practice Exercise

Build a LangChain.js agent with these tools:

  • A calculator for math operations
  • A note-taking tool to save information
  • A retrieval tool to search saved notes

Test the agent with:

  1. "Calculate 20% of 150 and save it as a note"
  2. "What notes have I saved?"
  3. "Calculate the square root of my saved note"

Resources

  • LangChain.js Documentation (Documentation, Beginner)
  • LangChain.js Agents Guide (Documentation, Intermediate)
  • LangSmith (Tool, Intermediate)
  • LangGraph.js (Documentation, Advanced)

Next Lesson

In the next lesson, you will build a complete research agent that can autonomously investigate topics using multiple tools and techniques.

Continue to Lesson 5.5: Practice - Research Agent