From Zero to AI

Lesson 2.5: Practice - Streaming Chatbot

Duration: 75 minutes

Learning Objectives

By the end of this lesson, you will have built:

  1. A complete CLI chatbot with streaming responses
  2. Support for multiple AI providers (OpenAI and Anthropic)
  3. Conversation history management
  4. Graceful error handling and recovery
  5. User commands for controlling the chat

Project Overview

You will build a feature-rich streaming chatbot with:

  • Real-time streaming output
  • Provider switching (OpenAI/Anthropic)
  • Persistent conversation context
  • Commands: /clear, /switch, /exit
  • Error handling with retry logic
  • Metrics display

Project Setup

Create the project structure:

mkdir streaming-chatbot
cd streaming-chatbot
npm init -y
npm install openai @anthropic-ai/sdk dotenv
npm install -D typescript tsx @types/node

Note that readline is built into Node.js, so it does not need to be installed from npm.

Create tsconfig.json:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "outDir": "dist"
  },
  "include": ["src/**/*"]
}

Create .env:

OPENAI_API_KEY=sk-proj-your-key-here
ANTHROPIC_API_KEY=sk-ant-your-key-here
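
Also create a .gitignore so your API keys and dependencies stay out of version control:

node_modules/
dist/
.env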

Step 1: Type Definitions

Create src/types.ts:

export type Provider = 'openai' | 'anthropic';

export interface Message {
  role: 'user' | 'assistant' | 'system';
  content: string;
}

export interface ChatConfig {
  provider: Provider;
  model: string;
  systemPrompt: string;
}

export interface StreamMetrics {
  timeToFirstToken: number;
  totalTime: number;
  tokenCount: number;
}

export const MODELS: Record<Provider, string> = {
  openai: 'gpt-4o',
  anthropic: 'claude-sonnet-4-20250514',
};

Step 2: Provider Clients

Create src/clients.ts:

import 'dotenv/config';
import Anthropic from '@anthropic-ai/sdk';
import OpenAI from 'openai';

export const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

export const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});
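
Neither SDK validates its key at construction time; a missing key only surfaces as an error on the first request. An optional fail-fast check at the bottom of src/clients.ts (our own addition, not something the SDKs require) makes the problem obvious at startup:

// Optional: warn at startup if a key is missing from .env
for (const key of ['OPENAI_API_KEY', 'ANTHROPIC_API_KEY']) {
  if (!process.env[key]) {
    console.warn(`Warning: ${key} is not set; requests to that provider will fail.`);
  }
}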

Step 3: Streaming Implementation

Create src/stream.ts:

import { anthropic, openai } from './clients';
import type { Message, Provider, StreamMetrics } from './types';
import { MODELS } from './types';

export interface StreamResult {
  content: string;
  metrics: StreamMetrics;
}

export async function streamResponse(
  provider: Provider,
  messages: Message[],
  systemPrompt: string,
  onToken: (token: string) => void
): Promise<StreamResult> {
  const startTime = Date.now();
  let firstTokenTime: number | null = null;
  let tokenCount = 0; // counts streamed chunks (a rough proxy for tokens)
  let content = '';

  if (provider === 'openai') {
    // OpenAI expects the system prompt as the first message in the array.
    const formattedMessages = [
      { role: 'system' as const, content: systemPrompt },
      ...messages.map((m) => ({
        role: m.role as 'user' | 'assistant',
        content: m.content,
      })),
    ];

    const stream = await openai.chat.completions.create({
      model: MODELS.openai,
      messages: formattedMessages,
      stream: true,
    });

    for await (const chunk of stream) {
      const delta = chunk.choices[0]?.delta?.content;
      if (delta) {
        if (firstTokenTime === null) {
          firstTokenTime = Date.now();
        }
        tokenCount++;
        content += delta;
        onToken(delta);
      }
    }
  } else {
    // The history holds only user/assistant turns; Anthropic takes the
    // system prompt as a separate top-level parameter instead.
    const formattedMessages = messages.map((m) => ({
      role: m.role as 'user' | 'assistant',
      content: m.content,
    }));

    const stream = anthropic.messages.stream({
      model: MODELS.anthropic,
      max_tokens: 2048,
      system: systemPrompt,
      messages: formattedMessages,
    });

    for await (const text of stream.textStream) {
      if (firstTokenTime === null) {
        firstTokenTime = Date.now();
      }
      tokenCount++;
      content += text;
      onToken(text);
    }
  }

  const endTime = Date.now();

  return {
    content,
    metrics: {
      timeToFirstToken: firstTokenTime ? firstTokenTime - startTime : 0,
      totalTime: endTime - startTime,
      tokenCount,
    },
  };
}
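
Before wiring up the CLI, you can smoke-test the function with a throwaway script (this file is not part of the final project, and it assumes valid keys in .env):

// src/smoke.ts (temporary): run with `npx tsx src/smoke.ts`
import { streamResponse } from './stream';

async function main(): Promise<void> {
  const result = await streamResponse(
    'openai',
    [{ role: 'user', content: 'Say hello in five words.' }],
    'You are a helpful assistant.',
    (token) => process.stdout.write(token)
  );
  console.log('\n', result.metrics);
}

main().catch(console.error);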

Step 4: Conversation Manager

Create src/conversation.ts:

import type { Message } from './types';

export class Conversation {
  private messages: Message[] = [];
  private maxMessages: number;

  constructor(maxMessages: number = 20) {
    this.maxMessages = maxMessages;
  }

  addUserMessage(content: string): void {
    this.messages.push({ role: 'user', content });
    this.trimHistory();
  }

  addAssistantMessage(content: string): void {
    this.messages.push({ role: 'assistant', content });
    this.trimHistory();
  }

  getMessages(): Message[] {
    return [...this.messages];
  }

  clear(): void {
    this.messages = [];
  }

  getMessageCount(): number {
    return this.messages.length;
  }

  private trimHistory(): void {
    if (this.messages.length > this.maxMessages) {
      this.messages = this.messages.slice(-this.maxMessages);
      // Anthropic's API expects the history to begin with a user message,
      // so drop any assistant message left at the front after trimming.
      while (this.messages[0]?.role === 'assistant') {
        this.messages.shift();
      }
    }
  }
}
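
A quick standalone check of the trimming behavior (a scratch script, not part of the project):

import { Conversation } from './conversation';

const convo = new Conversation(4); // small cap to make trimming visible

for (let i = 1; i <= 3; i++) {
  convo.addUserMessage(`question ${i}`);
  convo.addAssistantMessage(`answer ${i}`);
}

// Only the most recent turns survive, and the history still begins
// with a user message.
console.log(convo.getMessages());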

Step 5: Command Handler

Create src/commands.ts:

import { Conversation } from './conversation';
import type { Provider } from './types';

export interface CommandResult {
  handled: boolean;
  exit?: boolean;
  message?: string;
  switchProvider?: Provider;
}

export function handleCommand(
  input: string,
  conversation: Conversation,
  currentProvider: Provider
): CommandResult {
  const trimmed = input.trim().toLowerCase();

  if (trimmed === '/exit' || trimmed === '/quit') {
    return { handled: true, exit: true, message: 'Goodbye!' };
  }

  if (trimmed === '/clear') {
    conversation.clear();
    return { handled: true, message: 'Conversation cleared.' };
  }

  if (trimmed === '/switch') {
    const newProvider: Provider = currentProvider === 'openai' ? 'anthropic' : 'openai';
    return {
      handled: true,
      switchProvider: newProvider,
      message: `Switched to ${newProvider}.`,
    };
  }

  if (trimmed === '/help') {
    return {
      handled: true,
      message: `Available commands:
  /clear  - Clear conversation history
  /switch - Switch between OpenAI and Anthropic
  /help   - Show this help message
  /exit   - Exit the chatbot`,
    };
  }

  if (trimmed.startsWith('/')) {
    return {
      handled: true,
      message: `Unknown command: ${trimmed}. Type /help for available commands.`,
    };
  }

  return { handled: false };
}
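
Because handleCommand is a pure function, you can check its behavior in isolation (a scratch snippet):

import { Conversation } from './conversation';
import { handleCommand } from './commands';

const convo = new Conversation();

console.log(handleCommand('/switch', convo, 'openai'));
// { handled: true, switchProvider: 'anthropic', message: 'Switched to anthropic.' }

console.log(handleCommand('hello there', convo, 'openai'));
// { handled: false }  (regular input falls through to the model)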

Step 6: Main Application

Create src/main.ts:

import * as readline from "node:readline";
import { Conversation } from "./conversation";
import { streamResponse } from "./stream";
import { handleCommand } from "./commands";
import type { Provider } from "./types";
import { MODELS } from "./types";

const SYSTEM_PROMPT = `You are a helpful AI assistant. Be concise and informative.
When asked about code, provide working examples with explanations.`;

async function main(): Promise<void> {
  const conversation = new Conversation();
  let provider: Provider = "openai";

  const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
  });

  console.log("Streaming Chatbot");
  console.log("=================");
  console.log(`Provider: ${provider} (${MODELS[provider]})`);
  console.log("Type /help for commands
");

  const prompt = (): void => {
    rl.question("You: ", async (input) => {
      const trimmedInput = input.trim();

      if (!trimmedInput) {
        prompt();
        return;
      }

      // Handle commands
      const cmdResult = handleCommand(trimmedInput, conversation, provider);
      if (cmdResult.handled) {
        if (cmdResult.message) {
          console.log(`\n${cmdResult.message}\n`);
        }
        if (cmdResult.switchProvider) {
          provider = cmdResult.switchProvider;
          console.log(`Now using: ${MODELS[provider]}\n`);
        }
        if (cmdResult.exit) {
          rl.close();
          return;
        }
        prompt();
        return;
      }

      // Add user message
      conversation.addUserMessage(trimmedInput);

      // Stream response
      process.stdout.write("
Assistant: ");

      try {
        const result = await streamResponse(
          provider,
          conversation.getMessages(),
          SYSTEM_PROMPT,
          (token) => process.stdout.write(token)
        );

        console.log("
");
        console.log(
          `[TTFT: ${result.metrics.timeToFirstToken}ms | ` +
          `Total: ${result.metrics.totalTime}ms | ` +
          `Tokens: ${result.metrics.tokenCount}]\n`
        );

        conversation.addAssistantMessage(result.content);
      } catch (error) {
        console.log("
");
        if (error instanceof Error) {
          console.error(`Error: ${error.message}\n`);
        }
      }

      prompt();
    });
  };

  prompt();
}

main().catch(console.error);
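
The catch block above keeps the chat loop alive when a request fails. For transient failures (rate limits, dropped connections), you can add the retry logic promised in the overview by wrapping the call in a small helper. A minimal sketch (the helper name and backoff values are illustrative):

// src/retry.ts (optional): retry a failing async call with exponential backoff
export async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts: number = 3
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < maxAttempts) {
        const delayMs = 1000 * 2 ** (attempt - 1); // 1s, then 2s, then 4s...
        console.error(`Attempt ${attempt} failed; retrying in ${delayMs}ms...`);
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastError;
}

In main.ts you would then wrap the call: const result = await withRetry(() => streamResponse(...)). Keep in mind that retrying a stream restarts it from the beginning, so any partial output already printed should be accounted for.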

Step 7: Running the Chatbot

Add a script to package.json:

{
  "scripts": {
    "start": "tsx src/main.ts"
  }
}

Run the chatbot:

npm start
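
A session looks roughly like this (timings, model output, and token counts will vary):

Streaming Chatbot
=================
Provider: openai (gpt-4o)
Type /help for commands

You: What is streaming?

Assistant: Streaming sends the model's reply token by token as it is
generated, so you see output immediately instead of waiting for the
full response.

[TTFT: 412ms | Total: 2310ms | Tokens: 87]

You: /exit

Goodbye!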

Challenges

Try extending the chatbot with these features:

  1. Token Counting: Display estimated token usage
  2. Response Time History: Track average response times
  3. Export Conversation: Save chat to a file (see the sketch after this list)
  4. Custom System Prompts: Allow changing the system prompt
  5. Model Selection: Let users choose specific models
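
As a starting point for challenge 3, a sketch of an export helper (the file name and JSON format are our choices, not requirements):

// src/export.ts (sketch): save the conversation as JSON
import { writeFileSync } from 'node:fs';
import type { Message } from './types';

export function exportConversation(
  messages: Message[],
  path: string = `chat-${Date.now()}.json`
): string {
  writeFileSync(path, JSON.stringify(messages, null, 2), 'utf8');
  return path;
}

You could wire this up as an /export command in handleCommand, returning the saved path in the result message.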

Final Project Structure

streaming-chatbot/
├── src/
│   ├── types.ts
│   ├── clients.ts
│   ├── stream.ts
│   ├── conversation.ts
│   ├── commands.ts
│   └── main.ts
├── .env
├── .gitignore
├── package.json
└── tsconfig.json

Key Takeaways

  1. Streaming provides immediate feedback to users
  2. Provider abstraction makes switching easy
  3. Conversation management maintains context across turns
  4. Commands give users control over the experience
  5. Metrics help monitor and optimize performance

Resources

Resource                   Type           Level
OpenAI Node.js SDK         Repository     Beginner
Anthropic TypeScript SDK   Repository     Beginner
Node.js readline           Documentation  Beginner

Module Complete

Congratulations! You have completed Module 2: Streaming and Real-time. You now understand:

  • Why streaming improves user experience
  • How Server-Sent Events work
  • How to implement streaming with OpenAI
  • How to implement streaming with Anthropic
  • How to build a complete streaming chatbot

In the next module, you will learn about Function Calling - enabling AI to interact with external tools and APIs.

Continue to Module 3: Function Calling