From Zero to AI

Lesson 5.5: Practice - Simple AI Assistant

Duration: 90 minutes

Learning Objectives

By the end of this lesson, you will be able to:

  • Build a complete AI assistant from scratch
  • Combine all techniques learned in this module
  • Implement multi-provider support with fallback
  • Create a conversation manager with history
  • Handle errors gracefully in production code
  • Structure an AI application for maintainability

Introduction

In this final lesson of the module, you will build a complete AI assistant that combines everything you have learned: API integration, response processing, rate limiting, and retry logic. This project will serve as a template for your future AI applications.


Project Overview

You will build an AI assistant with these features:

  • Multi-provider support (OpenAI and Anthropic)
  • Automatic fallback if one provider fails
  • Conversation history management
  • Configurable system prompts
  • Rate limiting and retry logic
  • Cost tracking
  • Clean, maintainable architecture

Project Structure

ai-assistant/
├── src/
│   ├── index.ts              # Main entry point
│   ├── assistant.ts          # Main assistant class
│   ├── providers/
│   │   ├── base.ts           # Provider interface
│   │   ├── openai.ts         # OpenAI implementation
│   │   └── anthropic.ts      # Anthropic implementation
│   ├── utils/
│   │   ├── retry.ts          # Retry logic
│   │   └── cost-tracker.ts   # Cost tracking
│   └── types.ts              # Shared types
├── .env
├── package.json
└── tsconfig.json

Step 1: Project Setup

Create a new directory and initialize the project:

mkdir ai-assistant
cd ai-assistant
npm init -y
npm install openai @anthropic-ai/sdk dotenv
npm install -D typescript tsx @types/node

Create tsconfig.json:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "outDir": "dist",
    "declaration": true
  },
  "include": ["src/**/*"]
}

Create .env (and keep it out of version control):

OPENAI_API_KEY=sk-proj-your-openai-key
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key

Create the directory structure:

mkdir -p src/providers src/utils

Step 2: Define Types

Create src/types.ts:

// Message types
export interface Message {
  role: 'user' | 'assistant' | 'system';
  content: string;
  timestamp: Date;
}

// Provider response
export interface ProviderResponse {
  content: string;
  tokensUsed: {
    input: number;
    output: number;
  };
  model: string;
  provider: string;
}

// Configuration
export interface AssistantConfig {
  primaryProvider: 'openai' | 'anthropic';
  fallbackProvider?: 'openai' | 'anthropic';
  systemPrompt: string;
  maxHistoryLength: number;
  maxRetries: number;
  enableCostTracking: boolean;
}

// Cost tracking
export interface CostRecord {
  timestamp: Date;
  provider: string;
  model: string;
  inputTokens: number;
  outputTokens: number;
  cost: number;
}

// Provider interface
export interface AIProvider {
  name: string;
  chat(messages: Message[], systemPrompt: string): Promise<ProviderResponse>;
  isAvailable(): Promise<boolean>;
}

Step 3: Implement Retry Utility

Create src/utils/retry.ts:

export interface RetryConfig {
  maxRetries: number;
  initialDelayMs: number;
  maxDelayMs: number;
  backoffMultiplier: number;
}

const DEFAULT_CONFIG: RetryConfig = {
  maxRetries: 3,
  initialDelayMs: 1000,
  maxDelayMs: 30000,
  backoffMultiplier: 2,
};

export async function withRetry<T>(
  fn: () => Promise<T>,
  config: Partial<RetryConfig> = {},
  isRetryable: (error: unknown) => boolean = () => true
): Promise<T> {
  const opts = { ...DEFAULT_CONFIG, ...config };
  let lastError: Error | null = null;
  let delay = opts.initialDelayMs;

  for (let attempt = 1; attempt <= opts.maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error as Error;

      if (!isRetryable(error) || attempt === opts.maxRetries) {
        throw error;
      }

      const jitter = delay * 0.3 * (Math.random() * 2 - 1);
      const waitTime = Math.max(100, delay + jitter);

      console.log(
        `[Retry] Attempt ${attempt} failed: ${lastError.message}. ` +
          `Waiting ${Math.round(waitTime)}ms...`
      );

      await sleep(waitTime);
      delay = Math.min(delay * opts.backoffMultiplier, opts.maxDelayMs);
    }
  }

  throw lastError;
}

function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}
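Ignoring jitter, the delay withRetry waits between attempts doubles until it reaches maxDelayMs. The default schedule can be sketched standalone (backoffSchedule is illustrative, not part of retry.ts):

```typescript
// Reproduce the inter-attempt delay schedule from withRetry, without jitter:
// start at initialDelayMs, multiply by backoffMultiplier, cap at maxDelayMs.
function backoffSchedule(
  attempts: number,
  initialDelayMs = 1000,
  backoffMultiplier = 2,
  maxDelayMs = 30000
): number[] {
  const delays: number[] = [];
  let delay = initialDelayMs;
  for (let attempt = 1; attempt < attempts; attempt++) {
    delays.push(delay);
    delay = Math.min(delay * backoffMultiplier, maxDelayMs);
  }
  return delays;
}

console.log(backoffSchedule(7)); // [ 1000, 2000, 4000, 8000, 16000, 30000 ]
```

The cap matters: without it, the sixth retry would already wait over half a minute.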

Step 4: Implement Cost Tracker

Create src/utils/cost-tracker.ts:

import { CostRecord } from '../types.js';

// Pricing per 1 million tokens
const PRICING: Record<string, { input: number; output: number }> = {
  'gpt-4o-mini': { input: 0.15, output: 0.6 },
  'gpt-4o': { input: 2.5, output: 10.0 },
  'claude-sonnet-4-20250514': { input: 3.0, output: 15.0 },
  'claude-3-5-haiku-20241022': { input: 0.8, output: 4.0 },
};

export class CostTracker {
  private records: CostRecord[] = [];

  record(provider: string, model: string, inputTokens: number, outputTokens: number): number {
    const pricing = PRICING[model] || { input: 1, output: 1 };
    const cost = this.calculateCost(inputTokens, outputTokens, pricing);

    this.records.push({
      timestamp: new Date(),
      provider,
      model,
      inputTokens,
      outputTokens,
      cost,
    });

    return cost;
  }

  private calculateCost(
    inputTokens: number,
    outputTokens: number,
    pricing: { input: number; output: number }
  ): number {
    const inputCost = (inputTokens / 1_000_000) * pricing.input;
    const outputCost = (outputTokens / 1_000_000) * pricing.output;
    return inputCost + outputCost;
  }

  getTotalCost(): number {
    return this.records.reduce((sum, record) => sum + record.cost, 0);
  }

  getTotalTokens(): { input: number; output: number } {
    return this.records.reduce(
      (sum, record) => ({
        input: sum.input + record.inputTokens,
        output: sum.output + record.outputTokens,
      }),
      { input: 0, output: 0 }
    );
  }

  getRecords(): CostRecord[] {
    return [...this.records];
  }

  getSummary(): string {
    const tokens = this.getTotalTokens();
    const cost = this.getTotalCost();

    return `
Cost Summary:
- Requests: ${this.records.length}
- Input tokens: ${tokens.input.toLocaleString()}
- Output tokens: ${tokens.output.toLocaleString()}
- Total cost: $${cost.toFixed(6)}
    `.trim();
  }

  clear(): void {
    this.records = [];
  }
}
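As a sanity check, the cost formula can be applied by hand. The numbers below match the first request in the example session in Step 10 (45 input and 128 output tokens on gpt-4o-mini):

```typescript
// Same math as CostTracker.calculateCost: price is dollars per 1 million tokens.
const pricing = { input: 0.15, output: 0.6 }; // gpt-4o-mini
const inputTokens = 45;
const outputTokens = 128;

const cost =
  (inputTokens / 1_000_000) * pricing.input + (outputTokens / 1_000_000) * pricing.output;

// Well under a hundredth of a cent for this exchange
console.log(`$${cost.toFixed(6)}`);
```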

Step 5: Implement Provider Base

Create src/providers/base.ts:

import { AIProvider, Message, ProviderResponse } from '../types.js';

export abstract class BaseProvider implements AIProvider {
  abstract name: string;

  abstract chat(messages: Message[], systemPrompt: string): Promise<ProviderResponse>;

  abstract isAvailable(): Promise<boolean>;

  protected formatMessages(messages: Message[]): Array<{
    role: 'user' | 'assistant';
    content: string;
  }> {
    return messages
      .filter((m) => m.role !== 'system')
      .map((m) => ({
        role: m.role as 'user' | 'assistant',
        content: m.content,
      }));
  }
}
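formatMessages drops system entries because both SDKs take the system prompt as a separate argument rather than as a message in the list. The same filter-and-map logic, shown standalone so it can be checked quickly:

```typescript
interface Msg {
  role: 'user' | 'assistant' | 'system';
  content: string;
}

// Identical logic to BaseProvider.formatMessages: strip system messages,
// keep only the fields the provider APIs expect.
function formatMessages(messages: Msg[]): { role: 'user' | 'assistant'; content: string }[] {
  return messages
    .filter((m) => m.role !== 'system')
    .map((m) => ({ role: m.role as 'user' | 'assistant', content: m.content }));
}

const formatted = formatMessages([
  { role: 'system', content: 'You are helpful.' },
  { role: 'user', content: 'Hi' },
]);
console.log(formatted); // [ { role: 'user', content: 'Hi' } ]
```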

Step 6: Implement OpenAI Provider

Create src/providers/openai.ts:

import OpenAI from 'openai';

import { Message, ProviderResponse } from '../types.js';
import { withRetry } from '../utils/retry.js';
import { BaseProvider } from './base.js';

export class OpenAIProvider extends BaseProvider {
  name = 'openai';
  private client: OpenAI;
  private model: string;

  constructor(model: string = 'gpt-4o-mini') {
    super();
    this.client = new OpenAI();
    this.model = model;
  }

  async chat(messages: Message[], systemPrompt: string): Promise<ProviderResponse> {
    return withRetry(
      async () => {
        const formattedMessages: OpenAI.Chat.ChatCompletionMessageParam[] = [
          { role: 'system', content: systemPrompt },
          ...this.formatMessages(messages),
        ];

        const response = await this.client.chat.completions.create({
          model: this.model,
          messages: formattedMessages,
        });

        return {
          content: response.choices[0].message.content || '',
          tokensUsed: {
            input: response.usage?.prompt_tokens || 0,
            output: response.usage?.completion_tokens || 0,
          },
          model: response.model,
          provider: this.name,
        };
      },
      { maxRetries: 3 },
      this.isRetryable
    );
  }

  async isAvailable(): Promise<boolean> {
    try {
      await this.client.models.list();
      return true;
    } catch {
      return false;
    }
  }

  private isRetryable(error: unknown): boolean {
    if (error instanceof OpenAI.RateLimitError) return true;
    if (error instanceof OpenAI.APIConnectionError) return true;
    // status can be undefined on APIError, so guard before comparing
    if (error instanceof OpenAI.APIError && (error.status ?? 0) >= 500) return true;
    return false;
  }
}

Step 7: Implement Anthropic Provider

Create src/providers/anthropic.ts:

import Anthropic from '@anthropic-ai/sdk';

import { Message, ProviderResponse } from '../types.js';
import { withRetry } from '../utils/retry.js';
import { BaseProvider } from './base.js';

export class AnthropicProvider extends BaseProvider {
  name = 'anthropic';
  private client: Anthropic;
  private model: string;

  constructor(model: string = 'claude-sonnet-4-20250514') {
    super();
    this.client = new Anthropic();
    this.model = model;
  }

  async chat(messages: Message[], systemPrompt: string): Promise<ProviderResponse> {
    return withRetry(
      async () => {
        const formattedMessages = this.formatMessages(messages);

        const response = await this.client.messages.create({
          model: this.model,
          max_tokens: 2048,
          system: systemPrompt,
          messages: formattedMessages,
        });

        let content = '';
        for (const block of response.content) {
          if (block.type === 'text') {
            content += block.text;
          }
        }

        return {
          content,
          tokensUsed: {
            input: response.usage.input_tokens,
            output: response.usage.output_tokens,
          },
          model: response.model,
          provider: this.name,
        };
      },
      { maxRetries: 3 },
      this.isRetryable
    );
  }

  async isAvailable(): Promise<boolean> {
    try {
      // This sends a tiny real request, which incurs a (very small) cost
      await this.client.messages.create({
        model: this.model,
        max_tokens: 10,
        messages: [{ role: 'user', content: 'Hi' }],
      });
      return true;
    } catch {
      return false;
    }
  }

  private isRetryable(error: unknown): boolean {
    if (error instanceof Anthropic.RateLimitError) return true;
    if (error instanceof Anthropic.APIConnectionError) return true;
    if (error instanceof Anthropic.APIError) {
      // 529 (overloaded) and other 5xx responses are transient; status may be undefined
      return (error.status ?? 0) >= 500;
    }
    return false;
  }
}

Step 8: Implement the Assistant

Create src/assistant.ts:

import { AnthropicProvider } from './providers/anthropic.js';
import { OpenAIProvider } from './providers/openai.js';
import { AIProvider, AssistantConfig, Message, ProviderResponse } from './types.js';
import { CostTracker } from './utils/cost-tracker.js';

const DEFAULT_CONFIG: AssistantConfig = {
  primaryProvider: 'openai',
  systemPrompt: 'You are a helpful AI assistant.',
  maxHistoryLength: 20,
  maxRetries: 3,
  enableCostTracking: true,
};

export class AIAssistant {
  // Protected (not private) so the subclasses later in this lesson can access them
  protected config: AssistantConfig;
  protected providers: Map<string, AIProvider>;
  protected history: Message[];
  protected costTracker: CostTracker;

  constructor(config: Partial<AssistantConfig> = {}) {
    this.config = { ...DEFAULT_CONFIG, ...config };
    this.providers = new Map();
    this.history = [];
    this.costTracker = new CostTracker();

    // Initialize providers
    this.providers.set('openai', new OpenAIProvider());
    this.providers.set('anthropic', new AnthropicProvider());
  }

  async chat(userMessage: string): Promise<string> {
    // Add user message to history
    this.addToHistory({
      role: 'user',
      content: userMessage,
      timestamp: new Date(),
    });

    // Try primary provider first
    let response: ProviderResponse | null = null;
    let error: Error | null = null;

    try {
      response = await this.callProvider(this.config.primaryProvider);
    } catch (e) {
      error = e as Error;
      console.log(
        `[Assistant] Primary provider (${this.config.primaryProvider}) failed: ${error.message}`
      );
    }

    // Try fallback provider if primary failed
    if (!response && this.config.fallbackProvider) {
      console.log(`[Assistant] Trying fallback provider (${this.config.fallbackProvider})...`);
      try {
        response = await this.callProvider(this.config.fallbackProvider);
      } catch (e) {
        error = e as Error;
        console.log(
          `[Assistant] Fallback provider (${this.config.fallbackProvider}) failed: ${error.message}`
        );
      }
    }

    // If both failed, throw error
    if (!response) {
      throw new Error(`All providers failed. Last error: ${error?.message || 'Unknown error'}`);
    }

    // Add assistant response to history
    this.addToHistory({
      role: 'assistant',
      content: response.content,
      timestamp: new Date(),
    });

    // Track cost
    if (this.config.enableCostTracking) {
      const cost = this.costTracker.record(
        response.provider,
        response.model,
        response.tokensUsed.input,
        response.tokensUsed.output
      );
      console.log(
        `[Cost] ${response.provider}/${response.model}: ` +
          `${response.tokensUsed.input}+${response.tokensUsed.output} tokens, ` +
          `$${cost.toFixed(6)}`
      );
    }

    return response.content;
  }

  private async callProvider(providerName: string): Promise<ProviderResponse> {
    const provider = this.providers.get(providerName);
    if (!provider) {
      throw new Error(`Provider ${providerName} not found`);
    }

    return provider.chat(this.history, this.config.systemPrompt);
  }

  private addToHistory(message: Message): void {
    this.history.push(message);

    // Trim the oldest messages if the history is too long
    while (this.history.length > this.config.maxHistoryLength) {
      this.history.shift();
    }

    // Keep the trimmed history starting with a user message; some providers
    // reject conversations whose first message has the assistant role
    while (this.history.length > 0 && this.history[0].role !== 'user') {
      this.history.shift();
    }
  }

  setSystemPrompt(prompt: string): void {
    this.config.systemPrompt = prompt;
  }

  getHistory(): Message[] {
    return [...this.history];
  }

  clearHistory(): void {
    this.history = [];
  }

  getCostSummary(): string {
    return this.costTracker.getSummary();
  }

  getTotalCost(): number {
    return this.costTracker.getTotalCost();
  }
}
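The primary-then-fallback flow inside chat() generalizes to any number of providers. One way to isolate that pattern as a helper (firstSuccessful is a sketch, not part of assistant.ts):

```typescript
// Try each async factory in order; return the first success, or rethrow
// with the last error's message if every attempt fails.
async function firstSuccessful<T>(attempts: Array<() => Promise<T>>): Promise<T> {
  let lastError: unknown = null;
  for (const attempt of attempts) {
    try {
      return await attempt();
    } catch (e) {
      lastError = e;
    }
  }
  throw new Error(
    `All providers failed. Last error: ${(lastError as Error | null)?.message ?? 'Unknown error'}`
  );
}

// The first attempt fails, so the second one's result is returned.
const answer = await firstSuccessful([
  async () => {
    throw new Error('primary down');
  },
  async () => 'fallback answer',
]);
console.log(answer); // fallback answer
```

With a helper like this, adding a third provider is a one-line change instead of another try/catch block.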

Step 9: Create Main Entry Point

Create src/index.ts:

import 'dotenv/config';
import * as readline from 'readline';

import { AIAssistant } from './assistant.js';

async function main() {
  console.log("AI Assistant - Type 'quit' to exit, 'clear' to reset history\n");

  const assistant = new AIAssistant({
    primaryProvider: 'openai',
    fallbackProvider: 'anthropic',
    systemPrompt: `You are a helpful AI assistant specializing in TypeScript and programming.
Be concise but thorough in your explanations.
Use code examples when they help illustrate concepts.`,
    maxHistoryLength: 10,
    enableCostTracking: true,
  });

  const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
  });

  const prompt = () => {
    rl.question('\nYou: ', async (input) => {
      const userInput = input.trim();

      if (!userInput) {
        prompt();
        return;
      }

      // Handle commands
      if (userInput.toLowerCase() === 'quit') {
        console.log('\n' + assistant.getCostSummary());
        console.log('\nGoodbye!');
        rl.close();
        return;
      }

      if (userInput.toLowerCase() === 'clear') {
        assistant.clearHistory();
        console.log('History cleared.');
        prompt();
        return;
      }

      if (userInput.toLowerCase() === 'cost') {
        console.log(assistant.getCostSummary());
        prompt();
        return;
      }

      // Send message to assistant
      try {
        console.log('\nAssistant: Thinking...');
        const response = await assistant.chat(userInput);
        console.log(`\nAssistant: ${response}`);
      } catch (error) {
        console.error(`\nError: ${(error as Error).message}`);
      }

      prompt();
    });
  };

  prompt();
}

main().catch(console.error);

Step 10: Run the Assistant

Add scripts and set the module type in package.json:

{
  "scripts": {
    "start": "tsx src/index.ts"
  },
  "type": "module"
}

Run the assistant:

npm start

Example session:

AI Assistant - Type 'quit' to exit, 'clear' to reset history

You: What is TypeScript?

[Cost] openai/gpt-4o-mini: 45+128 tokens, $0.000084

Assistant: TypeScript is a strongly typed programming language that builds on JavaScript,
giving you better tooling at any scale. It adds optional static typing, classes,
and interfaces, making it easier to write and maintain large codebases.

You: Can you show me a simple example?

[Cost] openai/gpt-4o-mini: 156+89 tokens, $0.000077
Assistant: Here is a simple TypeScript example:

// Define a type for a user
interface User {
  name: string;
  age: number;
  email?: string; // optional property
}

// Function with typed parameters and return type
function greetUser(user: User): string {
  return `Hello, ${user.name}! You are ${user.age} years old.`;
}

// TypeScript will catch errors at compile time
const user: User = { name: "Alice", age: 30 };
console.log(greetUser(user));

You: cost

Cost Summary:
- Requests: 2
- Input tokens: 201
- Output tokens: 217
- Total cost: $0.000161

You: quit

Cost Summary:
- Requests: 2
- Input tokens: 201
- Output tokens: 217
- Total cost: $0.000161

Goodbye!

Extending the Assistant

Here are some ideas to extend your assistant:

Add Streaming Support

async *streamChat(userMessage: string): AsyncGenerator<string> {
  // Implementation using the streaming API
  // Yields chunks as they arrive
}
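One possible shape for the OpenAI case, written against a minimal structural type so the block stands alone (ChatStreamClient mirrors just the slice of the SDK surface this sketch needs; with the real SDK you would pass the OpenAI client itself, and a create call with stream: true yields chunks whose delta carries the incremental text):

```typescript
// Minimal structural type for the streaming call; matches the shape of the
// OpenAI SDK's chat.completions.create with stream: true.
interface ChatStreamClient {
  chat: {
    completions: {
      create(params: {
        model: string;
        messages: { role: string; content: string }[];
        stream: true;
      }): Promise<AsyncIterable<{ choices: { delta?: { content?: string } }[] }>>;
    };
  };
}

// Request a streamed completion and yield text deltas as they arrive.
async function* streamChat(
  client: ChatStreamClient,
  model: string,
  messages: { role: string; content: string }[]
): AsyncGenerator<string> {
  const stream = await client.chat.completions.create({ model, messages, stream: true });
  for await (const chunk of stream) {
    const delta = chunk.choices[0]?.delta?.content;
    if (delta) yield delta;
  }
}

// Usage with the real SDK would look like:
//   const client = new OpenAI();
//   for await (const piece of streamChat(client, 'gpt-4o-mini', msgs)) {
//     process.stdout.write(piece);
//   }
```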

Add Command System

interface Command {
  name: string;
  description: string;
  execute: (args: string[]) => Promise<string>;
}

const commands: Command[] = [
  {
    name: 'summarize',
    description: 'Summarize the conversation',
    execute: async () => {
      // Use AI to summarize history
    },
  },
  {
    name: 'export',
    description: 'Export conversation to file',
    execute: async () => {
      // Save history to JSON file
    },
  },
];
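Wiring commands into the CLI loop then only needs a small dispatcher (runCommand and the echo command below are illustrative; the Command interface is repeated so the block stands alone):

```typescript
interface Command {
  name: string;
  description: string;
  execute: (args: string[]) => Promise<string>;
}

// Split the input into a command name and arguments, look the command up,
// and run it; unknown names get a help message instead of an error.
async function runCommand(commands: Command[], input: string): Promise<string> {
  const [name, ...args] = input.trim().split(/\s+/);
  const command = commands.find((c) => c.name === name);
  if (!command) {
    return `Unknown command: ${name}. Available: ${commands.map((c) => c.name).join(', ')}`;
  }
  return command.execute(args);
}

// Hypothetical command for demonstration
const demo: Command[] = [
  {
    name: 'echo',
    description: 'Echo the arguments back',
    execute: async (args) => args.join(' '),
  },
];

console.log(await runCommand(demo, 'echo hello world')); // hello world
```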

Add Persistence

import { readFile, writeFile } from 'fs/promises';

class PersistentAssistant extends AIAssistant {
  async saveSession(filename: string): Promise<void> {
    const data = {
      history: this.getHistory(),
      config: this.config,
    };
    await writeFile(filename, JSON.stringify(data, null, 2));
  }

  async loadSession(filename: string): Promise<void> {
    const data = JSON.parse(await readFile(filename, 'utf-8'));
    // JSON.stringify turns Date objects into strings, so revive the timestamps
    this.history = data.history.map((m: Message) => ({
      ...m,
      timestamp: new Date(m.timestamp),
    }));
  }
}

Exercises

Exercise 1: Add Provider Health Check

Add a health check that runs on startup:

// Your implementation here
class HealthCheckedAssistant extends AIAssistant {
  async initialize(): Promise<void> {
    // TODO: Check all providers are available
    // TODO: Report which providers are working
    // TODO: Set fallback automatically if primary is down
  }
}

Solution

class HealthCheckedAssistant extends AIAssistant {
  private availableProviders: string[] = [];

  async initialize(): Promise<void> {
    console.log('Checking provider availability...\n');

    const providers = ['openai', 'anthropic'];

    for (const name of providers) {
      const provider = this.providers.get(name);
      if (provider) {
        try {
          const available = await provider.isAvailable();
          if (available) {
            this.availableProviders.push(name);
            console.log(`[OK] ${name} is available`);
          } else {
            console.log(`[FAIL] ${name} is not available`);
          }
        } catch (error) {
          console.log(`[FAIL] ${name}: ${(error as Error).message}`);
        }
      }
    }

    if (this.availableProviders.length === 0) {
      throw new Error('No AI providers are available');
    }

    // Set primary to first available
    if (!this.availableProviders.includes(this.config.primaryProvider)) {
      this.config.primaryProvider = this.availableProviders[0] as 'openai' | 'anthropic';
      console.log(`\nSwitched primary provider to ${this.config.primaryProvider}`);
    }

    // Set fallback to second available if exists
    if (this.availableProviders.length > 1) {
      this.config.fallbackProvider = this.availableProviders.find(
        (p) => p !== this.config.primaryProvider
      ) as 'openai' | 'anthropic';
    }

    console.log(
      `\nUsing: ${this.config.primaryProvider} (fallback: ${this.config.fallbackProvider || 'none'})\n`
    );
  }
}

Exercise 2: Implement Conversation Modes

Add different conversation modes:

// Your implementation here
type ConversationMode = 'chat' | 'code' | 'tutor' | 'creative';

class ModalAssistant extends AIAssistant {
  private mode: ConversationMode = 'chat';

  setMode(mode: ConversationMode): void {
    // TODO: Change mode and update system prompt
  }

  private getSystemPromptForMode(mode: ConversationMode): string {
    // TODO: Return appropriate system prompt
  }
}

Solution

type ConversationMode = 'chat' | 'code' | 'tutor' | 'creative';

const MODE_PROMPTS: Record<ConversationMode, string> = {
  chat: `You are a helpful AI assistant. Be friendly and conversational.`,

  code: `You are an expert programmer. When asked questions:
- Provide code examples in TypeScript unless otherwise specified
- Explain your code with comments
- Suggest best practices and potential improvements
- Format code blocks properly`,

  tutor: `You are a patient programming tutor. When teaching:
- Start with simple explanations
- Use analogies to explain complex concepts
- Ask follow-up questions to check understanding
- Provide exercises when appropriate
- Celebrate progress and encourage learning`,

  creative: `You are a creative writing assistant. When helping:
- Be imaginative and expressive
- Offer multiple alternatives when appropriate
- Help with brainstorming and ideation
- Maintain a supportive and enthusiastic tone`,
};

class ModalAssistant extends AIAssistant {
  private mode: ConversationMode = 'chat';

  setMode(mode: ConversationMode): void {
    this.mode = mode;
    this.setSystemPrompt(this.getSystemPromptForMode(mode));
    console.log(`Mode changed to: ${mode}`);
  }

  getMode(): ConversationMode {
    return this.mode;
  }

  private getSystemPromptForMode(mode: ConversationMode): string {
    return MODE_PROMPTS[mode];
  }

  listModes(): string {
    return Object.keys(MODE_PROMPTS).join(', ');
  }
}

// Usage in main (inside the prompt() input handler)
const assistant = new ModalAssistant();

// Handle mode command
if (userInput.startsWith('mode ')) {
  const newMode = userInput.slice(5).trim() as ConversationMode;
  if (['chat', 'code', 'tutor', 'creative'].includes(newMode)) {
    assistant.setMode(newMode);
  } else {
    console.log(`Invalid mode. Available: ${assistant.listModes()}`);
  }
}

Exercise 3: Add Rate Limit Awareness

Make the assistant aware of rate limits:

// Your implementation here
class RateLimitAwareAssistant extends AIAssistant {
  private requestCounts: Map<string, number[]> = new Map();

  async chat(message: string): Promise<string> {
    // TODO: Track requests per provider
    // TODO: Warn user if approaching rate limit
    // TODO: Automatically switch providers if rate limited
  }

  getRateLimitStatus(): string {
    // TODO: Return human-readable rate limit status
  }
}

Solution

interface RateLimitConfig {
  requestsPerMinute: number;
  warningThreshold: number; // fraction of the limit (e.g. 0.8 = warn at 80%)
}

const RATE_LIMITS: Record<string, RateLimitConfig> = {
  openai: { requestsPerMinute: 60, warningThreshold: 0.8 },
  anthropic: { requestsPerMinute: 50, warningThreshold: 0.8 },
};

class RateLimitAwareAssistant extends AIAssistant {
  private requestTimestamps: Map<string, number[]> = new Map();

  constructor(config: Partial<AssistantConfig> = {}) {
    super(config);
    this.requestTimestamps.set('openai', []);
    this.requestTimestamps.set('anthropic', []);
  }

  async chat(message: string): Promise<string> {
    const provider = this.config.primaryProvider;

    // Check rate limit status before request
    const status = this.checkRateLimit(provider);

    if (status.isLimited) {
      console.log(`[Warning] ${provider} rate limit reached. Switching to fallback...`);
      if (this.config.fallbackProvider) {
        this.config.primaryProvider = this.config.fallbackProvider;
      } else {
        const waitTime = Math.ceil(status.resetInSeconds);
        console.log(`No fallback available. Please wait ${waitTime}s.`);
        throw new Error(`Rate limit exceeded. Try again in ${waitTime}s.`);
      }
    } else if (status.isWarning) {
      console.log(
        `[Warning] Approaching rate limit for ${provider}: ` +
          `${status.used}/${status.limit} requests`
      );
    }

    // Record this request
    this.recordRequest(this.config.primaryProvider);

    // Make the actual request
    return super.chat(message);
  }

  private recordRequest(provider: string): void {
    const timestamps = this.requestTimestamps.get(provider) || [];
    timestamps.push(Date.now());
    this.requestTimestamps.set(provider, timestamps);
  }

  private checkRateLimit(provider: string): {
    isLimited: boolean;
    isWarning: boolean;
    used: number;
    limit: number;
    resetInSeconds: number;
  } {
    const config = RATE_LIMITS[provider];
    const timestamps = this.requestTimestamps.get(provider) || [];
    const now = Date.now();
    const windowStart = now - 60000;

    // Filter to requests in current window
    const recentRequests = timestamps.filter((ts) => ts > windowStart);
    this.requestTimestamps.set(provider, recentRequests);

    const used = recentRequests.length;
    const limit = config.requestsPerMinute;
    const oldestInWindow = recentRequests[0] || now;
    const resetInSeconds = Math.max(0, (oldestInWindow + 60000 - now) / 1000);

    return {
      isLimited: used >= limit,
      isWarning: used >= limit * config.warningThreshold,
      used,
      limit,
      resetInSeconds,
    };
  }

  getRateLimitStatus(): string {
    const lines: string[] = ['Rate Limit Status:'];

    for (const provider of Object.keys(RATE_LIMITS)) {
      const status = this.checkRateLimit(provider);
      const percentage = Math.round((status.used / status.limit) * 100);
      const bar = this.createProgressBar(percentage);
      lines.push(`  ${provider}: ${bar} ${status.used}/${status.limit} (${percentage}%)`);
    }

    return lines.join('\n');
  }

  private createProgressBar(percentage: number): string {
    const filled = Math.round(percentage / 10);
    const empty = 10 - filled;
    return '[' + '#'.repeat(filled) + '-'.repeat(empty) + ']';
  }
}
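The sliding-window count at the heart of checkRateLimit is easy to verify in isolation (countInWindow is a standalone sketch of the same filtering logic):

```typescript
// Count how many timestamps fall inside the trailing window of windowMs milliseconds.
function countInWindow(timestamps: number[], now: number, windowMs: number): number {
  const windowStart = now - windowMs;
  return timestamps.filter((ts) => ts > windowStart).length;
}

const now = 120_000;
const requests = [10_000, 70_000, 90_000, 119_000]; // request times in ms
console.log(countInWindow(requests, now, 60_000)); // 3 (only the 10_000ms request is outside the last 60s)
```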

Key Takeaways

  1. Modular Architecture: Separate concerns into providers, utilities, and the main assistant class
  2. Provider Abstraction: Use interfaces to make switching providers easy
  3. Fallback Strategy: Always have a backup plan when one provider fails
  4. Cost Tracking: Monitor usage to avoid unexpected bills
  5. Retry Logic: Handle transient failures gracefully
  6. Conversation History: Manage context for coherent multi-turn conversations
  7. Configuration: Make behavior customizable through config objects
  8. Error Handling: Provide clear error messages and recovery options

What You Built

In this lesson, you created a production-ready AI assistant with:

  • Multi-provider support (OpenAI and Anthropic)
  • Automatic fallback when providers fail
  • Exponential backoff retry logic
  • Conversation history management
  • Cost tracking and reporting
  • Clean, maintainable code structure
  • Interactive CLI interface

This project demonstrates all the concepts from this module and serves as a foundation for more complex AI applications.


Resources

  • OpenAI Node.js SDK (repository) - Official OpenAI SDK
  • Anthropic TypeScript SDK (repository) - Official Anthropic SDK
  • TypeScript Handbook (documentation) - TypeScript reference
  • Node.js Readline (documentation) - CLI input handling

Module Complete

Congratulations! You have completed Module 5: API Practice. You now have the skills to:

  • Integrate with OpenAI and Anthropic APIs
  • Process and validate AI responses
  • Implement robust error handling and retries
  • Build production-ready AI applications

You are ready to move on to Course 5, where you will build more advanced AI applications including chatbots, streaming interfaces, function calling, RAG systems, and AI agents.

Continue to Course 5: Building AI Applications