# Lesson 1.1: Chatbot Architecture

**Duration:** 60 minutes
## Learning Objectives
By the end of this lesson, you will be able to:
- Understand what makes a chatbot different from single-turn AI requests
- Identify the core components of a chatbot system
- Design the data flow for a conversational application
- Implement a basic chatbot structure in TypeScript
- Choose between different architectural patterns
## Introduction
In Course 4, you learned how to make individual requests to AI APIs. Each request was independent. You asked a question, got an answer, and that was it. But real conversations are different. They have memory. They build on previous exchanges.
A chatbot remembers what you said before. If you ask "What is TypeScript?" and then follow up with "Give me an example of that," the chatbot knows what "that" refers to. This memory is what transforms simple AI requests into genuine conversations.
In this lesson, you will learn how chatbots work under the hood and build the foundation for your own conversational AI.
## What Makes a Chatbot
A chatbot is more than just an AI that responds to messages. It is a system with several interconnected parts:
```
┌──────────────────────────────────────────────────────────────────────┐
│                         Chatbot Architecture                         │
├──────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  ┌──────────┐    ┌──────────────┐    ┌──────────────────┐            │
│  │  User    │───▶│  Interface   │───▶│  Conversation    │            │
│  │  Input   │    │  (CLI/Web)   │    │  Manager         │            │
│  └──────────┘    └──────────────┘    └────────┬─────────┘            │
│                                               │                      │
│                                               ▼                      │
│  ┌──────────┐    ┌──────────────┐    ┌──────────────────┐            │
│  │  Output  │◀───│  Response    │◀───│  AI Provider     │            │
│  │  Display │    │  Handler     │    │  (OpenAI/Claude) │            │
│  └──────────┘    └──────────────┘    └──────────────────┘            │
│                                                                      │
└──────────────────────────────────────────────────────────────────────┘
```
Let us examine each component:
### 1. User Interface
The interface is how users interact with your chatbot. It could be:
- CLI (Command Line): Text-based, great for development and tools
- Web Chat: Browser-based, common for customer support
- Mobile App: Native experience on phones
- Messaging Platform: Slack, Discord, Telegram integration
For this course, we will build a CLI interface. The concepts transfer to any platform.
### 2. Conversation Manager
This is the brain of your chatbot. It:
- Maintains the message history
- Formats messages for the AI API
- Handles context window limits
- Manages conversation state (active, ended, reset)
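Handling context window limits usually comes down to trimming old messages before each request. Here is a minimal sketch of the idea; the `trimHistory` helper and its character-count budget are illustrative (not part of this lesson's code), and a real conversation manager would count tokens rather than characters:

```typescript
type Role = 'system' | 'user' | 'assistant';
interface Msg { role: Role; content: string; }

// Keep the system prompt plus the most recent messages that fit a budget.
function trimHistory(messages: Msg[], maxChars: number): Msg[] {
  const system = messages.filter((m) => m.role === 'system');
  const rest = messages.filter((m) => m.role !== 'system');
  const kept: Msg[] = [];
  let used = 0;
  // Walk backwards so the newest messages survive the trim.
  for (let i = rest.length - 1; i >= 0; i--) {
    used += rest[i].content.length;
    if (used > maxChars) break;
    kept.unshift(rest[i]);
  }
  return [...system, ...kept];
}
```

The key design point: the system prompt is always preserved, and messages are dropped oldest-first, because the AI needs recent context more than it needs the start of a long conversation.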
### 3. AI Provider
The actual language model that generates responses. You can use:
- OpenAI (GPT-4o, GPT-4o-mini)
- Anthropic (Claude)
- Other providers (Google Gemini, open-source models)
### 4. Response Handler
Processes the AI response before showing it to the user:
- Extracts the message content
- Handles errors gracefully
- Formats output for display
- Updates conversation history
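Those responsibilities can be sketched as one small function. The response shape below is a simplified subset of what OpenAI's chat completions return (`choices[0].message.content`); the `handleResponse` helper itself is illustrative, not an API from this lesson:

```typescript
// Minimal shape of a chat completion response (subset of the real object).
interface ApiResponse {
  choices: { message: { content: string | null } }[];
}

// Extract content, fall back gracefully on empty responses,
// and record the assistant turn in the history.
function handleResponse(
  response: ApiResponse,
  history: { role: string; content: string }[],
): string {
  const content = response.choices[0]?.message.content ?? '';
  if (!content) {
    return 'Sorry, I could not generate a response. Please try again.';
  }
  history.push({ role: 'assistant', content });
  return content;
}
```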
## Single-Turn vs Multi-Turn
Understanding this distinction is crucial:
### Single-Turn (What You Did Before)

```typescript
// Each request is independent
const response1 = await chat('What is TypeScript?');
const response2 = await chat('Give me an example'); // AI does not know what "example" refers to
```
### Multi-Turn (Chatbot)

```typescript
// Conversation has memory
conversation.addMessage('user', 'What is TypeScript?');
const response1 = await conversation.getResponse();
// AI now knows the context

conversation.addMessage('user', 'Give me an example');
const response2 = await conversation.getResponse();
// AI knows you want a TypeScript example
```
The difference is that multi-turn conversations pass the full history to the AI with each request.
## The Message Array
At the heart of every chatbot is the message array. This is what you send to the AI API:
```typescript
const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'What is TypeScript?' },
  { role: 'assistant', content: 'TypeScript is a typed superset of JavaScript...' },
  { role: 'user', content: 'Give me an example' },
  // AI will respond knowing the full context
];
```
Each message has:
| Property | Description |
|---|---|
| `role` | Who sent the message: `system`, `user`, or `assistant` |
| `content` | The actual text of the message |
### Role Explanations

| Role | Purpose |
|---|---|
| `system` | Sets AI behavior. Comes first. One per conversation. Example: "You are a pirate." |
| `user` | Messages from the human. Questions, commands, requests. Example: "What is TypeScript?" |
| `assistant` | AI responses. Stored to maintain context. Example: "TypeScript is..." |
## Building the Foundation
Let us create the type definitions for our chatbot. Create `src/types.ts`:
```typescript
// Message roles
export type MessageRole = 'system' | 'user' | 'assistant';

// A single message in the conversation
export interface Message {
  role: MessageRole;
  content: string;
}

// Conversation state
export interface Conversation {
  id: string;
  messages: Message[];
  createdAt: Date;
  updatedAt: Date;
}

// Configuration for the chatbot
export interface ChatbotConfig {
  provider: 'openai' | 'anthropic';
  model: string;
  systemPrompt: string;
  maxTokens?: number;
  temperature?: number;
}

// Response from the chatbot
export interface ChatResponse {
  content: string;
  tokensUsed: {
    input: number;
    output: number;
    total: number;
  };
}
```
## A Simple Chatbot Class

Now let us build a basic chatbot. Create `src/simple-chatbot.ts`:
```typescript
import 'dotenv/config';
import OpenAI from 'openai';
import type { ChatResponse, ChatbotConfig, Message } from './types';

export class SimpleChatbot {
  private openai: OpenAI;
  private config: ChatbotConfig;
  private messages: Message[] = [];

  constructor(config: ChatbotConfig) {
    this.openai = new OpenAI();
    this.config = config;

    // Initialize with system prompt
    if (config.systemPrompt) {
      this.messages.push({
        role: 'system',
        content: config.systemPrompt,
      });
    }
  }

  async chat(userMessage: string): Promise<ChatResponse> {
    // Add user message to history
    this.messages.push({
      role: 'user',
      content: userMessage,
    });

    // Make API request with full history
    const response = await this.openai.chat.completions.create({
      model: this.config.model,
      messages: this.messages,
      max_tokens: this.config.maxTokens ?? 1000,
      temperature: this.config.temperature ?? 0.7,
    });

    // Extract assistant response
    const assistantMessage = response.choices[0].message.content || '';

    // Add assistant response to history
    this.messages.push({
      role: 'assistant',
      content: assistantMessage,
    });

    return {
      content: assistantMessage,
      tokensUsed: {
        input: response.usage?.prompt_tokens || 0,
        output: response.usage?.completion_tokens || 0,
        total: response.usage?.total_tokens || 0,
      },
    };
  }

  // Get current conversation length
  getMessageCount(): number {
    return this.messages.length;
  }

  // Reset the conversation
  reset(): void {
    this.messages = [];
    if (this.config.systemPrompt) {
      this.messages.push({
        role: 'system',
        content: this.config.systemPrompt,
      });
    }
  }
}
```
## Testing the Chatbot

Create `src/test-chatbot.ts` to verify it works:
```typescript
import { SimpleChatbot } from './simple-chatbot';

async function main() {
  const chatbot = new SimpleChatbot({
    provider: 'openai',
    model: 'gpt-4o-mini',
    systemPrompt: 'You are a friendly programming tutor. Keep responses concise.',
    temperature: 0.7,
  });

  console.log("Chatbot initialized. Let's have a conversation!\n");

  // First message
  console.log('User: What is TypeScript?');
  const response1 = await chatbot.chat('What is TypeScript?');
  console.log(`Assistant: ${response1.content}\n`);

  // Follow-up (chatbot remembers context)
  console.log('User: Give me a simple example of that.');
  const response2 = await chatbot.chat('Give me a simple example of that.');
  console.log(`Assistant: ${response2.content}\n`);

  // Another follow-up
  console.log('User: What are its main benefits?');
  const response3 = await chatbot.chat('What are its main benefits?');
  console.log(`Assistant: ${response3.content}\n`);

  console.log(`Total messages in conversation: ${chatbot.getMessageCount()}`);
}

main().catch(console.error);
```
Run it:

```bash
npx tsx src/test-chatbot.ts
```
You should see a coherent conversation where each response builds on previous context.
## Architectural Patterns
There are several ways to structure a chatbot. Here are the most common patterns:
### Pattern 1: Simple In-Memory

Store messages in an array. Simple but loses data on restart.

```typescript
class InMemoryChatbot {
  private messages: Message[] = [];
  // All data in memory
}
```
Best for: Development, simple tools, short-lived conversations
### Pattern 2: Session-Based

Store conversations per session with unique IDs.

```typescript
class SessionChatbot {
  private sessions: Map<string, Message[]> = new Map();

  chat(sessionId: string, message: string) {
    const history = this.sessions.get(sessionId) || [];
    // ...
  }
}
```
Best for: Web applications, multiple concurrent users
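To make the session pattern concrete, here is a sketch of just the bookkeeping. The `SessionStore` name and `SessionMessage` type are illustrative; the AI call is omitted so the per-session lookup logic stands on its own:

```typescript
interface SessionMessage {
  role: 'user' | 'assistant';
  content: string;
}

// Per-session histories keyed by session ID.
class SessionStore {
  private sessions = new Map<string, SessionMessage[]>();

  // Append a message to a session, creating the session on first use.
  append(sessionId: string, message: SessionMessage): SessionMessage[] {
    const history = this.sessions.get(sessionId) ?? [];
    history.push(message);
    this.sessions.set(sessionId, history);
    return history;
  }

  get(sessionId: string): SessionMessage[] {
    return this.sessions.get(sessionId) ?? [];
  }
}
```

Because each session ID maps to its own array, two users chatting at the same time never see each other's context.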
### Pattern 3: Persistent Storage

Save conversations to a database or file system.

```typescript
class PersistentChatbot {
  async chat(conversationId: string, message: string) {
    const history = await this.loadFromDatabase(conversationId);
    // ... get response ...
    await this.saveToDatabase(conversationId, history);
  }
}
```
Best for: Production applications, long-running conversations, analytics
### Pattern Comparison
| Pattern | Persistence | Complexity | Use Case |
|---|---|---|---|
| In-Memory | None | Low | Development |
| Session-Based | Session lifetime | Medium | Web apps |
| Persistent | Permanent | High | Production |
For this module, we will use the in-memory pattern. It is perfect for learning and CLI applications.
## The Conversation Loop
Every chatbot follows this basic loop:
```
┌─────────────────────────────────────────────────────────────────┐
│                        Conversation Loop                        │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│            ┌───────────────┐                                    │
│            │     Start     │                                    │
│            └───────┬───────┘                                    │
│                    │                                            │
│                    ▼                                            │
│            ┌───────────────┐                                    │
│            │ Wait for User │◀────────────────────┐              │
│            │     Input     │                     │              │
│            └───────┬───────┘                     │              │
│                    │                             │              │
│                    ▼                             │              │
│            ┌───────────────┐                     │              │
│            │     Exit      │  Yes                │              │
│            │   Command?    │────▶ End            │              │
│            └───────┬───────┘                     │              │
│                    │ No                          │              │
│                    ▼                             │              │
│            ┌───────────────┐                     │              │
│            │    Add to     │                     │              │
│            │    History    │                     │              │
│            └───────┬───────┘                     │              │
│                    │                             │              │
│                    ▼                             │              │
│            ┌───────────────┐                     │              │
│            │    Send to    │                     │              │
│            │    AI API     │                     │              │
│            └───────┬───────┘                     │              │
│                    │                             │              │
│                    ▼                             │              │
│            ┌───────────────┐                     │              │
│            │    Display    │                     │              │
│            │   Response    │─────────────────────┘              │
│            └───────────────┘                                    │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```
Here is how this looks in code:
```typescript
import * as readline from 'readline';
import { SimpleChatbot } from './simple-chatbot';

async function runChatLoop() {
  const chatbot = new SimpleChatbot({
    provider: 'openai',
    model: 'gpt-4o-mini',
    systemPrompt: 'You are a helpful assistant.',
  });

  const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
  });

  console.log("Chat started. Type 'exit' to quit.\n");

  const askQuestion = () => {
    rl.question('You: ', async (input) => {
      const userInput = input.trim();

      // Check for exit
      if (userInput.toLowerCase() === 'exit') {
        console.log('Goodbye!');
        rl.close();
        return;
      }

      // Skip empty input
      if (!userInput) {
        askQuestion();
        return;
      }

      try {
        // Get response
        const response = await chatbot.chat(userInput);
        console.log(`\nAssistant: ${response.content}\n`);
      } catch (error) {
        console.error('Error:', error);
      }

      // Continue the loop
      askQuestion();
    });
  };

  askQuestion();
}

runChatLoop();
```
## Error Handling in Chatbots
Chatbots need robust error handling. Users expect smooth experiences even when things go wrong:
```typescript
import OpenAI from 'openai';
import type { Message } from './types';

export class RobustChatbot {
  private openai: OpenAI;
  private messages: Message[] = [];

  constructor() {
    this.openai = new OpenAI();
  }

  async chat(userMessage: string): Promise<string> {
    // Add user message
    this.messages.push({ role: 'user', content: userMessage });

    try {
      const response = await this.openai.chat.completions.create({
        model: 'gpt-4o-mini',
        messages: this.messages,
      });

      const content = response.choices[0].message.content || '';
      this.messages.push({ role: 'assistant', content });
      return content;
    } catch (error) {
      // Remove the failed user message
      this.messages.pop();

      if (error instanceof OpenAI.APIError) {
        switch (error.status) {
          case 429:
            return "I'm getting too many requests. Please wait a moment and try again.";
          case 500:
          case 503:
            return "I'm having trouble connecting. Please try again in a few seconds.";
          case 401:
            throw new Error('Invalid API key. Please check your configuration.');
          default:
            return 'Something went wrong. Please try again.';
        }
      }

      throw error;
    }
  }
}
```
Key points:
- Remove failed messages from history to keep it clean
- Provide user-friendly error messages
- Only throw for critical errors (like invalid API key)
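Transient failures like 429 and 503 are also good candidates for an automatic retry before falling back to a canned message. Here is a sketch of a generic backoff helper; the `withRetry` name, attempt count, and delays are illustrative choices, not part of this lesson's code:

```typescript
// Retry an async operation with exponential backoff.
// Only retries when shouldRetry says the error is transient.
async function withRetry<T>(
  fn: () => Promise<T>,
  shouldRetry: (error: unknown) => boolean,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt >= maxAttempts || !shouldRetry(error)) throw error;
      // Wait 500ms, then 1000ms, then 2000ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```

You could wrap the completions call as `withRetry(() => this.openai.chat.completions.create({ ... }), isTransient)`, where `isTransient` checks the error's status against 429, 500, and 503 before giving up and returning the friendly message.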
## Exercises

### Exercise 1: Add Message Timestamps

Extend the `Message` type to include timestamps:

```typescript
// Your implementation here
interface TimestampedMessage {
  role: MessageRole;
  content: string;
  timestamp: Date;
}

class TimestampedChatbot {
  // TODO: Implement a chatbot that tracks when each message was sent
}
```
#### Solution

```typescript
import 'dotenv/config';
import OpenAI from 'openai';

type MessageRole = 'system' | 'user' | 'assistant';

interface TimestampedMessage {
  role: MessageRole;
  content: string;
  timestamp: Date;
}

class TimestampedChatbot {
  private openai: OpenAI;
  private messages: TimestampedMessage[] = [];

  constructor(systemPrompt: string) {
    this.openai = new OpenAI();
    this.messages.push({
      role: 'system',
      content: systemPrompt,
      timestamp: new Date(),
    });
  }

  async chat(userMessage: string): Promise<string> {
    this.messages.push({
      role: 'user',
      content: userMessage,
      timestamp: new Date(),
    });

    // Convert to API format (without timestamps)
    const apiMessages = this.messages.map((m) => ({
      role: m.role,
      content: m.content,
    }));

    const response = await this.openai.chat.completions.create({
      model: 'gpt-4o-mini',
      messages: apiMessages,
    });

    const content = response.choices[0].message.content || '';
    this.messages.push({
      role: 'assistant',
      content,
      timestamp: new Date(),
    });

    return content;
  }

  getHistory(): TimestampedMessage[] {
    return [...this.messages];
  }

  getConversationDuration(): number {
    if (this.messages.length < 2) return 0;
    const first = this.messages[0].timestamp;
    const last = this.messages[this.messages.length - 1].timestamp;
    return last.getTime() - first.getTime();
  }
}

// Test
async function main() {
  const bot = new TimestampedChatbot('You are helpful.');

  await bot.chat('Hello!');
  await new Promise((r) => setTimeout(r, 1000)); // Wait 1 second
  await bot.chat('How are you?');

  console.log('History:', bot.getHistory());
  console.log('Duration:', bot.getConversationDuration(), 'ms');
}

main();
```
### Exercise 2: Multiple Providers

Create a chatbot that can switch between OpenAI and Anthropic:

```typescript
// Your implementation here
class MultiProviderChatbot {
  constructor(provider: 'openai' | 'anthropic') {
    // TODO: Initialize the correct client
  }

  async chat(message: string): Promise<string> {
    // TODO: Route to the correct provider
  }
}
```
#### Solution

```typescript
import Anthropic from '@anthropic-ai/sdk';
import 'dotenv/config';
import OpenAI from 'openai';

type Provider = 'openai' | 'anthropic';

interface Message {
  role: 'user' | 'assistant';
  content: string;
}

class MultiProviderChatbot {
  private provider: Provider;
  private openai?: OpenAI;
  private anthropic?: Anthropic;
  private messages: Message[] = [];
  private systemPrompt: string;

  constructor(provider: Provider, systemPrompt: string = 'You are a helpful assistant.') {
    this.provider = provider;
    this.systemPrompt = systemPrompt;

    if (provider === 'openai') {
      this.openai = new OpenAI();
    } else {
      this.anthropic = new Anthropic();
    }
  }

  async chat(userMessage: string): Promise<string> {
    this.messages.push({ role: 'user', content: userMessage });

    let response: string;
    if (this.provider === 'openai' && this.openai) {
      response = await this.chatOpenAI();
    } else if (this.provider === 'anthropic' && this.anthropic) {
      response = await this.chatAnthropic();
    } else {
      throw new Error('Provider not initialized');
    }

    this.messages.push({ role: 'assistant', content: response });
    return response;
  }

  private async chatOpenAI(): Promise<string> {
    const messages = [{ role: 'system' as const, content: this.systemPrompt }, ...this.messages];
    const response = await this.openai!.chat.completions.create({
      model: 'gpt-4o-mini',
      messages,
    });
    return response.choices[0].message.content || '';
  }

  private async chatAnthropic(): Promise<string> {
    const response = await this.anthropic!.messages.create({
      model: 'claude-3-5-sonnet-20241022',
      max_tokens: 1000,
      system: this.systemPrompt,
      messages: this.messages,
    });
    const block = response.content[0];
    return block.type === 'text' ? block.text : '';
  }

  getProvider(): Provider {
    return this.provider;
  }
}

// Test
async function main() {
  // Test with OpenAI
  const openaiBot = new MultiProviderChatbot('openai', 'You are concise.');
  console.log('OpenAI:', await openaiBot.chat('What is 2+2?'));

  // Test with Anthropic
  const anthropicBot = new MultiProviderChatbot('anthropic', 'You are concise.');
  console.log('Anthropic:', await anthropicBot.chat('What is 2+2?'));
}

main();
```
### Exercise 3: Conversation Statistics

Add a method to get conversation statistics:

```typescript
// Your implementation here
interface ConversationStats {
  messageCount: number;
  userMessageCount: number;
  assistantMessageCount: number;
  totalTokensUsed: number;
  averageResponseLength: number;
}

class StatsChatbot {
  getStats(): ConversationStats {
    // TODO: Implement
  }
}
```
#### Solution

```typescript
import 'dotenv/config';
import OpenAI from 'openai';

interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
  tokens?: number;
}

interface ConversationStats {
  messageCount: number;
  userMessageCount: number;
  assistantMessageCount: number;
  totalTokensUsed: number;
  averageResponseLength: number;
}

class StatsChatbot {
  private openai: OpenAI;
  private messages: Message[] = [];
  private totalTokens: number = 0;

  constructor(systemPrompt: string) {
    this.openai = new OpenAI();
    this.messages.push({
      role: 'system',
      content: systemPrompt,
    });
  }

  async chat(userMessage: string): Promise<string> {
    this.messages.push({ role: 'user', content: userMessage });

    const response = await this.openai.chat.completions.create({
      model: 'gpt-4o-mini',
      messages: this.messages.map((m) => ({ role: m.role, content: m.content })),
    });

    const content = response.choices[0].message.content || '';
    const tokens = response.usage?.total_tokens || 0;
    this.totalTokens += tokens;

    this.messages.push({
      role: 'assistant',
      content,
      tokens: response.usage?.completion_tokens,
    });

    return content;
  }

  getStats(): ConversationStats {
    const userMessages = this.messages.filter((m) => m.role === 'user');
    const assistantMessages = this.messages.filter((m) => m.role === 'assistant');
    const totalResponseLength = assistantMessages.reduce((sum, m) => sum + m.content.length, 0);
    const averageResponseLength =
      assistantMessages.length > 0 ? totalResponseLength / assistantMessages.length : 0;

    return {
      messageCount: this.messages.length,
      userMessageCount: userMessages.length,
      assistantMessageCount: assistantMessages.length,
      totalTokensUsed: this.totalTokens,
      averageResponseLength: Math.round(averageResponseLength),
    };
  }
}

// Test
async function main() {
  const bot = new StatsChatbot('You are helpful.');

  await bot.chat('What is JavaScript?');
  await bot.chat('What about TypeScript?');
  await bot.chat('Which should I learn first?');

  console.log('Stats:', bot.getStats());
}

main();
```
## Key Takeaways
- Chatbots have memory: They maintain conversation history across multiple turns
- Messages array is the core: System, user, and assistant messages form the conversation
- Architecture matters: Choose in-memory, session-based, or persistent based on your needs
- The conversation loop: Wait for input, process, respond, repeat
- Error handling is essential: Users expect graceful degradation
- Types help: Define clear interfaces for messages, conversations, and responses
## Resources
| Resource | Type | Description |
|---|---|---|
| OpenAI Chat Guide | Documentation | Official chat completions guide |
| Anthropic Messages API | Documentation | Claude messages reference |
| Building LLM Applications | Course | Free course on LLM applications |
| Node.js Readline | Documentation | CLI input handling |
## Next Lesson
You now understand the architecture of a chatbot. In the next lesson, you will dive deeper into managing message history, including handling context limits and implementing conversation memory strategies.