Lesson 6.1: Project Planning
Duration: 60 minutes
Learning Objectives
By the end of this lesson, you will be able to:
- Define clear requirements for an AI application
- Design a modular architecture that supports extensibility
- Set up a professional project structure
- Create configuration management for multiple environments
- Establish patterns for clean, maintainable code
Introduction
Every successful software project starts with planning. For AI applications, planning is especially important because these systems typically involve:
- Multiple AI providers with different APIs
- Complex data flows between components
- The need for flexibility as AI capabilities evolve
- Production concerns such as rate limiting and error handling
This lesson guides you through planning your AI Knowledge Assistant from requirements to implementation-ready architecture.
Defining Requirements
Before writing code, clarify what your application should do. Here are the requirements for our AI Knowledge Assistant:
Functional Requirements
1. Conversational Interface
- Accept user messages via CLI
- Maintain conversation history
- Support multi-turn conversations
2. Knowledge Retrieval (RAG)
- Load documents from a folder
- Generate embeddings for semantic search
- Retrieve relevant context for questions
3. Tool Integration
- Calculator for math operations
- Weather lookup for location-based queries
- Web search for current information
- Note-taking for persistent storage
4. Streaming Responses
- Display responses as they generate
- Show tool usage in real-time
- Handle interruptions gracefully
Non-Functional Requirements
- Modularity: Easy to add new tools or providers
- Type Safety: Full TypeScript coverage
- Error Handling: Graceful degradation on failures
- Configuration: Environment-based settings
- Testability: Components can be tested in isolation
Architecture Design
The architecture follows a layered approach with clear separation of concerns:
┌───────────────────────────────────────────────────────────────┐
│                      PRESENTATION LAYER                       │
│                                                               │
│                   ┌───────────────────────┐                   │
│                   │     CLI Interface     │                   │
│                   └───────────┬───────────┘                   │
│                               │                               │
└───────────────────────────────┼───────────────────────────────┘
                                │
┌───────────────────────────────┼───────────────────────────────┐
│                       APPLICATION LAYER                       │
│                               │                               │
│                   ┌───────────▼───────────┐                   │
│                   │       Assistant       │                   │
│                   │     Orchestrator      │                   │
│                   └───────────┬───────────┘                   │
│         ┌─────────────────────┼─────────────────────┐         │
│         ▼                     ▼                     ▼         │
│  ┌─────────────┐       ┌─────────────┐       ┌─────────────┐  │
│  │Conversation │       │   Context   │       │   Message   │  │
│  │   Manager   │       │   Builder   │       │  Processor  │  │
│  └─────────────┘       └─────────────┘       └─────────────┘  │
└───────────────────────────────┼───────────────────────────────┘
                                │
┌───────────────────────────────┼───────────────────────────────┐
│                       CAPABILITY LAYER                        │
│                               │                               │
│         ┌─────────────────────┼─────────────────────┐         │
│         ▼                     ▼                     ▼         │
│   ┌──────────┐          ┌──────────┐          ┌──────────┐    │
│   │   RAG    │          │  Tools   │          │Streaming │    │
│   │ Service  │          │ Registry │          │ Handler  │    │
│   └──────────┘          └──────────┘          └──────────┘    │
└───────────────────────────────┼───────────────────────────────┘
                                │
┌───────────────────────────────┼───────────────────────────────┐
│                     INFRASTRUCTURE LAYER                      │
│                               │                               │
│         ┌─────────────────────┼─────────────────────┐         │
│         ▼                     ▼                     ▼         │
│   ┌──────────┐          ┌──────────┐          ┌──────────┐    │
│   │  OpenAI  │          │Anthropic │          │  Vector  │    │
│   │ Provider │          │ Provider │          │  Store   │    │
│   └──────────┘          └──────────┘          └──────────┘    │
└───────────────────────────────────────────────────────────────┘
Layer Responsibilities
Presentation Layer
- Handles user input/output
- Formats responses for display
- Manages the interaction loop
Application Layer
- Orchestrates the conversation flow
- Decides when to use RAG vs tools
- Manages conversation state
Capability Layer
- Implements RAG retrieval
- Provides tool execution
- Handles response streaming
Infrastructure Layer
- Communicates with AI providers
- Manages vector storage
- Handles external API calls
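Before writing any files, it helps to see how these layers will eventually talk to each other. The sketch below is hypothetical: Assistant and OpenAIProvider are placeholders for classes you will build in later lessons, and only the direction of the dependencies matters at this stage (each layer depends only on the layer beneath it).

// Hypothetical composition root (src/index.ts) — names are placeholders
// for the real classes built in later lessons.
import { config } from './core/config.js';
import { OpenAIProvider } from './providers/openai.js'; // infrastructure layer
import { Assistant } from './core/assistant.js';        // application layer

const provider = new OpenAIProvider(config.openaiApiKey);
const assistant = new Assistant({ provider });

// Presentation layer: accept input, delegate, display the result.
const response = await assistant.chat('What is in my documents?');
console.log(response.content);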
Project Structure
Create a well-organized folder structure:
mkdir -p ai-assistant/src/{core,rag,tools,providers,utils}
mkdir -p ai-assistant/{documents,tests}
cd ai-assistant
Here is what each folder contains:
ai-assistant/
├── src/
│   ├── core/                # Application logic
│   │   ├── assistant.ts     # Main orchestrator
│   │   ├── conversation.ts  # Conversation management
│   │   ├── config.ts        # Configuration
│   │   └── types.ts         # Shared types
│   │
│   ├── rag/                 # RAG implementation
│   │   ├── embeddings.ts    # Generate embeddings
│   │   ├── vector-store.ts  # Store and query vectors
│   │   ├── retriever.ts     # Retrieve relevant docs
│   │   └── loader.ts        # Load documents
│   │
│   ├── tools/               # Tool implementations
│   │   ├── index.ts         # Tool registry
│   │   ├── calculator.ts    # Math operations
│   │   ├── weather.ts       # Weather lookup
│   │   ├── web-search.ts    # Web search
│   │   └── notes.ts         # Note taking
│   │
│   ├── providers/           # AI provider integrations
│   │   ├── base.ts          # Provider interface
│   │   ├── openai.ts        # OpenAI implementation
│   │   └── anthropic.ts     # Anthropic implementation
│   │
│   ├── utils/               # Utilities
│   │   ├── logger.ts        # Logging
│   │   ├── errors.ts        # Custom errors
│   │   └── helpers.ts       # Helper functions
│   │
│   └── index.ts             # Entry point
│
├── documents/               # Documents for RAG
│   ├── sample.md            # Sample document
│   └── README.md            # Documents guide
│
├── tests/                   # Test files
│   ├── assistant.test.ts
│   └── tools.test.ts
│
├── .env                     # Environment variables
├── .env.example             # Example env file
├── .gitignore
├── package.json
├── tsconfig.json
└── README.md
Configuration Management
Configuration should be centralized and type-safe. Create src/core/config.ts:
import dotenv from 'dotenv';
import { z } from 'zod';

// Load environment variables
dotenv.config();

// Define configuration schema
const ConfigSchema = z.object({
  // AI Providers
  openaiApiKey: z.string().min(1, 'OpenAI API key is required'),
  anthropicApiKey: z.string().optional(),

  // Model settings
  defaultProvider: z.enum(['openai', 'anthropic']).default('openai'),
  defaultModel: z.string().default('gpt-4o'),
  maxTokens: z.number().default(4096),
  temperature: z.number().min(0).max(2).default(0.7),

  // RAG settings
  embeddingModel: z.string().default('text-embedding-3-small'),
  chunkSize: z.number().default(1000),
  chunkOverlap: z.number().default(200),
  retrievalTopK: z.number().default(3),

  // Application settings
  documentsPath: z.string().default('./documents'),
  logLevel: z.enum(['debug', 'info', 'warn', 'error']).default('info'),

  // Optional API keys for tools
  weatherApiKey: z.string().optional(),
});

// Infer type from schema
export type Config = z.infer<typeof ConfigSchema>;

// Parse and validate configuration
function loadConfig(): Config {
  const rawConfig = {
    openaiApiKey: process.env.OPENAI_API_KEY,
    anthropicApiKey: process.env.ANTHROPIC_API_KEY,
    defaultProvider: process.env.DEFAULT_PROVIDER,
    defaultModel: process.env.DEFAULT_MODEL,
    maxTokens: process.env.MAX_TOKENS ? parseInt(process.env.MAX_TOKENS, 10) : undefined,
    temperature: process.env.TEMPERATURE ? parseFloat(process.env.TEMPERATURE) : undefined,
    embeddingModel: process.env.EMBEDDING_MODEL,
    chunkSize: process.env.CHUNK_SIZE ? parseInt(process.env.CHUNK_SIZE, 10) : undefined,
    chunkOverlap: process.env.CHUNK_OVERLAP ? parseInt(process.env.CHUNK_OVERLAP, 10) : undefined,
    retrievalTopK: process.env.RETRIEVAL_TOP_K
      ? parseInt(process.env.RETRIEVAL_TOP_K, 10)
      : undefined,
    documentsPath: process.env.DOCUMENTS_PATH,
    logLevel: process.env.LOG_LEVEL,
    weatherApiKey: process.env.WEATHER_API_KEY,
  };

  const result = ConfigSchema.safeParse(rawConfig);

  if (!result.success) {
    const errors = result.error.issues
      .map((issue) => `  - ${issue.path.join('.')}: ${issue.message}`)
      .join('\n');
    throw new Error(`Configuration validation failed:\n${errors}`);
  }

  return result.data;
}

// Export singleton config instance
export const config = loadConfig();
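Because loadConfig() runs when the module is first imported, a missing or malformed setting fails fast at startup with a readable message instead of surfacing mid-conversation. Every other module can then rely on a fully typed object:

// Values are typed by the schema: defaultProvider is 'openai' | 'anthropic',
// and retrievalTopK is a number that falls back to 3 when the env var is unset.
import { config } from './core/config.js';

console.log(`Provider: ${config.defaultProvider}`);
console.log(`Top-K:    ${config.retrievalTopK}`);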
Core Types
Define shared types in src/core/types.ts:
// Message types
export interface Message {
  role: 'system' | 'user' | 'assistant' | 'tool';
  content: string;
  name?: string;
  toolCallId?: string;
}

export interface ToolCall {
  id: string;
  name: string;
  arguments: Record<string, unknown>;
}

export interface AssistantMessage extends Message {
  role: 'assistant';
  toolCalls?: ToolCall[];
}

// Tool types
export interface ToolDefinition {
  name: string;
  description: string;
  parameters: {
    type: 'object';
    properties: Record<string, unknown>;
    required?: string[];
  };
}

export interface Tool extends ToolDefinition {
  execute: (args: Record<string, unknown>) => Promise<string>;
}

// RAG types
export interface Document {
  id: string;
  content: string;
  metadata: Record<string, unknown>;
}

export interface RetrievalResult {
  document: Document;
  score: number;
}

// Conversation types
export interface Conversation {
  id: string;
  messages: Message[];
  createdAt: Date;
  updatedAt: Date;
  metadata?: Record<string, unknown>;
}

// Provider types
export interface ProviderOptions {
  model: string;
  maxTokens?: number;
  temperature?: number;
  stream?: boolean;
}

export interface ProviderResponse {
  content: string;
  toolCalls?: ToolCall[];
  usage?: {
    promptTokens: number;
    completionTokens: number;
    totalTokens: number;
  };
}

export interface StreamChunk {
  type: 'text' | 'tool_call' | 'done';
  content?: string;
  toolCall?: ToolCall;
}

// Provider interface
export interface AIProvider {
  chat(
    messages: Message[],
    tools?: ToolDefinition[],
    options?: ProviderOptions
  ): Promise<ProviderResponse>;

  chatStream(
    messages: Message[],
    tools?: ToolDefinition[],
    options?: ProviderOptions
  ): AsyncIterable<StreamChunk>;
}

// Assistant types
export interface AssistantOptions {
  provider?: 'openai' | 'anthropic';
  model?: string;
  systemPrompt?: string;
  tools?: Tool[];
  enableRag?: boolean;
}

export interface AssistantResponse {
  content: string;
  toolsUsed?: string[];
  documentsRetrieved?: number;
}
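To see how these pieces fit together, here is a hypothetical stub that satisfies the Tool contract. The real calculator, implemented in a later lesson, will actually evaluate the expression; this one only demonstrates the shape.

import type { Tool } from '../core/types.js';

// Stub tool: demonstrates the shape of the Tool interface only.
export const calculatorTool: Tool = {
  name: 'calculator',
  description: 'Evaluate a basic arithmetic expression',
  parameters: {
    type: 'object',
    properties: {
      expression: { type: 'string', description: 'An expression such as "2 + 2 * 10"' },
    },
    required: ['expression'],
  },
  async execute(args) {
    // A real implementation would safely parse and evaluate the expression.
    return `Received expression: ${String(args.expression)}`;
  },
};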
Setting Up the Project
Now let's initialize the project with all necessary files.
1. Initialize npm and Install Dependencies
npm init -y
npm install openai @anthropic-ai/sdk dotenv zod
npm install langchain @langchain/openai @langchain/community chromadb
npm install --save-dev typescript tsx @types/node
2. Create tsconfig.json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "outDir": "dist",
    "rootDir": "src",
    "declaration": true,
    "sourceMap": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist", "tests"]
}
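One consequence of "module": "NodeNext" is that relative imports in TypeScript source files must spell out the .js extension the compiled output will have. That is why files later in this lesson import like this:

import { config } from '../core/config.js';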
3. Create package.json Scripts
Update package.json:
{
  "name": "ai-assistant",
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "start": "tsx src/index.ts",
    "dev": "tsx watch src/index.ts",
    "build": "tsc",
    "test": "tsx --test tests/**/*.test.ts",
    "lint": "tsc --noEmit"
  },
  "dependencies": {
    "@anthropic-ai/sdk": "^0.24.0",
    "@langchain/community": "^0.2.0",
    "@langchain/openai": "^0.2.0",
    "chromadb": "^1.8.0",
    "dotenv": "^16.4.0",
    "langchain": "^0.2.0",
    "openai": "^4.52.0",
    "zod": "^3.23.0"
  },
  "devDependencies": {
    "@types/node": "^20.14.0",
    "tsx": "^4.16.0",
    "typescript": "^5.5.0"
  }
}
4. Create .env.example
# Required: AI Provider Keys
OPENAI_API_KEY=sk-proj-your-openai-key-here
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key-here
# Optional: Provider Settings
DEFAULT_PROVIDER=openai
DEFAULT_MODEL=gpt-4o
MAX_TOKENS=4096
TEMPERATURE=0.7
# Optional: RAG Settings
EMBEDDING_MODEL=text-embedding-3-small
CHUNK_SIZE=1000
CHUNK_OVERLAP=200
RETRIEVAL_TOP_K=3
DOCUMENTS_PATH=./documents
# Optional: Application Settings
LOG_LEVEL=info
# Optional: Tool API Keys
WEATHER_API_KEY=your-weather-api-key
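Copy this file to .env and fill in real values before running the app; the config module built earlier will refuse to start if a required key such as OPENAI_API_KEY is missing:

cp .env.example .env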
5. Create .gitignore
# Dependencies
node_modules/
# Build output
dist/
# Environment
.env
.env.local
# IDE
.vscode/
.idea/
# OS
.DS_Store
Thumbs.db
# Logs
*.log
npm-debug.log*
# Testing
coverage/
# Vector store data
chroma-data/
Logger Utility
Create a simple logger in src/utils/logger.ts:
import { config } from '../core/config.js';

type LogLevel = 'debug' | 'info' | 'warn' | 'error';

const LOG_LEVELS: Record<LogLevel, number> = {
  debug: 0,
  info: 1,
  warn: 2,
  error: 3,
};

class Logger {
  private level: LogLevel;
  private name: string;

  constructor(name: string) {
    this.name = name;
    this.level = config.logLevel;
  }

  private shouldLog(level: LogLevel): boolean {
    return LOG_LEVELS[level] >= LOG_LEVELS[this.level];
  }

  private format(level: LogLevel, message: string): string {
    const timestamp = new Date().toISOString();
    return `[${timestamp}] [${level.toUpperCase()}] [${this.name}] ${message}`;
  }

  debug(message: string, ...args: unknown[]): void {
    if (this.shouldLog('debug')) {
      console.debug(this.format('debug', message), ...args);
    }
  }

  info(message: string, ...args: unknown[]): void {
    if (this.shouldLog('info')) {
      console.info(this.format('info', message), ...args);
    }
  }

  warn(message: string, ...args: unknown[]): void {
    if (this.shouldLog('warn')) {
      console.warn(this.format('warn', message), ...args);
    }
  }

  error(message: string, ...args: unknown[]): void {
    if (this.shouldLog('error')) {
      console.error(this.format('error', message), ...args);
    }
  }
}

export function createLogger(name: string): Logger {
  return new Logger(name);
}
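Each module creates its own named logger, so every log line identifies its source. Lowering LOG_LEVEL to debug turns on verbose output without touching any call sites:

import { createLogger } from './utils/logger.js';

const logger = createLogger('assistant');
logger.info('Assistant starting');                    // shown at the default 'info' level
logger.debug('Raw provider payload', { foo: 'bar' }); // shown only when LOG_LEVEL=debug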
Error Handling
Create custom errors in src/utils/errors.ts:
export class AssistantError extends Error {
  constructor(
    message: string,
    public code: string,
    public cause?: Error
  ) {
    super(message);
    this.name = 'AssistantError';
  }
}

export class ProviderError extends AssistantError {
  constructor(
    message: string,
    public provider: string,
    cause?: Error
  ) {
    super(message, 'PROVIDER_ERROR', cause);
    this.name = 'ProviderError';
  }
}

export class ToolError extends AssistantError {
  constructor(
    message: string,
    public toolName: string,
    cause?: Error
  ) {
    super(message, 'TOOL_ERROR', cause);
    this.name = 'ToolError';
  }
}

export class RagError extends AssistantError {
  constructor(message: string, cause?: Error) {
    super(message, 'RAG_ERROR', cause);
    this.name = 'RagError';
  }
}

export class ConfigError extends AssistantError {
  constructor(message: string, cause?: Error) {
    super(message, 'CONFIG_ERROR', cause);
    this.name = 'ConfigError';
  }
}

// Error handler utility
export function handleError(error: unknown): AssistantError {
  if (error instanceof AssistantError) {
    return error;
  }
  if (error instanceof Error) {
    return new AssistantError(error.message, 'UNKNOWN_ERROR', error);
  }
  return new AssistantError(String(error), 'UNKNOWN_ERROR');
}
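With this in place, every catch block can funnel unknown values through handleError and get back an AssistantError with a stable code to branch on:

import { ToolError, handleError } from './utils/errors.js';

try {
  throw new ToolError('Division by zero', 'calculator');
} catch (err) {
  const error = handleError(err);
  // Prints: [TOOL_ERROR] Division by zero
  console.error(`[${error.code}] ${error.message}`);
}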
Sample Document
Create a sample document for RAG in documents/sample.md:
# AI Assistant Documentation
## Overview
This AI assistant helps users with various tasks including:
- Answering questions using document knowledge
- Performing calculations
- Looking up weather information
- Taking and retrieving notes
## Features
### Conversation Memory
The assistant remembers the context of your conversation. You can ask follow-up questions without repeating information.
### Document Knowledge
The assistant can search through loaded documents to find relevant information. Ask questions about any topic covered in the documents.
### Tool Usage
The assistant can use these tools:
1. **Calculator**: For mathematical operations
2. **Weather**: For current weather information
3. **Web Search**: For looking up current information online
4. **Notes**: For saving and retrieving information
## Best Practices
When asking questions:
- Be specific about what you want to know
- Provide context when relevant
- Ask follow-up questions to clarify
## Limitations
- Web search requires an external search API
- Weather data requires an API key
- Document knowledge is limited to loaded files
Key Takeaways
- Start with requirements: Clear requirements guide architecture decisions
- Layer your architecture: Separation of concerns makes code maintainable
- Type everything: TypeScript catches errors early and improves documentation
- Centralize configuration: One source of truth for settings
- Plan for errors: Custom error types make debugging easier
Practice Exercise
- Create the complete folder structure for the project
- Implement the configuration module with validation
- Create the types file with all necessary interfaces
- Set up the logger and error utilities
- Add your own sample document for RAG
Next Steps
With the foundation in place, you are ready to build the core assistant functionality. In the next lesson, you will implement:
- The main Assistant class
- Conversation management
- Message processing
- Provider integration