Lesson 6.2: Core Functionality
Duration: 90 minutes
Learning Objectives
By the end of this lesson, you will be able to:
- Build an AI provider abstraction layer
- Implement conversation management with history
- Create the main assistant orchestrator
- Handle streaming responses
- Process tool calls within the conversation flow
Introduction
The core of your AI assistant handles three critical tasks:
- Provider Communication: Sending messages to AI models and receiving responses
- Conversation Management: Maintaining context across multiple turns
- Orchestration: Coordinating between user input, AI responses, and tool execution
In this lesson, you build these foundational components that everything else depends on.
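All of the code here builds on the shared types from src/core/types.ts, created in the previous lesson. As a reading aid, the shapes this lesson relies on look roughly like the sketch below. Field names are inferred from how the code in this lesson uses them, so treat it as a reference, not the definitive file.

// Reference sketch of the shared types this lesson assumes (src/core/types.ts).
// Field names are inferred from usage in the code below.
export interface Message {
  role: 'system' | 'user' | 'assistant' | 'tool';
  content: string;
  toolCalls?: ToolCall[]; // on assistant messages that requested tools
  toolCallId?: string; // on tool-result messages
  name?: string; // tool name, for tool-result messages
}

export interface ToolCall {
  id: string;
  name: string;
  arguments: Record<string, unknown>;
}

export interface ToolDefinition {
  name: string;
  description: string;
  parameters: Record<string, unknown>; // JSON Schema object
}

export interface Tool extends ToolDefinition {
  execute(args: Record<string, unknown>): Promise<string>;
}

export interface ProviderOptions {
  model?: string;
  maxTokens?: number;
  temperature?: number;
}

export interface ProviderResponse {
  content: string;
  toolCalls?: ToolCall[];
  usage?: { promptTokens: number; completionTokens: number; totalTokens: number };
}

export type StreamChunk =
  | { type: 'text'; content: string }
  | { type: 'tool_call'; toolCall: ToolCall }
  | { type: 'done' };

export interface Conversation {
  id: string;
  messages: Message[];
  createdAt: Date;
  updatedAt: Date;
}

export interface AssistantOptions {
  provider?: 'openai' | 'anthropic';
  systemPrompt?: string;
  enableRag?: boolean;
  tools?: Tool[];
}

export interface AssistantResponse {
  content: string;
  toolsUsed?: string[];
}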
Provider Abstraction
First, create a base interface that all providers implement. This allows switching providers without changing application code.
Provider Interface
Create src/providers/base.ts:
import type {
  Message,
  ProviderOptions,
  ProviderResponse,
  StreamChunk,
  ToolDefinition,
} from '../core/types.js';

export interface AIProvider {
  readonly name: string;

  chat(
    messages: Message[],
    tools?: ToolDefinition[],
    options?: ProviderOptions
  ): Promise<ProviderResponse>;

  chatStream(
    messages: Message[],
    tools?: ToolDefinition[],
    options?: ProviderOptions
  ): AsyncIterable<StreamChunk>;
}

export abstract class BaseProvider implements AIProvider {
  abstract readonly name: string;

  abstract chat(
    messages: Message[],
    tools?: ToolDefinition[],
    options?: ProviderOptions
  ): Promise<ProviderResponse>;

  abstract chatStream(
    messages: Message[],
    tools?: ToolDefinition[],
    options?: ProviderOptions
  ): AsyncIterable<StreamChunk>;

  protected validateMessages(messages: Message[]): void {
    if (messages.length === 0) {
      throw new Error('Messages array cannot be empty');
    }
  }
}
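One payoff of this abstraction is testability: anything downstream of the interface can be exercised without network calls by swapping in a fake provider. A minimal sketch, assuming a hypothetical test helper file (the file name and canned reply are illustrative):

// src/providers/mock.ts (hypothetical test helper, not part of the lesson's file set)
import type { Message, ProviderResponse, StreamChunk } from '../core/types.js';
import { BaseProvider } from './base.js';

export class MockProvider extends BaseProvider {
  readonly name = 'mock';

  async chat(messages: Message[]): Promise<ProviderResponse> {
    this.validateMessages(messages);
    // Echo the last message back so tests can assert on conversation flow
    return { content: `echo: ${messages[messages.length - 1].content}` };
  }

  async *chatStream(messages: Message[]): AsyncIterable<StreamChunk> {
    this.validateMessages(messages);
    // Emit the reply word by word to mimic token streaming
    for (const word of `echo: ${messages[messages.length - 1].content}`.split(' ')) {
      yield { type: 'text', content: `${word} ` };
    }
    yield { type: 'done' };
  }
}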
OpenAI Provider
Create src/providers/openai.ts:
import OpenAI from 'openai';
import { config } from '../core/config.js';
import type {
  Message,
  ProviderOptions,
  ProviderResponse,
  StreamChunk,
  ToolCall,
  ToolDefinition,
} from '../core/types.js';
import { ProviderError } from '../utils/errors.js';
import { createLogger } from '../utils/logger.js';
import { BaseProvider } from './base.js';

const logger = createLogger('OpenAIProvider');

export class OpenAIProvider extends BaseProvider {
  readonly name = 'openai';
  private client: OpenAI;

  constructor() {
    super();
    this.client = new OpenAI({
      apiKey: config.openaiApiKey,
    });
  }

  async chat(
    messages: Message[],
    tools?: ToolDefinition[],
    options?: ProviderOptions
  ): Promise<ProviderResponse> {
    this.validateMessages(messages);
    const model = options?.model ?? config.defaultModel;
    const maxTokens = options?.maxTokens ?? config.maxTokens;
    const temperature = options?.temperature ?? config.temperature;

    logger.debug(`Sending request to ${model}`, {
      messageCount: messages.length,
      hasTools: !!tools?.length,
    });

    try {
      const response = await this.client.chat.completions.create({
        model,
        messages: this.formatMessages(messages),
        tools: tools ? this.formatTools(tools) : undefined,
        max_tokens: maxTokens,
        temperature,
      });

      const choice = response.choices[0];
      const message = choice.message;

      return {
        content: message.content ?? '',
        toolCalls: message.tool_calls?.map((tc) => ({
          id: tc.id,
          name: tc.function.name,
          arguments: JSON.parse(tc.function.arguments),
        })),
        usage: response.usage
          ? {
              promptTokens: response.usage.prompt_tokens,
              completionTokens: response.usage.completion_tokens,
              totalTokens: response.usage.total_tokens,
            }
          : undefined,
      };
    } catch (error) {
      logger.error('OpenAI API error', error);
      throw new ProviderError(
        `OpenAI API request failed: ${error instanceof Error ? error.message : 'Unknown error'}`,
        this.name,
        error instanceof Error ? error : undefined
      );
    }
  }

  async *chatStream(
    messages: Message[],
    tools?: ToolDefinition[],
    options?: ProviderOptions
  ): AsyncIterable<StreamChunk> {
    this.validateMessages(messages);
    const model = options?.model ?? config.defaultModel;
    const maxTokens = options?.maxTokens ?? config.maxTokens;
    const temperature = options?.temperature ?? config.temperature;

    logger.debug(`Starting stream from ${model}`);

    try {
      const stream = await this.client.chat.completions.create({
        model,
        messages: this.formatMessages(messages),
        tools: tools ? this.formatTools(tools) : undefined,
        max_tokens: maxTokens,
        temperature,
        stream: true,
      });

      let currentToolCall: Partial<ToolCall> | null = null;
      let toolCallArgs = '';

      for await (const chunk of stream) {
        const delta = chunk.choices[0]?.delta;
        if (!delta) continue;

        // Handle text content
        if (delta.content) {
          yield { type: 'text', content: delta.content };
        }

        // Handle tool calls
        if (delta.tool_calls) {
          for (const tc of delta.tool_calls) {
            if (tc.id) {
              // New tool call starting
              if (currentToolCall?.id) {
                // Yield previous tool call
                yield {
                  type: 'tool_call',
                  toolCall: {
                    id: currentToolCall.id,
                    name: currentToolCall.name!,
                    arguments: JSON.parse(toolCallArgs || '{}'),
                  },
                };
              }
              currentToolCall = { id: tc.id, name: tc.function?.name };
              toolCallArgs = tc.function?.arguments ?? '';
            } else if (tc.function?.arguments) {
              toolCallArgs += tc.function.arguments;
            }
          }
        }
      }

      // Yield final tool call if any
      if (currentToolCall?.id) {
        yield {
          type: 'tool_call',
          toolCall: {
            id: currentToolCall.id,
            name: currentToolCall.name!,
            arguments: JSON.parse(toolCallArgs || '{}'),
          },
        };
      }

      yield { type: 'done' };
    } catch (error) {
      logger.error('OpenAI streaming error', error);
      throw new ProviderError(
        `OpenAI stream failed: ${error instanceof Error ? error.message : 'Unknown error'}`,
        this.name,
        error instanceof Error ? error : undefined
      );
    }
  }
  private formatMessages(messages: Message[]): OpenAI.Chat.ChatCompletionMessageParam[] {
    return messages.map((msg) => {
      if (msg.role === 'tool') {
        return {
          role: 'tool' as const,
          content: msg.content,
          tool_call_id: msg.toolCallId!,
        };
      }
      if (msg.role === 'assistant' && msg.toolCalls?.length) {
        // OpenAI rejects tool-result messages unless the preceding assistant
        // message replays the tool_calls that produced them
        return {
          role: 'assistant' as const,
          content: msg.content || null,
          tool_calls: msg.toolCalls.map((tc) => ({
            id: tc.id,
            type: 'function' as const,
            function: { name: tc.name, arguments: JSON.stringify(tc.arguments) },
          })),
        };
      }
      return {
        role: msg.role as 'system' | 'user' | 'assistant',
        content: msg.content,
      };
    });
  }
  private formatTools(tools: ToolDefinition[]): OpenAI.Chat.ChatCompletionTool[] {
    return tools.map((tool) => ({
      type: 'function' as const,
      function: {
        name: tool.name,
        description: tool.description,
        parameters: tool.parameters,
      },
    }));
  }
}
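The trickiest part of chatStream is that OpenAI streams a tool call's arguments as raw JSON fragments spread across many deltas; only the first delta for a call carries its id and name. That is why the code buffers fragments in toolCallArgs and only parses once a call is complete. The delta sequence looks roughly like this (field values are made up for illustration):

// First delta of a tool call: carries id and name, arguments start empty
// { tool_calls: [{ index: 0, id: 'call_abc123', function: { name: 'get_weather', arguments: '' } }] }
// Follow-up deltas: argument fragments only, no id
// { tool_calls: [{ index: 0, function: { arguments: '{"city"' } }] }
// { tool_calls: [{ index: 0, function: { arguments: ':"Paris"}' } }] }
// After buffering: JSON.parse('{"city":"Paris"}') -> { city: 'Paris' }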
Anthropic Provider
Create src/providers/anthropic.ts:
import Anthropic from '@anthropic-ai/sdk';
import { config } from '../core/config.js';
import type {
  Message,
  ProviderOptions,
  ProviderResponse,
  StreamChunk,
  ToolDefinition,
} from '../core/types.js';
import { ProviderError } from '../utils/errors.js';
import { createLogger } from '../utils/logger.js';
import { BaseProvider } from './base.js';

const logger = createLogger('AnthropicProvider');

export class AnthropicProvider extends BaseProvider {
  readonly name = 'anthropic';
  private client: Anthropic;

  constructor() {
    super();
    if (!config.anthropicApiKey) {
      throw new ProviderError('Anthropic API key is required', this.name);
    }
    this.client = new Anthropic({
      apiKey: config.anthropicApiKey,
    });
  }

  async chat(
    messages: Message[],
    tools?: ToolDefinition[],
    options?: ProviderOptions
  ): Promise<ProviderResponse> {
    this.validateMessages(messages);
    const model = options?.model ?? 'claude-sonnet-4-20250514';
    const maxTokens = options?.maxTokens ?? config.maxTokens;

    logger.debug(`Sending request to ${model}`);

    // Extract system message
    const systemMessage = messages.find((m) => m.role === 'system');
    const otherMessages = messages.filter((m) => m.role !== 'system');

    try {
      const response = await this.client.messages.create({
        model,
        max_tokens: maxTokens,
        system: systemMessage?.content,
        messages: this.formatMessages(otherMessages),
        tools: tools ? this.formatTools(tools) : undefined,
      });

      let content = '';
      const toolCalls: ProviderResponse['toolCalls'] = [];

      for (const block of response.content) {
        if (block.type === 'text') {
          content += block.text;
        } else if (block.type === 'tool_use') {
          toolCalls.push({
            id: block.id,
            name: block.name,
            arguments: block.input as Record<string, unknown>,
          });
        }
      }

      return {
        content,
        toolCalls: toolCalls.length > 0 ? toolCalls : undefined,
        usage: {
          promptTokens: response.usage.input_tokens,
          completionTokens: response.usage.output_tokens,
          totalTokens: response.usage.input_tokens + response.usage.output_tokens,
        },
      };
    } catch (error) {
      logger.error('Anthropic API error', error);
      throw new ProviderError(
        `Anthropic API request failed: ${error instanceof Error ? error.message : 'Unknown error'}`,
        this.name,
        error instanceof Error ? error : undefined
      );
    }
  }

  async *chatStream(
    messages: Message[],
    tools?: ToolDefinition[],
    options?: ProviderOptions
  ): AsyncIterable<StreamChunk> {
    this.validateMessages(messages);
    const model = options?.model ?? 'claude-sonnet-4-20250514';
    const maxTokens = options?.maxTokens ?? config.maxTokens;

    logger.debug(`Starting stream from ${model}`);

    const systemMessage = messages.find((m) => m.role === 'system');
    const otherMessages = messages.filter((m) => m.role !== 'system');

    try {
      const stream = this.client.messages.stream({
        model,
        max_tokens: maxTokens,
        system: systemMessage?.content,
        messages: this.formatMessages(otherMessages),
        tools: tools ? this.formatTools(tools) : undefined,
      });

      let currentToolId = '';
      let currentToolName = '';
      let currentToolArgs = '';

      for await (const event of stream) {
        if (event.type === 'content_block_start') {
          if (event.content_block.type === 'tool_use') {
            currentToolId = event.content_block.id;
            currentToolName = event.content_block.name;
            currentToolArgs = '';
          }
        } else if (event.type === 'content_block_delta') {
          if (event.delta.type === 'text_delta') {
            yield { type: 'text', content: event.delta.text };
          } else if (event.delta.type === 'input_json_delta') {
            currentToolArgs += event.delta.partial_json;
          }
        } else if (event.type === 'content_block_stop') {
          if (currentToolId) {
            yield {
              type: 'tool_call',
              toolCall: {
                id: currentToolId,
                name: currentToolName,
                arguments: JSON.parse(currentToolArgs || '{}'),
              },
            };
            currentToolId = '';
            currentToolName = '';
            currentToolArgs = '';
          }
        }
      }

      yield { type: 'done' };
    } catch (error) {
      logger.error('Anthropic streaming error', error);
      throw new ProviderError(
        `Anthropic stream failed: ${error instanceof Error ? error.message : 'Unknown error'}`,
        this.name,
        error instanceof Error ? error : undefined
      );
    }
  }
  private formatMessages(messages: Message[]): Anthropic.MessageParam[] {
    return messages.map((msg) => {
      if (msg.role === 'tool') {
        return {
          role: 'user' as const,
          content: [
            {
              type: 'tool_result' as const,
              tool_use_id: msg.toolCallId!,
              content: msg.content,
            },
          ],
        };
      }
      if (msg.role === 'assistant') {
        if (msg.toolCalls?.length) {
          // Anthropic requires the tool_use blocks that a tool_result answers
          // to appear in the preceding assistant turn
          return {
            role: 'assistant' as const,
            content: [
              ...(msg.content ? [{ type: 'text' as const, text: msg.content }] : []),
              ...msg.toolCalls.map((tc) => ({
                type: 'tool_use' as const,
                id: tc.id,
                name: tc.name,
                input: tc.arguments,
              })),
            ],
          };
        }
        return {
          role: 'assistant' as const,
          content: msg.content,
        };
      }
      return {
        role: 'user' as const,
        content: msg.content,
      };
    });
  }
  private formatTools(tools: ToolDefinition[]): Anthropic.Tool[] {
    return tools.map((tool) => ({
      name: tool.name,
      description: tool.description,
      input_schema: tool.parameters as Anthropic.Tool.InputSchema,
    }));
  }
}
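Note how the two providers differ on tool results: OpenAI has a dedicated tool role, while Anthropic expects results inside an ordinary user turn that answers a tool_use block from the previous assistant turn. One round trip looks roughly like this (ids and values are illustrative):

// Assistant turn (from the model): requests a tool
// { role: 'assistant', content: [{ type: 'tool_use', id: 'toolu_01', name: 'get_weather', input: { city: 'Paris' } }] }
// Next user turn (from your app): returns the result
// { role: 'user', content: [{ type: 'tool_result', tool_use_id: 'toolu_01', content: '18°C and sunny' }] }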
Provider Factory
Create src/providers/index.ts:
import { config } from '../core/config.js';
import { AnthropicProvider } from './anthropic.js';
import type { AIProvider } from './base.js';
import { OpenAIProvider } from './openai.js';

export type ProviderName = 'openai' | 'anthropic';

const providers = new Map<ProviderName, AIProvider>();

export function getProvider(name?: ProviderName): AIProvider {
  const providerName = name ?? config.defaultProvider;
  if (!providers.has(providerName)) {
    switch (providerName) {
      case 'openai':
        providers.set(providerName, new OpenAIProvider());
        break;
      case 'anthropic':
        providers.set(providerName, new AnthropicProvider());
        break;
      default:
        throw new Error(`Unknown provider: ${providerName}`);
    }
  }
  return providers.get(providerName)!;
}
export type { AIProvider } from './base.js';
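Note the export type on the last line: AIProvider is an interface, so re-exporting it as a value would fail under isolatedModules and at runtime in ESM builds. Because the factory memoizes instances in the Map, repeated calls return the same provider object, so API clients (and constructor-time checks like the Anthropic key validation) are created once. A quick sanity check:

import { getProvider } from './providers/index.js';

const first = getProvider('openai');
const second = getProvider('openai');
console.log(first === second); // true - the Map caches one instance per provider
console.log(getProvider('anthropic').name); // 'anthropic'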
Conversation Management
Conversation management tracks messages and provides context to the AI.
Create src/core/conversation.ts:
import { randomUUID } from 'crypto';
import { createLogger } from '../utils/logger.js';
import type { Conversation, Message, ToolCall } from './types.js';

const logger = createLogger('ConversationManager');

export class ConversationManager {
  private conversations = new Map<string, Conversation>();
  private activeConversationId: string | null = null;

  createConversation(systemPrompt?: string): Conversation {
    const id = randomUUID();
    const now = new Date();
    const conversation: Conversation = {
      id,
      messages: [],
      createdAt: now,
      updatedAt: now,
    };
    if (systemPrompt) {
      conversation.messages.push({
        role: 'system',
        content: systemPrompt,
      });
    }
    this.conversations.set(id, conversation);
    this.activeConversationId = id;
    logger.debug(`Created conversation ${id}`);
    return conversation;
  }

  getConversation(id?: string): Conversation | undefined {
    const conversationId = id ?? this.activeConversationId;
    if (!conversationId) return undefined;
    return this.conversations.get(conversationId);
  }

  getActiveConversation(): Conversation | undefined {
    return this.activeConversationId
      ? this.conversations.get(this.activeConversationId)
      : undefined;
  }

  setActiveConversation(id: string): void {
    if (!this.conversations.has(id)) {
      throw new Error(`Conversation ${id} not found`);
    }
    this.activeConversationId = id;
    logger.debug(`Switched to conversation ${id}`);
  }

  addUserMessage(content: string, conversationId?: string): Message {
    const conversation = this.getConversation(conversationId);
    if (!conversation) {
      throw new Error('No active conversation');
    }
    const message: Message = {
      role: 'user',
      content,
    };
    conversation.messages.push(message);
    conversation.updatedAt = new Date();
    logger.debug(`Added user message to ${conversation.id}`);
    return message;
  }
  addAssistantMessage(content: string, toolCalls?: ToolCall[], conversationId?: string): Message {
    const conversation = this.getConversation(conversationId);
    if (!conversation) {
      throw new Error('No active conversation');
    }
    const message: Message = {
      role: 'assistant',
      content,
      // Record any tool calls so providers can replay them before tool results
      toolCalls,
    };
    conversation.messages.push(message);
    conversation.updatedAt = new Date();
    logger.debug(`Added assistant message to ${conversation.id}`);
    return message;
  }
  addToolResult(
    toolCallId: string,
    result: string,
    toolName?: string,
    conversationId?: string
  ): Message {
    const conversation = this.getConversation(conversationId);
    if (!conversation) {
      throw new Error('No active conversation');
    }
    const message: Message = {
      role: 'tool',
      content: result,
      toolCallId,
      name: toolName,
    };
    conversation.messages.push(message);
    conversation.updatedAt = new Date();
    logger.debug(`Added tool result for ${toolCallId}`);
    return message;
  }

  getMessages(conversationId?: string): Message[] {
    const conversation = this.getConversation(conversationId);
    return conversation?.messages ?? [];
  }

  clearConversation(conversationId?: string): void {
    const conversation = this.getConversation(conversationId);
    if (!conversation) return;
    // Keep system message if present
    const systemMessage = conversation.messages.find((m) => m.role === 'system');
    conversation.messages = systemMessage ? [systemMessage] : [];
    conversation.updatedAt = new Date();
    logger.debug(`Cleared conversation ${conversation.id}`);
  }

  deleteConversation(id: string): boolean {
    if (this.activeConversationId === id) {
      this.activeConversationId = null;
    }
    return this.conversations.delete(id);
  }

  listConversations(): Conversation[] {
    return Array.from(this.conversations.values());
  }

  // Get recent context (last N messages)
  getRecentContext(maxMessages: number, conversationId?: string): Message[] {
    const messages = this.getMessages(conversationId);
    // Always include system message
    const systemMessage = messages.find((m) => m.role === 'system');
    const otherMessages = messages.filter((m) => m.role !== 'system');
    // Get last N messages
    const recentMessages = otherMessages.slice(-maxMessages);
    return systemMessage ? [systemMessage, ...recentMessages] : recentMessages;
  }
}
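Here is how the manager behaves in practice; a short sketch you might run in a scratch file (the messages are illustrative):

import { ConversationManager } from './core/conversation.js';

const manager = new ConversationManager();
manager.createConversation('You are a helpful assistant.');

manager.addUserMessage('What is TypeScript?');
manager.addAssistantMessage('TypeScript is a typed superset of JavaScript.');

console.log(manager.getMessages().length); // 3 (system + user + assistant)
console.log(manager.getRecentContext(1).map((m) => m.role)); // ['system', 'assistant']

manager.clearConversation();
console.log(manager.getMessages().length); // 1 (the system prompt survives)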
The Assistant Orchestrator
The main Assistant class ties everything together.
Create src/core/assistant.ts:
import { type AIProvider, getProvider } from '../providers/index.js';
import { handleError } from '../utils/errors.js';
import { createLogger } from '../utils/logger.js';
import { config } from './config.js';
import { ConversationManager } from './conversation.js';
import type {
  AssistantOptions,
  AssistantResponse,
  Message,
  ProviderOptions,
  StreamChunk,
  Tool,
  ToolCall,
  ToolDefinition,
} from './types.js';

const logger = createLogger('Assistant');

const DEFAULT_SYSTEM_PROMPT = `You are a helpful AI assistant with access to tools and knowledge.

Guidelines:
- Be concise but thorough in your responses
- Use tools when they would help answer the user's question
- If you use information from documents, mention that you found it in your knowledge base
- If you don't know something, say so honestly
- Ask clarifying questions when the user's request is ambiguous`;

export class Assistant {
  private provider: AIProvider;
  private conversationManager: ConversationManager;
  private tools: Map<string, Tool>;
  private systemPrompt: string;
  private ragEnabled: boolean;
  private ragRetriever?: (query: string) => Promise<string>;

  constructor(options: AssistantOptions = {}) {
    this.provider = getProvider(options.provider);
    this.conversationManager = new ConversationManager();
    this.tools = new Map();
    this.systemPrompt = options.systemPrompt ?? DEFAULT_SYSTEM_PROMPT;
    this.ragEnabled = options.enableRag ?? false;

    // Register provided tools
    if (options.tools) {
      for (const tool of options.tools) {
        this.registerTool(tool);
      }
    }

    // Start a conversation
    this.conversationManager.createConversation(this.systemPrompt);
    logger.info(`Assistant initialized with provider: ${this.provider.name}`);
  }

  registerTool(tool: Tool): void {
    this.tools.set(tool.name, tool);
    logger.debug(`Registered tool: ${tool.name}`);
  }

  unregisterTool(name: string): boolean {
    const result = this.tools.delete(name);
    if (result) {
      logger.debug(`Unregistered tool: ${name}`);
    }
    return result;
  }

  setRagRetriever(retriever: (query: string) => Promise<string>): void {
    this.ragRetriever = retriever;
    this.ragEnabled = true;
    logger.info('RAG retriever configured');
  }
  async chat(userMessage: string): Promise<AssistantResponse> {
    logger.debug(`Processing message: ${userMessage.substring(0, 50)}...`);
    try {
      // Add user message to conversation
      this.conversationManager.addUserMessage(userMessage);

      // Build context with RAG if enabled
      let messages = this.conversationManager.getMessages();
      if (this.ragEnabled && this.ragRetriever) {
        messages = await this.augmentWithRag(messages, userMessage);
      }

      // Get tool definitions
      const toolDefs = this.getToolDefinitions();

      // Track tools used
      const toolsUsed: string[] = [];

      // Conversation loop (for tool calls)
      let maxIterations = 10;
      let response = await this.provider.chat(messages, toolDefs);

      while (response.toolCalls && response.toolCalls.length > 0 && maxIterations > 0) {
        // Record the assistant turn that requested the tools; both providers
        // need it in the history before the tool results that answer it
        this.conversationManager.addAssistantMessage(response.content, response.toolCalls);

        // Execute each tool call
        for (const toolCall of response.toolCalls) {
          const result = await this.executeTool(toolCall);
          this.conversationManager.addToolResult(toolCall.id, result, toolCall.name);
          toolsUsed.push(toolCall.name);
        }

        // Get updated messages (re-augmented so RAG context survives the loop) and call again
        messages = this.conversationManager.getMessages();
        if (this.ragEnabled && this.ragRetriever) {
          messages = await this.augmentWithRag(messages, userMessage);
        }
        response = await this.provider.chat(messages, toolDefs);
        maxIterations--;
      }

      // Add final response to conversation
      this.conversationManager.addAssistantMessage(response.content);

      return {
        content: response.content,
        toolsUsed: toolsUsed.length > 0 ? toolsUsed : undefined,
      };
    } catch (error) {
      const handledError = handleError(error);
      logger.error(`Chat error: ${handledError.message}`);
      throw handledError;
    }
  }
  async *chatStream(userMessage: string): AsyncIterable<StreamChunk> {
    logger.debug(`Processing stream: ${userMessage.substring(0, 50)}...`);
    try {
      // Add user message to conversation
      this.conversationManager.addUserMessage(userMessage);

      // Build context
      let messages = this.conversationManager.getMessages();
      if (this.ragEnabled && this.ragRetriever) {
        messages = await this.augmentWithRag(messages, userMessage);
      }

      const toolDefs = this.getToolDefinitions();
      let fullContent = '';
      const toolCalls: ToolCall[] = [];

      // Stream the response
      for await (const chunk of this.provider.chatStream(messages, toolDefs)) {
        if (chunk.type === 'text' && chunk.content) {
          fullContent += chunk.content;
          yield chunk;
        } else if (chunk.type === 'tool_call' && chunk.toolCall) {
          toolCalls.push(chunk.toolCall);
          yield chunk;
        } else if (chunk.type === 'done') {
          // Process tool calls if any
          if (toolCalls.length > 0) {
            // Save the assistant turn that requested the tools, then the results
            this.conversationManager.addAssistantMessage(fullContent, toolCalls);
            fullContent = '';
            for (const toolCall of toolCalls) {
              const result = await this.executeTool(toolCall);
              this.conversationManager.addToolResult(toolCall.id, result, toolCall.name);
            }

            // Continue conversation after tool use
            messages = this.conversationManager.getMessages();
            for await (const innerChunk of this.provider.chatStream(messages, toolDefs)) {
              if (innerChunk.type === 'text' && innerChunk.content) {
                fullContent += innerChunk.content;
                yield innerChunk;
              }
            }
          }

          // Save the final response
          if (fullContent) {
            this.conversationManager.addAssistantMessage(fullContent);
          }
          yield { type: 'done' };
        }
      }
    } catch (error) {
      const handledError = handleError(error);
      logger.error(`Stream error: ${handledError.message}`);
      throw handledError;
    }
  }
  private async augmentWithRag(messages: Message[], query: string): Promise<Message[]> {
    if (!this.ragRetriever) return messages;
    try {
      const context = await this.ragRetriever(query);
      if (!context) return messages;

      // Find system message and augment it
      const systemIndex = messages.findIndex((m) => m.role === 'system');
      if (systemIndex >= 0) {
        const augmentedSystem: Message = {
          role: 'system',
          content: `${messages[systemIndex].content}

RELEVANT CONTEXT FROM KNOWLEDGE BASE:
${context}

Use this context to help answer the user's question if relevant.`,
        };
        return [augmentedSystem, ...messages.filter((m) => m.role !== 'system')];
      }

      // No system message - add context as first message
      return [
        {
          role: 'system',
          content: `RELEVANT CONTEXT FROM KNOWLEDGE BASE:
${context}

Use this context to help answer the user's question if relevant.`,
        },
        ...messages,
      ];
    } catch (error) {
      logger.warn(`RAG retrieval failed: ${error}`);
      return messages;
    }
  }

  private getToolDefinitions(): ToolDefinition[] {
    return Array.from(this.tools.values()).map((tool) => ({
      name: tool.name,
      description: tool.description,
      parameters: tool.parameters,
    }));
  }

  private async executeTool(toolCall: ToolCall): Promise<string> {
    const tool = this.tools.get(toolCall.name);
    if (!tool) {
      logger.warn(`Tool not found: ${toolCall.name}`);
      return `Error: Tool "${toolCall.name}" not found`;
    }
    logger.debug(`Executing tool: ${toolCall.name}`, toolCall.arguments);
    try {
      const result = await tool.execute(toolCall.arguments);
      logger.debug(`Tool ${toolCall.name} completed`);
      return result;
    } catch (error) {
      const message = error instanceof Error ? error.message : 'Unknown error';
      logger.error(`Tool ${toolCall.name} failed: ${message}`);
      return `Error executing ${toolCall.name}: ${message}`;
    }
  }

  // Utility methods
  clearHistory(): void {
    this.conversationManager.clearConversation();
    logger.info('Conversation history cleared');
  }

  getHistory(): Message[] {
    return this.conversationManager.getMessages();
  }

  setProvider(provider: 'openai' | 'anthropic'): void {
    this.provider = getProvider(provider);
    logger.info(`Switched to provider: ${provider}`);
  }
}
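To see the whole loop work end to end, construct an Assistant with a toy tool and ask a question that requires it. A minimal smoke test, assuming the Tool shape from the types sketch earlier (the add tool and prompt are illustrative; run as an ES module so top-level await works):

import { Assistant } from './core/assistant.js';

const assistant = new Assistant({
  tools: [
    {
      name: 'add',
      description: 'Add two numbers and return the sum',
      parameters: {
        type: 'object',
        properties: { a: { type: 'number' }, b: { type: 'number' } },
        required: ['a', 'b'],
      },
      execute: async (args) => String(Number(args.a) + Number(args.b)),
    },
  ],
});

const response = await assistant.chat('What is 17 + 25? Use the add tool.');
console.log(response.content); // final answer, e.g. "17 + 25 = 42"
console.log(response.toolsUsed); // ['add'] when the model called the tool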
CLI Interface
Create the entry point with a CLI interface in src/index.ts:
import * as readline from 'readline';
import { Assistant } from './core/assistant.js';
import { createLogger } from './utils/logger.js';

const logger = createLogger('Main');

async function main() {
  console.log('AI Knowledge Assistant');
  console.log('======================');
  console.log('Type your message and press Enter.');
  console.log('Commands: /clear (clear history), /exit (quit)');
  console.log('');

  const assistant = new Assistant();

  const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
  });

  const prompt = () => {
    rl.question('You: ', async (input) => {
      const trimmed = input.trim();
      if (!trimmed) {
        prompt();
        return;
      }

      // Handle commands
      if (trimmed === '/exit') {
        console.log('Goodbye!');
        rl.close();
        process.exit(0);
      }
      if (trimmed === '/clear') {
        assistant.clearHistory();
        console.log('Conversation history cleared.\n');
        prompt();
        return;
      }

      // Process message with streaming
      process.stdout.write('Assistant: ');
      try {
        for await (const chunk of assistant.chatStream(trimmed)) {
          if (chunk.type === 'text' && chunk.content) {
            process.stdout.write(chunk.content);
          } else if (chunk.type === 'tool_call' && chunk.toolCall) {
            process.stdout.write(`\n[Using tool: ${chunk.toolCall.name}]\n`);
          }
        }
        console.log('\n');
      } catch (error) {
        console.error(`\nError: ${error instanceof Error ? error.message : error}\n`);
      }
      prompt();
    });
  };

  prompt();
}

main().catch((error) => {
  logger.error('Fatal error', error);
  process.exit(1);
});
Running the Assistant
Test your implementation:
# Create .env with your API key
echo "OPENAI_API_KEY=sk-proj-your-key" > .env
# Run the assistant
npm start
Example interaction:
AI Knowledge Assistant
======================
Type your message and press Enter.
Commands: /clear (clear history), /exit (quit)
You: Hello! What can you help me with?
Assistant: Hello! I'm an AI assistant that can help you with various tasks:
1. **Answer questions** - I can provide information and explanations on many topics
2. **Have conversations** - I remember our chat history and can follow up on previous topics
3. **Use tools** - When configured, I can perform calculations, check weather, search the web, and take notes
Feel free to ask me anything! What would you like to know?
You: /clear
Conversation history cleared.
You: /exit
Goodbye!
Key Takeaways
- Provider abstraction enables switching AI providers without code changes
- Conversation management maintains context across turns
- The assistant orchestrator coordinates providers, tools, and RAG
- Streaming provides better user experience for long responses
- Error handling at each layer prevents cascading failures
Practice Exercise
- Add a method to the Assistant to switch models at runtime
- Implement conversation persistence (save/load from file)
- Add token counting to track usage
- Create a method to export conversation history as JSON
Next Steps
The core is complete. In the next lesson, you will add RAG capabilities to give your assistant knowledge from documents.