From Zero to AI

Lesson 5.1: OpenAI API - First Request

Duration: 60 minutes

Learning Objectives

By the end of this lesson, you will be able to:

  • Set up the OpenAI SDK in a TypeScript project
  • Configure authentication using API keys
  • Make your first chat completion request
  • Understand the request and response structure
  • Handle basic errors that may occur
  • Create a reusable OpenAI client

Introduction

You have learned about AI providers and their offerings. Now it is time to write actual code. In this lesson, you will make your first request to the OpenAI API. By the end, you will have a working TypeScript application that communicates with GPT models.

Making API requests might seem intimidating at first, but the OpenAI SDK makes it straightforward. You will be surprised how few lines of code are needed to get started.


Setting Up Your Project

Let us create a fresh project for this module. If you have not done so already, follow these steps:

Step 1: Initialize the Project

mkdir ai-practice
cd ai-practice
npm init -y

Step 2: Install Dependencies

npm install typescript tsx @types/node openai dotenv

Here is what each package does:

Package        Purpose
typescript     TypeScript compiler
tsx            Run TypeScript files directly without compiling
@types/node    TypeScript definitions for Node.js
openai         Official OpenAI SDK
dotenv         Load environment variables from .env files

Step 3: Configure TypeScript

Create a tsconfig.json file:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "outDir": "dist"
  },
  "include": ["src/**/*"]
}

Step 4: Set Up Your API Key

Create a .env file in your project root:

OPENAI_API_KEY=sk-proj-your-api-key-here

Replace sk-proj-your-api-key-here with your actual API key from the OpenAI Platform.

Important: Never commit your .env file to version control. Add it to .gitignore:

echo ".env" >> .gitignore
echo "node_modules" >> .gitignore

Step 5: Create Source Directory

mkdir src

Your First Request

Now let us write code that talks to OpenAI. Create a file called src/first-request.ts:

import 'dotenv/config';
import OpenAI from 'openai';

// Create the OpenAI client
// It automatically reads OPENAI_API_KEY from environment
const openai = new OpenAI();

async function main() {
  console.log('Sending request to OpenAI...');

  const response = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      {
        role: 'user',
        content: 'What is TypeScript in one sentence?',
      },
    ],
  });

  // Extract the response text
  const answer = response.choices[0].message.content;
  console.log('Response:', answer);
}

main();

Run it with:

npx tsx src/first-request.ts

You should see output like:

Sending request to OpenAI...
Response: TypeScript is a statically typed superset of JavaScript that compiles to plain JavaScript.

Congratulations! You just made your first AI API request.


Understanding the Request

Let us break down what happened in that request:

const response = await openai.chat.completions.create({
  model: 'gpt-4o-mini', // Which model to use
  messages: [
    // The conversation history
    {
      role: 'user', // Who is speaking
      content: '...', // What they said
    },
  ],
});

The Model Parameter

The model parameter specifies which AI model processes your request:

Model          Best For                         Cost
gpt-4o-mini    Most tasks, cost-effective       Cheapest
gpt-4o         Complex reasoning, vision        Mid-range
gpt-4-turbo    Long documents (128K context)    Higher
o1-mini        Math and logic problems          Higher

For learning and development, gpt-4o-mini is the best choice. It is fast, capable, and affordable.

The Messages Array

The messages array represents a conversation. Each message has:

  • role: Who is speaking (system, user, or assistant)
  • content: What they said
┌─────────────────────────────────────────────────────────────────┐
│                    Message Roles                                 │
├────────────────┬────────────────────────────────────────────────┤
│  system        │ Instructions for the AI's behavior             │
│                │ Example: "You are a helpful assistant"         │
├────────────────┼────────────────────────────────────────────────┤
│  user          │ Messages from the human user                   │
│                │ Example: "What is TypeScript?"                 │
├────────────────┼────────────────────────────────────────────────┤
│  assistant     │ Previous AI responses (for context)            │
│                │ Example: "TypeScript is..."                    │
└────────────────┴────────────────────────────────────────────────┘
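To see how these roles fit together in code, here is a small sketch of a helper that assembles a messages array from an optional system prompt, prior conversation turns, and a new user message. The helper and its types are illustrative, not part of the SDK:

```typescript
// Illustrative helper (not part of the OpenAI SDK) that builds a
// messages array in the order the API expects.
type Role = 'system' | 'user' | 'assistant';

interface Message {
  role: Role;
  content: string;
}

function buildMessages(
  userMessage: string,
  options: { systemPrompt?: string; history?: Message[] } = {}
): Message[] {
  const messages: Message[] = [];

  // System instructions always come first, if present
  if (options.systemPrompt) {
    messages.push({ role: 'system', content: options.systemPrompt });
  }

  // Prior user/assistant turns give the model context
  for (const turn of options.history ?? []) {
    messages.push(turn);
  }

  // The new user message goes last
  messages.push({ role: 'user', content: userMessage });

  return messages;
}
```

The resulting array can be passed directly as the `messages` parameter of `openai.chat.completions.create()`.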

Understanding the Response

The response object contains several useful fields:

interface ChatCompletion {
  id: string; // Unique ID for this completion
  object: 'chat.completion'; // Type of object
  created: number; // Unix timestamp
  model: string; // Model that was used
  choices: Array<{
    index: number; // Position in choices array
    message: {
      role: 'assistant'; // Always "assistant" for responses
      content: string | null; // The actual response text
    };
    finish_reason: string; // Why generation stopped
  }>;
  usage: {
    prompt_tokens: number; // Tokens in your input
    completion_tokens: number; // Tokens in the response
    total_tokens: number; // Total tokens used
  };
}

Let us explore the response in detail:

import 'dotenv/config';
import OpenAI from 'openai';

const openai = new OpenAI();

async function exploreResponse() {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'Say hello in three languages.' }],
  });

  // The response ID
  console.log('Response ID:', response.id);

  // The model used
  console.log('Model:', response.model);

  // Token usage
  console.log('Tokens used:', {
    input: response.usage?.prompt_tokens,
    output: response.usage?.completion_tokens,
    total: response.usage?.total_tokens,
  });

  // The actual response
  const message = response.choices[0].message;
  console.log('Response content:', message.content);

  // Why did it stop generating?
  console.log('Finish reason:', response.choices[0].finish_reason);
}

exploreResponse();

Output:

Response ID: chatcmpl-abc123xyz
Model: gpt-4o-mini-2024-07-18
Tokens used: { input: 14, output: 23, total: 37 }
Response content: Hello! (English)
Bonjour! (French)
Hola! (Spanish)
Finish reason: stop

Finish Reasons

The finish_reason tells you why the model stopped generating:

Reason           Meaning
stop             Natural completion
length           Hit the max_tokens limit
content_filter   Content was filtered
tool_calls       Model wants to call a tool
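In practice, checking the finish reason helps you catch truncated or filtered responses before showing them to users. Here is a small helper that maps each reason to actionable guidance; the function and its wording are illustrative, not part of the SDK:

```typescript
// Illustrative helper (not from the SDK) that turns a finish_reason
// into actionable guidance for your application.
function interpretFinishReason(reason: string): string {
  switch (reason) {
    case 'stop':
      return 'Completed normally; the full answer is in message.content.';
    case 'length':
      return 'Truncated: raise max_tokens or shorten the prompt.';
    case 'content_filter':
      return 'Filtered: rephrase the request.';
    case 'tool_calls':
      return 'The model wants to call a tool; handle message.tool_calls.';
    default:
      return `Unknown finish reason: ${reason}`;
  }
}
```

A typical use is logging a warning whenever the reason is `length`, since a truncated answer usually means your `max_tokens` is too low.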

Adding a System Prompt

System prompts set the behavior and personality of the AI. They are powerful for creating specialized assistants:

import 'dotenv/config';
import OpenAI from 'openai';

const openai = new OpenAI();

async function codeExplainer(code: string): Promise<string> {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      {
        role: 'system',
        content: `You are a code explainer for beginners.
When explaining code:
- Use simple language
- Explain line by line
- Give analogies where helpful
- Point out common mistakes to avoid`,
      },
      {
        role: 'user',
        content: `Explain this code:\n\n${code}`,
      },
    ],
  });

  return response.choices[0].message.content || '';
}

// Test it
const code = `
const numbers = [1, 2, 3, 4, 5];
const doubled = numbers.map(n => n * 2);
console.log(doubled);
`;

codeExplainer(code).then((explanation) => {
  console.log(explanation);
});

Configuring Request Parameters

You can fine-tune the model's behavior with additional parameters:

const response = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Write a haiku about coding' }],

  // Control randomness (0 = deterministic, 2 = very random)
  temperature: 0.7,

  // Maximum tokens in the response
  max_tokens: 100,

  // Stop generation at these sequences
  stop: ['\n\n'],

  // Number of completions to generate
  n: 1,
});

Temperature Guide

Temperature: 0.0 ─────────────────────────────────── 2.0

Low (0-0.3)         │  Mid (0.5-0.8)      │  High (1.0+)
────────────────────┼─────────────────────┼────────────────
Deterministic       │  Balanced           │  Creative
Factual answers     │  General chat       │  Brainstorming
Code generation     │  Writing            │  Poetry
Data extraction     │  Explanations       │  Fiction
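The guide above can be encoded as a simple lookup so your code picks a sensible starting temperature per task. The task categories and exact values here are suggestions based on the guide, not SDK features:

```typescript
// Rough mapping from task type to a starting temperature, following the
// guide above. Categories and values are suggestions, not SDK features.
type TaskType = 'extraction' | 'code' | 'chat' | 'writing' | 'brainstorming' | 'fiction';

function suggestedTemperature(task: TaskType): number {
  switch (task) {
    case 'extraction':
    case 'code':
      return 0.2; // near-deterministic for factual or structured output
    case 'chat':
      return 0.7; // balanced for general conversation
    case 'writing':
      return 0.8;
    case 'brainstorming':
      return 1.0;
    case 'fiction':
      return 1.2; // more randomness for creative output
  }
}
```

You would then pass the result as the `temperature` parameter, e.g. `temperature: suggestedTemperature('code')`.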

Creating a Reusable Client

For real applications, you want a reusable client. Create src/openai-client.ts:

import 'dotenv/config';
import OpenAI from 'openai';

// Types for our client
interface ChatOptions {
  systemPrompt?: string;
  temperature?: number;
  maxTokens?: number;
}

interface ChatResponse {
  content: string;
  tokensUsed: {
    input: number;
    output: number;
    total: number;
  };
  model: string;
}

// Create the client class
class OpenAIClient {
  private client: OpenAI;
  private model: string;

  constructor(model: string = 'gpt-4o-mini') {
    this.client = new OpenAI();
    this.model = model;
  }

  async chat(userMessage: string, options: ChatOptions = {}): Promise<ChatResponse> {
    const messages: OpenAI.Chat.ChatCompletionMessageParam[] = [];

    // Add system prompt if provided
    if (options.systemPrompt) {
      messages.push({
        role: 'system',
        content: options.systemPrompt,
      });
    }

    // Add the user message
    messages.push({
      role: 'user',
      content: userMessage,
    });

    // Make the request
    const response = await this.client.chat.completions.create({
      model: this.model,
      messages,
      temperature: options.temperature ?? 0.7,
      max_tokens: options.maxTokens ?? 1000,
    });

    // Extract and return the response
    return {
      content: response.choices[0].message.content || '',
      tokensUsed: {
        input: response.usage?.prompt_tokens || 0,
        output: response.usage?.completion_tokens || 0,
        total: response.usage?.total_tokens || 0,
      },
      model: response.model,
    };
  }
}

// Export a default instance
export const openaiClient = new OpenAIClient();

// Also export the class for custom configurations
export { OpenAIClient };

Now you can use it anywhere:

import { openaiClient } from './openai-client';

async function main() {
  const response = await openaiClient.chat('What are the benefits of TypeScript?', {
    systemPrompt: 'You are a helpful programming assistant.',
    temperature: 0.5,
  });

  console.log('Answer:', response.content);
  console.log('Tokens:', response.tokensUsed);
}

main();

Handling Errors

API requests can fail for various reasons. Here is how to handle them:

import 'dotenv/config';
import OpenAI from 'openai';

const openai = new OpenAI();

async function safeChat(message: string): Promise<string | null> {
  try {
    const response = await openai.chat.completions.create({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: message }],
    });

    return response.choices[0].message.content;
  } catch (error) {
    if (error instanceof OpenAI.APIConnectionError) {
      // Network error (checked first: APIConnectionError is a
      // subclass of APIError, so the order matters)
      console.error('Network error. Check your internet connection.');
    } else if (error instanceof OpenAI.APIError) {
      // API returned an error response
      console.error(`API Error: ${error.status} - ${error.message}`);

      switch (error.status) {
        case 401:
          console.error('Invalid API key. Check your OPENAI_API_KEY.');
          break;
        case 429:
          console.error('Rate limit exceeded. Wait and try again.');
          break;
        case 500:
          console.error('OpenAI server error. Try again later.');
          break;
      }
    } else {
      // Unknown error
      console.error('Unexpected error:', error);
    }

    return null;
  }
}

// Test with error handling
async function main() {
  const result = await safeChat('Hello!');

  if (result) {
    console.log('Success:', result);
  } else {
    console.log('Request failed.');
  }
}

main();

Common Error Codes

Code   Meaning               Solution
401    Invalid API key       Check your key in .env
403    Permission denied     Check your account permissions
429    Rate limit            Wait and retry, or upgrade plan
500    Server error          Wait and retry
503    Service unavailable   Wait and retry
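The "wait and retry" statuses (429, 500, 503) are commonly handled with exponential backoff. Here is a generic sketch of such a helper; it is hypothetical, not part of the SDK, and note that the official SDK also performs some automatic retries that you can configure via its `maxRetries` option:

```typescript
// Generic retry with exponential backoff (sketch). Only retries the
// statuses that are worth retrying; everything else is rethrown.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 1000
): Promise<T> {
  let lastError: unknown;

  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;

      // Retry only rate limits and transient server errors
      const status = (error as { status?: number }).status;
      if (status !== 429 && status !== 500 && status !== 503) {
        throw error;
      }

      // Exponential backoff: baseDelayMs, 2x, 4x, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }

  throw lastError;
}
```

You could then wrap any API call, e.g. `await withRetry(() => openai.chat.completions.create({ ... }))`.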

Tracking Costs

Every API call costs money. Here is how to track your spending:

import 'dotenv/config';
import OpenAI from 'openai';

const openai = new OpenAI();

// Pricing per 1 million tokens (as of 2024)
const PRICING = {
  'gpt-4o-mini': { input: 0.15, output: 0.6 },
  'gpt-4o': { input: 2.5, output: 10.0 },
};

interface CostTracker {
  totalInputTokens: number;
  totalOutputTokens: number;
  totalCost: number;
}

const tracker: CostTracker = {
  totalInputTokens: 0,
  totalOutputTokens: 0,
  totalCost: 0,
};

async function trackedChat(message: string): Promise<string> {
  const model = 'gpt-4o-mini';

  const response = await openai.chat.completions.create({
    model,
    messages: [{ role: 'user', content: message }],
  });

  const usage = response.usage!;
  const pricing = PRICING[model as keyof typeof PRICING];

  // Calculate cost
  const inputCost = (usage.prompt_tokens / 1_000_000) * pricing.input;
  const outputCost = (usage.completion_tokens / 1_000_000) * pricing.output;
  const requestCost = inputCost + outputCost;

  // Update tracker
  tracker.totalInputTokens += usage.prompt_tokens;
  tracker.totalOutputTokens += usage.completion_tokens;
  tracker.totalCost += requestCost;

  console.log(`Request cost: $${requestCost.toFixed(6)}`);
  console.log(`Total spent: $${tracker.totalCost.toFixed(6)}`);

  return response.choices[0].message.content || '';
}

// Example usage
async function main() {
  await trackedChat('What is TypeScript?');
  await trackedChat('What are generics?');
  await trackedChat('Explain interfaces.');

  console.log('\n--- Session Summary ---');
  console.log(`Total input tokens: ${tracker.totalInputTokens}`);
  console.log(`Total output tokens: ${tracker.totalOutputTokens}`);
  console.log(`Total cost: $${tracker.totalCost.toFixed(6)}`);
}

main();

Exercises

Exercise 1: Simple Question Answerer

Create a function that answers questions about a given topic:

// Your implementation here
async function askAbout(topic: string, question: string): Promise<string> {
  // TODO:
  // 1. Use a system prompt that makes the AI an expert on the topic
  // 2. Pass the question as the user message
  // 3. Return the response
}

// Test:
// const answer = await askAbout("TypeScript", "What are union types?");

Solution

import 'dotenv/config';
import OpenAI from 'openai';

const openai = new OpenAI();

async function askAbout(topic: string, question: string): Promise<string> {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      {
        role: 'system',
        content: `You are an expert on ${topic}. 
Answer questions clearly and concisely.
Use examples when they help explain concepts.
If something is outside your area of expertise, say so.`,
      },
      {
        role: 'user',
        content: question,
      },
    ],
    temperature: 0.5,
  });

  return response.choices[0].message.content || '';
}

// Test
async function main() {
  const answer = await askAbout('TypeScript', 'What are union types?');
  console.log(answer);
}

main();

Exercise 2: Response Validator

Create a function that validates the API response contains expected content:

// Your implementation here
interface ValidationResult {
  isValid: boolean;
  content: string;
  error?: string;
}

async function getValidatedResponse(
  prompt: string,
  mustContain: string[]
): Promise<ValidationResult> {
  // TODO:
  // 1. Make the API request
  // 2. Check if the response contains all required strings
  // 3. Return validation result
}

// Test:
// const result = await getValidatedResponse(
//   "List three programming languages",
//   ["JavaScript", "Python"]
// );

Solution

import 'dotenv/config';
import OpenAI from 'openai';

const openai = new OpenAI();

interface ValidationResult {
  isValid: boolean;
  content: string;
  error?: string;
}

async function getValidatedResponse(
  prompt: string,
  mustContain: string[]
): Promise<ValidationResult> {
  try {
    const response = await openai.chat.completions.create({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: prompt }],
    });

    const content = response.choices[0].message.content || '';
    const contentLower = content.toLowerCase();

    // Check all required strings
    const missing = mustContain.filter((term) => !contentLower.includes(term.toLowerCase()));

    if (missing.length > 0) {
      return {
        isValid: false,
        content,
        error: `Missing required terms: ${missing.join(', ')}`,
      };
    }

    return {
      isValid: true,
      content,
    };
  } catch (error) {
    return {
      isValid: false,
      content: '',
      error: `API error: ${error instanceof Error ? error.message : 'Unknown'}`,
    };
  }
}

// Test
async function main() {
  const result = await getValidatedResponse(
    'List three popular programming languages with brief descriptions',
    ['JavaScript', 'Python']
  );

  console.log('Valid:', result.isValid);
  console.log('Content:', result.content);
  if (result.error) {
    console.log('Error:', result.error);
  }
}

main();

Exercise 3: Token Budget Manager

Create a client that stops when a token budget is reached:

// Your implementation here
class BudgetedClient {
  private maxTokens: number;
  private usedTokens: number = 0;

  constructor(maxTokens: number) {
    this.maxTokens = maxTokens;
  }

  async chat(message: string): Promise<string | null> {
    // TODO:
    // 1. Check if budget allows another request
    // 2. Make the request if budget allows
    // 3. Update used tokens
    // 4. Return null if budget exceeded
  }

  getRemainingBudget(): number {
    // TODO: Return remaining tokens
  }
}

// Test:
// const client = new BudgetedClient(1000);
// await client.chat("Hello!");
// console.log("Remaining:", client.getRemainingBudget());

Solution

import 'dotenv/config';
import OpenAI from 'openai';

const openai = new OpenAI();

class BudgetedClient {
  private maxTokens: number;
  private usedTokens: number = 0;

  constructor(maxTokens: number) {
    this.maxTokens = maxTokens;
  }

  async chat(message: string): Promise<string | null> {
    // Check if we have budget
    if (this.usedTokens >= this.maxTokens) {
      console.log('Budget exceeded. Cannot make request.');
      return null;
    }

    try {
      const response = await openai.chat.completions.create({
        model: 'gpt-4o-mini',
        messages: [{ role: 'user', content: message }],
        max_tokens: Math.min(500, this.maxTokens - this.usedTokens),
      });

      // Update used tokens
      const tokens = response.usage?.total_tokens || 0;
      this.usedTokens += tokens;

      console.log(`Used ${tokens} tokens. Total: ${this.usedTokens}/${this.maxTokens}`);

      return response.choices[0].message.content;
    } catch (error) {
      console.error('Request failed:', error);
      return null;
    }
  }

  getRemainingBudget(): number {
    return Math.max(0, this.maxTokens - this.usedTokens);
  }

  getUsedTokens(): number {
    return this.usedTokens;
  }
}

// Test
async function main() {
  const client = new BudgetedClient(500);

  await client.chat('What is TypeScript?');
  console.log('Remaining budget:', client.getRemainingBudget());

  await client.chat('What are generics?');
  console.log('Remaining budget:', client.getRemainingBudget());

  // This might exceed budget
  await client.chat('Explain the type system in detail.');
  console.log('Remaining budget:', client.getRemainingBudget());
}

main();

Key Takeaways

  1. SDK Setup: Install openai and configure your API key in environment variables
  2. Chat Completions: Use openai.chat.completions.create() for all text generation
  3. Messages Structure: Messages have role (system/user/assistant) and content
  4. System Prompts: Use them to define AI behavior and personality
  5. Temperature: Lower for factual tasks, higher for creative tasks
  6. Error Handling: Always wrap API calls in try/catch blocks
  7. Cost Tracking: Monitor token usage to control expenses
  8. Reusable Clients: Create wrapper classes for cleaner code

Resources

Resource               Type            Description
OpenAI API Reference   Documentation   Complete API documentation
OpenAI Node.js SDK     Repository      Official SDK with examples
OpenAI Pricing         Reference       Current pricing information
OpenAI Cookbook        Tutorial        Practical examples and recipes

Next Lesson

You have made your first request to OpenAI. In the next lesson, you will learn how to integrate Anthropic's Claude API, which has a slightly different approach but similar concepts.

Continue to Lesson 5.2: Anthropic API - Claude Integration