ChatGPT API Tutorial 2026 — Developer’s Getting Started Guide

James Whitaker

April 18, 2026


This ChatGPT API tutorial covers everything a developer needs to start building with OpenAI’s API in 2026 — from account setup and API key generation through model selection, token pricing, and a working Python code example. The OpenAI API has evolved significantly with the GPT-5 model family, and understanding the 2026 model naming structure is essential before writing your first call. This tutorial uses the current API specifications as of April 2026.

Step 1 — Get Your API Key

  1. Create an OpenAI account. Go to platform.openai.com and sign up. An existing ChatGPT account works — use the same email. API access and ChatGPT subscriptions are separate billing products.
  2. Generate an API key. In your OpenAI dashboard, go to API Keys → Create New Secret Key. Copy the key immediately — it is shown once and cannot be recovered. Store it in an environment variable, never in your code.
  3. Add billing. Go to Settings → Billing → Add payment method. API usage is pay-per-token — you are charged only for what you use. Set a usage limit to prevent unexpected charges during development.
  4. Install the SDK. Python: pip install openai. JavaScript/Node: npm install openai. Both SDKs are maintained by OpenAI and support all API features. The examples in this tutorial use Python 3.10+.
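Once the key is exported, loading it in Python might look like the sketch below. The OPENAI_API_KEY variable name is the SDK's documented default; the load_api_key helper itself is illustrative, not part of the SDK:

```python
import os

def load_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Return the API key from the environment, failing loudly if it is missing."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; export it before running this script.")
    return key
```

The official SDK will also read OPENAI_API_KEY automatically if you construct the client with no api_key argument, so the explicit check is mainly useful for a clearer error message.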

Step 2 — Understand the 2026 Model Options

Model           | API Name            | Input price/1M tokens | Output price/1M tokens | Best For
GPT-5.4         | gpt-5.4             | $2.50                 | $15.00                 | Complex tasks — strongest reasoning and coding
GPT-5.3 Instant | gpt-5.3-chat-latest | Lower                 | Lower                  | Standard queries — fast and capable
GPT-5.4 Mini    | gpt-5.4-mini        | ~$0.40                | ~$1.60                 | High-volume, latency-sensitive workloads
GPT-5.2         | gpt-5.2             | Available in API      | Available              | Legacy access — being phased out of the ChatGPT UI

OpenAI API model options and pricing, April 2026. Prices subject to change — verify at platform.openai.com/pricing before budgeting. Cached inputs receive up to 90% discount.
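To make the pricing table concrete, here is a small cost estimator using the April 2026 figures above. The prices are hardcoded for illustration only, and the up-to-90% cache discount is modelled as cached input tokens costing 10% of the normal rate:

```python
# USD per 1M tokens, from the table above (April 2026; verify before budgeting).
PRICES = {
    "gpt-5.4":      {"input": 2.50, "output": 15.00},
    "gpt-5.4-mini": {"input": 0.40, "output": 1.60},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int,
                  cached_fraction: float = 0.0) -> float:
    """Estimated USD cost of one call; cached input tokens billed at 10% of list price."""
    p = PRICES[model]
    cached = input_tokens * cached_fraction
    uncached = input_tokens - cached
    input_cost = (uncached + cached * 0.10) * p["input"] / 1_000_000
    output_cost = output_tokens * p["output"] / 1_000_000
    return input_cost + output_cost

# 10,000 input tokens (half cached) plus 2,000 output tokens on GPT-5.4 Mini
print(f"${estimate_cost('gpt-5.4-mini', 10_000, 2_000, cached_fraction=0.5):.6f}")
# → $0.005400
```

Fractions of a cent per call add up quickly at production volumes, which is why the model and caching choices below matter.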

Step 3 — Your First API Call


import os
from openai import OpenAI

# Store your API key in an environment variable — never hardcode it
client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

# Make your first API call
response = client.chat.completions.create(
    model="gpt-5.3-chat-latest",  # Use gpt-5.4 for complex tasks
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant specialising in data analysis."
        },
        {
            "role": "user",
            "content": "Explain the difference between mean, median, and mode in one paragraph."
        }
    ],
    max_tokens=300,
    temperature=0.7  # 0 = deterministic, 1 = creative
)

# Extract and print the response
print(response.choices[0].message.content)
print(f"\nTokens used: {response.usage.total_tokens}")
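Production calls should also tolerate transient failures such as rate limits and timeouts. Here is a minimal, generic retry sketch with exponential backoff — deliberately not tied to any specific SDK exception types, which you should narrow to the errors your client library actually raises:

```python
import time

def with_retries(call, attempts: int = 3, base_delay: float = 1.0):
    """Invoke call() and retry with exponential backoff; re-raise after the last attempt."""
    for i in range(attempts):
        try:
            return call()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** i)  # 1s, 2s, 4s, ...

# Usage sketch: wrap the API call from Step 3 in a zero-argument callable,
# e.g. with_retries(lambda: client.chat.completions.create(...))
```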

Step 4 — Key API Parameters to Know

  • temperature: Controls randomness. 0 = highly deterministic (same input → same output, good for data extraction). 1 = highly varied (good for creative tasks). Default 0.7 balances consistency with variety.
  • max_tokens: Maximum length of the response. Does not guarantee that length — sets a ceiling. Include realistic limits to control costs, especially in production.
  • system message: Sets persistent context for the entire conversation. Write clear, specific system prompts — this is the highest-value configuration in any application built on the OpenAI API.
  • reasoning_effort: Available on GPT-5.4. Controls thinking depth — “minimal” for fast responses, “high” or “max” for complex analytical tasks. Using “minimal” for simple queries significantly reduces cost and latency.
  • Cached inputs: Identical system prompts sent within a time window are cached — repeated calls with the same system prompt cost up to 90% less. Structure your API calls to maximise cache hits for production cost efficiency.
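As a sketch of the caching point above, keep the long, static system prompt byte-identical across calls and put all per-request detail in the user message. The build_messages helper is illustrative, not an SDK function:

```python
# A static system prompt reused verbatim across calls so repeated requests
# can hit the input cache; only the user content varies per request.
SYSTEM_PROMPT = (
    "You are a helpful assistant specialising in data analysis. "
    "Answer concisely and show units in every numeric result."
)

def build_messages(user_question: str) -> list[dict]:
    """Return a messages array with the shared system prompt first."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

m1 = build_messages("What is the median of [1, 3, 7]?")
m2 = build_messages("Define standard deviation in one sentence.")
assert m1[0] == m2[0]  # identical system message: the cacheable part of the call
```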

💡 Cost optimisation — the biggest practical consideration

API costs feel minimal in development but can scale significantly in production. The three most impactful cost controls: (1) set usage limits in your OpenAI dashboard before you start, (2) use GPT-5.4 Mini for high-volume, latency-sensitive endpoints and GPT-5.4 only for complex reasoning tasks, (3) structure system prompts to maximise cache hits — cached inputs cost up to 90% less than uncached ones.
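One way to act on control (2) is a small routing helper. The model names come from the table above; the keyword heuristic is purely illustrative and would need tuning for a real workload:

```python
def pick_model(task: str) -> str:
    """Route likely-complex tasks to gpt-5.4 and everything else to gpt-5.4-mini."""
    complex_markers = ("prove", "refactor", "multi-step", "debug", "architecture")
    text = task.lower()
    return "gpt-5.4" if any(m in text for m in complex_markers) else "gpt-5.4-mini"

print(pick_model("Summarise this support email"))          # → gpt-5.4-mini
print(pick_model("Refactor this module for testability"))  # → gpt-5.4
```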


Frequently Asked Questions

How do I get started with the ChatGPT API?

Create an account at platform.openai.com, generate an API key, add billing, install the openai Python or Node.js SDK, and make your first call using the messages array format. The full process takes under 30 minutes for a developer familiar with REST APIs. Your first call should use gpt-5.3-chat-latest — it is fast, capable, and affordable for experimentation.

How much does the ChatGPT API cost?

GPT-5.4 is priced at $2.50 per million input tokens and $15.00 per million output tokens. GPT-5.4 Mini is substantially cheaper at approximately $0.40/$1.60 per million tokens — the right choice for high-volume production workloads. Cached inputs receive up to a 90% discount. API billing is separate from ChatGPT subscriptions — as of 2026 a ChatGPT Plus subscription includes $5/month in API credits, and usage beyond that credit is billed per token.

What is the difference between the ChatGPT API and ChatGPT Plus?

ChatGPT Plus ($20/month) is access to the ChatGPT web and mobile interface for human users — conversations, features, and the ChatGPT product. The OpenAI API is a programmatic interface for developers building applications — charged per token consumed with no fixed monthly fee. You can build an application that uses GPT-5.4 without any ChatGPT subscription, and a ChatGPT Plus subscription does not reduce API token costs (though it now includes a $5/month API credit).