# OpenAI ChatGPT API Tutorial: Build AI Apps with Python
Learn to use OpenAI's ChatGPT API in Python. Build chatbots, generate content, and integrate AI into your applications with practical examples.
Moshiour Rahman
## Introduction to OpenAI API
OpenAI’s API provides access to powerful AI models like GPT-4 and GPT-3.5-turbo. You can build chatbots, generate content, summarize text, translate languages, and much more.
## Getting Started

- Create an account at platform.openai.com
- Generate an API key
- Install the Python library:

```bash
pip install openai python-dotenv
```
## Setup

```python
import os
from openai import OpenAI
from dotenv import load_dotenv

# Load variables from .env into the environment
load_dotenv()

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
```

Create a .env file:

```
OPENAI_API_KEY=sk-your-api-key-here
```
## Basic Chat Completion

```python
def chat(message):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": message}
        ]
    )
    return response.choices[0].message.content

# Example usage
result = chat("What is Python?")
print(result)
```
## Understanding Messages
| Role | Purpose |
|---|---|
| system | Sets the AI’s behavior and context |
| user | The user’s input/question |
| assistant | The AI’s previous responses |
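The three roles combine into a single `messages` list, in chronological order: one system message first, then alternating user and assistant turns. A minimal sketch of a history as the API expects it (the conversation content here is invented for illustration):

```python
# One system message, then alternating user/assistant turns.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is a list comprehension?"},
    {"role": "assistant", "content": "A concise way to build a list in one expression."},
    {"role": "user", "content": "Show me an example."},  # follow-up sees prior turns
]

roles = [m["role"] for m in messages]
print(roles)  # ['system', 'user', 'assistant', 'user']
```

Because the API is stateless, you pass this entire list on every call; the model only "remembers" what is in it.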
## Multi-Turn Conversations

```python
class ChatBot:
    def __init__(self, system_prompt="You are a helpful assistant."):
        self.client = OpenAI()
        self.messages = [
            {"role": "system", "content": system_prompt}
        ]

    def chat(self, user_message):
        self.messages.append({"role": "user", "content": user_message})
        response = self.client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=self.messages,
            temperature=0.7,
            max_tokens=1000
        )
        assistant_message = response.choices[0].message.content
        self.messages.append({"role": "assistant", "content": assistant_message})
        return assistant_message

    def clear_history(self):
        self.messages = [self.messages[0]]  # Keep system message

# Usage
bot = ChatBot("You are a Python programming expert.")
print(bot.chat("How do I read a file in Python?"))
print(bot.chat("Can you show me how to handle errors?"))  # Remembers context
```
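One caveat: the `ChatBot` above grows its history without bound, and a long conversation will eventually exceed the model's context window. A common fix is to keep the system prompt plus only the most recent messages. The `trim_history` helper below is an illustrative sketch, not part of the OpenAI SDK:

```python
def trim_history(messages, max_messages=6):
    """Keep the system message plus the last `max_messages` user/assistant messages."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_messages:]

# Example: a long history gets cut down before the next API call
history = [{"role": "system", "content": "You are helpful."}]
for i in range(10):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

trimmed = trim_history(history, max_messages=4)
print(len(trimmed))  # 5: the system message plus the last 4 messages
```

Cutting on message count is crude; production systems usually trim by token count or summarize older turns instead.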
## Streaming Responses

For real-time output like ChatGPT:

```python
def stream_chat(message):
    stream = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": message}
        ],
        stream=True
    )
    for chunk in stream:
        # Guard: some chunks arrive without choices or content
        if chunk.choices and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)
    print()  # New line at end

# Usage
stream_chat("Explain machine learning in simple terms")
```
## Function Calling

Let the AI call your functions:

```python
import json

# Define your functions
def get_weather(location, unit="celsius"):
    """Get current weather for a location"""
    # In a real app, call a weather API
    return {"location": location, "temperature": 22, "unit": unit, "condition": "sunny"}

def search_products(query, max_price=None):
    """Search for products"""
    return [
        {"name": f"Product matching '{query}'", "price": 29.99},
        {"name": f"Another {query} item", "price": 49.99}
    ]

# Define function schemas for OpenAI
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather in a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city name, e.g., San Francisco"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"]
                    }
                },
                "required": ["location"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "search_products",
            "description": "Search for products in the catalog",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query"
                    },
                    "max_price": {
                        "type": "number",
                        "description": "Maximum price filter"
                    }
                },
                "required": ["query"]
            }
        }
    }
]

def process_with_functions(user_message):
    messages = [{"role": "user", "content": user_message}]
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
        tools=tools,
        tool_choice="auto"
    )
    response_message = response.choices[0].message

    # Check if the model wants to call a function
    if response_message.tool_calls:
        messages.append(response_message)
        for tool_call in response_message.tool_calls:
            function_name = tool_call.function.name
            function_args = json.loads(tool_call.function.arguments)

            # Call the actual function
            if function_name == "get_weather":
                result = get_weather(**function_args)
            elif function_name == "search_products":
                result = search_products(**function_args)

            messages.append({
                "tool_call_id": tool_call.id,
                "role": "tool",
                "content": json.dumps(result)
            })

        # Get final response with the tool results in context
        final_response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=messages
        )
        return final_response.choices[0].message.content

    return response_message.content

# Usage
print(process_with_functions("What's the weather in Tokyo?"))
print(process_with_functions("Find me some headphones under $50"))
```
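The if/elif chain above works for two tools but grows awkward as you add more. A common alternative is a dispatch dictionary mapping tool names (as declared in the schemas) to Python callables. This is a sketch of that pattern; the stubs below stand in for the `get_weather` and `search_products` functions defined earlier so the snippet is self-contained:

```python
import json

# Local stand-ins for the functions defined earlier in this tutorial
def get_weather(location, unit="celsius"):
    return {"location": location, "temperature": 22, "unit": unit}

def search_products(query, max_price=None):
    return [{"name": f"Product matching '{query}'", "price": 29.99}]

# Map tool names from the schemas to the callables that implement them
AVAILABLE_TOOLS = {
    "get_weather": get_weather,
    "search_products": search_products,
}

def run_tool_call(name, arguments_json):
    """Look up and invoke a tool; arguments arrive as a JSON string from the API."""
    func = AVAILABLE_TOOLS.get(name)
    if func is None:
        return json.dumps({"error": f"unknown tool: {name}"})
    args = json.loads(arguments_json)
    return json.dumps(func(**args))

# Usage with a hand-written call (normally name and arguments
# come from tool_call.function.name and tool_call.function.arguments)
print(run_tool_call("get_weather", '{"location": "Tokyo"}'))
```

Inside `process_with_functions`, the if/elif block then collapses to a single `run_tool_call(function_name, tool_call.function.arguments)` line, and adding a tool means one new dictionary entry.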
## Practical Applications

### Code Generator

```python
def generate_code(description, language="python"):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": f"You are an expert {language} programmer. Generate clean, well-commented code."
            },
            {
                "role": "user",
                "content": f"Write {language} code to: {description}"
            }
        ],
        temperature=0.2  # Lower temperature for more precise code
    )
    return response.choices[0].message.content

# Usage
code = generate_code("create a REST API endpoint for user registration using Flask")
print(code)
```
### Content Summarizer

```python
def summarize(text, max_length=100):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": f"Summarize the following text in {max_length} words or less. Be concise and capture key points."
            },
            {
                "role": "user",
                "content": text
            }
        ],
        temperature=0.3
    )
    return response.choices[0].message.content

# Usage
article = """
Long article text here...
"""
summary = summarize(article, max_length=50)
```
### Text Translator

```python
def translate(text, target_language):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": f"You are a professional translator. Translate the text to {target_language}. Maintain the original meaning and tone."
            },
            {
                "role": "user",
                "content": text
            }
        ]
    )
    return response.choices[0].message.content

# Usage
english_text = "Hello, how are you today?"
spanish = translate(english_text, "Spanish")
print(spanish)  # "Hola, ¿cómo estás hoy?"
```
### Sentiment Analyzer

```python
def analyze_sentiment(text):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": """Analyze the sentiment of the text. Respond with JSON:
{"sentiment": "positive/negative/neutral", "confidence": 0.0-1.0, "explanation": "brief explanation"}"""
            },
            {
                "role": "user",
                "content": text
            }
        ],
        response_format={"type": "json_object"}
    )
    return json.loads(response.choices[0].message.content)

# Usage
result = analyze_sentiment("I absolutely love this product! Best purchase ever!")
print(result)
# {"sentiment": "positive", "confidence": 0.95, "explanation": "Strong positive language..."}
```
## Best Practices

### 1. Handle Errors

```python
import time

from openai import OpenAIError, RateLimitError

def safe_chat(message):
    try:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": message}]
        )
        return response.choices[0].message.content
    except RateLimitError:
        print("Rate limit exceeded. Waiting...")
        time.sleep(60)
        return safe_chat(message)  # Retry
    except OpenAIError as e:
        print(f"OpenAI API error: {e}")
        return None
```
### 2. Manage Costs

```python
def estimate_tokens(text):
    # Rough estimate: ~4 characters per token for English text
    return len(text) // 4

def chat_with_budget(message, max_tokens=500):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": message}],
        max_tokens=max_tokens  # Limit response length
    )
    # Log usage
    usage = response.usage
    print(f"Tokens used - Prompt: {usage.prompt_tokens}, Completion: {usage.completion_tokens}")
    return response.choices[0].message.content
```
### 3. Use Appropriate Temperature
| Temperature | Use Case |
|---|---|
| 0.0 - 0.3 | Factual, precise (code, analysis) |
| 0.4 - 0.7 | Balanced (general chat) |
| 0.8 - 1.0 | Creative (stories, brainstorming) |
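The table above can be encoded as a small helper so callers pick a sensible default per task. The category names and values below simply restate the table; they are illustrative conventions, not an API feature:

```python
# Task types mapped to temperatures from the table above (illustrative defaults)
TEMPERATURE_PRESETS = {
    "code": 0.2,      # factual, precise
    "analysis": 0.2,
    "chat": 0.7,      # balanced
    "creative": 0.9,  # stories, brainstorming
}

def temperature_for(task, default=0.7):
    """Return a reasonable temperature for a task type, falling back to balanced."""
    return TEMPERATURE_PRESETS.get(task, default)

print(temperature_for("code"))     # 0.2
print(temperature_for("unknown"))  # 0.7
```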
### 4. Effective System Prompts

```python
# Be specific about format and behavior
system_prompts = {
    "json_responder": "Always respond with valid JSON. No additional text.",
    "teacher": "You are a patient teacher. Explain concepts simply with examples.",
    "code_reviewer": "Review code for bugs, security issues, and best practices. Be specific.",
    "copywriter": "Write engaging, persuasive marketing copy. Use active voice."
}
```
## API Parameters Reference
| Parameter | Description | Default |
|---|---|---|
| model | Model to use | Required |
| messages | Conversation history | Required |
| temperature | Randomness (0-2) | 1 |
| max_tokens | Maximum response length | Varies |
| top_p | Nucleus sampling | 1 |
| n | Number of completions | 1 |
| stream | Stream responses | false |
| stop | Stop sequences | null |
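These parameters are passed as keyword arguments to `client.chat.completions.create`. One way to keep a project's defaults in one place is a settings dict unpacked with `**`; a sketch (the values here are illustrative, not recommendations):

```python
# Common parameters collected in one dict, then unpacked into the API call
settings = {
    "model": "gpt-3.5-turbo",
    "temperature": 0.3,  # low: factual answers
    "max_tokens": 500,   # cap response length (and cost)
    "top_p": 1,
    "n": 1,
    "stop": None,
}

# client.chat.completions.create(messages=[...], **settings)  # needs an API key
print(sorted(settings))  # parameter names, alphabetically
```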
## Summary
The OpenAI API enables powerful AI features:
- Chat completions for conversations
- Streaming for real-time responses
- Function calling for tool integration
- JSON mode for structured output
Start with simple prompts, iterate on your system messages, and build increasingly sophisticated AI applications.