
OpenAI ChatGPT API Tutorial: Build AI Apps with Python

Learn to use OpenAI's ChatGPT API in Python. Build chatbots, generate content, and integrate AI into your applications with practical examples.


Moshiour Rahman


Introduction to OpenAI API

OpenAI’s API provides access to powerful AI models like GPT-4 and GPT-3.5-turbo. You can build chatbots, generate content, summarize text, translate languages, and much more.

Getting Started

  1. Create an account at platform.openai.com
  2. Generate an API key
  3. Install the Python library
pip install openai python-dotenv

Setup

import os
from openai import OpenAI
from dotenv import load_dotenv

load_dotenv()

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

Create a .env file:

OPENAI_API_KEY=sk-your-api-key-here

Basic Chat Completion

def chat(message):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": message}
        ]
    )
    return response.choices[0].message.content

# Example usage
result = chat("What is Python?")
print(result)

Understanding Messages

| Role | Purpose |
|---|---|
| system | Sets the AI’s behavior and context |
| user | The user’s input/question |
| assistant | The AI’s previous responses |
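Putting the three roles together, a conversation history is just a list of dictionaries. A minimal illustration (the content strings here are made up for the example):

```python
# A conversation history is a plain list of role/content dicts.
# The system message comes first, then alternating user/assistant turns.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is a list comprehension?"},
    {"role": "assistant", "content": "A concise way to build lists in Python."},
    {"role": "user", "content": "Show me an example."},  # follow-up sees the turns above
]

roles = [m["role"] for m in messages]
print(roles)  # ['system', 'user', 'assistant', 'user']
```

Because the API is stateless, you send this whole list on every request; the model only "remembers" what is in it.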

Multi-Turn Conversations

class ChatBot:
    def __init__(self, system_prompt="You are a helpful assistant."):
        self.client = OpenAI()
        self.messages = [
            {"role": "system", "content": system_prompt}
        ]

    def chat(self, user_message):
        self.messages.append({"role": "user", "content": user_message})

        response = self.client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=self.messages,
            temperature=0.7,
            max_tokens=1000
        )

        assistant_message = response.choices[0].message.content
        self.messages.append({"role": "assistant", "content": assistant_message})

        return assistant_message

    def clear_history(self):
        self.messages = [self.messages[0]]  # Keep system message


# Usage
bot = ChatBot("You are a Python programming expert.")
print(bot.chat("How do I read a file in Python?"))
print(bot.chat("Can you show me how to handle errors?"))  # Remembers context
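One practical concern with the ChatBot above: `self.messages` grows on every turn, and a long history eventually exceeds the model's context window and costs more tokens. A minimal sketch of a trimming strategy, keeping the system prompt plus only the most recent turns (the helper name and cutoff are my own, not part of the OpenAI SDK):

```python
def trim_history(messages, max_turns=6):
    """Keep the system message plus the last `max_turns` user/assistant messages.

    A simple length-based strategy; a real app might trim by token count instead.
    """
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_turns:]

# Example: a history with 1 system message and 10 chat turns
history = [{"role": "system", "content": "You are helpful."}]
for i in range(5):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

trimmed = trim_history(history, max_turns=4)
print(len(trimmed))             # 5: the system message + the 4 most recent messages
print(trimmed[1]["content"])    # 'question 3', the oldest surviving turn
```

Calling `trim_history` before each request keeps the payload bounded while preserving the system prompt and recent context.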

Streaming Responses

For real-time output like ChatGPT:

def stream_chat(message):
    stream = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": message}
        ],
        stream=True
    )

    for chunk in stream:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)
    print()  # New line at end

# Usage
stream_chat("Explain machine learning in simple terms")

Function Calling

Let the AI call your functions:

import json

# Define your functions
def get_weather(location, unit="celsius"):
    """Get current weather for a location"""
    # In real app, call a weather API
    return {"location": location, "temperature": 22, "unit": unit, "condition": "sunny"}

def search_products(query, max_price=None):
    """Search for products"""
    return [
        {"name": f"Product matching '{query}'", "price": 29.99},
        {"name": f"Another {query} item", "price": 49.99}
    ]

# Define function schemas for OpenAI
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather in a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city name, e.g., San Francisco"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"]
                    }
                },
                "required": ["location"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "search_products",
            "description": "Search for products in the catalog",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query"
                    },
                    "max_price": {
                        "type": "number",
                        "description": "Maximum price filter"
                    }
                },
                "required": ["query"]
            }
        }
    }
]

def process_with_functions(user_message):
    messages = [{"role": "user", "content": user_message}]

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
        tools=tools,
        tool_choice="auto"
    )

    response_message = response.choices[0].message

    # Check if the model wants to call a function
    if response_message.tool_calls:
        messages.append(response_message)

        for tool_call in response_message.tool_calls:
            function_name = tool_call.function.name
            function_args = json.loads(tool_call.function.arguments)

            # Call the actual function
            if function_name == "get_weather":
                result = get_weather(**function_args)
            elif function_name == "search_products":
                result = search_products(**function_args)

            messages.append({
                "tool_call_id": tool_call.id,
                "role": "tool",
                "content": json.dumps(result)
            })

        # Get final response
        final_response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=messages
        )
        return final_response.choices[0].message.content

    return response_message.content

# Usage
print(process_with_functions("What's the weather in Tokyo?"))
print(process_with_functions("Find me some headphones under $50"))
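As the number of tools grows, the `if`/`elif` chain in `process_with_functions` gets unwieldy. A common refactor is a dispatch table mapping tool names to callables; here is a sketch using simplified stand-in functions (the registry name and stubs are mine, not part of the OpenAI SDK):

```python
import json

# Stand-in implementations (a real app would call external APIs)
def get_weather(location, unit="celsius"):
    return {"location": location, "temperature": 22, "unit": unit}

def search_products(query, max_price=None):
    return [{"name": f"Product matching '{query}'", "price": 29.99}]

# Map tool names (as declared in the schemas) to Python callables
TOOL_REGISTRY = {
    "get_weather": get_weather,
    "search_products": search_products,
}

def run_tool(name, arguments_json):
    """Look up and invoke a tool by name; arguments arrive as a JSON string."""
    func = TOOL_REGISTRY.get(name)
    if func is None:
        return json.dumps({"error": f"unknown tool: {name}"})
    return json.dumps(func(**json.loads(arguments_json)))

# Simulate the name and arguments the model would send in a tool call
print(run_tool("get_weather", '{"location": "Tokyo"}'))
```

Inside the tool-call loop you would then replace the `if`/`elif` branches with a single `run_tool(function_name, tool_call.function.arguments)` call, and adding a new tool becomes a one-line registry entry.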

Practical Applications

Code Generator

def generate_code(description, language="python"):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": f"You are an expert {language} programmer. Generate clean, well-commented code."
            },
            {
                "role": "user",
                "content": f"Write {language} code to: {description}"
            }
        ],
        temperature=0.2  # Lower temperature for more precise code
    )
    return response.choices[0].message.content

# Usage
code = generate_code("create a REST API endpoint for user registration using Flask")
print(code)

Content Summarizer

def summarize(text, max_length=100):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": f"Summarize the following text in {max_length} words or less. Be concise and capture key points."
            },
            {
                "role": "user",
                "content": text
            }
        ],
        temperature=0.3
    )
    return response.choices[0].message.content

# Usage
article = """
Long article text here...
"""
summary = summarize(article, max_length=50)

Text Translator

def translate(text, target_language):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": f"You are a professional translator. Translate the text to {target_language}. Maintain the original meaning and tone."
            },
            {
                "role": "user",
                "content": text
            }
        ]
    )
    return response.choices[0].message.content

# Usage
english_text = "Hello, how are you today?"
spanish = translate(english_text, "Spanish")
print(spanish)  # "Hola, ¿cómo estás hoy?"

Sentiment Analyzer

def analyze_sentiment(text):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": """Analyze the sentiment of the text. Respond with JSON:
                {"sentiment": "positive/negative/neutral", "confidence": 0.0-1.0, "explanation": "brief explanation"}"""
            },
            {
                "role": "user",
                "content": text
            }
        ],
        response_format={"type": "json_object"}
    )
    return json.loads(response.choices[0].message.content)

# Usage
result = analyze_sentiment("I absolutely love this product! Best purchase ever!")
print(result)
# {"sentiment": "positive", "confidence": 0.95, "explanation": "Strong positive language..."}

Best Practices

1. Handle Errors

import time

from openai import OpenAIError, RateLimitError

def safe_chat(message, retries=3):
    try:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": message}]
        )
        return response.choices[0].message.content
    except RateLimitError:
        if retries == 0:
            print("Rate limit exceeded too many times. Giving up.")
            return None
        print("Rate limit exceeded. Waiting...")
        time.sleep(60)
        return safe_chat(message, retries - 1)  # Retry with a bounded attempt count
    except OpenAIError as e:
        print(f"OpenAI API error: {e}")
        return None
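Fixed one-minute waits are crude; a common refinement is exponential backoff, doubling the delay on each failed attempt. A generic sketch that works with any callable (the helper name and the simulated failure are mine; in practice you would catch `RateLimitError` specifically rather than `Exception`):

```python
import time

def with_backoff(func, retries=3, base_delay=1.0):
    """Call func(); on failure wait base_delay, 2x, 4x... then give up.

    Generic sketch: a real implementation would catch RateLimitError
    specifically instead of Exception.
    """
    for attempt in range(retries + 1):
        try:
            return func()
        except Exception as exc:
            if attempt == retries:
                raise
            delay = base_delay * (2 ** attempt)
            print(f"Attempt {attempt + 1} failed ({exc}); retrying in {delay:.2f}s")
            time.sleep(delay)

# Demo with a function that fails twice before succeeding
calls = {"count": 0}
def flaky():
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("simulated rate limit")
    return "ok"

result = with_backoff(flaky, retries=3, base_delay=0.01)
print(result)  # "ok" after two retries
```

Wrapping the API call in a `lambda` (e.g. `with_backoff(lambda: safe_chat(msg))`) applies the same policy without changing the call site's logic.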

2. Manage Costs

def estimate_tokens(text):
    # Rough heuristic: ~4 characters per token for English text.
    # For exact counts, use OpenAI's tiktoken library.
    return len(text) // 4

def chat_with_budget(message, max_tokens=500):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": message}],
        max_tokens=max_tokens  # Limit response length
    )

    # Log usage
    usage = response.usage
    print(f"Tokens used - Prompt: {usage.prompt_tokens}, Completion: {usage.completion_tokens}")

    return response.choices[0].message.content
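Token counts translate directly into cost. The per-token rates below are illustrative placeholders, not current pricing (always check OpenAI's pricing page); the helper just shows the arithmetic of combining prompt and completion usage:

```python
def estimate_cost(prompt_tokens, completion_tokens,
                  prompt_price_per_1k=0.0005, completion_price_per_1k=0.0015):
    """Estimate request cost in USD.

    The default rates are placeholders for illustration only;
    look up current per-model pricing before relying on this.
    """
    return ((prompt_tokens / 1000) * prompt_price_per_1k
            + (completion_tokens / 1000) * completion_price_per_1k)

# e.g. a request with 1200 prompt tokens and a 400-token completion
cost = estimate_cost(1200, 400)
print(f"${cost:.6f}")  # $0.001200 at the placeholder rates
```

Feeding in `usage.prompt_tokens` and `usage.completion_tokens` from the response lets you log a running cost total per session.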

3. Use Appropriate Temperature

| Temperature | Use Case |
|---|---|
| 0.0 - 0.3 | Factual, precise (code, analysis) |
| 0.4 - 0.7 | Balanced (general chat) |
| 0.8 - 1.0 | Creative (stories, brainstorming) |

4. Effective System Prompts

# Be specific about format and behavior
system_prompts = {
    "json_responder": "Always respond with valid JSON. No additional text.",
    "teacher": "You are a patient teacher. Explain concepts simply with examples.",
    "code_reviewer": "Review code for bugs, security issues, and best practices. Be specific.",
    "copywriter": "Write engaging, persuasive marketing copy. Use active voice."
}

API Parameters Reference

| Parameter | Description | Default |
|---|---|---|
| model | Model to use | Required |
| messages | Conversation history | Required |
| temperature | Randomness (0-2) | 1 |
| max_tokens | Maximum response length | Varies |
| top_p | Nucleus sampling | 1 |
| n | Number of completions | 1 |
| stream | Stream responses | false |
| stop | Stop sequences | null |

Summary

The OpenAI API enables powerful AI features:

  1. Chat completions for conversations
  2. Streaming for real-time responses
  3. Function calling for tool integration
  4. JSON mode for structured output

Start with simple prompts, iterate on your system messages, and build increasingly sophisticated AI applications.

Moshiour Rahman

Software Architect & AI Engineer

Enterprise software architect with deep expertise in financial systems, distributed architecture, and AI-powered applications. Building large-scale systems at Fortune 500 companies. Specializing in LLM orchestration, multi-agent systems, and cloud-native solutions. I share battle-tested patterns from real enterprise projects.