Portkey provides a robust and secure gateway to integrate OpenAI’s APIs into your applications, including GPT-4o, o1, DALL·E, Whisper, and more. With Portkey, take advantage of features like fast AI gateway access, observability, prompt management, and more, while securely managing API keys through Model Catalog.

All Models

Full support for GPT-4o, o1, GPT-4, GPT-3.5, and all OpenAI models

All Endpoints

Chat, completions, embeddings, audio, images, and more fully supported

Multi-SDK Support

Use with OpenAI SDK, Portkey SDK, or popular frameworks like LangChain

Quick Start

Get OpenAI working in 3 steps:
from portkey_ai import Portkey

# 1. Install: pip install portkey-ai
# 2. Add @openai provider in model catalog
# 3. Use it:

portkey = Portkey(api_key="PORTKEY_API_KEY")

response = portkey.chat.completions.create(
    model="@openai/gpt-4o",
    messages=[{"role": "user", "content": "Say this is a test"}]
)

print(response.choices[0].message.content)
Tip: You can also set provider="@openai" in Portkey() and use just model="gpt-4o" in the request.

Legacy support: The virtual_key parameter still works for backwards compatibility.

Add Provider in Model Catalog

  1. Go to Model Catalog → Add Provider
  2. Select OpenAI
  3. Choose existing credentials or create new by entering your OpenAI API key
  4. (Optional) Add your OpenAI Organization ID and Project ID for better cost tracking
  5. Name your provider (e.g., openai-prod)

Complete Setup Guide →

See all setup options, code examples, and detailed instructions

Basic Usage

Streaming

Stream responses for real-time output in your applications:
response = portkey.chat.completions.create(
    model="@openai/gpt-4o",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
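If you need the assembled text rather than incremental prints, the chunks can be folded into one string. Here is a minimal sketch using stand-in chunk objects (a live API key is needed for real streaming, so the chunks below only mimic the shape used above):

```python
from types import SimpleNamespace

def collect_stream(chunks):
    """Accumulate the delta.content fields of streamed chat chunks into one string."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta
        if delta.content:  # final chunks may carry content=None
            parts.append(delta.content)
    return "".join(parts)

# Stand-in chunks mimicking the streaming response shape
def make_chunk(text):
    return SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=text))])

chunks = [make_chunk("Once "), make_chunk("upon "), make_chunk("a time."), make_chunk(None)]
print(collect_stream(chunks))  # → Once upon a time.
```

The same helper works on a real stream, since it only touches the `choices[0].delta.content` path shown above.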

Advanced Features

Responses API

OpenAI’s Responses API combines the best of both Chat Completions and Assistants APIs. Portkey fully supports this API with both the Portkey SDK and OpenAI SDK.
from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY")

response = portkey.responses.create(
    model="@openai/gpt-4.1",
    input="Tell me a three sentence bedtime story about a unicorn."
)

print(response)
The Responses API provides a more flexible foundation for building agentic applications with built-in tools that execute automatically.

Remote MCP support on Responses API

Portkey supports OpenAI’s Remote MCP tool on the Responses API. Learn More

Streaming with Responses API

response = portkey.responses.create(
    model="@openai/gpt-4.1",
    instructions="You are a helpful assistant.",
    input="Hello!",
    stream=True
)

for event in response:
    print(event)
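The Responses event stream mixes lifecycle events with text deltas, so in practice you usually filter for the text. A small helper, sketched with stand-in event objects; the `response.output_text.delta` event type is taken from OpenAI's Responses streaming documentation and should be verified against your live stream:

```python
from types import SimpleNamespace

def stream_text(events):
    """Join the text deltas out of a Responses-style event stream."""
    return "".join(
        e.delta for e in events if e.type == "response.output_text.delta"
    )

# Stand-in events mimicking a Responses stream
events = [
    SimpleNamespace(type="response.created"),
    SimpleNamespace(type="response.output_text.delta", delta="Hel"),
    SimpleNamespace(type="response.output_text.delta", delta="lo!"),
    SimpleNamespace(type="response.completed"),
]
print(stream_text(events))  # → Hello!
```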

Realtime API

Portkey supports OpenAI’s Realtime API with a seamless integration. This allows you to use Portkey’s logging, cost tracking, and guardrail features while using the Realtime API.

Realtime API

Using Vision Models

Portkey’s multimodal Gateway fully supports OpenAI vision models as well. See this guide for more info:

Vision with the Responses API

The Responses API also processes images alongside text:
response = portkey.responses.create(
    model="@openai/gpt-4.1",
    input=[
        {
            "role": "user",
            "content": [
                { "type": "input_text", "text": "What is in this image?" },
                {
                    "type": "input_image",
                    "image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
                }
            ]
        }
    ]
)

print(response)

Function Calling

Function calls work the same way through your OpenAI or Portkey SDK operations. The resulting logs appear in Portkey, highlighting the functions invoked and their outputs. You can also define functions within your prompts and invoke them via the portkey.prompts.completions.create method.

Function Calling with the Responses API

The Responses API also supports function calling with the same powerful capabilities:
tools = [
    {
        "type": "function",
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA"
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location", "unit"]
        }
    }
]

response = portkey.responses.create(
    model="@openai/gpt-4.1",
    tools=tools,
    input="What is the weather like in Boston today?",
    tool_choice="auto"
)

print(response)
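When the model decides to call the tool, the response carries the function name and a JSON string of arguments; your code parses the arguments and invokes the matching function. A minimal dispatch sketch (the weather function here is a hypothetical stub, not a real service):

```python
import json

def dispatch_tool_call(name, arguments_json, registry):
    """Look up a tool by name and invoke it with parsed JSON arguments."""
    args = json.loads(arguments_json)
    return registry[name](**args)

def get_current_weather(location, unit):
    # Hypothetical stub — a real implementation would query a weather service.
    return {"location": location, "temperature": 22, "unit": unit}

registry = {"get_current_weather": get_current_weather}

# Arguments as they would arrive from a function_call output item
result = dispatch_tool_call(
    "get_current_weather",
    '{"location": "Boston, MA", "unit": "celsius"}',
    registry,
)
print(result)  # → {'location': 'Boston, MA', 'temperature': 22, 'unit': 'celsius'}
```

You would then send the result back to the model in a follow-up request so it can produce the final answer.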

Fine-Tuning

Please refer to our fine-tuning guides to take advantage of Portkey’s advanced continuous fine-tuning capabilities.

Image Generation

Portkey supports multiple modalities for OpenAI. Make image generation requests through Portkey’s AI Gateway the same way you make completion calls.
# Use the Portkey client defined above

image = portkey.images.generate(
    model="dall-e-3",
    prompt="Lucy in the sky with diamonds",
    size="1024x1024"
)
Portkey’s fast AI gateway captures information about the request on your Portkey Dashboard. On the logs screen, you can inspect both the request and the response for this call.

Log view for an image generation request on OpenAI

More information on image generation is available in the API Reference.

Video Generation with Sora

Portkey supports OpenAI’s Sora video generation models through the AI Gateway. Generate videos using the Portkey Python SDK:
from portkey_ai import Portkey

client = Portkey(
    api_key="PORTKEY_API_KEY"
)

video = client.videos.create(
    model="@openai/sora-2",
    prompt="A video of a cool cat on a motorcycle in the night",
)

print("Video generation started:", video)
Pricing for video generation requests will be visible on your Portkey dashboard, allowing you to track costs alongside your other API usage.

Audio - Transcription, Translation, and Text-to-Speech

Portkey’s multimodal Gateway also supports the audio methods on the OpenAI API. Check out the below guides for more info:

Integrated Tools with Responses API

Web Search Tool

Web search delivers accurate and clearly-cited answers from the web, using the same tool as search in ChatGPT:
response = portkey.responses.create(
    model="@openai/gpt-4.1",
    tools=[{
        "type": "web_search_preview",
        "search_context_size": "medium", # Options: "high", "medium" (default), or "low"
        "user_location": {  # Optional - for localized results
            "type": "approximate",
            "country": "US",
            "city": "San Francisco",
            "region": "California"
        }
    }],
    input="What was a positive news story from today?"
)

print(response)
Options for search_context_size:
  • high: Most comprehensive context, higher cost, slower response
  • medium: Balanced context, cost, and latency (default)
  • low: Minimal context, lowest cost, fastest response
Responses include citations for URLs found in search results, with clickable references.
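To surface those citations programmatically, you can walk the response output for url_citation annotations. The field names below follow the documented annotation shape but should be treated as an assumption to verify against a live response; the sample dict is a stand-in:

```python
def extract_citations(response_output):
    """Collect (title, url) pairs from url_citation annotations in a Responses output list."""
    citations = []
    for item in response_output:
        for part in item.get("content", []):
            for ann in part.get("annotations", []):
                if ann.get("type") == "url_citation":
                    citations.append((ann["title"], ann["url"]))
    return citations

# Stand-in output mimicking a web-search response
sample = [{
    "type": "message",
    "content": [{
        "type": "output_text",
        "text": "A positive story from today ...",
        "annotations": [{
            "type": "url_citation",
            "title": "Example story",
            "url": "https://example.com/story",
        }],
    }],
}]
print(extract_citations(sample))  # → [('Example story', 'https://example.com/story')]
```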

File Search Tool

File search enables quick retrieval from your knowledge base across multiple file types:
response = portkey.responses.create(
    model="@openai/gpt-4.1",
    tools=[{
        "type": "file_search",
        "vector_store_ids": ["vs_1234567890"],
        "max_num_results": 20,
        "filters": {  # Optional - filter by metadata
            "type": "eq",
            "key": "document_type",
            "value": "report"
        }
    }],
    input="What are the attributes of an ancient brown dragon?"
)

print(response)
This tool requires you to first create a vector store and upload files to it. Supports various file formats including PDFs, DOCXs, TXT, and more. Results include file citations in the response.
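Metadata filters can also be composed into compound conditions. A small sketch of building an "and" filter from "eq" clauses; the compound filter schema here is an assumption to check against the file search documentation:

```python
def eq_filter(key, value):
    """Single equality clause, matching the filter shape in the example above."""
    return {"type": "eq", "key": key, "value": value}

def and_filter(*filters):
    """Combine clauses into a compound filter (assumed schema — verify against the docs)."""
    return {"type": "and", "filters": list(filters)}

f = and_filter(
    eq_filter("document_type", "report"),
    eq_filter("year", 2024),
)
print(f)
```

The resulting dict can be passed as the `filters` value in the `file_search` tool definition.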

Enhanced Reasoning

Control the depth of model reasoning for more comprehensive analysis:
response = portkey.responses.create(
    model="@openai/o3-mini",
    input="How much wood would a woodchuck chuck?",
    reasoning={
        "effort": "high"  # Options: "high", "medium", or "low"
    }
)

print(response)

Computer Use Assistant

Portkey also supports the Computer Use Assistant (CUA) tool, which helps agents control computers or virtual machines through screenshots and actions. This feature is available for select developers as a research preview on premium tiers.

Learn More about Computer use tool here

Managing OpenAI Projects & Organizations in Portkey

When integrating OpenAI with Portkey, specify your OpenAI organization and project IDs along with your API key. This is particularly useful if you belong to multiple organizations or are accessing projects through a legacy user API key. Specifying the organization and project IDs helps you maintain better control over your access rules, usage, and costs. Add your Org & Project details using:
  1. Adding in Model Catalog (Recommended)
  2. Defining a Gateway Config
  3. Passing Details in a Request
Let’s explore each method in more detail.

Using Model Catalog

When adding OpenAI from the Model Catalog, Portkey automatically displays optional fields for the organization ID and project ID alongside the API key field. Get your OpenAI API key from here, then add it to Portkey along with your org/project details.
Portkey takes budget management a step further than OpenAI. While OpenAI allows setting budget limits per project, Portkey enables you to set budget limits for each provider you create. For more information on budget limits, refer to this documentation:

Using the Gateway Config

You can also specify the organization and project details in the gateway config, either at the root level or within a specific target.
{
	"provider": "@openai",
	"openai_organization": "org-xxxxxx",
	"openai_project": "proj_xxxxxxxx"
}
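When the config routes through targets, the same fields can sit inside an individual target instead of at the root. A sketch with placeholder IDs:

```json
{
  "strategy": { "mode": "single" },
  "targets": [
    {
      "provider": "@openai",
      "openai_organization": "org-xxxxxx",
      "openai_project": "proj_xxxxxxxx"
    }
  ]
}
```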

While Making a Request

You can also pass your organization and project details directly when making a request using curl, the OpenAI SDK, or the Portkey SDK.
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL

client = OpenAI(
    api_key="PORTKEY_API_KEY",
    organization="org-xxxxxxxxxx",
    project="proj_xxxxxxxxx",
    base_url=PORTKEY_GATEWAY_URL
)

chat_complete = client.chat.completions.create(
    model="@openai/gpt-4o",
    messages=[{"role": "user", "content": "Say this is a test"}],
)

print(chat_complete.choices[0].message.content)

Frequently Asked Questions

General FAQs

You can sign up for OpenAI here and grab your scoped API key here.
The OpenAI API can be used by signing up on the OpenAI platform. You can find the pricing info here.
You can find the current rate limits imposed by OpenAI here. For more tips, check out this guide.