OpenAI Provider

Connect to the OpenAI API (api.openai.com) for chat completions, embeddings, image generation, and speech.

Quick Start

```csharp
builder.Services
    .AddCoreAIServices()
    .AddCoreAIOrchestration()
    .AddCoreAIOpenAI();
```

Services Registered

| Service | Implementation | Lifetime |
|---|---|---|
| `IAIClientProvider` | `OpenAIClientProvider` | Scoped |
| `IAICompletionClient` | `OpenAICompletionClient` | Scoped |
| Connection source | — | Scoped |
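
These scoped services can be injected anywhere in the request pipeline. A minimal sketch of a consumer, assuming standard ASP.NET Core constructor injection (the `ChatService` class is hypothetical):

```csharp
// Hypothetical consumer class: the scoped IAICompletionClient
// (backed by OpenAICompletionClient) is supplied by the DI container.
public sealed class ChatService
{
    private readonly IAICompletionClient _completionClient;

    public ChatService(IAICompletionClient completionClient)
    {
        _completionClient = completionClient;
    }
}
```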

Configuration

Connection Setup

Provide an API key through your connection source:

```csharp
builder.Services.AddCoreAIConnectionSource("OpenAI", options =>
{
    options.Connections.Add(new AIProviderConnectionEntry
    {
        Name = "my-openai",
        ProviderName = "OpenAI",
        // Set API key and optional endpoint
    });
});
```

Constants

| Constant | Value |
|---|---|
| `OpenAIConstants.ProviderName` | `"OpenAI"` |
| `OpenAIConstants.ClientName` | `"OpenAI"` |
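
These constants can stand in for the literal "OpenAI" string when registering connections, avoiding typos. A hedged sketch based on the connection setup above:

```csharp
builder.Services.AddCoreAIConnectionSource(OpenAIConstants.ProviderName, options =>
{
    options.Connections.Add(new AIProviderConnectionEntry
    {
        Name = "my-openai",
        // Resolves to "OpenAI", matching the provider registration.
        ProviderName = OpenAIConstants.ProviderName,
    });
});
```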

Capabilities

| Capability | Supported |
|---|---|
| Chat completions | ✅ |
| Streaming | ✅ |
| Embeddings | ✅ |
| Image generation | ✅ |
| Speech-to-text | ✅ |
| Text-to-speech | ✅ |

Configuration Example

A full appsettings.json configuration for OpenAI:

```json
{
  "CrestApps": {
    "AI": {
      "Providers": {
        "OpenAI": {
          "ApiKey": "sk-..."
        }
      }
    }
  }
}
```

Or register connections programmatically:

```csharp
builder.Services.AddCoreAIConnectionSource("OpenAI", options =>
{
    options.Connections.Add(new AIProviderConnectionEntry
    {
        Name = "my-openai",
        ProviderName = "OpenAI",
        // API key is read from configuration or set directly
    });
});
```
:::tip

Never commit API keys to source control. Use environment variables, user secrets, or a vault provider:

```shell
dotnet user-secrets set "CrestApps:AI:Providers:OpenAI:ApiKey" "sk-..."
```

:::

Available Models

| Model | Type | Context Window | Best For |
|---|---|---|---|
| `gpt-4.1` | Chat | 1M tokens | Complex reasoning, coding, instruction following |
| `gpt-4.1-mini` | Chat | 1M tokens | Balanced performance and cost |
| `gpt-4.1-nano` | Chat | 1M tokens | Fast, cost-effective for simple tasks |
| `o4-mini` | Reasoning | 200K tokens | STEM, math, coding with chain-of-thought |
| `gpt-4o` | Chat | 128K tokens | Multimodal (text + vision), general purpose |
| `gpt-4o-mini` | Chat | 128K tokens | Budget-friendly multimodal |
| `text-embedding-3-small` | Embedding | 8K tokens | Cost-effective embeddings |
| `text-embedding-3-large` | Embedding | 8K tokens | Higher-quality embeddings |
| `dall-e-3` | Image | — | Image generation |
| `whisper-1` | Speech-to-text | — | Audio transcription |
| `tts-1` / `tts-1-hd` | Text-to-speech | — | Voice synthesis |
:::info

Model availability and capabilities change frequently. Check the OpenAI models documentation for the latest information.

:::

Streaming

The OpenAI provider fully supports streaming responses. When streaming is enabled, tokens are sent to the client as they are generated rather than waiting for the complete response:

```csharp
// Streaming is handled automatically by the orchestrator when the
// chat interaction is configured for streaming (the default for real-time chat).
// No additional configuration is needed.
```

Streaming is the default behavior for the chat interactions module. The `OpenAICompletionClient` uses the `IChatClient.GetStreamingResponseAsync()` method from the Microsoft.Extensions.AI abstraction.
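
For illustration, consuming the stream directly through the Microsoft.Extensions.AI abstraction looks roughly like this (a sketch; how you obtain the `IChatClient` instance depends on your setup):

```csharp
using Microsoft.Extensions.AI;

async Task StreamReplyAsync(IChatClient chatClient)
{
    // Updates arrive incrementally as the model generates tokens.
    await foreach (ChatResponseUpdate update in
        chatClient.GetStreamingResponseAsync("Explain streaming in one sentence."))
    {
        Console.Write(update.Text);
    }
}
```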

Function Calling

OpenAI models support function calling (tool use), which is the foundation for the Custom AI Tools system. When tools are registered and assigned to a profile, the OpenAI provider automatically:

  1. Serializes tool definitions as JSON Schema in the request
  2. Parses tool call responses from the model
  3. Invokes the matching `AITool` via the orchestrator
  4. Sends tool results back to the model for the final response

All GPT-4 and newer models support parallel function calling (multiple tools invoked in a single turn).
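
The four steps above map onto the Microsoft.Extensions.AI function-calling primitives. A minimal sketch, where the weather function and its wiring are hypothetical examples rather than part of this provider's API:

```csharp
using System.ComponentModel;
using Microsoft.Extensions.AI;

// A tool created from a plain .NET method; its signature and
// description are serialized as JSON Schema and sent to the model.
[Description("Gets the current weather for a city.")]
static string GetWeather(string city) => $"Sunny in {city}";

AITool weatherTool = AIFunctionFactory.Create(GetWeather);

// Exposing the tool to the model; when the model emits a tool call,
// the orchestrator invokes GetWeather and feeds the result back.
var options = new ChatOptions { Tools = [weatherTool] };
```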