Introduction
A server implementation allows you to emit AG-UI events directly from your agent or server. This approach is ideal when you’re building a new agent from scratch or want a dedicated service for your agent capabilities.

When to use a server implementation
Server implementations allow you to directly emit AG-UI events from your agent or server. If you are not using an agent framework, or haven’t created a protocol for your agent framework yet, this is the best way to get started. Server implementations are also great for:

- Building a new agent framework from scratch
- Maximum control over how and what events are emitted
- Exposing your agent as a standalone API
What you’ll build
In this guide, we’ll create a standalone HTTP server that:

- Accepts AG-UI protocol requests
- Connects to OpenAI’s GPT-4o model
- Streams responses back as AG-UI events
- Handles tool calls and state management
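Concretely, the request body such a server accepts looks roughly like the sketch below. The field set is a simplified illustration of AG-UI’s run input, not the authoritative protocol type, so treat the exact shapes as assumptions:

```typescript
// Simplified sketch of the request body an AG-UI server accepts.
// Field names follow the protocol's run input; payloads are illustrative.
interface Message {
  id: string;
  role: "user" | "assistant" | "system" | "tool";
  content?: string;
}

interface RunAgentInput {
  threadId: string;    // conversation thread this run belongs to
  runId: string;       // unique id for this run
  messages: Message[]; // chat history to forward to the model
  tools: unknown[];    // tool definitions the agent may call
  state: unknown;      // shared state synchronized with the client
}

const example: RunAgentInput = {
  threadId: "thread-1",
  runId: "run-1",
  messages: [{ id: "msg-1", role: "user", content: "Hello!" }],
  tools: [],
  state: {},
};
```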
Prerequisites
Before we begin, make sure you have:

1. Provide your OpenAI API key
First, let’s set up your API key:

2. Install build utilities
Install the following tools:

Step 1 – Scaffold your server
Start by cloning the repo and navigating to the TypeScript SDK:

Update metadata
Open integrations/openai-server/package.json and update the fields to match your new folder:
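For example, the updated fields might look like the fragment below. The package name comes from this guide; the version and description values are placeholders:

```json
{
  "name": "@ag-ui/openai-server",
  "version": "0.0.1",
  "description": "AG-UI server integration for OpenAI"
}
```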
Then update the following files:

- integrations/openai-server/src/index.ts
- apps/dojo/src/menu.ts
- apps/dojo/src/agents.ts
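To make the stub concrete, here is a hypothetical minimal index.ts: a plain Node HTTP server that streams a hardcoded "Hello world!" response as server-sent events. It deliberately avoids the AG-UI SDK types, which the real integration would use, so the event shapes and ids below are simplified illustrations:

```typescript
// Hypothetical stub for src/index.ts: every request gets a hardcoded
// "Hello world!" answer streamed as AG-UI events over SSE.
import http from "node:http";

// Encode one AG-UI event as a server-sent-events frame.
function sseEncode(event: Record<string, unknown>): string {
  return `data: ${JSON.stringify(event)}\n\n`;
}

const server = http.createServer((_req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
  });
  // Lifecycle start, one text chunk, lifecycle finish.
  res.write(sseEncode({ type: "RUN_STARTED", threadId: "t-1", runId: "r-1" }));
  res.write(
    sseEncode({ type: "TEXT_MESSAGE_CHUNK", messageId: "msg-1", delta: "Hello world!" })
  );
  res.write(sseEncode({ type: "RUN_FINISHED", threadId: "t-1", runId: "r-1" }));
  res.end();
});

// Port 0 picks a free port; use a fixed port (e.g. 8000) in practice.
server.listen(0, () => console.log("Stub AG-UI server listening"));
```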
Step 2 – Add package to dojo dependencies
Open apps/dojo/package.json and add the package @ag-ui/openai-server:
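The new entry might look like this; the exact version specifier depends on how the monorepo links local packages, and workspace:* is shown here only as one common convention:

```json
{
  "dependencies": {
    "@ag-ui/openai-server": "workspace:*"
  }
}
```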
Step 3 – Start the dojo and server
Now let’s see your work in action. First, start your server:

Step 4 – Bridge OpenAI with AG-UI
Let’s transform our stub into a real server that streams completions from OpenAI.

Install the OpenAI SDK
First, we need the OpenAI SDK:

AG-UI recap
An AG-UI server implements the endpoint and emits a sequence of events to signal:

- Lifecycle events (RUN_STARTED, RUN_FINISHED, RUN_ERROR)
- Content events (TEXT_MESSAGE_*, TOOL_CALL_*, and more)
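For a simple text-only run, that sequence can be sketched as plain event objects. The payloads are simplified, and the TEXT_MESSAGE_START/CONTENT/END breakdown is one common shape of the TEXT_MESSAGE_* family; the SDK defines the full types:

```typescript
// Sketch of the event sequence a successful text-only run emits.
type AgUiEvent = { type: string; [key: string]: unknown };

function helloRunEvents(threadId: string, runId: string): AgUiEvent[] {
  const messageId = "msg-1"; // illustrative id
  return [
    { type: "RUN_STARTED", threadId, runId },
    { type: "TEXT_MESSAGE_START", messageId, role: "assistant" },
    { type: "TEXT_MESSAGE_CONTENT", messageId, delta: "Hello world!" },
    { type: "TEXT_MESSAGE_END", messageId },
    { type: "RUN_FINISHED", threadId, runId },
  ];
}
```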
Implement the streaming server
Now we’ll transform our stub server into a real OpenAI integration. The key difference is that instead of sending a hardcoded “Hello world!” message, we’ll connect to OpenAI’s API and stream the response back through AG-UI events. The implementation follows the same event flow as our stub, but adds the OpenAI client initialization and replaces the mock response with actual API calls. We’ll also handle tool calls when they appear in the response, making our server fully capable of using functions when needed.

What happens under the hood?
Let’s break down what your server is doing:

- Setup – We create an OpenAI client and emit RUN_STARTED
- Request – We send the user’s messages to chat.completions with stream: true
- Streaming – We forward each chunk as either TEXT_MESSAGE_CHUNK or TOOL_CALL_CHUNK
- Finish – We emit RUN_FINISHED (or RUN_ERROR if something goes wrong)
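The streaming step above can be sketched as a small mapping function: each streamed OpenAI delta becomes a TEXT_MESSAGE_CHUNK or TOOL_CALL_CHUNK event. The Delta type below mirrors the OpenAI SDK’s chat-completion chunk delta in simplified form, and the event payload fields (toolCallId, toolCallName) are illustrative assumptions:

```typescript
// Map one streamed OpenAI delta to zero or more AG-UI events.
type Delta = {
  content?: string | null;
  tool_calls?: {
    index: number;
    id?: string;
    function?: { name?: string; arguments?: string };
  }[];
};
type AgUiEvent = { type: string; [key: string]: unknown };

function deltaToEvents(messageId: string, delta: Delta): AgUiEvent[] {
  const events: AgUiEvent[] = [];
  // Plain text tokens become text-message chunks.
  if (delta.content) {
    events.push({ type: "TEXT_MESSAGE_CHUNK", messageId, delta: delta.content });
  }
  // Tool-call fragments become tool-call chunks.
  for (const call of delta.tool_calls ?? []) {
    events.push({
      type: "TOOL_CALL_CHUNK",
      toolCallId: call.id,
      toolCallName: call.function?.name,
      delta: call.function?.arguments,
    });
  }
  return events;
}
```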
Step 5 – Chat with your server
Reload the dojo page and start typing. You’ll see GPT-4o streaming its answer in real time, word by word. Tools like CopilotKit already understand AG-UI and provide plug-and-play React components: point them at your server endpoint and you get a full-featured chat UI out of the box.

Share your integration
Did you build a custom server that others could reuse? We welcome community contributions!

- Fork the AG-UI repository
- Add your package under typescript-sdk/integrations/. See Contributing for more details and naming conventions.
- Open a pull request describing your use case and design decisions
Conclusion
You now have a fully functional AG-UI server for OpenAI and a local playground to test it. From here you can:

- Add tool calls to enhance your server
- Deploy your server to production
- Bring AG-UI to any other model or service