GitHub Copilot SDK
Embed Copilot's agentic workflows in any application using Python, TypeScript, Go, or .NET.
Overview
The GitHub Copilot SDK exposes the same engine behind Copilot CLI: a production-tested agent runtime you can invoke programmatically. There is no need to build your own orchestration; you define agent behavior, and Copilot handles planning, tool invocation, file edits, and more.
Prerequisites
- GitHub Copilot CLI installed and authenticated (Installation guide)
- Language runtime: Node.js 18+, Python 3.8+, Go 1.21+, or .NET 8.0+
Verify CLI: copilot --version
Installation
Node.js/TypeScript
mkdir copilot-demo && cd copilot-demo
npm init -y --init-type module
npm install @github/copilot-sdk tsx
Python
pip install github-copilot-sdk
Go
mkdir copilot-demo && cd copilot-demo
go mod init copilot-demo
go get github.com/github/copilot-sdk/go
.NET
dotnet new console -n CopilotDemo && cd CopilotDemo
dotnet add package GitHub.Copilot.SDK
Quick Start
TypeScript
import { CopilotClient } from "@github/copilot-sdk";
const client = new CopilotClient();
const session = await client.createSession({ model: "gpt-4.1" });
const response = await session.sendAndWait({ prompt: "What is 2 + 2?" });
console.log(response?.data.content);
await client.stop();
process.exit(0);
Run: npx tsx index.ts
Python
import asyncio
from copilot import CopilotClient
async def main():
    client = CopilotClient()
    await client.start()
    session = await client.create_session({"model": "gpt-4.1"})
    response = await session.send_and_wait({"prompt": "What is 2 + 2?"})
    print(response.data.content)
    await client.stop()

asyncio.run(main())
Go
package main

import (
    "fmt"
    "log"

    copilot "github.com/github/copilot-sdk/go"
)

func main() {
    client := copilot.NewClient(nil)
    if err := client.Start(); err != nil {
        log.Fatal(err)
    }
    defer client.Stop()
    session, err := client.CreateSession(&copilot.SessionConfig{Model: "gpt-4.1"})
    if err != nil {
        log.Fatal(err)
    }
    response, err := session.SendAndWait(copilot.MessageOptions{Prompt: "What is 2 + 2?"}, 0)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(*response.Data.Content)
}
.NET (C#)
using GitHub.Copilot.SDK;
await using var client = new CopilotClient();
await using var session = await client.CreateSessionAsync(new SessionConfig { Model = "gpt-4.1" });
var response = await session.SendAndWaitAsync(new MessageOptions { Prompt = "What is 2 + 2?" });
Console.WriteLine(response?.Data.Content);
Run: dotnet run
Streaming Responses
Enable real-time output for better UX:
TypeScript
import { CopilotClient, SessionEvent } from "@github/copilot-sdk";
const client = new CopilotClient();
const session = await client.createSession({
  model: "gpt-4.1",
  streaming: true,
});
session.on((event: SessionEvent) => {
  if (event.type === "assistant.message_delta") {
    process.stdout.write(event.data.deltaContent);
  }
  if (event.type === "session.idle") {
    console.log(); // New line when done
  }
});
await session.sendAndWait({ prompt: "Tell me a short joke" });
await client.stop();
process.exit(0);
Python
import asyncio
import sys
from copilot import CopilotClient
from copilot.generated.session_events import SessionEventType
async def main():
    client = CopilotClient()
    await client.start()
    session = await client.create_session({
        "model": "gpt-4.1",
        "streaming": True,
    })

    def handle_event(event):
        if event.type == SessionEventType.ASSISTANT_MESSAGE_DELTA:
            sys.stdout.write(event.data.delta_content)
            sys.stdout.flush()
        if event.type == SessionEventType.SESSION_IDLE:
            print()

    session.on(handle_event)
    await session.send_and_wait({"prompt": "Tell me a short joke"})
    await client.stop()

asyncio.run(main())
Go
session, err := client.CreateSession(&copilot.SessionConfig{
    Model:     "gpt-4.1",
    Streaming: true,
})
if err != nil {
    log.Fatal(err)
}
session.On(func(event copilot.SessionEvent) {
    if event.Type == "assistant.message_delta" {
        fmt.Print(*event.Data.DeltaContent)
    }
    if event.Type == "session.idle" {
        fmt.Println()
    }
})
_, err = session.SendAndWait(copilot.MessageOptions{Prompt: "Tell me a short joke"}, 0)
if err != nil {
    log.Fatal(err)
}
.NET
await using var session = await client.CreateSessionAsync(new SessionConfig
{
    Model = "gpt-4.1",
    Streaming = true,
});
session.On(ev =>
{
    if (ev is AssistantMessageDeltaEvent deltaEvent)
        Console.Write(deltaEvent.Data.DeltaContent);
    if (ev is SessionIdleEvent)
        Console.WriteLine();
});
await session.SendAndWaitAsync(new MessageOptions { Prompt = "Tell me a short joke" });
Custom Tools
Define tools that Copilot can invoke during reasoning. When you define a tool, you tell Copilot:
- What the tool does (description)
- What parameters it needs (schema)
- What code to run (handler)
TypeScript (JSON Schema)
import { CopilotClient, defineTool, SessionEvent } from "@github/copilot-sdk";
const getWeather = defineTool("get_weather", {
  description: "Get the current weather for a city",
  parameters: {
    type: "object",
    properties: {
      city: { type: "string", description: "The city name" },
    },
    required: ["city"],
  },
  handler: async (args: { city: string }) => {
    const { city } = args;
    // In a real app, call a weather API here
    const conditions = ["sunny", "cloudy", "rainy", "partly cloudy"];
    const temp = Math.floor(Math.random() * 30) + 50;
    const condition = conditions[Math.floor(Math.random() * conditions.length)];
    return { city, temperature: `${temp}°F`, condition };
  },
});
const client = new CopilotClient();
const session = await client.createSession({
  model: "gpt-4.1",
  streaming: true,
  tools: [getWeather],
});
session.on((event: SessionEvent) => {
  if (event.type === "assistant.message_delta") {
    process.stdout.write(event.data.deltaContent);
  }
});
await session.sendAndWait({
  prompt: "What's the weather like in Seattle and Tokyo?",
});
await client.stop();
process.exit(0);
Python (Pydantic)
import asyncio
import random
import sys
from copilot import CopilotClient
from copilot.tools import define_tool
from copilot.generated.session_events import SessionEventType
from pydantic import BaseModel, Field
class GetWeatherParams(BaseModel):
    city: str = Field(description="The name of the city to get weather for")

@define_tool(description="Get the current weather for a city")
async def get_weather(params: GetWeatherParams) -> dict:
    city = params.city
    conditions = ["sunny", "cloudy", "rainy", "partly cloudy"]
    temp = random.randint(50, 80)
    condition = random.choice(conditions)
    return {"city": city, "temperature": f"{temp}°F", "condition": condition}

async def main():
    client = CopilotClient()
    await client.start()
    session = await client.create_session({
        "model": "gpt-4.1",
        "streaming": True,
        "tools": [get_weather],
    })

    def handle_event(event):
        if event.type == SessionEventType.ASSISTANT_MESSAGE_DELTA:
            sys.stdout.write(event.data.delta_content)
            sys.stdout.flush()

    session.on(handle_event)
    await session.send_and_wait({
        "prompt": "What's the weather like in Seattle and Tokyo?"
    })
    await client.stop()

asyncio.run(main())
Go
type WeatherParams struct {
    City string `json:"city" jsonschema:"The city name"`
}

type WeatherResult struct {
    City        string `json:"city"`
    Temperature string `json:"temperature"`
    Condition   string `json:"condition"`
}

getWeather := copilot.DefineTool(
    "get_weather",
    "Get the current weather for a city",
    func(params WeatherParams, inv copilot.ToolInvocation) (WeatherResult, error) {
        conditions := []string{"sunny", "cloudy", "rainy", "partly cloudy"}
        temp := rand.Intn(30) + 50
        condition := conditions[rand.Intn(len(conditions))]
        return WeatherResult{
            City:        params.City,
            Temperature: fmt.Sprintf("%d°F", temp),
            Condition:   condition,
        }, nil
    },
)

session, _ := client.CreateSession(&copilot.SessionConfig{
    Model:     "gpt-4.1",
    Streaming: true,
    Tools:     []copilot.Tool{getWeather},
})
.NET (Microsoft.Extensions.AI)
using GitHub.Copilot.SDK;
using Microsoft.Extensions.AI;
using System.ComponentModel;
var getWeather = AIFunctionFactory.Create(
    ([Description("The city name")] string city) =>
    {
        var conditions = new[] { "sunny", "cloudy", "rainy", "partly cloudy" };
        var temp = Random.Shared.Next(50, 80);
        var condition = conditions[Random.Shared.Next(conditions.Length)];
        return new { city, temperature = $"{temp}°F", condition };
    },
    "get_weather",
    "Get the current weather for a city"
);
await using var session = await client.CreateSessionAsync(new SessionConfig
{
    Model = "gpt-4.1",
    Streaming = true,
    Tools = [getWeather],
});
How Tools Work
When Copilot decides to call your tool:
- Copilot sends a tool call request with the parameters
- The SDK runs your handler function
- The result is sent back to Copilot
- Copilot incorporates the result into its response
Copilot decides when to call your tool based on the user's question and your tool's description.
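To make this lifecycle visible, you can subscribe to the tool execution events listed in the Event Types table later in this guide. Below is a minimal TypeScript sketch that reuses the weather session from above; it logs the raw event rather than assuming any particular payload shape.
// Observe tool calls as Copilot makes them (illustrative; payload fields not assumed).
session.on((event: SessionEvent) => {
  if (event.type === "tool.execution_start") {
    console.log("\n[tool call started]", event);
  }
  if (event.type === "tool.execution_complete") {
    console.log("[tool call finished]", event);
  }
});
await session.sendAndWait({ prompt: "What's the weather like in Seattle?" });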
Interactive CLI Assistant
Build a complete interactive assistant:
TypeScript
import { CopilotClient, defineTool, SessionEvent } from "@github/copilot-sdk";
import * as readline from "readline";
const getWeather = defineTool("get_weather", {
  description: "Get the current weather for a city",
  parameters: {
    type: "object",
    properties: {
      city: { type: "string", description: "The city name" },
    },
    required: ["city"],
  },
  handler: async ({ city }) => {
    const conditions = ["sunny", "cloudy", "rainy", "partly cloudy"];
    const temp = Math.floor(Math.random() * 30) + 50;
    const condition = conditions[Math.floor(Math.random() * conditions.length)];
    return { city, temperature: `${temp}°F`, condition };
  },
});
const client = new CopilotClient();
const session = await client.createSession({
  model: "gpt-4.1",
  streaming: true,
  tools: [getWeather],
});
session.on((event: SessionEvent) => {
  if (event.type === "assistant.message_delta") {
    process.stdout.write(event.data.deltaContent);
  }
});
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});
console.log("Weather Assistant (type 'exit' to quit)");
console.log("Try: 'What's the weather in Paris?'\n");
const prompt = () => {
  rl.question("You: ", async (input) => {
    if (input.toLowerCase() === "exit") {
      await client.stop();
      rl.close();
      return;
    }
    process.stdout.write("Assistant: ");
    await session.sendAndWait({ prompt: input });
    console.log("\n");
    prompt();
  });
};
prompt();
Python
import asyncio
import random
import sys
from copilot import CopilotClient
from copilot.tools import define_tool
from copilot.generated.session_events import SessionEventType
from pydantic import BaseModel, Field
class GetWeatherParams(BaseModel):
    city: str = Field(description="The name of the city to get weather for")

@define_tool(description="Get the current weather for a city")
async def get_weather(params: GetWeatherParams) -> dict:
    conditions = ["sunny", "cloudy", "rainy", "partly cloudy"]
    temp = random.randint(50, 80)
    condition = random.choice(conditions)
    return {"city": params.city, "temperature": f"{temp}°F", "condition": condition}

async def main():
    client = CopilotClient()
    await client.start()
    session = await client.create_session({
        "model": "gpt-4.1",
        "streaming": True,
        "tools": [get_weather],
    })

    def handle_event(event):
        if event.type == SessionEventType.ASSISTANT_MESSAGE_DELTA:
            sys.stdout.write(event.data.delta_content)
            sys.stdout.flush()

    session.on(handle_event)
    print("Weather Assistant (type 'exit' to quit)")
    print("Try: 'What's the weather in Paris?'\n")
    while True:
        try:
            user_input = input("You: ")
        except EOFError:
            break
        if user_input.lower() == "exit":
            break
        sys.stdout.write("Assistant: ")
        await session.send_and_wait({"prompt": user_input})
        print("\n")
    await client.stop()

asyncio.run(main())
MCP Server Integration
Connect to MCP (Model Context Protocol) servers for pre-built tools. For example, GitHub's MCP server provides access to repositories, issues, and pull requests:
TypeScript
const session = await client.createSession({
  model: "gpt-4.1",
  mcpServers: {
    github: {
      type: "http",
      url: "https://api.githubcopilot.com/mcp/",
    },
  },
});
Python
session = await client.create_session({
    "model": "gpt-4.1",
    "mcp_servers": {
        "github": {
            "type": "http",
            "url": "https://api.githubcopilot.com/mcp/",
        },
    },
})
Go
session, _ := client.CreateSession(&copilot.SessionConfig{
    Model: "gpt-4.1",
    MCPServers: map[string]copilot.MCPServerConfig{
        "github": {
            Type: "http",
            URL:  "https://api.githubcopilot.com/mcp/",
        },
    },
})
.NET
await using var session = await client.CreateSessionAsync(new SessionConfig
{
    Model = "gpt-4.1",
    McpServers = new Dictionary<string, McpServerConfig>
    {
        ["github"] = new McpServerConfig
        {
            Type = "http",
            Url = "https://api.githubcopilot.com/mcp/",
        },
    },
});
Custom Agents
Define specialized AI personas for specific tasks:
TypeScript
const session = await client.createSession({
  model: "gpt-4.1",
  customAgents: [{
    name: "pr-reviewer",
    displayName: "PR Reviewer",
    description: "Reviews pull requests for best practices",
    prompt: "You are an expert code reviewer. Focus on security, performance, and maintainability.",
  }],
});
Python
session = await client.create_session({
    "model": "gpt-4.1",
    "custom_agents": [{
        "name": "pr-reviewer",
        "display_name": "PR Reviewer",
        "description": "Reviews pull requests for best practices",
        "prompt": "You are an expert code reviewer. Focus on security, performance, and maintainability.",
    }],
})
System Message
Customize the AI's behavior and personality:
TypeScript
const session = await client.createSession({
  model: "gpt-4.1",
  systemMessage: {
    content: "You are a helpful assistant for our engineering team. Always be concise.",
  },
});
Python
session = await client.create_session({
    "model": "gpt-4.1",
    "system_message": {
        "content": "You are a helpful assistant for our engineering team. Always be concise.",
    },
})
External CLI Server
Run the CLI in server mode separately and connect the SDK to it. Useful for debugging, resource sharing, or custom environments.
Start CLI in Server Mode
copilot --server --port 4321
Connect SDK to External Server
TypeScript
const client = new CopilotClient({
  cliUrl: "localhost:4321",
});
const session = await client.createSession({ model: "gpt-4.1" });
Python
client = CopilotClient({
    "cli_url": "localhost:4321"
})
await client.start()
session = await client.create_session({"model": "gpt-4.1"})
Go
client := copilot.NewClient(&copilot.ClientOptions{
    CLIUrl: "localhost:4321",
})
if err := client.Start(); err != nil {
    log.Fatal(err)
}
session, _ := client.CreateSession(&copilot.SessionConfig{Model: "gpt-4.1"})
.NET
await using var client = new CopilotClient(new CopilotClientOptions
{
    CliUrl = "localhost:4321"
});
await using var session = await client.CreateSessionAsync(new SessionConfig { Model = "gpt-4.1" });
Note: When cliUrl is provided, the SDK will not spawn or manage a CLI process - it only connects to the existing server.
Event Types
| Event | Description |
|---|---|
| user.message | User input added |
| assistant.message | Complete model response |
| assistant.message_delta | Streaming response chunk |
| assistant.reasoning | Model reasoning (model-dependent) |
| assistant.reasoning_delta | Streaming reasoning chunk |
| tool.execution_start | Tool invocation started |
| tool.execution_complete | Tool execution finished |
| session.idle | No active processing |
| session.error | Error occurred |
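A single handler can route on these event types. The sketch below (TypeScript) assumes a streaming session like the ones created earlier; the session.error branch logs the whole event because its payload shape is not documented here.
session.on((event: SessionEvent) => {
  switch (event.type) {
    case "assistant.message_delta":
      process.stdout.write(event.data.deltaContent); // streamed text chunk
      break;
    case "session.idle":
      console.log(); // the turn has finished
      break;
    case "session.error":
      console.error("Session error:", event); // inspect the full event while prototyping
      break;
  }
});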
Client Configuration
| Option | Description | Default |
|---|---|---|
| cliPath | Path to Copilot CLI executable | System PATH |
| cliUrl | Connect to existing server (e.g., "localhost:4321") | None |
| port | Server communication port | Random |
| useStdio | Use stdio transport instead of TCP | true |
| logLevel | Logging verbosity | "info" |
| autoStart | Launch server automatically | true |
| autoRestart | Restart on crashes | true |
| cwd | Working directory for CLI process | Inherited |
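For example, a client that points at a specific CLI binary and logs verbosely during development might look like the sketch below (TypeScript, using only the options above; the binary path is a placeholder):
const client = new CopilotClient({
  cliPath: "/usr/local/bin/copilot", // placeholder path; by default the CLI is found on PATH
  logLevel: "debug",                 // more verbose than the default "info"
  autoRestart: true,                 // restart the CLI process if it crashes
  cwd: process.cwd(),                // working directory for the CLI process
});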
Session Configuration
| Option | Description |
|---|---|
| model | LLM to use ("gpt-4.1", "claude-sonnet-4.5", etc.) |
| sessionId | Custom session identifier |
| tools | Custom tool definitions |
| mcpServers | MCP server connections |
| customAgents | Custom agent personas |
| systemMessage | Override default system prompt |
| streaming | Enable incremental response chunks |
| availableTools | Whitelist of permitted tools |
| excludedTools | Blacklist of disabled tools |
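availableTools and excludedTools do not appear in the earlier examples; the sketch below (TypeScript) shows how they might combine with the other options, assuming tools are referenced by name as strings:
const session = await client.createSession({
  model: "gpt-4.1",
  sessionId: "support-bot-42",     // custom ID so the session can be resumed later
  streaming: true,
  systemMessage: { content: "Answer briefly." },
  availableTools: ["get_weather"], // allow only this tool
  // excludedTools: ["some_tool"], // or disable specific tools instead (placeholder name)
});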
Session Persistence
Save and resume conversations across restarts:
Create with Custom ID
const session = await client.createSession({
  sessionId: "user-123-conversation",
  model: "gpt-4.1",
});
Resume Session
const session = await client.resumeSession("user-123-conversation");
await session.send({ prompt: "What did we discuss earlier?" });
List and Delete Sessions
const sessions = await client.listSessions();
await client.deleteSession("old-session-id");
Error Handling
const client = new CopilotClient();
try {
  const session = await client.createSession({ model: "gpt-4.1" });
  const response = await session.sendAndWait(
    { prompt: "Hello!" },
    30000 // timeout in ms
  );
} catch (error) {
  if (error.code === "ENOENT") {
    console.error("Copilot CLI not installed");
  } else if (error.code === "ECONNREFUSED") {
    console.error("Cannot connect to Copilot server");
  } else {
    console.error("Error:", error.message);
  }
} finally {
  await client.stop();
}
Graceful Shutdown
process.on("SIGINT", async () => {
  console.log("Shutting down...");
  await client.stop();
  process.exit(0);
});
Common Patterns
Multi-turn Conversation
const session = await client.createSession({ model: "gpt-4.1" });
await session.sendAndWait({ prompt: "My name is Alice" });
await session.sendAndWait({ prompt: "What's my name?" });
// Response: "Your name is Alice"
File Attachments
await session.send({
  prompt: "Analyze this file",
  attachments: [{
    type: "file",
    path: "./data.csv",
    displayName: "Sales Data"
  }]
});
Abort Long Operations
const timeoutId = setTimeout(() => {
  session.abort();
}, 60000);
session.on((event) => {
  if (event.type === "session.idle") {
    clearTimeout(timeoutId);
  }
});
Available Models
Query available models at runtime:
const models = await client.getModels(); // Returns: ["gpt-4.1", "gpt-4o", "claude-sonnet-4.5", ...]
Best Practices
- Always clean up: Use try-finally or defer to ensure client.stop() is called (see the sketch below)
- Set timeouts: Use sendAndWait with a timeout for long operations
- Handle events: Subscribe to error events for robust error handling
- Use streaming: Enable streaming for better UX on long responses
- Persist sessions: Use custom session IDs for multi-turn conversations
- Define clear tools: Write descriptive tool names and descriptions
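A minimal TypeScript sketch combining the first two items; the 30-second timeout is an arbitrary example value:
const client = new CopilotClient();
try {
  const session = await client.createSession({ model: "gpt-4.1" });
  await session.sendAndWait({ prompt: "Summarize this repository" }, 30000); // timeout in ms
} finally {
  await client.stop(); // always release the CLI process
}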
Architecture
Your Application
|
SDK Client
| JSON-RPC
Copilot CLI (server mode)
|
GitHub (models, auth)
The SDK manages the CLI process lifecycle automatically. All communication happens via JSON-RPC over stdio or TCP.
Resources
- GitHub Repository: https://github.com/github/copilot-sdk
- Getting Started Tutorial: https://github.com/github/copilot-sdk/blob/main/docs/tutorials/first-app.md
- GitHub MCP Server: https://github.com/github/github-mcp-server
- MCP Servers Directory: https://github.com/modelcontextprotocol/servers
- Cookbook: https://github.com/github/copilot-sdk/tree/main/cookbook
- Samples: https://github.com/github/copilot-sdk/tree/main/samples
Status
This SDK is in Technical Preview and may introduce breaking changes between releases. It is not yet recommended for production use.