This post, “Build Git MCP Server with the OpenAI Agents SDK,” shows how to wire an MCP server into an agent built with the OpenAI Agents SDK.
If you want your Python agent to answer questions about a Git repository (“Who’s the top contributor?”, “What changed last?”), the cleanest way is to plug in a Model Context Protocol (MCP) server. MCP is an open protocol that standardizes how LLM apps connect to tools and data, and the OpenAI Agents SDK has first-class support for it, so your agents can discover MCP tools and call them safely. Think of MCP as the USB-C of AI apps: a single, consistent way to plug in new capabilities (see modelcontextprotocol.io for the protocol itself).
Below is a complete code sample using the official Git MCP server (mcp-server-git) and the Agents SDK. After the code, we’ll cover the packages, key components, server types, and how we scope the server to a specific repository directory.
import asyncio
import os
import secrets
import shutil
import string
from datetime import datetime

from agents import Agent, Runner, function_tool, trace
from agents.mcp import MCPServer, MCPServerStdio  # MCP server support

# Load .env variables early (uses python-dotenv if installed)
try:
    from dotenv import load_dotenv

    load_dotenv()  # Loads variables from a .env file into os.environ
except ImportError:
    # Fallback: simple manual parser (only KEY=VALUE lines) if python-dotenv is not installed
    env_path = ".env"
    if os.path.exists(env_path):
        with open(env_path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                k, v = line.split("=", 1)
                os.environ.setdefault(k.strip(), v.strip())


# Optional standalone helper (not invoked in main below): runs two repository
# questions against an agent bound to the given MCP server.
async def run(mcp_server: MCPServer, directory_path: str):
    agent = Agent(
        name="Assistant",
        instructions=f"Answer questions about the git repository at {directory_path}, use that for repo_path",
        mcp_servers=[mcp_server],
    )

    message = "Who's the most frequent contributor?"
    print("\n" + "-" * 40)
    print(f"Running: {message}")
    result = await Runner.run(starting_agent=agent, input=message)
    print(result.final_output)

    message = "Summarize the last change in the repository."
    print("\n" + "-" * 40)
    print(f"Running: {message}")
    result = await Runner.run(starting_agent=agent, input=message)
    print(result.final_output)


@function_tool
def generate_password(length: int = 16, digits: bool = True, symbols: bool = True) -> str:
    """Generate a cryptographically secure random password."""
    print("[debug] generate_password called")
    if length < 4:
        raise ValueError("Password length must be at least 4.")
    alphabet = string.ascii_letters
    if digits:
        alphabet += string.digits
    if symbols:
        alphabet += "!@#$%^&*()-_=+[]{};:,.?/"
    # Ensure at least one character from each selected category (if enabled)
    password_chars = []
    if digits:
        password_chars.append(secrets.choice(string.digits))
    if symbols:
        password_chars.append(secrets.choice("!@#$%^&*()-_=+[]{};:,.?/"))
    password_chars.append(secrets.choice(string.ascii_lowercase))
    password_chars.append(secrets.choice(string.ascii_uppercase))
    while len(password_chars) < length:
        password_chars.append(secrets.choice(alphabet))
    secrets.SystemRandom().shuffle(password_chars)
    return "".join(password_chars[:length])


@function_tool
def get_time() -> str:
    """Get the current time in ISO 8601 format."""
    print("[debug] get_time called")
    return datetime.now().isoformat(sep=" ", timespec="seconds")


python_tutor_agent = Agent(
    name="Python Tutor",
    handoff_description="Specialist agent for Python coding questions",
    instructions="You provide assistance with Python code queries. Explain code logic and syntax clearly.",
)


async def main():
    directory_path = "/workspaces/openAIAgent"

    async with MCPServerStdio(
        cache_tools_list=True,
        params={"command": "uvx", "args": ["mcp-server-git"]},
    ) as server:
        # Server is started; the triage question runs first, repo questions follow
        triage_agent = Agent(
            name="My Agent",
            instructions="You are a helpful agent. You can generate passwords, provide the current time and answer questions about Python code.",
            tools=[generate_password, get_time],
            handoffs=[python_tutor_agent],
            mcp_servers=[server],
        )

        try:
            with trace(workflow_name="MCP Git Example"):
                # First (triage) question
                question = "Generate a password, explain what a class is in Python, and tell me the current time."
                result = await Runner.run(triage_agent, question)
                print(result.final_output)

                # Now ask MCP (repository) questions AFTER the first one
                repo_agent = Agent(
                    name="Repo Assistant",
                    instructions=f"Answer questions about the git repository at {directory_path}, use that for repo_path",
                    mcp_servers=[server],
                )

                repo_q1 = "Who's the most frequent contributor?"
                repo_result1 = await Runner.run(repo_agent, repo_q1)
                print("\n--- Repo Q1 ---")
                print(repo_result1.final_output)

                repo_q2 = "Summarize the last change in the repository."
                repo_result2 = await Runner.run(repo_agent, repo_q2)
                print("\n--- Repo Q2 ---")
                print(repo_result2.final_output)

                # (Add more sequential questions by repeating Runner.run with repo_agent)
        except Exception as e:
            print(f"Agent run failed: {e}")


if __name__ == "__main__":
    if not shutil.which("uv"):
        raise RuntimeError("uv is not installed. Please install it with `pip install uv`.")
    asyncio.run(main())
What’s the Git MCP server?
mcp-server-git is a standalone MCP server that exposes Git operations (read, search, manipulate) as MCP tools. Agents don’t shell out to git directly; instead, they call these tools through the MCP connection. Because the server is a separate process, it can enforce arguments, validate inputs, and return clean, structured results. It’s actively evolving, so expect tool names and capabilities to grow over time.
Why MCP with the Agents SDK?
The Agents SDK discovers MCP servers at runtime, lists the tools they provide, and forwards tool calls when the LLM decides to use them. This keeps your agent code small and the tool boundary explicit. You can also cache the tool list to reduce latency on repeated runs, as sketched below.
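Here is a minimal sketch of that caching knob in isolation. It assumes your SDK version exposes invalidate_tools_cache() on the MCP server classes for manually refreshing a cached tool list; check your installed version.

from agents.mcp import MCPServerStdio

# With cache_tools_list=True the SDK reuses the first list_tools() result
# across agent runs instead of asking the server again each time.
git_server = MCPServerStdio(
    cache_tools_list=True,
    params={"command": "uvx", "args": ["mcp-server-git"]},
)

# Later, if you believe the server's tool set has changed, force a refresh
# (assumed API; verify against your SDK version):
# git_server.invalidate_tools_cache()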
Packages you need
- openai-agents (the OpenAI Agents SDK) provides Agent, Runner, the tool decorators, handoffs, tracing, and the MCP client adapters used here (MCPServerStdio and friends).
- mcp-server-git is the Git MCP server you’re launching. You can run it directly or via uvx as shown here.
- uv / uvx lets you run tools in an ephemeral environment without installing them globally, which is handy for CLIs like MCP servers. (Install uv, which provides uvx.)
- python-dotenv (optional) loads OPENAI_API_KEY and other settings from a .env file, as the snippet demonstrates.
Components in the code
- Two general-purpose tools exposed via @function_tool: generate_password (cryptographically secure) and get_time. The decorator makes them discoverable and schema-validated.
- A specialist agent (python_tutor_agent) used via handoff; the “triage” agent can delegate Python Q&A to it when appropriate.
- Tracing: with trace(workflow_name="MCP Git Example") captures a timeline of calls (including MCP tool listing), which is helpful for debugging and evaluation.
- MCP binding: async with MCPServerStdio(...) launches mcp-server-git as a subprocess over STDIO, one of MCP’s standard transports. The SDK also supports SSE and Streamable HTTP if you host servers remotely.
Types of MCP servers (transports)
MCP recognizes three transport styles, and the Agents SDK mirrors them with MCPServerStdio, MCPServerSse, and MCPServerStreamableHttp. In this example we use STDIO for a simple local subprocess, but the same agent can switch to remote servers later by changing the transport.
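For illustration, here is a minimal sketch of the same agent wiring over a remote transport. The URL is a placeholder, and the exact parameter names for MCPServerStreamableHttp (or MCPServerSse) should be checked against your installed SDK version.

import asyncio
from agents import Agent, Runner
from agents.mcp import MCPServerStreamableHttp

async def main():
    # Placeholder endpoint; point this at wherever you host the MCP server.
    async with MCPServerStreamableHttp(
        cache_tools_list=True,
        params={"url": "https://example.com/mcp"},
    ) as remote_server:
        agent = Agent(
            name="Repo Assistant",
            instructions="Answer questions about the git repository.",
            mcp_servers=[remote_server],  # only the transport changed vs. MCPServerStdio
        )
        result = await Runner.run(agent, "Summarize the last change in the repository.")
        print(result.final_output)

asyncio.run(main())

Nothing else about the agent changes; the transport class is the only moving part.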
How the run works
- uvx mcp-server-git starts the Git MCP server in a temporary tool environment; the SDK connects to it over STDIO.
- The triage agent answers a multi-part question by calling your local tools and possibly handing off to the Python specialist.
- You then create a repo-focused agent that asks two Git questions. Behind the scenes, the SDK has already called list_tools() on the Git server so the LLM knows what it can do; when it chooses to, say, summarize the last change, the SDK forwards a call_tool() request to the MCP server and returns structured results (a manual version of this round trip is sketched below).
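The sketch below performs that round trip by hand, outside of any agent, to make the mechanics visible. It assumes list_tools() and call_tool() can be invoked directly in your SDK version, and the tool name git_log with repo_path/max_count arguments is illustrative; check the server’s actual tool list for exact names.

import asyncio
from agents.mcp import MCPServerStdio

async def inspect_git_server(repo_path: str):
    async with MCPServerStdio(
        params={"command": "uvx", "args": ["mcp-server-git"]},
    ) as server:
        # What the LLM gets to see when the SDK lists tools for it
        tools = await server.list_tools()
        print("Exposed tools:", [tool.name for tool in tools])

        # What the SDK forwards when the model picks a tool (names/args illustrative)
        result = await server.call_tool("git_log", {"repo_path": repo_path, "max_count": 3})
        print(result.content)  # structured MCP content blocks, not raw git stdout

asyncio.run(inspect_git_server("/workspaces/openAIAgent"))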
Scoping Git operations to a directory
Notice directory_path = "/workspaces/openAIAgent" and the instruction string: “use that for repo_path.” Most Git MCP servers expect a repo_path argument when you call tools (e.g., log, status, diff). By explicitly telling the agent which path to use, you ensure every tool call is scoped to the intended repository. This is a best practice for both safety (no accidental operations outside your sandbox) and clarity (especially in multi-repo environments). Some Git MCP variants even allow multiple repos; in those cases you can pass a different repo_path value per call.
If you prefer not to rely on instructions alone, you can also wrap or pre-configure the server to set a default working directory (server-specific), or add light tool filtering so that only read-only tools are exposed to the agent. The Agents SDK supports both static and dynamic tool filtering, as sketched below.
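A minimal sketch of the static variant, assuming your SDK version ships create_static_tool_filter; the allowed tool names are illustrative read-only tools from mcp-server-git, so verify them against the server’s documentation.

from agents.mcp import MCPServerStdio, create_static_tool_filter

# Expose only read-only Git tools to the agent (names illustrative)
read_only_git = MCPServerStdio(
    cache_tools_list=True,
    params={"command": "uvx", "args": ["mcp-server-git"]},
    tool_filter=create_static_tool_filter(
        allowed_tool_names=["git_status", "git_log", "git_diff", "git_show"],
    ),
)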
Why this pattern scales
- Separation of concerns. Your agent focuses on reasoning; Git operations live behind an MCP boundary with clear inputs and outputs. That keeps your codebase clean and auditable.
- Portability. You can swap or add servers (filesystem, fetch, databases, etc.) without changing agent logic; just attach another MCP server to the agent (see the sketch after this list).
- Security posture. Because MCP servers are processes with well-defined contracts, you can gate write actions (commit, push) or require explicit flags and prompts before enabling them. Treat repo_path as a guardrail: point it at a throwaway checkout or CI workspace for analysis.
- Performance knobs. Set cache_tools_list=True to avoid re-listing tools on every run, especially for remote servers. Use tracing to spot slow calls, and weigh the STDIO vs SSE vs Streamable HTTP trade-offs if you move off-host.
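As a sketch of that portability point, the snippet below attaches a second (filesystem) MCP server next to the Git one. The npx invocation of @modelcontextprotocol/server-filesystem follows that reference server’s usual usage and assumes Node.js is available; treat it as an example, not a requirement of the Agents SDK.

import asyncio
from agents import Agent, Runner
from agents.mcp import MCPServerStdio

async def main():
    directory_path = "/workspaces/openAIAgent"
    async with MCPServerStdio(
        params={"command": "uvx", "args": ["mcp-server-git"]},
    ) as git_server, MCPServerStdio(
        params={"command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem", directory_path]},
    ) as fs_server:
        agent = Agent(
            name="Repo + Files Assistant",
            instructions=f"Answer questions about the git repository and files at {directory_path}, use that for repo_path",
            mcp_servers=[git_server, fs_server],  # agent logic unchanged; just more servers attached
        )
        result = await Runner.run(agent, "List the top-level files and summarize the last change.")
        print(result.final_output)

asyncio.run(main())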
Getting this running locally
- Install the SDK (pip install openai-agents) and uv (which provides uvx). The code can then launch mcp-server-git as shown.
- Ensure OPENAI_API_KEY is available; the snippet loads .env automatically if present.
- Point directory_path at a real Git repository.
- Run the script. You’ll see output for the triage question, followed by Git-aware answers powered by the MCP server.
That’s it: your agent now speaks Git through MCP—modular, auditable, and ready to grow as you add more servers or move them off-machine.