PydanticAI's DI system lets you inject databases, API clients, and config into agents without global state. Here is how it works.

What Dependency Injection Solves

Most agent frameworks rely on global state or closure variables to give tools access to databases, API clients, and configuration. This makes testing painful (you have to mock globals) and makes agents tightly coupled to specific infrastructure.
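To make that pain concrete, here is a sketch of the global-state pattern being described. All names here are illustrative, not from any particular framework:

```python
# A module-level global created at import time. Every tool that touches it
# is implicitly coupled to this one instance.
class Database:
    def query(self, sql: str, *args):
        return ["ORD-001", "ORD-002"]

db = Database()

def get_user_orders(user_id: str, limit: int = 10):
    # The function reaches for the global directly: tests must monkeypatch
    # `db`, and there is no way to run it against a different database
    # without mutating module state.
    return db.query(
        "SELECT * FROM orders WHERE user_id = $1 LIMIT $2", user_id, limit
    )
```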

PydanticAI has a first-class dependency injection system. You declare what your agent needs, pass it in at run time, and tools receive it via RunContext. Tests swap in lightweight fakes without touching production code.

Defining Dependencies

from dataclasses import dataclass
from httpx import AsyncClient
from pydantic_ai import Agent, RunContext
 
# Define the dependency container as a dataclass
@dataclass
class AgentDeps:
    db_client: DatabaseClient       # your database abstraction
    http_client: AsyncClient        # for external API calls
    user_id: str                    # per-request context
    feature_flags: dict[str, bool]  # runtime config
 
# Declare deps_type on the agent
agent: Agent[AgentDeps, str] = Agent(
    model="anthropic:claude-sonnet-4-6",
    deps_type=AgentDeps,
    system_prompt="You are a helpful assistant with access to user data.",
)

Using Dependencies in Tools

@agent.tool
async def get_user_orders(ctx: RunContext[AgentDeps], limit: int = 10) -> str:
    # Access injected dependencies via ctx.deps
    orders = await ctx.deps.db_client.query(
        "SELECT * FROM orders WHERE user_id = $1 LIMIT $2",
        ctx.deps.user_id,
        limit,
    )
    return f"Found {len(orders)} orders: " + ", ".join(o.id for o in orders)
 
@agent.tool
async def fetch_product_details(ctx: RunContext[AgentDeps], product_id: str) -> str:
    # Use the injected HTTP client -- no global session needed
    response = await ctx.deps.http_client.get(
        f"https://api.example.com/products/{product_id}"
    )
    data = response.json()
    return f"{data['name']}: ${data['price']:.2f} -- {data['description']}"
 
@agent.tool
def check_feature_flag(ctx: RunContext[AgentDeps], flag_name: str) -> bool:
    # Sync tools work too -- just omit async
    return ctx.deps.feature_flags.get(flag_name, False)

Running the Agent with Real Dependencies

import asyncio
from httpx import AsyncClient
 
async def handle_request(user_id: str, message: str) -> str:
    async with AsyncClient() as http_client:
        deps = AgentDeps(
            db_client=get_db_client(),          # your production DB
            http_client=http_client,
            user_id=user_id,
            feature_flags=await load_flags(user_id),
        )
        result = await agent.run(message, deps=deps)
        return result.output
 
# Each request gets its own deps instance -- no shared state between users
asyncio.run(handle_request("user-123", "Show me my last 5 orders"))

Testing with Injected Fakes

This is where the DI system pays off. Tests inject lightweight fakes instead of real clients -- no mocking globals, no patching, no test databases required.

from types import SimpleNamespace
from pydantic_ai.models.test import TestModel
 
# Fake database that returns predictable test data
class FakeDatabase:
    async def query(self, sql: str, *args):
        return [
            SimpleNamespace(id="ORD-001"),
            SimpleNamespace(id="ORD-002"),
        ]
 
# Fake HTTP client with the one method the tools actually call
class FakeHttpClient:
    async def get(self, url: str):
        return SimpleNamespace(
            json=lambda: {"name": "Widget", "price": 9.99, "description": "A widget"}
        )
 
# Run with an async-aware test runner such as pytest-asyncio or anyio
async def test_order_lookup():
    deps = AgentDeps(
        db_client=FakeDatabase(),
        http_client=FakeHttpClient(),
        user_id="test-user",
        feature_flags={"new_ui": True},
    )
    # TestModel exercises the agent's tools without calling a real LLM
    with agent.override(model=TestModel()):
        result = await agent.run("Show my orders", deps=deps)
        assert result.output is not None

Structure your dependency container so the real and fake versions implement the same interface (a Protocol or ABC). That way a type checker flags any fake that drifts out of sync with the real implementation as APIs change.
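As a sketch of that advice, a typing.Protocol can pin down the shared interface. The method signature below is an assumption inferred from the query calls used earlier, not part of PydanticAI itself:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class DatabaseClient(Protocol):
    """Interface both the real and fake database clients must satisfy."""
    async def query(self, sql: str, *args): ...

# The fake implements the same Protocol as the production client, so a
# field annotated `db_client: DatabaseClient` accepts either one.
class FakeDatabase:
    async def query(self, sql: str, *args):
        return []
```

A static type checker (mypy or pyright) will reject a fake whose methods no longer match the Protocol; runtime_checkable additionally allows isinstance checks in tests.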

System Prompts That Use Dependencies

Dependencies are also available inside dynamic system prompts -- useful when the system prompt needs to include per-user context.

@agent.system_prompt
async def build_system_prompt(ctx: RunContext[AgentDeps]) -> str:
    user = await ctx.deps.db_client.query(
        "SELECT name, tier FROM users WHERE id = $1", ctx.deps.user_id
    )
    return (
        f"You are a helpful assistant for {user[0].name}. "
        f"They are on the {user[0].tier} plan. "
        "Tailor your responses to their subscription level."
    )

Quick Reference

  • Define a dataclass as deps_type -- it holds all per-run dependencies
  • Access deps via ctx.deps inside any @agent.tool or @agent.system_prompt function
  • Pass deps=AgentDeps(...) to agent.run() -- each run gets its own isolated instance
  • Tests inject fake implementations -- no global mocking required
  • Use @agent.system_prompt with RunContext for per-user dynamic prompts
  • Type the agent as Agent[YourDepsType, YourResultType] for full IDE type safety