Skills Integration

Agent Skills integrates seamlessly with Pydantic AI agents via AgentSkillsToolset.

Overview

The AgentSkillsToolset exposes skills as tools that AI agents can use:

  • list_skills() - List available skills
  • load_skill(skill_name) - Get full skill instructions
  • read_skill_resource(skill_name, resource_name) - Read skill resources
  • run_skill_script(skill_name, script_name, args) - Execute skill scripts

When used with Pydantic AI's get_instructions() hook, the toolset injects an XML listing of available skills into the system prompt to support progressive disclosure:

<skills>
You have access to skills that extend your capabilities.
Skills are modular packages with instructions, resources, and scripts.

<available_skills>
<skill name="data-analyzer" description="Analyzes datasets" location="/path/to/skills/data-analyzer" />
</available_skills>

<usage>
1. Use load_skill(skill_name) to read full instructions
2. Use read_skill_resource(skill_name, resource) for docs
3. Use run_skill_script(skill_name, script, args) to execute
</usage>
</skills>
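The injected listing can be reproduced with a small renderer. Below is a minimal sketch of the format shown above; the toolset's actual rendering logic may differ in details:

```python
def render_skills_prompt(skills: list[dict]) -> str:
    """Render the <skills> block shown above from skill metadata dicts."""
    entries = "\n".join(
        f'<skill name="{s["name"]}" description="{s["description"]}" '
        f'location="{s["location"]}" />'
        for s in skills
    )
    return (
        "<skills>\n"
        "You have access to skills that extend your capabilities.\n"
        "Skills are modular packages with instructions, resources, and scripts.\n\n"
        f"<available_skills>\n{entries}\n</available_skills>\n\n"
        "<usage>\n"
        "1. Use load_skill(skill_name) to read full instructions\n"
        "2. Use read_skill_resource(skill_name, resource) for docs\n"
        "3. Use run_skill_script(skill_name, script, args) to execute\n"
        "</usage>\n"
        "</skills>"
    )

prompt = render_skills_prompt([
    {"name": "data-analyzer", "description": "Analyzes datasets",
     "location": "/path/to/skills/data-analyzer"},
])
```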

Basic Setup

from pydantic_ai import Agent
from agent_skills import AgentSkillsToolset, SandboxExecutor
from code_sandboxes import LocalEvalSandbox

# Create sandbox for execution
sandbox = LocalEvalSandbox()

# Create toolset
toolset = AgentSkillsToolset(
    directories=["./skills"],
    executor=SandboxExecutor(sandbox),
)

# Create agent with toolset
agent = Agent(
    model='openai:gpt-4o',
    toolsets=[toolset],
)

# Run the agent (await requires an async context)
result = await agent.run("List all available skills")
print(result.output)
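Since `agent.run` is a coroutine, the snippet above must run inside an async function. In a plain script, Pydantic AI's synchronous wrapper is the usual alternative:

```python
# Blocking equivalent of `await agent.run(...)` for non-async code.
result = agent.run_sync("List all available skills")
print(result.output)
```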

Toolset Configuration

toolset = AgentSkillsToolset(
    directories=["./skills", "./shared-skills"],  # Multiple directories
    executor=SandboxExecutor(sandbox),            # Execution backend
    include_resources=True,                       # Enable resource access
    include_scripts=True,                         # Enable script execution
)

Parameters

Parameter          Type       Default   Description
directories        list[str]  Required  Directories containing skills
executor           Executor   Required  Execution backend
include_resources  bool       True      Enable read_skill_resource tool
include_scripts    bool       True      Enable run_skill_script tool
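For example, a read-only configuration lets the agent browse skill instructions and resources without executing any scripts:

```python
# Agent can list/load skills and read resources, but not run scripts.
readonly_toolset = AgentSkillsToolset(
    directories=["./skills"],
    executor=SandboxExecutor(sandbox),
    include_scripts=False,
)
```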

Available Tools

list_skills

Lists all available skills:

# Agent calls: list_skills()
# Returns:
[
    {
        "name": "data-analyzer",
        "description": "Analyzes datasets and provides insights",
        "tags": ["data", "analysis"]
    },
    {
        "name": "file-processor",
        "description": "Process files in batches",
        "tags": ["files", "batch"]
    }
]

load_skill

Load full instructions for a skill:

# Agent calls: load_skill(skill_name="data-analyzer")
# Returns:
{
    "name": "data-analyzer",
    "description": "Analyzes datasets and provides insights",
    "content": "# Data Analyzer\n\nUse this skill to analyze...",
    "scripts": ["analyze", "summarize"],
    "resources": ["reference", "examples"]
}

read_skill_resource

Read a skill's resource:

# Agent calls: read_skill_resource(skill_name="data-analyzer", resource_name="reference")
# Returns:
"Reference documentation for the data analyzer..."

run_skill_script

Execute a skill script:

# Agent calls: run_skill_script(
#     skill_name="data-analyzer",
#     script_name="analyze",
#     args={"file_path": "/data/sales.csv"}
# )
# Returns:
{
    "success": True,
    "output": "Analysis complete. Found 1000 rows...",
    "result": {"rows": 1000, "columns": 5}
}

Programmatic Skills

Define skills in code with decorators:

from agent_skills import AgentSkill

# Create a skill
skill = AgentSkill(
    name="data-analyzer",
    description="Analyzes datasets and provides insights",
    content="Use this skill to analyze CSV and JSON data files.",
)

# Add a script
@skill.script
async def analyze(ctx, file_path: str) -> dict:
    """Analyze a data file.

    Args:
        file_path: Path to the file to analyze.

    Returns:
        Analysis results.
    """
    data = await ctx.deps.filesystem.read(file_path)
    lines = data.strip().split('\n')
    return {
        "rows": len(lines),
        "size": len(data),
    }

# Add a resource
@skill.resource
def get_reference() -> str:
    """Get reference documentation."""
    return """
# Data Analyzer Reference

Supported formats:
- CSV files
- JSON files
- TSV files
"""

# Register with toolset
toolset = AgentSkillsToolset(
    skills=[skill],
    executor=SandboxExecutor(sandbox),
)
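Because scripts are plain async functions, their logic can be unit-tested directly by supplying a stub context. A sketch, assuming `ctx.deps.filesystem.read` returns file text as in the example above:

```python
import asyncio
from types import SimpleNamespace

# Stand-in for the decorated script body above; the decorator is not needed
# to unit-test the logic.
async def analyze(ctx, file_path: str) -> dict:
    data = await ctx.deps.filesystem.read(file_path)
    lines = data.strip().split('\n')
    return {"rows": len(lines), "size": len(data)}

class FakeFilesystem:
    """Stubbed filesystem dependency returning canned file contents."""
    async def read(self, path: str) -> str:
        return "a,b\n1,2\n3,4\n"

ctx = SimpleNamespace(deps=SimpleNamespace(filesystem=FakeFilesystem()))
result = asyncio.run(analyze(ctx, "sales.csv"))
# result == {"rows": 3, "size": 12}
```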

Agent Workflow

A typical workflow for an agent using skills:

agent = Agent(
    model='openai:gpt-4o',
    system_prompt="""You are a helpful assistant with access to skills.

When given a task:
1. Use list_skills() to see available skills
2. Use load_skill(name) to get detailed instructions
3. Use run_skill_script(name, script, args) to execute

Skills are pre-built capabilities that can help you accomplish tasks efficiently.
""",
    toolsets=[toolset],
)

# The agent will:
# 1. List skills to find relevant ones
# 2. Load skill instructions to understand usage
# 3. Execute skill scripts with appropriate arguments
result = await agent.run("Analyze the sales data in /data/sales.csv")

Execution Backends

LocalEvalSandbox

Execute skills locally using Python eval (no isolation; suitable for development and trusted skills only):

from code_sandboxes import LocalEvalSandbox
from agent_skills import SandboxExecutor

sandbox = LocalEvalSandbox()
executor = SandboxExecutor(sandbox)

DatalayerRuntimeSandbox

Execute skills in a cloud-based Datalayer Runtime:

from code_sandboxes import DatalayerRuntimeSandbox
from agent_skills import SandboxExecutor

sandbox = DatalayerRuntimeSandbox(
    api_key="...",
    runtime_id="...",
)
executor = SandboxExecutor(sandbox)

Custom Executor

Implement your own executor:

from agent_skills import Executor, ExecutionResult, Skill

class MyExecutor(Executor):
    async def execute(
        self,
        skill: Skill,
        script_name: str,
        args: dict,
    ) -> ExecutionResult:
        # Your execution logic
        ...
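To make the contract concrete, here is a minimal in-process executor sketch. The `Skill` and `ExecutionResult` classes below are self-contained stand-ins for illustration; the real base classes come from agent_skills:

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class ExecutionResult:
    """Stand-in result type: success flag, text output, optional value."""
    success: bool
    output: str = ""
    result: object = None

@dataclass
class Skill:
    """Stand-in skill: a name plus script-name -> async callable mapping."""
    name: str
    scripts: dict = field(default_factory=dict)

class InProcessExecutor:
    """Runs a skill script as a plain coroutine — no sandboxing at all."""
    async def execute(self, skill: Skill, script_name: str, args: dict) -> ExecutionResult:
        script = skill.scripts.get(script_name)
        if script is None:
            return ExecutionResult(success=False, output=f"unknown script: {script_name}")
        try:
            value = await script(**args)
            return ExecutionResult(success=True, result=value)
        except Exception as exc:
            return ExecutionResult(success=False, output=str(exc))

async def double(x: int) -> int:
    return x * 2

skill = Skill(name="math", scripts={"double": double})
res = asyncio.run(InProcessExecutor().execute(skill, "double", {"x": 21}))
# res.success == True, res.result == 42
```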

Error Handling

result = await agent.run("Run the broken skill")

# The agent sees errors and can respond appropriately:
# "The skill execution failed with error: ..."

Skill execution errors are returned as tool results, allowing the agent to:

  • Retry with different parameters
  • Try alternative skills
  • Report the error to the user
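One plausible way a script failure can be surfaced as data rather than a raised exception — the shape below is illustrative, not the library's exact schema:

```python
def to_tool_result(exc: Exception) -> dict:
    """Convert a script exception into a tool result the model can read."""
    return {
        "success": False,
        "error": f"{type(exc).__name__}: {exc}",
    }

failure = to_tool_result(FileNotFoundError("/data/missing.csv"))
# failure["error"] == "FileNotFoundError: /data/missing.csv"
```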

Dependency Injection

Skills can access dependencies via context:

@skill.script
async def process(ctx, path: str) -> str:
    """Process a file using injected dependencies.

    Args:
        path: File path to process.

    Returns:
        Processing result.
    """
    # Access filesystem dependency
    content = await ctx.deps.filesystem.read(path)

    # Access database dependency
    await ctx.deps.database.insert({"path": path, "content": content})

    return f"Processed {path}"

Configure dependencies in the toolset:

toolset = AgentSkillsToolset(
    skills=[skill],
    executor=executor,
    deps={
        "filesystem": FilesystemDep(),
        "database": DatabaseDep(),
    },
)

Combining with Other Toolsets

Use skills alongside other toolsets:

from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio
from agent_skills import AgentSkillsToolset

# Skills toolset
# Skills toolset
skills_toolset = AgentSkillsToolset(
    directories=["./skills"],
    executor=executor,
)

# Direct MCP server access
filesystem_server = MCPServerStdio(
    "npx",
    args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
)

# Agent has both
# Agent has both
agent = Agent(
    model='openai:gpt-4o',
    system_prompt="Use skills for complex tasks, direct tools for simple operations.",
    toolsets=[skills_toolset, filesystem_server],
)

Integration with Agent Codemode

For skills integrated with Agent Codemode (code-first tool composition), see the Agent Codemode skills example.

This demonstrates using AgentSkillsToolset alongside CodemodeToolset:

from agent_codemode import CodemodeToolset, ToolRegistry, CodeModeConfig
from agent_skills import AgentSkillsToolset

# Codemode toolset for MCP tools (registry and config come from your Codemode setup)
codemode_toolset = CodemodeToolset(registry=registry, config=config)

# Skills toolset for skill discovery and execution
skills_toolset = AgentSkillsToolset(directories=["./skills"])

# Use both with pydantic-ai
agent = Agent(
    model='anthropic:claude-sonnet-4-0',
    toolsets=[codemode_toolset, skills_toolset],
)