Guide

Agent Configuration and Management

This guide covers everything you need to know about creating, configuring, and managing AI agents in CoAgent, from basic setup to advanced optimization techniques.

Overview

CoAgent's architecture separates concerns into distinct but interconnected components:

  • Agent Configurations: Define behavior through system prompts and metadata

  • Model Providers: Connect to LLM services (OpenAI, Anthropic, etc.)

  • Bound Agents: Combine agents with specific models for execution

  • Sandbox Configurations: Define runtime environments with tools and parameters

Agent Configurations

Agent configurations define the core behavior and personality of your AI agents.

Creating Agent Configurations

Via Web UI

  1. Navigate to Agent Configurations in the CoAgent web interface

  2. Click "Create New Agent"

  3. Fill in the configuration fields:

Basic Information:

  • Name: Unique identifier (alphanumeric + hyphens)

  • Description: Brief explanation of the agent's purpose

  • System Prompt: Core instructions that define behavior

Example Configuration:

Name: customer-support-agent
Description: Helpful customer support agent for e-commerce inquiries
System Prompt: You are a knowledgeable and empathetic customer support agent for an online store. 
              Help customers with orders, returns, product questions, and general inquiries. 
              Always be polite, clear, and solution-focused. If you cannot resolve an issue, 
              escalate to human support.

Via REST API

curl -X POST http://localhost:3000/api/v1/agents \
  -H "Content-Type: application/json" \
  -d '{
    "name": "customer-support-agent",
    "description": "Helpful customer support agent for e-commerce inquiries",
    "preamble": "You are a knowledgeable and empathetic customer support agent...",
    "custom": {
      "domain": "ecommerce",
      "escalation_threshold": 3
    }
  }'
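
The same request can be issued from Python. A minimal sketch using only the standard library (the endpoint and payload mirror the curl call above; the helper function names are illustrative):

```python
import json
import urllib.request

BASE_URL = "http://localhost:3000/api/v1"

def make_agent_payload(name, description, preamble, custom=None):
    """Build the JSON body for the agent-creation endpoint."""
    payload = {"name": name, "description": description, "preamble": preamble}
    if custom:
        payload["custom"] = custom
    return payload

def create_agent(payload):
    """POST the payload to /agents and return the parsed response."""
    req = urllib.request.Request(
        f"{BASE_URL}/agents",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = make_agent_payload(
    "customer-support-agent",
    "Helpful customer support agent for e-commerce inquiries",
    "You are a knowledgeable and empathetic customer support agent...",
    custom={"domain": "ecommerce", "escalation_threshold": 3},
)
# create_agent(payload)  # requires a running CoAgent server
```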

Best Practices for System Prompts

1. Be Specific and Clear

Poor: "You are a helpful technical support specialist."
Good: "You are a technical support specialist for cloud infrastructure. Provide step-by-step troubleshooting guidance, ask clarifying questions when needed, and include relevant documentation links."

2. Set Clear Boundaries

Example: "Only provide information about our products and services. For questions outside your expertise, direct users to appropriate resources or human support."

3. Define Tone and Style

Example: "Maintain a professional yet friendly tone. Use clear, jargon-free language. Break complex information into digestible steps."

4. Include Output Format Preferences

Example: "When providing troubleshooting steps, use numbered lists. For code examples, use proper markdown formatting with language tags."

Advanced Agent Features

Custom Metadata

Use the custom field to store additional configuration:

{
  "custom": {
    "domain": "healthcare",
    "compliance_level": "hipaa",
    "escalation_keywords": ["lawsuit", "complaint", "urgent"],
    "max_conversation_turns": 10,
    "languages": ["en", "es", "fr"]
  }
}
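
The `custom` field is free-form, so how it is used is up to the caller. For example, application code could read `escalation_keywords` and `max_conversation_turns` to decide when to hand a conversation to a human (a sketch; the field names follow the example above, but the hand-off helper itself is hypothetical, not part of CoAgent):

```python
def should_escalate(message: str, turn_count: int, custom: dict) -> bool:
    """Return True when the message or conversation length warrants escalation."""
    keywords = custom.get("escalation_keywords", [])
    if any(kw in message.lower() for kw in keywords):
        return True
    return turn_count >= custom.get("max_conversation_turns", float("inf"))

custom = {
    "escalation_keywords": ["lawsuit", "complaint", "urgent"],
    "max_conversation_turns": 10,
}
```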

Context-Aware Agents (Python Client)

For agents that adapt behavior based on context:

from coagent_types import CoagentContext

# Define specialized contexts
contexts = [
    CoagentContext(
        name="technical_support",
        description="Handle technical product issues and troubleshooting",
        prompt="You are a technical support expert. Focus on diagnosing issues methodically and providing step-by-step solutions."
    ),
    CoagentContext(
        name="billing_support", 
        description="Handle billing inquiries and account issues",
        prompt="You are a billing specialist. Help customers understand charges, process refunds, and resolve account issues with empathy."
    )
]

config = CoagentConfig(
    model_name="gpt-4",
    contexts=contexts
)
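
How the active context is chosen is up to the application. One simple approach is keyword routing over incoming messages (a sketch; the context names match the example above, but the routing table and logic are illustrative, not part of the client):

```python
# Keyword routes keyed by context name (illustrative values)
ROUTES = {
    "technical_support": ["error", "bug", "crash", "troubleshoot"],
    "billing_support": ["invoice", "charge", "refund", "billing"],
}

def route_context(message: str, default: str = "technical_support") -> str:
    """Pick a context name by keyword match, falling back to the default."""
    text = message.lower()
    for name, keywords in ROUTES.items():
        if any(kw in text for kw in keywords):
            return name
    return default
```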

Model Providers

Model providers connect your agents to LLM services.

Supported Provider Types

OpenAI

{
  "name": "OpenAI Production",
  "provider_type": "openai",
  "api_key": "sk-...",
  "available_models": ["gpt-4", "gpt-3.5-turbo", "gpt-4-turbo"],
  "custom_url": null
}

Anthropic

{
  "name": "Anthropic Claude",
  "provider_type": "anthropic", 
  "api_key": "sk-ant-...",
  "available_models": ["claude-3-opus", "claude-3-sonnet", "claude-3-haiku"]
}

Mistral

{
  "name": "Mistral AI",
  "provider_type": "mistral",
  "api_key": "...",
  "available_models": ["mistral-large", "mistral-medium", "mistral-small"]
}

Custom/Local Models

{
  "name": "Local Ollama",
  "provider_type": "openai",
  "api_key": "not-needed",
  "custom_url": "http://localhost:11434/v1",
  "available_models": ["llama3.1:8b", "codellama:13b"]
}

Provider Management

Creating Providers via Web UI

  1. Go to Providers in the web interface

  2. Click "Add Provider"

  3. Configure the provider settings:

    • Name: Descriptive name for the provider

    • Type: Select from supported provider types

    • API Key: Your authentication key

    • Available Models: List of accessible models

    • Custom URL: For self-hosted or custom endpoints

Managing API Keys Securely

Best Practices:

  • Use environment variables for API keys in production

  • Rotate keys regularly

  • Monitor usage and set billing alerts

  • Use separate keys for development and production

Environment Variable Setup:

# .env file
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
MISTRAL_API_KEY=...

# Docker Compose
environment:
  - OPENAI_API_KEY=${OPENAI_API_KEY}
  - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
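
In application code, read the keys from the environment rather than hard-coding them. A minimal sketch (the variable names match the `.env` example above; the helper is illustrative):

```python
import os

def require_api_key(var_name: str) -> str:
    """Fetch an API key from the environment, failing loudly if it is missing."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"{var_name} is not set")
    return key

# openai_key = require_api_key("OPENAI_API_KEY")
```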

Bound Agents

Bound agents combine agent configurations with model providers to create executable AI systems.

Creating Bound Agents

Via Web UI

  1. Navigate to Bound Agents

  2. Click "Create Bound Agent"

  3. Configure the binding:

    • Name: Unique identifier for the bound agent

    • Description: Brief explanation

    • Agent Configuration: Select from available agents

    • Model Provider: Choose the provider

    • Model: Select specific model from provider's available models

Naming Convention Best Practices

Use descriptive names that indicate both purpose and model:

Good examples:
- customer-support-gpt4
- technical-docs-claude-opus
- code-review-codellama-13b
- content-writer-mistral-large

Poor examples:
- agent1
- my-agent
- test

Extended Bound Agent Information

Use the ?ext=1 parameter to get comprehensive information:

curl "http://localhost:3000/api/v1/bound_agents/customer-support-gpt4?ext=1"

This returns:

{
  "bound_agent": { ... },
  "agent_config": { ... },
  "model_provider": { ... }
}
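
The three objects in the extended response can be unpacked in one place (a sketch; the key names follow the response shape above):

```python
def unpack_ext_response(data: dict):
    """Split a ?ext=1 response into its three component objects."""
    return (
        data.get("bound_agent", {}),
        data.get("agent_config", {}),
        data.get("model_provider", {}),
    )
```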

Sandbox Configurations

Sandbox configurations define runtime environments for agent execution.

Core Components

System Parameters

{
  "parameters": {
    "temperature": 0.7,      // Creativity level (0.0-1.0)
    "max_tokens": 2048,      // Maximum response length  
    "top_p": 1.0            // Token selection diversity
  }
}

Model Selection

{
  "selected_model_reference": {
    "provider_id": "openai-prod",
    "model_name": "gpt-4"
  }
}

Tool Integration

{
  "tools": [
    {
      "id": "web-search-tools",
      "tool_names": ["google_search", "webpage_extract"]
    },
    {
      "id": "data-tools", 
      "tool_names": ["*"]  // All tools from this provider
    }
  ]
}
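
A `"*"` entry selects every tool a provider exposes. Resolving the selection against a provider catalog might look like this (a sketch; the catalog structure and resolver are illustrative, not part of CoAgent's API):

```python
def resolve_tools(tool_specs, catalog):
    """Expand each spec's tool_names against a {provider_id: [tool, ...]} catalog."""
    resolved = {}
    for spec in tool_specs:
        available = catalog.get(spec["id"], [])
        if spec["tool_names"] == ["*"]:
            resolved[spec["id"]] = list(available)  # wildcard: take everything
        else:
            resolved[spec["id"]] = [t for t in spec["tool_names"] if t in available]
    return resolved
```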

Creating Sandbox Configurations

Basic Configuration

{
  "name": "Customer Support Environment",
  "description": "Production environment for customer support agents",
  "system_prompt": "You are operating in a customer support context. Always prioritize customer satisfaction and follow company policies.",
  "parameters": {
    "temperature": 0.3,
    "max_tokens": 1024,
    "top_p": 0.9
  },
  "tools": [],
  "category": "production"
}

Advanced Configuration with Tools

{
  "name": "Research Assistant Environment",
  "description": "Environment with web search and data analysis capabilities",
  "system_prompt": "You have access to web search and data analysis tools. Use them to provide comprehensive, well-researched responses.",
  "parameters": {
    "temperature": 0.5,
    "max_tokens": 4096,
    "top_p": 0.95
  },
  "tools": [
    {
      "id": "web-tools",
      "tool_names": ["search", "extract_content", "summarize_webpage"]
    },
    {
      "id": "data-tools",
      "tool_names": ["analyze_csv", "create_chart"]
    }
  ],
  "category": "research"
}

Tool Providers

Tool providers extend agent capabilities with external functions.

MCP (Model Context Protocol) Tools

CoAgent supports MCP tools for extending agent capabilities:

Web Search Tools

{
  "name": "Web Search Provider",
  "description": "Provides web search and content extraction",
  "provider_type": "mcp_link",
  "config": {
    "transport": "stdio",
    "command": ["npx", "-y", "@modelcontextprotocol/server-web-search"],
    "args": [],
    "env": {
      "GOOGLE_API_KEY": "${GOOGLE_API_KEY}",
      "GOOGLE_CSE_ID": "${GOOGLE_CSE_ID}"
    }
  }
}

Database Tools

{
  "name": "Database Provider",
  "description": "SQL query and database interaction tools",
  "provider_type": "mcp_link", 
  "config": {
    "transport": "stdio",
    "command": ["python", "-m", "mcp_server_database"],
    "env": {
      "DATABASE_URL": "postgresql://user:pass@localhost/db"
    }
  }
}

Built-in Tools

CoAgent includes several built-in tool categories:

  • File Operations: Read, write, and manipulate files

  • Web Requests: HTTP requests and API calls

  • Data Processing: JSON/CSV parsing and transformation

  • System Commands: Safe system command execution

Performance Optimization

Model Selection Guidelines

Task-Based Recommendations

Complex Reasoning Tasks:

  • GPT-4, Claude-3-Opus

  • Higher temperature (0.7-0.9) for creativity

  • Higher max_tokens for detailed responses

Factual Q&A:

  • GPT-3.5-turbo, Claude-3-Sonnet

  • Lower temperature (0.1-0.3) for consistency

  • Moderate max_tokens (512-1024)

Code Generation:

  • GPT-4, CodeLlama, Claude-3-Sonnet

  • Low temperature (0.1-0.2) for accuracy

  • Higher max_tokens for complete code blocks

Customer Support:

  • GPT-3.5-turbo, Claude-3-Haiku

  • Medium temperature (0.3-0.5) for balanced responses

  • Moderate max_tokens with clear length limits
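
The recommendations above can be collected into a lookup table (the values follow the guidance in this section; the table structure and default choice are illustrative):

```python
# Suggested parameters per task category, per the guidance above
RECOMMENDATIONS = {
    "complex_reasoning": {"models": ["gpt-4", "claude-3-opus"],
                          "temperature": 0.8, "max_tokens": 4096},
    "factual_qa":        {"models": ["gpt-3.5-turbo", "claude-3-sonnet"],
                          "temperature": 0.2, "max_tokens": 1024},
    "code_generation":   {"models": ["gpt-4", "codellama", "claude-3-sonnet"],
                          "temperature": 0.1, "max_tokens": 4096},
    "customer_support":  {"models": ["gpt-3.5-turbo", "claude-3-haiku"],
                          "temperature": 0.4, "max_tokens": 1024},
}

def recommended_params(task: str) -> dict:
    """Return suggested parameters for a task category (defaults to factual_qa)."""
    return RECOMMENDATIONS.get(task, RECOMMENDATIONS["factual_qa"])
```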

Parameter Tuning

Temperature Guidelines

  • 0.0-0.2: Highly deterministic, factual responses

  • 0.3-0.5: Balanced creativity and consistency

  • 0.6-0.8: More creative, varied responses

  • 0.9-1.0: Highly creative, potentially unpredictable

Token Management

  • Set max_tokens based on expected response length

  • Monitor token usage through CoAgent's monitoring system

  • Use shorter limits for cost optimization

  • Consider model-specific token limits

Cost Optimization Strategies

1. Model Tiering

# Use different models based on complexity
simple_tasks_config = CoagentConfig(model_name="gpt-3.5-turbo")
complex_tasks_config = CoagentConfig(model_name="gpt-4")
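
A small router can then pick between the two configurations per request (a sketch; the length threshold and keyword heuristic are illustrative, not CoAgent features):

```python
def pick_model(prompt: str, threshold: int = 500) -> str:
    """Route long or multi-step prompts to the stronger model."""
    complex_markers = ("step by step", "analyze", "compare", "plan")
    if len(prompt) > threshold or any(m in prompt.lower() for m in complex_markers):
        return "gpt-4"
    return "gpt-3.5-turbo"
```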

2. Response Length Control

{
  "parameters": {
    "max_tokens": 512  // Limit for cost control
  }
}

3. Context Management

  • Use context switching to avoid repeated information

  • Implement conversation summarization for long interactions

  • Clear context when switching topics

Monitoring and Maintenance

Performance Metrics

Monitor these key metrics through CoAgent's monitoring system:

  • Response Time: Average and 95th percentile

  • Token Usage: Input/output token consumption

  • Success Rate: Percentage of successful requests

  • Cost: Total spending by model and time period

  • Tool Usage: Frequency of tool calls
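
The average and 95th percentile can be computed from raw response-time samples using only the standard library (a sketch using the nearest-rank percentile method):

```python
import math
import statistics

def summarize_latency(samples):
    """Return average and 95th-percentile (nearest-rank) response times."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = math.ceil(0.95 * len(ordered))  # nearest-rank: ceil(p * n)
    return {"avg": statistics.mean(ordered), "p95": ordered[rank - 1]}
```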

Maintenance Best Practices

Regular Reviews

  • Review system prompts quarterly

  • Analyze performance metrics monthly

  • Update model selections based on new releases

  • Rotate API keys according to security policies

A/B Testing

Use CoAgent's testing framework to compare configurations:

  • Test different system prompts

  • Compare model performance

  • Evaluate parameter changes

  • Measure user satisfaction

Continuous Improvement

  • Collect user feedback through CoAgent's feedback system

  • Analyze failed interactions

  • Update prompts based on common issues

  • Optimize tool configurations

Troubleshooting

Common Issues

Agent Not Responding

# Check bound agent status
curl http://localhost:3000/api/v1/bound_agents/your-agent-name

# Verify the agent's provider binding via the extended view
curl "http://localhost:3000/api/v1/bound_agents/your-agent-name?ext=1"

High Response Times

  • Check model provider latency

  • Reduce max_tokens if responses are too long

  • Consider switching to faster models for simple tasks

  • Monitor tool execution times

Inconsistent Responses

  • Lower temperature for more consistent behavior

  • Review and refine system prompts

  • Check for conflicting instructions

  • Ensure proper context management

Cost Issues

  • Monitor token usage in the CoAgent dashboard

  • Set up billing alerts with providers

  • Implement rate limiting if needed

  • Consider model downgrading for non-critical tasks

Advanced Debugging

Enable Debug Logging

config = CoagentConfig(
    model_name="gpt-4",
    logger_config=LoggerConfig(
        enabled=True,
        base_url="http://localhost:3000"
    )
)

Analyze Log Data

Use CoAgent's monitoring tools to:

  • View detailed request/response logs

  • Track token usage patterns

  • Identify performance bottlenecks

  • Monitor tool execution

Next Steps