Getting started with CoAgent

CoAgent is a comprehensive framework for building, testing, and monitoring AI agents. This guide will get you up and running with CoAgent in just a few minutes.

What is CoAgent?

CoAgent provides:

  • Agent Framework: Build context-aware AI agents with Python or Rust

  • Test Studio: Create comprehensive test suites and compare agent performance

  • Monitoring: Track performance, costs, and detect anomalies in real-time

  • Sandbox Environment: Safe testing environment with configurable parameters

Quick Start

The fastest way to get CoAgent running is using Docker:

Prerequisites

  • Docker and Docker Compose installed

  • Git for cloning repositories

  • 8GB+ RAM recommended

  • Ports 3000 and 7878 available

Installation

  1. Clone the Docker setup repository:

    git clone https://github.com/your-org/coagent-docker
    cd coagent-docker
    
    
  2. Start the CoAgent services with Docker Compose.

  3. Access the web interface: Open your browser to http://localhost:3000
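
Putting steps 1 and 2 together: assuming the repository ships a standard docker-compose.yml at its root (the compose invocation below is an assumption based on that convention, not taken from this guide), the full sequence looks like:

```shell
# Clone the setup repository and enter its default directory
git clone https://github.com/your-org/coagent-docker
cd coagent-docker

# Start all CoAgent services in the background, then confirm they are up
docker-compose up -d
docker-compose ps
```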

That's it! CoAgent is now running with all components available.

First Steps

1. Explore the Web Interface

After starting CoAgent, you'll see the main dashboard with several key sections:

  • Dashboard: Overview of recent activity and system health

  • Test Studio: Create and run agent tests

  • Monitoring: Track performance and costs

  • Sandbox: Interactive testing environment

2. Understanding Core Concepts

Agents

AI agents are the core units that process prompts and generate responses. Each agent has:

  • System Prompt: Instructions that define the agent's behavior

  • Context: Specialized knowledge or capabilities

  • Configuration: Parameters like temperature, max tokens

Providers

Model providers give agents access to LLMs:

  • OpenAI: GPT-3.5, GPT-4, etc.

  • Anthropic: Claude models

  • Mistral: Open source models

  • Custom: Your own model endpoints

Bound Agents

A bound agent combines an agent configuration with a specific model provider, creating a ready-to-use AI system.

Test Sets

Collections of test cases that evaluate agent performance across different scenarios.
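
The relationships between these concepts can be sketched as plain data structures. This is an illustrative model only; the class and field names below are hypothetical and do not reflect CoAgent's actual internal types:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of CoAgent's core concepts for orientation only.

@dataclass
class AgentConfig:
    name: str
    system_prompt: str              # instructions that define behavior
    context: Optional[str] = None   # optional specialized knowledge
    temperature: float = 0.7        # sampling parameters
    max_tokens: int = 1024

@dataclass
class Provider:
    name: str
    provider_type: str              # e.g. "openai", "anthropic", "mistral"
    available_models: list = field(default_factory=list)

@dataclass
class BoundAgent:
    """An agent configuration bound to a provider and model: ready to use."""
    name: str
    agent: AgentConfig
    provider: Provider
    model: str

assistant = AgentConfig(
    name="Personal Assistant",
    system_prompt="You are a helpful personal assistant. Be concise and friendly.",
)
openai = Provider("OpenAI GPT-4", "openai", ["gpt-4", "gpt-3.5-turbo"])
bound = BoundAgent("assistant-gpt4", assistant, openai, "gpt-4")
print(bound.model)  # gpt-4
```

The key idea is that an agent configuration is provider-agnostic; only the binding fixes which model actually serves it.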

3. Create Your First Agent

Let's create a simple assistant agent:

  1. Navigate to Agent Configurations in the web UI

  2. Click "Create New Agent"

  3. Fill in the details:

    Name: Personal Assistant
    Description: A helpful personal assistant
    System Prompt: You are a helpful personal assistant. Be concise and friendly.
    
    
  4. Click "Save"

4. Set Up a Model Provider

  1. Go to Providers in the web UI

  2. Click "Add Provider"

  3. Configure a provider (example with OpenAI):

    Name: OpenAI GPT-4
    Type: openai
    API Key: [Your OpenAI API key]
    Available Models: gpt-4, gpt-3.5-turbo
  4. Click "Save"

5. Create a Bound Agent

  1. Navigate to Bound Agents

  2. Click "Create Bound Agent"

  3. Configure the binding:

    Name: assistant-gpt4
    Agent: Personal Assistant
    Provider: OpenAI GPT-4
    Model: gpt-4
  4. Click "Save"

6. Test Your Agent

  1. Go to Sandbox in the web UI

  2. Select your bound agent: assistant-gpt4

  3. Enter a test prompt: "Help me plan a productive morning routine"

  4. Click "Send"

  5. Review the response and execution details
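
The same sandbox interaction could in principle be scripted over HTTP. A minimal sketch, assuming a JSON endpoint under the base URL used by the Rust client later in this guide (http://localhost:3000/api); the /sandbox/prompt path and payload fields are hypothetical, not CoAgent's documented API:

```python
import json
from urllib import request

def build_prompt_payload(bound_agent: str, prompt: str) -> dict:
    """Assemble a request body for a sandbox-style prompt (assumed shape)."""
    return {"bound_agent": bound_agent, "prompt": prompt}

def send_prompt(base_url: str, payload: dict) -> bytes:
    # "/sandbox/prompt" is an assumed endpoint; check the REST API reference.
    req = request.Request(
        base_url + "/sandbox/prompt",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.read()

payload = build_prompt_payload(
    "assistant-gpt4", "Help me plan a productive morning routine"
)
print(payload["bound_agent"])  # assistant-gpt4
# With a running server: send_prompt("http://localhost:3000/api", payload)
```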

Working with the Python Client

CoAgent includes a Python client that integrates with LangChain:

Installation

# Navigate to the langchain directory in your CoAgent installation
cd langchain
pip install -r requirements.txt

Basic Usage

from coagent import Coagent
from coagent_types import CoagentConfig, LoggerConfig

# Configure the agent
config = CoagentConfig(
    model_name="llama3.1:8b",  # or your preferred model
    logger_config=LoggerConfig(
        base_url="http://localhost:3000",
        enabled=True
    )
)

# Create the agent
agent = Coagent(config)

# Process a prompt
response = agent.process_prompt(
    human_prompt="What are three healthy breakfast options?",
    context="nutrition"  # optional context selection
)

print(f"Response: {response.response}")
print(f"Context used: {response.meta.get('context_name')}")

Working with the Rust Client

For production integrations and high-performance scenarios:

Installation

Add to your Cargo.toml:

[dependencies]
coagent-client = { git = "https://github.com/your-org/coagent", branch = "main" }
tokio = { version = "1.0", features = ["full"] }
serde_json = "1.0"
chrono = "0.4"

Basic Usage

use coagent_client::{CoaClient, LogEntry, LogEntryHeader, UserInputLog};
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = CoaClient::new("http://localhost:3000/api")?;
    
    // Log a user input
    let log_entry = LogEntry::UserInput(UserInputLog {
        hdr: LogEntryHeader {
            run_id: "demo-run-123".to_string(),
            timestamp: chrono::Utc::now().to_rfc3339(),
            meta: json!({"source": "rust-client"}),
        },
        content: "Hello, world!".to_string(),
    });
    
    let response = client.log_entry(log_entry).await?;
    println!("Logged successfully: {:?}", response);
    
    Ok(())
}

Next Steps

Now that you have CoAgent running, explore these areas:

  • For Testing and QA: the Testing guide and Test Studio

  • For Agent Development: the Python and Rust client tutorials

  • For Production Use: the Deployment and Monitoring documentation

Troubleshooting

Common Issues

Port already in use:

# Check what's using port 3000
lsof -i :3000
# Stop the process or change the port in docker-compose.yml

Docker out of memory:

# Increase Docker memory limit to 8GB+ in Docker Desktop settings
# Or, if the repository provides a lighter configuration, start with that file
docker-compose -f <alternate-compose-file> up -d

API key errors:

  • Ensure your API keys are properly configured in the Providers section

  • Check that API keys have sufficient permissions and credits

  • Verify the provider URL is correct

Getting Help

  • Documentation: Browse the complete reference documentation

  • GitHub Issues: Report bugs and request features

  • Community: Join our Discord/Slack for support

Summary

You now have CoAgent running and understand the basic concepts. The system provides:

  1. Web interface for managing agents, tests, and monitoring

  2. Python client for LangChain integration

  3. Rust client for high-performance applications

  4. REST API for custom integrations

Choose your path based on your needs:

  • Developers: Start with the Python or Rust client tutorials

  • QA Engineers: Explore the Testing guide and Test Studio

  • DevOps: Check the Deployment and Monitoring documentation

  • Researchers: Dive into multi-agent comparison and analysis features