How the agent works
Tyler uses an iterative approach to process messages and execute tools. Here's how it works.
Processing Flow
When you call agent.go() (with or without streaming), Tyler follows these steps:
1. Message Processing
- Loads the conversation thread
- Processes any attached files (images, PDFs, etc.)
- Ensures the system prompt is set
2. Step Execution
- Makes an LLM call with the current context
- Processes the response for content and tool calls
- Streams responses in real-time (if using stream=True)
3. Tool Execution
- If tool calls are present, executes them in parallel
- Adds tool results back to the conversation
- Returns to step execution if more tools are needed
4. Completion
- Saves the final thread state
- Returns the processed thread and new messages
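The loop described above can be sketched in plain Python. This is a simplified stand-in, not Tyler's actual implementation; the scripted LLM and the tool registry are purely illustrative:

```python
# Simplified sketch of the agent loop described above (not Tyler's real code).
# A scripted "LLM" first requests a tool call, then produces a final answer.

def run_agent(thread, llm, tools, max_iterations=10):
    new_messages = []
    for _ in range(max_iterations):
        response = llm(thread)                       # 2. Step Execution: LLM call
        thread.append(response)
        new_messages.append(response)
        tool_calls = response.get("tool_calls", [])
        if not tool_calls:                           # no tools requested -> done
            break
        for call in tool_calls:                      # 3. Tool Execution
            result = tools[call["name"]](**call["args"])
            tool_msg = {"role": "tool", "name": call["name"], "content": result}
            thread.append(tool_msg)                  # add tool result to thread
            new_messages.append(tool_msg)
    return thread, new_messages                      # 4. Completion

# Scripted LLM: requests one tool call, then answers once it sees the result.
def scripted_llm(thread):
    if any(m["role"] == "tool" for m in thread):
        return {"role": "assistant", "content": "2 + 2 = 4"}
    return {"role": "assistant", "content": None,
            "tool_calls": [{"name": "add", "args": {"a": 2, "b": 2}}]}

tools = {"add": lambda a, b: str(a + b)}
thread = [{"role": "user", "content": "What is 2 + 2?"}]
final_thread, msgs = run_agent(thread, scripted_llm, tools)
print(msgs[-1]["content"])  # -> 2 + 2 = 4
```

The loop returns to step execution after each round of tool results, exactly as in the flow above, and stops once the model replies without requesting tools.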
Key Components
- ToolRunner: Manages the registry of available tools and handles execution
- Thread: Maintains conversation history and context
- Message: Represents user, assistant, and tool messages
- ExecutionEvent: Provides detailed execution telemetry and streaming updates
Error Handling & Limits
Tyler includes built-in safeguards:
- Maximum tool iteration limit (default: 10)
- Automatic error recovery
- Structured error responses
- Tool execution timeout handling
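A rough sketch of how these safeguards fit together, using asyncio.wait_for for the timeout. Only the iteration default of 10 comes from the text above; the timeout value and the error-response shape are assumptions:

```python
import asyncio

MAX_TOOL_ITERATIONS = 10   # default iteration cap mentioned above
TOOL_TIMEOUT_S = 0.05      # illustrative timeout, not a documented default

async def call_tool_with_timeout(tool, *args):
    """Run a tool, returning a structured error instead of raising."""
    try:
        result = await asyncio.wait_for(tool(*args), TOOL_TIMEOUT_S)
        return {"ok": True, "result": result}
    except asyncio.TimeoutError:
        return {"ok": False, "error": "tool timed out"}
    except Exception as exc:  # recover from tool failures: report, don't crash
        return {"ok": False, "error": str(exc)}

async def slow_tool():
    await asyncio.sleep(1)    # exceeds the timeout on purpose

async def failing_tool():
    raise ValueError("bad input")

async def main():
    print(await call_tool_with_timeout(slow_tool))     # timeout -> structured error
    print(await call_tool_with_timeout(failing_tool))  # exception -> structured error

asyncio.run(main())
```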
Creating an Agent
Basic Agent
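The construction below is a self-contained stand-in that shows the shape of a basic agent configuration. In Tyler itself the Agent class comes from the tyler package; the field names here (name, model_name, purpose, tools) are assumptions based on the surrounding text, not the verified API:

```python
from dataclasses import dataclass, field

# Stand-in Agent class to illustrate configuration shape; in Tyler the class
# is imported from the tyler package, and these field names are assumptions.
@dataclass
class Agent:
    name: str
    model_name: str
    purpose: str
    tools: list = field(default_factory=list)

agent = Agent(
    name="assistant",
    model_name="gpt-4o",
    purpose="To help users with general questions",
)
print(agent.name)  # -> assistant
```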
Agent with Tools
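A stand-in sketch of an agent configured with tools, mixing a named tool group with a custom Python function. Again, the Agent class and the "web" tool group are illustrative assumptions, not the verified Tyler API:

```python
from dataclasses import dataclass, field

# Stand-in Agent to illustrate shape; field names are assumptions, not Tyler's API.
@dataclass
class Agent:
    name: str
    model_name: str
    purpose: str
    tools: list = field(default_factory=list)

def get_weather(city: str) -> str:
    """A custom function tool the agent could call."""
    return f"Sunny in {city}"

agent = Agent(
    name="research-assistant",
    model_name="gpt-4o",
    purpose="To research questions using web and custom tools",
    tools=["web", get_weather],   # hypothetical built-in group plus a custom function
)
print(len(agent.tools))  # -> 2
```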
Agent Capabilities
1. Tool Usage
Agents can intelligently select and use tools based on the task.
2. Multi-step Reasoning
Agents can break down complex tasks into smaller steps.
3. Context Awareness
With proper thread management, agents maintain conversation context across turns.
Advanced features
Streaming Responses
For long-running tasks or real-time interaction, stream responses as they are generated.
Custom System Prompts
Fine-tune agent behavior with a custom system prompt.
Tool Configuration
Control how agents select and use tools.
Agent Patterns
1. Supervisor Pattern
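A minimal sketch of the supervisor pattern: one agent routes requests to specialist agents. The classes and the keyword-based routing are illustrative stand-ins, not the Tyler API; a real supervisor would typically ask the LLM to choose the specialist:

```python
# Supervisor pattern sketch: a routing agent delegates to specialist agents.
class SpecialistAgent:
    def __init__(self, name, handle):
        self.name = name
        self.handle = handle  # the function this specialist runs

    def go(self, request):
        return self.handle(request)

class SupervisorAgent:
    def __init__(self, specialists):
        self.specialists = specialists

    def go(self, request):
        # Route by keyword; a real supervisor would ask the LLM to choose.
        for keyword, agent in self.specialists.items():
            if keyword in request.lower():
                return agent.go(request)
        return "No specialist available"

supervisor = SupervisorAgent({
    "code": SpecialistAgent("coder", lambda r: "coder handled: " + r),
    "data": SpecialistAgent("analyst", lambda r: "analyst handled: " + r),
})
print(supervisor.go("Please review this code"))  # -> coder handled: Please review this code
```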
2. Tool Specialist Pattern
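A sketch of the tool specialist pattern: an agent scoped to one small toolset that rejects anything outside it (stand-in class and illustrative tool names, not the Tyler API):

```python
# Tool specialist pattern sketch: a narrowly scoped agent that owns one
# toolset and refuses unrelated work.
class MathSpecialist:
    TOOLS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

    def go(self, tool_name, *args):
        if tool_name not in self.TOOLS:
            return {"ok": False, "error": f"unsupported tool: {tool_name}"}
        return {"ok": True, "result": self.TOOLS[tool_name](*args)}

specialist = MathSpecialist()
print(specialist.go("add", 2, 3))   # -> {'ok': True, 'result': 5}
print(specialist.go("fetch_url"))   # -> {'ok': False, 'error': 'unsupported tool: fetch_url'}
```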
3. Validation Pattern
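A sketch of the validation pattern: a validator checks a worker's output and triggers a retry on failure (stand-in functions, not the Tyler API; the worker's first attempt is deliberately malformed):

```python
# Validation pattern sketch: a validator agent checks a worker agent's
# output and requests a retry when it fails.
def worker(task, attempt):
    # First attempt is deliberately malformed to trigger validation.
    return "draft" if attempt == 0 else f"ANSWER: {task.upper()}"

def validator(output):
    return output.startswith("ANSWER:")

def run_with_validation(task, max_attempts=3):
    for attempt in range(max_attempts):
        output = worker(task, attempt)
        if validator(output):
            return output
    raise RuntimeError("validation failed after retries")

print(run_with_validation("summarize report"))  # -> ANSWER: SUMMARIZE REPORT
```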
Best practices
1. Name agents descriptively
Use clear, descriptive names that indicate the agent’s purpose:
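As an illustration (the names and the heuristic here are hypothetical, not part of Tyler):

```python
# Descriptive agent names make logs, telemetry, and multi-agent routing
# readable. Stand-in config dicts; names are illustrative.
descriptive = {"name": "support-ticket-triage", "purpose": "Route incoming support tickets"}
vague = {"name": "agent1", "purpose": "Help"}

def is_descriptive(name):
    # Crude heuristic: multi-word, purpose-bearing names read better in logs.
    return "-" in name or "_" in name

print(is_descriptive(descriptive["name"]), is_descriptive(vague["name"]))  # -> True False
```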
2. Limit tool access
Only provide tools the agent actually needs:
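For example, a sketch that builds an agent's toolset from an allow-list (tool names are illustrative):

```python
# Scope each agent's toolset to its job; reject anything outside the allow-list.
ALL_TOOLS = {"web_search", "send_email", "run_sql", "delete_records"}

def make_toolset(needed):
    unknown = needed - ALL_TOOLS
    if unknown:
        raise ValueError(f"unknown tools: {sorted(unknown)}")
    return needed

# A read-only research agent gets search access and nothing destructive.
research_tools = make_toolset({"web_search"})
print(sorted(research_tools))  # -> ['web_search']
```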
3. Use appropriate models
Match model capabilities to task complexity:
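A sketch of routing tasks to models by complexity (the model names are examples only; use whatever your provider offers):

```python
# Match model to task complexity: cheap/fast models for simple routing,
# stronger models for multi-step reasoning. Model names are examples.
MODEL_BY_TASK = {
    "classification": "gpt-4o-mini",   # simple labels and routing
    "research": "gpt-4o",              # complex multi-step reasoning
}

def pick_model(task_type):
    return MODEL_BY_TASK.get(task_type, "gpt-4o-mini")

print(pick_model("research"))  # -> gpt-4o
```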
4. Handle errors gracefully
Always implement error handling:
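One way to do this is to wrap the agent call so failures come back as structured results (a sketch with a deliberately failing stand-in, not Tyler's error API):

```python
# Wrap agent calls so failures produce a structured result instead of crashing.
def flaky_agent_go(thread):
    raise TimeoutError("LLM call timed out")  # stand-in for a real failure

def safe_go(agent_go, thread):
    try:
        return {"ok": True, "thread": agent_go(thread)}
    except Exception as exc:
        return {"ok": False, "error": f"{type(exc).__name__}: {exc}"}

result = safe_go(flaky_agent_go, [])
print(result)  # -> {'ok': False, 'error': 'TimeoutError: LLM call timed out'}
```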
Testing Agents
Tyler provides a comprehensive evaluation framework for testing your agents:
- Mock Tools: Prevent real API calls during testing
- Flexible Expectations: Test content, behavior, and tool usage
- Multiple Scorers: Evaluate tone, task completion, and more
- Multi-turn Conversations: Test complex interaction flows
Always test your agents with the evaluation framework before deployment. See the full evaluation guide for details.
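The mock-tool idea can be sketched without the framework: replace a real tool with a canned function and assert on both the answer and the recorded calls. The agent loop here is a trivial stand-in; Tyler's evaluation framework provides its own helpers for this:

```python
# Sketch of mocking a tool in tests so no real API call happens.
calls = []

def mock_weather_tool(city):
    calls.append(city)            # record usage for behavioural assertions
    return "Sunny, 22C"           # canned response, no network access

def agent_answer(question, tools):
    # Trivial stand-in for the agent loop: always consults the weather tool.
    report = tools["weather"]("Paris")
    return f"The forecast is: {report}"

answer = agent_answer("Weather in Paris?", {"weather": mock_weather_tool})
print(answer)   # -> The forecast is: Sunny, 22C
print(calls)    # -> ['Paris']
```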
Performance Considerations
- Token Usage: Monitor and optimize prompts to reduce token consumption
- Tool Calls: Minimize unnecessary tool invocations
- Caching: Use Narrator’s thread system to avoid redundant work
- Parallel Execution: Enable parallel_tool_calls when tools can run concurrently