How the agent works

Tyler uses an iterative approach to process messages and execute tools. Here’s how it works:

Processing Flow

When you call agent.go() (with or without streaming), Tyler follows these steps (a minimal usage sketch follows the list):
  1. Message Processing
    • Loads the conversation thread
    • Processes any attached files (images, PDFs, etc.)
    • Ensures the system prompt is set
  2. Step Execution
    • Makes an LLM call with the current context
    • Processes the response for content and tool calls
    • Streams responses in real-time (if using stream=True)
  3. Tool Execution
    • If tool calls are present, executes them in parallel
    • Adds tool results back to the conversation
    • Returns to step execution if more tools are needed
  4. Completion
    • Saves the final thread state
    • Returns the processed thread and new messages
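
The four steps above map onto a single call. Here is a minimal, non-streaming sketch using only the constructs introduced later on this page:
from tyler import Agent, Thread, Message

agent = Agent(
    name="assistant",
    model_name="gpt-4",
    purpose="To be a helpful assistant"
)

# 1. Message processing: the thread and its messages are prepared
thread = Thread()
thread.add_message(Message(role="user", content="Summarize the latest AI news"))

# 2-3. Step and tool execution happen inside go(); with no tools configured,
# the loop completes after the first LLM response
result = await agent.go(thread)

# 4. Completion: the processed thread (with the assistant's reply) is returned
print(result.thread)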

Key Components

  • ToolRunner: Manages the registry of available tools and handles execution
  • Thread: Maintains conversation history and context
  • Message: Represents user, assistant, and tool messages
  • ExecutionEvent: Provides detailed execution telemetry and streaming updates

Error Handling & Limits

Tyler includes built-in safeguards (a configuration sketch follows the list):
  • Maximum tool iteration limit (default: 10)
  • Automatic error recovery
  • Structured error responses
  • Tool execution timeout handling
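
For example, the iteration cap could be lowered for agents that should stay tightly scoped. This is a hedged sketch: max_tool_iterations is an assumed, illustrative parameter name for the limit described above, not a confirmed keyword; check the Agent API reference for the exact spelling in your version:
from tyler import Agent, Thread, Message

agent = Agent(
    name="bounded-assistant",
    model_name="gpt-4",
    purpose="To answer questions with a small tool budget",
    max_tool_iterations=5  # assumed/illustrative name for the iteration limit (default: 10)
)

thread = Thread()
thread.add_message(Message(role="user", content="Briefly research this topic"))

try:
    result = await agent.go(thread)
except Exception as exc:
    # Unexpected failures surface here; per the safeguards above, individual
    # tool errors are handled inside the loop as structured error responses
    print(f"Agent run failed: {exc}")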

Creating an Agent

Basic Agent

from tyler import Agent

# Minimal agent configuration
agent = Agent(
    name="assistant",
    model_name="gpt-4",
    purpose="To be a helpful assistant"
)

# With additional configuration
agent = Agent(
    name="gpt4-assistant",
    model_name="gpt-4",
    purpose="To assist with various tasks",
    temperature=0.7
)

Agent with Tools

from tyler import Agent
from lye import WEB_TOOLS, FILES_TOOLS, IMAGE_TOOLS
from lye.web import search, fetch
from lye.files import read_file, write_file
from lye.image import analyze_image

# Using tool groups
agent = Agent(
    name="research-assistant",
    model_name="gpt-4",
    purpose="To help with research tasks",
    tools=[
        *WEB_TOOLS,      # All web tools
        *FILES_TOOLS,    # All file tools
        analyze_image    # Specific image tool
    ]
)

# Or using specific tools
agent = Agent(
    name="focused-assistant",
    model_name="gpt-4",
    purpose="To search and save information",
    tools=[search, fetch, read_file, write_file]
)

Agent Capabilities

1. Tool Usage

Agents can intelligently select and use tools based on the task:
from tyler import Agent, Thread, Message

# Create thread and message
thread = Thread()
message = Message(
    role="user",
    content="Search for recent AI developments and save a summary"
)
thread.add_message(message)

# Agent automatically chooses the right tools
result = await agent.go(thread)
# Agent will: 1) Use web.search, 2) Use web.fetch for details, 3) Use files.write

2. Multi-step Reasoning

Agents can break down complex tasks:
thread = Thread()
message = Message(
    role="user",
    content="""
    1. Find the top 3 Python web frameworks
    2. Compare their features
    3. Create a comparison chart
    4. Save the analysis
    """
)
thread.add_message(message)

result = await agent.go(thread)

3. Context Awareness

With proper thread management, agents maintain conversation context:
from tyler import Agent, Thread, Message, ThreadStore

# Set up persistent storage
thread_store = await ThreadStore.create("sqlite+aiosqlite:///conversations.db")

agent = Agent(
    name="assistant",
    model_name="gpt-4",
    thread_store=thread_store
)

# Create a thread
thread = Thread(id="research-session")

# First query
message1 = Message(role="user", content="What is FastAPI?")
thread.add_message(message1)
result = await agent.go(thread)

# Save the thread
await thread_store.save_thread(result.thread)

# Follow-up uses context
message2 = Message(role="user", content="How does it compare to Flask?")
result.thread.add_message(message2)
final_result = await agent.go(result.thread)
# Agent knows we're talking about FastAPI

Advanced features

Streaming Responses

For long-running tasks or real-time interaction:
from tyler import Agent, Thread, Message
from tyler.models.execution import ExecutionEvent, EventType

thread = Thread()
message = Message(role="user", content="Write a detailed analysis of...")
thread.add_message(message)

async for event in agent.go(thread, stream=True):
    if event.type == EventType.LLM_STREAM_CHUNK:
        print(event.data.get("content_chunk", ""), end="", flush=True)
    elif event.type == EventType.TOOL_SELECTED:
        print(f"\n[Using tool: {event.data.get('tool_name', '')}]")

Custom System Prompts

Fine-tune agent behavior:
agent = Agent(
    name="code-reviewer",
    model_name="gpt-4",
    purpose="""You are an expert code reviewer. 
    Focus on: 
    - Security vulnerabilities
    - Performance optimizations
    - Best practices
    Always explain your reasoning."""
)

Tool Configuration

Control how agents use tools:
from lye import FILES_TOOLS

agent = Agent(
    name="safe-agent",
    model_name="gpt-4",
    purpose="To safely read and analyze files",
    tools=[FILES_TOOLS[0]],  # Just read_file tool
    tool_choice="auto",  # or "none", "required", or specific tool name
    parallel_tool_calls=True  # Enable parallel execution
)

Agent Patterns

1. Supervisor Pattern

from lye import WEB_TOOLS, FILES_TOOLS

# Note: This is a conceptual pattern - implement delegate_task and create_sub_agent yourself
supervisor = Agent(
    name="supervisor",
    model_name="gpt-4",
    purpose="You coordinate work between specialized agents",
    tools=[delegate_task, create_sub_agent]
)

researcher = Agent(
    name="researcher", 
    model_name="gpt-4",
    purpose="To conduct research",
    tools=[*WEB_TOOLS]
)

writer = Agent(
    name="writer",
    model_name="gpt-4", 
    purpose="To write content",
    tools=[*FILES_TOOLS]
)

2. Tool Specialist Pattern

from lye import IMAGE_TOOLS
from lye.files import read_csv, write_file

# Image specialist
image_agent = Agent(
    name="image-expert",
    model_name="gpt-4",
    purpose="You are an expert at image analysis and manipulation",
    tools=IMAGE_TOOLS
)

# Data specialist
data_agent = Agent(
    name="data-analyst",
    model_name="gpt-4",
    purpose="You are a data analysis expert",
    tools=[read_csv, write_file]  # Add your data analysis tools
)

3. Validation Pattern

# Note: validation_fn is not a current Tyler feature
# Instead, use the evaluation framework for validation
from tyler.eval import AgentEval, Conversation, Expectation

eval = AgentEval(
    name="validation_test",
    conversations=[
        Conversation(
            user="Analyze this data",
            expect=Expectation(
                custom=lambda response: len(response["content"]) > 100
            )
        )
    ]
)

Best practices

Testing Agents

Tyler provides a comprehensive evaluation framework for testing your agents:
from tyler.eval import AgentEval, Conversation, Expectation, ToolUsageScorer

# Define test scenarios
eval = AgentEval(
    name="agent_test",
    conversations=[
        Conversation(
            user="Calculate the sum of 15 and 27",
            expect=Expectation(
                mentions=["42"],
                completes_task=True
            )
        )
    ],
    scorers=[ToolUsageScorer()]
)

# Run tests with mock tools
results = await eval.run(agent)

Key testing features:
  • Mock Tools: Prevent real API calls during testing
  • Flexible Expectations: Test content, behavior, and tool usage
  • Multiple Scorers: Evaluate tone, task completion, and more
  • Multi-turn Conversations: Test complex interaction flows

Always test your agents with the evaluation framework before deployment. See the full evaluation guide for details.

Performance Considerations

  • Token Usage: Monitor and optimize prompts to reduce token consumption
  • Tool Calls: Minimize unnecessary tool invocations
  • Caching: Use Narrator’s thread system to avoid redundant work
  • Parallel Execution: Enable parallel_tool_calls when tools can run concurrently (see the sketch after this list)
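
For example, a narrowly scoped agent with parallel tool calls enabled (using only parameters shown earlier on this page) keeps both token usage and tool overhead down:
from tyler import Agent
from lye.web import search, fetch

agent = Agent(
    name="lean-researcher",
    model_name="gpt-4",
    purpose="To answer research questions concisely",  # short prompt keeps token usage down
    tools=[search, fetch],     # only the tools the task actually needs
    parallel_tool_calls=True   # independent tool calls can run concurrently
)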

Next steps