Overview
The Agent class is the central component of Tyler, providing a flexible interface for creating AI agents with tool use, delegation capabilities, and conversation management.
Creating an Agent
from tyler import Agent

agent = Agent(
    name="MyAssistant",
    model_name="gpt-4o",
    purpose="To help users with their tasks",
    temperature=0.7,
    tools=[...],   # Optional tools
    agents=[...]   # Optional sub-agents for delegation
)
All Parameters
name
string
The name of your agent. This is used in the system prompt to give the agent an identity.
model_name
string
The LLM model to use. Supports any LiteLLM-compatible model, including OpenAI, Anthropic, Gemini, and more.
purpose
string | Prompt
default:"To be a helpful assistant."
The agent’s purpose or system prompt. Can be a string or a Tyler Prompt object for more complex prompts.
temperature
float
Controls randomness in responses. Range is 0.0 to 2.0; lower values make output more focused and deterministic.
drop_params
bool
Whether to automatically drop unsupported parameters for specific models. When True, parameters like temperature are automatically removed for models that don't support them (e.g., O-series models), ensuring compatibility across providers without model-specific configuration.
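For example, O-series reasoning models reject temperature; with this behavior enabled, the same configuration can be reused across providers. A minimal sketch (the model name is illustrative):
from tyler import Agent

# temperature is dropped automatically for models that don't accept it,
# so one configuration works for both standard and O-series models.
agent = Agent(
    name="Reasoner",
    model_name="o3-mini",  # illustrative O-series model
    purpose="To reason through hard problems",
    temperature=0.7,       # removed automatically where unsupported
)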
tools
List[Union[str, Dict, Callable, ModuleType]]
default:"[]"
List of tools available to the agent. Can include:
- Direct tool function references (callables)
- Tool module namespaces (modules like web, files)
- Built-in tool module names (strings like "web", "files")
- Custom tool definitions (dicts with 'definition', 'implementation', and optional 'attributes' keys)
For module names, you can select specific tools using the 'module:tool1,tool2' format, as shown in the sketch below.
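A sketch mixing these forms; the get_time function, the echo tool, and the tool names in the module selector are illustrative, and the dict shape assumes tool definitions follow the OpenAI function-calling schema:
from tyler import Agent

def get_time() -> str:
    """Return the current UTC time as an ISO-8601 string."""
    from datetime import datetime, timezone
    return datetime.now(timezone.utc).isoformat()

# Hypothetical custom tool definition; 'implementation' is assumed to
# accept the arguments declared in the schema.
echo_tool = {
    "definition": {
        "type": "function",
        "function": {
            "name": "echo",
            "description": "Echo back the provided text.",
            "parameters": {
                "type": "object",
                "properties": {"text": {"type": "string"}},
                "required": ["text"],
            },
        },
    },
    "implementation": lambda text: text,
}

agent = Agent(
    name="ToolDemo",
    model_name="gpt-4o",
    tools=[
        get_time,                      # direct callable
        "web",                         # built-in module by name
        "files:read_file,write_file",  # specific tools from a module (names illustrative)
        echo_tool,                     # custom tool definition
    ],
)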
agents
List[Agent]
default:"[]"
List of sub-agents that this agent can delegate tasks to. Enables multi-agent systems and task delegation.
Maximum number of tool calls allowed per conversation turn. Prevents infinite loops in tool usage.
api_base
string | None
default:"None"
Custom API base URL for the model provider (e.g., for using alternative inference services).
You can also use base_url as an alias for this parameter.
base_url
string | None
default:"None"
Alias for api_base. Either parameter can be used to specify a custom API endpoint.
extra_headers
Dict[str, str]
Additional headers to include in API requests. Useful for authentication tokens, API keys, or tracking headers.
notes
string | Prompt
default:""
Supporting notes to help the agent accomplish its purpose. These are included in the system prompt
and can provide additional context or instructions.
version
string
Version identifier for the agent. Useful for tracking agent iterations and changes.
thread_store
ThreadStore | None
default:"None"
Thread store instance for managing conversation threads. If not provided, uses the default thread store.
This parameter is excluded from serialization.
file_store
FileStore | None
default:"None"
File store instance for managing file attachments. If not provided, uses the default file store.
This parameter is excluded from serialization.
Processing Conversations
The go() method is the primary interface for processing conversations:
Non-Streaming Mode
from tyler import Thread, Message

# Create a conversation thread
thread = Thread()
thread.add_message(Message(role="user", content="Hello!"))

# Process the thread
result = await agent.go(thread)

# Access the response
print(result.content)       # The agent's final response
print(result.thread)        # Updated thread with all messages
print(result.new_messages)  # New messages added in this turn
Streaming Mode
from tyler import EventType

# Stream responses in real-time
async for event in agent.go(thread, stream=True):
    if event.type == EventType.LLM_STREAM_CHUNK:
        # Content being generated
        print(event.data["content_chunk"], end="", flush=True)
    elif event.type == EventType.TOOL_SELECTED:
        # Tool about to be called
        print(f"Using tool: {event.data['tool_name']}")
    elif event.type == EventType.MESSAGE_CREATED:
        # New message added to thread
        msg = event.data["message"]
        print(f"New {msg.role} message")
Return Values
AgentResult (Non-Streaming)
@dataclass
class AgentResult:
    thread: Thread               # Updated thread with all messages
    new_messages: List[Message]  # New messages from this execution
    content: Optional[str]       # The final assistant response
ExecutionEvent (Streaming)
@dataclass
class ExecutionEvent:
    type: EventType                     # Type of event
    timestamp: datetime                 # When the event occurred
    data: Dict[str, Any]                # Event-specific data
    metadata: Optional[Dict[str, Any]]  # Additional metadata
Event Types
- ITERATION_START: New iteration beginning
- LLM_REQUEST: Request sent to LLM
- LLM_RESPONSE: Complete response received
- LLM_STREAM_CHUNK: Streaming content chunk
- TOOL_SELECTED: Tool about to be called
- TOOL_RESULT: Tool execution completed
- TOOL_ERROR: Tool execution failed
- MESSAGE_CREATED: New message added
- EXECUTION_COMPLETE: All processing done
- EXECUTION_ERROR: Processing failed
- ITERATION_LIMIT: Max iterations reached
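During development it can help to log the entire event stream. A minimal sketch, assuming EventType is a standard Python Enum (so each member has a .name):
# Log every event with its timestamp and type
async for event in agent.go(thread, stream=True):
    print(f"[{event.timestamp:%H:%M:%S}] {event.type.name}")
    if event.type == EventType.EXECUTION_ERROR:
        # Event-specific payload lives in event.data
        print(f"  error details: {event.data}")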
Execution Details
You can access execution information through the thread and messages:
# Calculate timing from messages
if result.new_messages:
    start_time = min(msg.timestamp for msg in result.new_messages)
    end_time = max(msg.timestamp for msg in result.new_messages)
    duration_ms = (end_time - start_time).total_seconds() * 1000
    print(f"Duration: {duration_ms:.0f}ms")
    print(f"Started: {start_time}")
    print(f"Ended: {end_time}")

# Token usage from thread
token_stats = result.thread.get_total_tokens()
print(f"Total tokens: {token_stats['overall']['total_tokens']}")

# Tool usage from thread
tool_usage = result.thread.get_tool_usage()
if tool_usage['total_calls'] > 0:
    print("\nTools used:")
    for tool_name, count in tool_usage['tools'].items():
        print(f"  {tool_name}: {count} calls")
Using Tools
from lye import WEB_TOOLS, FILES_TOOLS

agent = Agent(
    name="ResearchAssistant",
    model_name="gpt-4o",
    purpose="To research topics and create reports",
    tools=[*WEB_TOOLS, *FILES_TOOLS]
)

# The agent can now browse the web and work with files
result = await agent.go(thread)

# Check which tools were used
tool_usage = result.thread.get_tool_usage()
for tool_name, count in tool_usage['tools'].items():
    print(f"Used {tool_name}: {count} times")
Agent Delegation
researcher = Agent(
    name="Researcher",
    purpose="To find information",
    tools=[*WEB_TOOLS]
)

writer = Agent(
    name="Writer",
    purpose="To create content",
    tools=[*FILES_TOOLS]
)

coordinator = Agent(
    name="Coordinator",
    purpose="To manage research and writing tasks",
    agents=[researcher, writer]  # Can delegate to these agents
)

# The coordinator can now delegate tasks
result = await coordinator.go(thread)
Custom Configuration
# Use a custom API endpoint
agent = Agent(
    model_name="gpt-4",
    api_base="https://your-api.com/v1",
    extra_headers={"Authorization": "Bearer token"}
)

# Configure storage
from narrator import ThreadStore, FileStore

agent = Agent(
    thread_store=ThreadStore(backend="postgresql"),
    file_store=FileStore(path="/custom/path")
)
Best Practices
- Clear Purpose: Define a specific, focused purpose for each agent
- Tool Selection: Only include tools the agent actually needs
- Temperature: Use lower values (0.0-0.3) for consistency, higher (0.7-1.0) for creativity
- Error Handling: Always handle potential errors in production (see the sketch after this list)
- Token Limits: Monitor token usage to avoid hitting limits
- Streaming: Use streaming for better user experience in interactive applications
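A minimal error-handling sketch; Tyler's specific exception types aren't documented here, so this catches broadly at the application boundary:
import logging

logger = logging.getLogger(__name__)

async def safe_run(agent, thread):
    """Run one agent turn, logging failures instead of crashing."""
    try:
        result = await agent.go(thread)
        return result.content
    except Exception:
        # Catch broadly at this boundary and log for debugging;
        # narrow this once the raised exception types are known.
        logger.exception("Agent execution failed")
        return None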
Example: Complete Application
import asyncio
from tyler import Agent, Thread, Message, EventType
from lye import WEB_TOOLS

async def main():
    # Create agent
    agent = Agent(
        name="WebAssistant",
        model_name="gpt-4o",
        purpose="To help users find information online",
        tools=WEB_TOOLS,
        temperature=0.3
    )

    # Create thread
    thread = Thread()

    # Add user message
    thread.add_message(Message(
        role="user",
        content="What's the latest news about AI?"
    ))

    # Process with streaming
    print("Assistant: ", end="", flush=True)
    async for event in agent.go(thread, stream=True):
        if event.type == EventType.LLM_STREAM_CHUNK:
            print(event.data["content_chunk"], end="", flush=True)
        elif event.type == EventType.TOOL_SELECTED:
            print(f"\n[Searching: {event.data['tool_name']}...]\n", end="", flush=True)
    print("\n")

if __name__ == "__main__":
    asyncio.run(main())