In this quickstart, you’ll build an AI agent that can search the web and analyze images. Let’s dive in!
Requirements: Python 3.11 or higher
1

Install Slide

Install the Slide packages (shown here with uv; pip works too):
uv add slide-tyler slide-lye slide-narrator
2

Set Up Your API Key

Your agent needs an API key to use the LLM. Choose your provider:
Create your API key at platform.openai.com, then add it to your environment:
export OPENAI_API_KEY="sk-..."
For production, use a .env file. If you choose this approach:
  1. Install python-dotenv: uv add python-dotenv or pip install python-dotenv
  2. Create a .env file with your API keys
  3. Uncomment the dotenv import lines in the code example below
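Whichever approach you choose, it helps to fail fast when the key is missing rather than getting a cryptic error on the first LLM call. A minimal stdlib-only check (the helper name is ours, not part of Slide):

```python
import os

def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Return an API key from the environment, failing fast with a clear message."""
    key = os.getenv(name)
    if not key:
        raise RuntimeError(f"{name} is not set; export it or add it to your .env file")
    return key
```

Call `require_api_key()` near the top of `agent.py` to surface a readable error before the agent starts.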
3

Create Your Agent

Create a file called agent.py:
# Optional: If you're using a .env file for API keys, uncomment these lines
# from dotenv import load_dotenv
# load_dotenv()

import asyncio
from tyler import Agent, Thread, Message, EventType
from lye import WEB_TOOLS, IMAGE_TOOLS

# Optional: Uncomment these lines if you want observability with W&B Weave
# import weave
# weave.init("wandb-designers/my-agent")

async def main():
    # Create your agent
    agent = Agent(
        name="research-assistant",
        model_name="gpt-4o",  # Use the model for your API key provider
        purpose="To help with research and analysis tasks",
        tools=[
            *WEB_TOOLS,      # Can search and fetch web content
            *IMAGE_TOOLS    # Can analyze and describe images
        ]
    )

    # Create a conversation thread
    thread = Thread()
    thread.add_message(Message(
        role="user",
        content="Search for information about the Mars Perseverance rover and create a summary"
    ))
    
    # Watch your agent work in real-time
    print("🤖 Agent is working...\n")
    async for event in agent.stream(thread):
        # Print content as it's generated
        if event.type == EventType.LLM_STREAM_CHUNK:
            print(event.data['content_chunk'], end="", flush=True)
        # Show tool usage
        elif event.type == EventType.TOOL_SELECTED:
            print(f"\n\n🔧 Using {event.data['tool_name']}...", flush=True)

if __name__ == "__main__":
    asyncio.run(main())
4

Run Your Agent

uv run agent.py
Your agent will search for information about the Mars rover and create a summary. That’s it! 🎉

What’s Next?

Now that you have a working agent, explore these guides to add more capabilities:

Your First Agent

Detailed walkthrough with persistence and interactive sessions

Tyler CLI

Chat with agents instantly using the interactive CLI

Conversation Persistence

Make your agent maintain conversation history

Streaming Responses

See responses as they’re generated in real-time

Testing Agents

Write tests to ensure your agent behaves correctly

Deploy Your Agent

Deploy to Slack

Turn your agent into a Slack agent

Adding Tools

Add built-in and custom tools to agents

Troubleshooting

If you see errors like “No solution found when resolving dependencies” or “requires Python>=3.11”:

For uv users:
# If you initialized with an older Python version, edit your pyproject.toml:
# requires-python = ">=3.11"

# Then recreate your virtual environment:
rm -rf .venv
uv sync
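For reference, the relevant pyproject.toml lines after that edit might look like this (the project name is illustrative):

```toml
[project]
name = "my-agent"
requires-python = ">=3.11"
dependencies = [
    "slide-tyler",
    "slide-lye",
    "slide-narrator",
]
```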
For pip users: we recommend switching to uv for better dependency management:
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh

# Recreate your project with uv
uv init my-agent
cd my-agent
uv add slide-tyler slide-lye slide-narrator
Make sure to set your OpenAI API key:
export OPENAI_API_KEY="sk-..."
Or use a different model provider:
agent = Agent(
    model_name="claude-3-opus-20240229",  # Anthropic
    # model_name="gemini-pro",            # Google
    # model_name="o3",                    # OpenAI O-series
)
Tyler automatically handles model-specific parameter restrictions. For example, O-series models only support temperature=1, but Tyler will automatically drop incompatible parameters, so you can use the same agent configuration across all models.
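As an illustration of the general idea (this is a sketch, not Tyler's actual implementation, and the parameter lists here are assumptions), dropping incompatible parameters per model family can be as simple as:

```python
# Map of model-name prefixes to parameters that family rejects.
# The entries below are illustrative assumptions, not authoritative.
UNSUPPORTED_PARAMS = {
    "o3": {"temperature", "top_p"},
}

def filter_params(model_name: str, params: dict) -> dict:
    """Return a copy of params with any keys the model family rejects removed."""
    for prefix, blocked in UNSUPPORTED_PARAMS.items():
        if model_name.startswith(prefix):
            return {k: v for k, v in params.items() if k not in blocked}
    return dict(params)
```

This is why the same agent configuration can be reused across providers: the incompatible knobs are stripped before the request is sent.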
Make sure you’ve installed all packages:
uv add slide-tyler slide-lye slide-narrator
Remember to use asyncio.run() or run in an async context:
import asyncio

async def main():
    # Your agent code here
    pass

asyncio.run(main())