In this quickstart, you’ll build an AI agent that can search the web and analyze images. Let’s dive in!
Requirements: Python 3.11 or higher
1

Install Slide
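Install the three Slide packages (the same ones referenced in the troubleshooting section below):
uv add slide-tyler slide-lye slide-narrator
Or, if you're using pip:
pip install slide-tyler slide-lye slide-narrator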

2

Set Up Your API Key

Your agent needs an API key to use the LLM. Choose your provider:
  • OpenAI
  • Anthropic
  • Other Providers
For OpenAI, create your API key at platform.openai.com, then add it to your environment:
export OPENAI_API_KEY="sk-..."
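Other providers follow the same pattern: set that provider's key and pick a matching model. For Anthropic, for example (ANTHROPIC_API_KEY is Anthropic's standard variable; the Claude model name appears in the troubleshooting section below):
export ANTHROPIC_API_KEY="sk-ant-..."
Then set model_name="claude-3-opus-20240229" in the agent code in step 3.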
For production, use a .env file. If you choose this approach:
  1. Install python-dotenv: uv add python-dotenv or pip install python-dotenv
  2. Create a .env file with your API keys
  3. Uncomment the dotenv import lines in the code example below
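For the OpenAI setup above, a minimal .env is a single line (keep this file out of version control):
OPENAI_API_KEY="sk-..."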
3

Create Your Agent

Create a file called agent.py:
# Optional: If you're using a .env file for API keys, uncomment these lines
# from dotenv import load_dotenv
# load_dotenv()

import asyncio
from tyler import Agent, Thread, Message, EventType
from lye import WEB_TOOLS, IMAGE_TOOLS

# Optional: Uncomment these lines if you want observability with W&B Weave
# import weave
# weave.init("wandb-designers/my-agent")

async def main():
    # Create your agent
    agent = Agent(
        name="research-assistant",
        model_name="gpt-4o",  # Use a model from the provider whose API key you set
        purpose="To help with research and analysis tasks",
        tools=[
            *WEB_TOOLS,      # Can search and fetch web content
            *IMAGE_TOOLS    # Can analyze and describe images
        ]
    )

    # Create a conversation thread
    thread = Thread()
    thread.add_message(Message(
        role="user",
        content="Search for information about the Mars Perseverance rover and create a summary"
    ))
    
    # Watch your agent work in real-time
    print("🤖 Agent is working...\n")
    async for event in agent.stream(thread):
        # Print content as it's generated
        if event.type == EventType.LLM_STREAM_CHUNK:
            print(event.data['content_chunk'], end="", flush=True)
        # Show tool usage
        elif event.type == EventType.TOOL_SELECTED:
            print(f"\n\n🔧 Using {event.data['tool_name']}...", flush=True)

if __name__ == "__main__":
    asyncio.run(main())
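The loop above only prints the stream. If you also want to keep the finished summary (say, to write it to a file), a small variation of the same loop can accumulate the chunks as they arrive. This sketch uses only the event types shown above and would replace the async for loop inside main():
    chunks = []
    async for event in agent.stream(thread):
        if event.type == EventType.LLM_STREAM_CHUNK:
            text = event.data['content_chunk']
            chunks.append(text)  # keep the text while still streaming it
            print(text, end="", flush=True)
        elif event.type == EventType.TOOL_SELECTED:
            print(f"\n\n🔧 Using {event.data['tool_name']}...", flush=True)

    # After the loop, join the chunks into the full summary text
    summary = "".join(chunks)
    with open("summary.md", "w", encoding="utf-8") as f:
        f.write(summary)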
4

Run Your Agent

  • uv
  • python
uv run agent.py
Or, if you installed with pip into an activated virtualenv:
python agent.py
Your agent will search for information about the Mars rover and create a summary. That’s it! 🎉

What’s Next?

Now that you have a working agent, explore these guides to add more capabilities:

Deploy Your Agent

Troubleshooting

If you see errors like “No solution found when resolving dependencies” or “requires Python>=3.11”:

For uv users:
# If you initialized with an older Python version, edit your pyproject.toml:
# requires-python = ">=3.11"

# Then recreate your virtual environment:
rm -rf .venv
uv sync
For pip users:
# Make sure you're using Python 3.11+
python --version  # Should show 3.11 or higher

# If not, create a new venv with Python 3.11+:
python3.11 -m venv venv
source venv/bin/activate
pip install slide-tyler slide-lye slide-narrator
If you see authentication errors, make sure your OpenAI API key is set:
export OPENAI_API_KEY="sk-..."
Or use a different model provider:
agent = Agent(
    model_name="claude-3-opus-20240229",  # Anthropic
    # model_name="gemini-pro",  # Google
    # model_name="o3",  # OpenAI O-series
)
Tyler automatically handles model-specific parameter restrictions. For example, O-series models only support temperature=1; Tyler drops incompatible parameters automatically, so the same agent configuration works across all models.
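As a sketch of what that means in practice (this assumes Agent accepts a temperature keyword, which the quickstart code above doesn't show):
agent = Agent(
    name="research-assistant",
    model_name="o3",
    purpose="To help with research and analysis tasks",
    temperature=0.7,  # hypothetical kwarg; Tyler would drop it for O-series models
)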
If imports fail, make sure you've installed all the packages:
uv add slide-tyler slide-lye slide-narrator
If you see a “coroutine was never awaited” warning, remember to use asyncio.run() or run in an async context:
import asyncio

async def main():
    # Your agent code here
    pass

asyncio.run(main())