Build an agent with just a few lines of code.
import asyncio
from tyler import Agent, Thread, Message, EventType
from lye import WEB_TOOLS

# Create an AI agent that can browse the web
agent = Agent(
    name="web_summarizer",
    model_name="gpt-4o",
    purpose="To summarize web content clearly and concisely",
    tools=WEB_TOOLS
)

# Ask your agent to visit and summarize a webpage
thread = Thread()
thread.add_message(Message(
    role="user", 
    content="Why should I use https://slide.mintlify.app/?"
))

# Watch your agent work in real-time
async def main():
    async for event in agent.stream(thread):
        # Print content as it's generated
        if event.type == EventType.LLM_STREAM_CHUNK:
            print(event.data['content_chunk'], end="", flush=True)
        # Show when tools are being used
        elif event.type == EventType.TOOL_SELECTED:
            print(f"\n🔧 Using {event.data['tool_name']}...")

asyncio.run(main())
That’s it! Your agent can now search the web, analyze images, and complete complex tasks autonomously, and you can watch it happen in real time.
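If you would rather collect the full response than print chunks as they arrive, the same event loop can accumulate `LLM_STREAM_CHUNK` payloads into a string. The sketch below is self-contained: it uses stand-in event objects (and a placeholder tool name) instead of a live agent so you can see the pattern run on its own; in a real app the plain `for` loop would be `async for event in agent.stream(thread)` exactly as in the quickstart above.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Stand-ins for tyler's EventType and event objects, so the
# accumulation pattern can run without a live agent or API key.
class EventType(Enum):
    LLM_STREAM_CHUNK = auto()
    TOOL_SELECTED = auto()

@dataclass
class Event:
    type: EventType
    data: dict

# A simulated stream like the one agent.stream(thread) yields.
# The tool name here is a placeholder, not a real lye tool.
events = [
    Event(EventType.TOOL_SELECTED, {"tool_name": "fetch_page"}),
    Event(EventType.LLM_STREAM_CHUNK, {"content_chunk": "Slide makes "}),
    Event(EventType.LLM_STREAM_CHUNK, {"content_chunk": "agents simple."}),
]

# Accumulate content chunks instead of printing them as they arrive
chunks = []
for event in events:
    if event.type == EventType.LLM_STREAM_CHUNK:
        chunks.append(event.data["content_chunk"])

summary = "".join(chunks)
print(summary)  # Slide makes agents simple.
```

This is handy when you need the complete answer for post-processing (saving it, passing it to another agent) rather than a live display.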

Why use Slide?

Slide gives you everything you need to build, test, and deploy intelligent AI agents. It is:
  • Extensible and open source: Built on a foundation you can inspect, customize, and contribute to
  • Flexible model support: Works with any LLM provider supported by LiteLLM (100+ providers, including OpenAI, Anthropic, and more)
  • MCP and A2A compatible: Seamlessly integrated with Model Context Protocol (MCP) servers and Agent-to-Agent (A2A) protocol for multi-agent interoperability
  • Agent delegation built-in: Create multi-agent systems where specialized agents collaborate to solve complex tasks
  • Multimodal by design: Capable of processing and understanding images, audio, PDFs, and more out of the box
  • Persistent conversations: Built-in support for threads, messages, and attachments with flexible storage options (in-memory, SQLite, or PostgreSQL)
  • Ready-to-use tools: Packed with built-in tools for web interaction, file handling, and browser automation, plus support for custom tools
  • Real-time streaming enabled: Designed to build interactive applications with streaming responses from both agents and tools
  • Transparent reasoning: Access model thinking tokens from OpenAI o1 and Anthropic Claude to see how agents arrive at decisions
  • Interactive CLI included: Chat with your agents instantly using the built-in tyler chat command, or scaffold new projects with tyler init
  • Evaluation-ready: Equipped with a framework to test agents safely with mock tools, prebuilt LLM judges, and multi-turn conversation scenarios
  • Debuggable: Integrated with W&B Weave for powerful tracing and debugging capabilities
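The custom-tools bullet above maps onto the familiar function-calling pattern: a plain Python callable paired with a JSON schema that describes it to the model. The sketch below is illustrative only; the `definition`/`implementation` dict layout and the `get_word_count` tool are assumptions for the sake of the example, not Slide's confirmed registration API, so check the custom tools documentation for the exact format.

```python
# A hypothetical custom tool: a plain function plus an OpenAI-style
# function schema. The dict layout here is an assumption, not
# Slide's confirmed API.

def get_word_count(text: str) -> int:
    """Count whitespace-separated words in a piece of text."""
    return len(text.split())

WORD_COUNT_TOOL = {
    "definition": {
        "type": "function",
        "function": {
            "name": "get_word_count",
            "description": "Count the words in a piece of text",
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {
                        "type": "string",
                        "description": "The text to count words in",
                    },
                },
                "required": ["text"],
            },
        },
    },
    "implementation": get_word_count,
}

# The implementation can be exercised directly, which is also the
# idea behind testing agents with mock tools:
result = WORD_COUNT_TOOL["implementation"](text="Build an agent with Slide")
print(result)  # 5
```

Keeping the schema and the callable side by side like this is what lets an evaluation framework swap in a mock implementation while the model still sees the same tool definition.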

Ready to Build?

What Can You Build?