Workshop Quick Reference Card

Print this page and keep it handy during the workshop!

Git Commands

# List all workshop tags
git tag -l

# Checkout specific step
git checkout v0.1.0  # Step 1: Script Agent
git checkout v0.3.0  # Step 2: Social Agent
git checkout v0.4.0  # Step 3: TTS Agent
git checkout v0.5.0  # Step 4: Research Agent
git checkout v0.7.0  # Step 5: Orchestrator

# Return to latest
git checkout main

# See current position
git describe --tags

Environment Setup

# Install dependencies
uv sync

# Or with pip
pip install -e .

# Activate virtual environment (if needed)
source .venv/bin/activate  # macOS/Linux
.venv\Scripts\activate     # Windows

# Verify installation
python -c "import strands; print('βœ… Ready!')"

API Keys (.env file)

# Required keys
ANTHROPIC_API_KEY=sk-ant-...
TAVILY_API_KEY=tvly-...
ELEVENLABS_API_KEY=sk_...

# Optional
OPENAI_API_KEY=sk-...

# Test keys loaded
python -c "from dotenv import load_dotenv; import os; load_dotenv(); print('βœ… Keys loaded' if os.getenv('ANTHROPIC_API_KEY') else '❌ Keys missing')"

Running Examples

# Run main pipeline
python main.py

# Run specific example
python examples/quick_test.py
python examples/orchestrator_example.py

# Run tests
python tests/test_minimal.py
python tests/test_full_pipeline.py

Agent Creation Pattern

from strands import Agent
from strands_ai.models.anthropic import AnthropicModel

# 1. Create model
model = AnthropicModel(
    model="claude-sonnet-4-5-20250929",
    max_tokens=4000,
    temperature=0.7,
)

# 2. Load prompt
with open("prompt.md", "r") as f:
    system_prompt = f.read()

# 3. Create agent
agent = Agent(
    model=model,
    system_prompt=system_prompt,
    tools=[],  # Add tools if needed
    enable_streaming=False,
)

# 4. Execute
result = agent.execute("Your prompt here")
print(result.output)

Adding Tools

# Built-in tools
from strands_agents_tools.web import fetch_url, web_search

agent = Agent(
    model=model,
    system_prompt=prompt,
    tools=[fetch_url, web_search],  # ⬅ Add here
)

# Custom tool
from strands.tools import tool
from pydantic import Field

@tool
def my_tool(
    input: str = Field(description="Input description")
) -> str:
    """Tool description that agents can see."""
    # Your implementation
    return "result"

# Use custom tool
agent = Agent(tools=[my_tool])

Model Options

Default Models Used in This Workshop:

💡 Cost Tip: We use GPT-5 Mini as the default OpenAI model to keep costs down: for social media content generation it delivers substantial savings without sacrificing output quality. Upgrade to GPT-4o if your use case requires more advanced reasoning.

# Anthropic Claude (Default for most agents)
from strands_ai.models.anthropic import AnthropicModel
model = AnthropicModel(model="claude-sonnet-4-5-20250929")

# OpenAI GPT-5 Mini (Default for social agent - cost efficient)
from strands_ai.models.openai import OpenAIModel
model = OpenAIModel(model="gpt-5-mini-2025-08-07")

# OpenAI GPT-4o (More powerful, higher cost)
model = OpenAIModel(model="gpt-4o")

# Local with Ollama (Free, runs on your machine)
from strands_ai.models.ollama import OllamaModel
model = OllamaModel(model="llama2")
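
Want to flip between the cost-efficient default and GPT-4o without editing agent code? One option is a tiny helper that reads an environment variable. This is a sketch, not part of the workshop repo: SOCIAL_MODEL is a made-up variable name, and it assumes the OpenAIModel constructor shown above.

import os
from strands_ai.models.openai import OpenAIModel

def pick_social_model():
    """Return GPT-4o when SOCIAL_MODEL=gpt-4o is set; otherwise the cheaper GPT-5 Mini default."""
    if os.getenv("SOCIAL_MODEL") == "gpt-4o":
        return OpenAIModel(model="gpt-4o")
    return OpenAIModel(model="gpt-5-mini-2025-08-07")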

Temperature Guide

Value   | Use Case                    | Behavior
0.0-0.3 | Tool use, structured output | Deterministic, focused
0.4-0.7 | General content             | Balanced
0.8-1.0 | Creative writing            | Varied, creative
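
To apply the guide, set temperature on the model constructor, exactly as in the Agent Creation Pattern above. For example, a low value for a tool-heavy agent:

model = AnthropicModel(
    model="claude-sonnet-4-5-20250929",
    max_tokens=4000,
    temperature=0.2,  # low temperature: deterministic, good for tool use / structured output
)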

Common Error Messages

ModuleNotFoundError: No module named 'strands'

# Solution: Install dependencies
uv sync

AuthenticationError: Invalid API key

# Solution: Check .env file
cat .env
# Verify format: ANTHROPIC_API_KEY=sk-ant-...

ValueError: Unknown model

# Solution: Use full model name with date
model="claude-sonnet-4-5-20250929"  # βœ…
# Not: model="claude-3-5-sonnet"     # ❌

RateLimitError

# Solution: Add retry logic or slow down
import time
time.sleep(1)  # Wait between calls

Token limit exceeded

# Solution: Reduce max_tokens or input length
model = AnthropicModel(
    model="claude-sonnet-4-5-20250929",
    max_tokens=2000,  # ⬅ Reduce this
)

Logging

import logging

# Basic logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

logger.info("Info message")
logger.warning("Warning message")
logger.error("Error message")

# Detailed logging
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)

Debugging Tips

# Print agent configuration
print(f"Model: {agent.model}")
print(f"Tools: {agent.tools}")
print(f"Prompt length: {len(agent.system_prompt)}")

# Check result metadata
result = agent.execute(prompt)
print(f"Tokens: {result.metadata}")

# Enable verbose output
agent = Agent(
    model=model,
    system_prompt=prompt,
    verbose=True,  # ⬅ Add this
)

File Structure

rooting-pipeline/
├── app/
│   ├── script_agent/
│   │   ├── agent.py      # Agent implementation
│   │   └── prompt.md     # System prompt
│   ├── social_agent/
│   ├── tts_agent/
│   │   └── tools/
│   │       └── elevenlabs_tool.py  # Custom tool
│   ├── research_agent/
│   │   └── tools/
│   │       └── tavily_tool.py
│   └── orchestrator_agent/
├── main.py              # Entry point
├── .env                 # API keys (create this!)
└── pyproject.toml       # Dependencies

Useful Commands

# Check Python version
python --version

# List installed packages
pip list | grep strands

# Find files
find . -name "*.py" -type f

# Count lines of code
find . -name "*.py" | xargs wc -l

# Check syntax
python -m py_compile main.py

# Format code
ruff format .

# Lint code
ruff check .

Workshop Timeline

Time      | Step    | Topic
0-15 min  | Step 1  | First Agent (Script)
15-30 min | Step 2  | Built-in Tools (Social)
30-40 min | Step 3  | Custom Tools (TTS)
40-55 min | Step 4  | Complex Workflows (Research)
55-70 min | Step 5  | Orchestration
70-80 min | Step 6  | Testing & Production
80-90 min | Wrap-up | Q&A

Keyboard Shortcuts (Terminal)

Mac   | Windows/Linux | Action
⌘+C   | Ctrl+C        | Cancel running process
⌘+K   | Ctrl+L        | Clear terminal
↑     | ↑             | Previous command
⌘+Z   | Ctrl+Z        | Suspend process

Getting Help

During Workshop:

After Workshop:

Quick Wins

If stuck, try these quick fixes:

# Nuclear option: fresh start
rm -rf .venv
uv sync
source .venv/bin/activate

# Clear Python cache
find . -type d -name __pycache__ -exec rm -rf {} +

# Restart from clean state
git stash
git checkout v0.1.0

Cheat Sheet: Pydantic Models

from pydantic import BaseModel, Field
from typing import Optional, Literal

class MyInput(BaseModel):
    """Input validation with Pydantic."""
    
    # Required field
    name: str
    
    # Optional field with default
    age: Optional[int] = None
    
    # Field with description
    topic: str = Field(
        description="What to write about"
    )
    
    # Limited choices
    priority: Literal["high", "medium", "low"] = "medium"
    
    # Validated field
    email: str = Field(pattern=r"^[\w\.-]+@[\w\.-]+\.\w+$")

# Use it
input_data = MyInput(
    name="Alice",
    topic="AI",
    email="alice@example.com"
)

print(input_data.model_dump())  # Convert to dict
print(input_data.model_dump_json())  # Convert to JSON
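
If the input doesn't satisfy the model, Pydantic raises a ValidationError that you can catch and report:

from pydantic import ValidationError

try:
    MyInput(name="Alice", topic="AI", email="not-an-email")
except ValidationError as e:
    print(e)  # lists which fields failed validation and why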

Common Patterns

Error Handling

try:
    result = agent.execute(prompt)
    print(result.output)
except Exception as e:
    logger.error(f"Agent failed: {e}")
    # Handle error

Retry Logic

from app.utils.retry_utils import retry_with_backoff

@retry_with_backoff(max_retries=3)
def call_agent():
    return agent.execute(prompt)
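
If your checkout doesn't include app/utils/retry_utils.py, here is a minimal sketch of such a decorator (simple exponential backoff; not the workshop's exact implementation):

import time
from functools import wraps

def retry_with_backoff(max_retries=3, base_delay=1.0):
    """Retry the wrapped function with exponential backoff (sketch)."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_retries - 1:
                        raise  # out of retries: re-raise the last error
                    time.sleep(base_delay * (2 ** attempt))
        return wrapper
    return decorator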

Streaming

agent = Agent(enable_streaming=True)
for chunk in agent.stream(prompt):
    print(chunk, end='', flush=True)

Notes Section

Use this space for your own notes:

_____________________________________________________________

_____________________________________________________________

_____________________________________________________________

_____________________________________________________________

_____________________________________________________________

_____________________________________________________________

Keep this handy! Refer back as needed during the workshop.

Having issues? Ask for help - that's what we're here for! 🎉