Workshop Facilitator Guide

Overview

This guide helps instructors effectively deliver the Multi-Agent System Workshop. It includes timing strategies, common pitfalls, and facilitation tips.

Pre-Workshop Preparation (1 week before)

Prerequisites Email

Send participants this checklist 1 week before:

Subject: Workshop Preparation - Multi-Agent Systems

Hi [Name],

To get the most out of our workshop, please complete these steps before we meet:

βœ… Install Python 3.11+ (https://www.python.org/downloads/)
βœ… Install uv package manager (https://github.com/astral-sh/uv) or pip
βœ… Sign up for API keys (instructions below)
βœ… Clone the repository and test your setup

API Keys Needed:
- Anthropic (required): https://console.anthropic.com
- OpenAI (optional): https://platform.openai.com
- Tavily (required): https://tavily.com
- ElevenLabs (required): https://elevenlabs.io

Test your setup:
$ git clone https://github.com/radixia/rooting-agentic-pipeline.git
$ cd rooting-agentic-pipeline
$ uv sync
$ python -c "import strands; print('βœ… Setup complete!')"

See you at the workshop!

Room Setup

Instructor Setup

# Create a clean workshop environment
git clone https://github.com/radixia/rooting-agentic-pipeline.git workshop-demo
cd workshop-demo

# Verify all tags
git tag -l

# Test each checkpoint
for tag in v0.1.0 v0.3.0 v0.4.0 v0.5.0 v0.7.0; do
    git checkout $tag
    uv sync         # dependencies may differ between checkpoints
    python main.py  # Verify it works
done

# Return to main
git checkout main

Demo Accounts

Set up demo API accounts with budget limits, and share the credentials with participants who run into API-key issues.

Workshop Delivery

Opening (5 minutes)

Script:

Welcome! Today we're building a production-ready multi-agent system.

By the end, you'll have:
- 5 working agents
- Tool integration patterns
- Orchestration knowledge
- Deployable code

The workshop is hands-on. We'll code together, but feel free to 
experiment. If you get stuck, raise your hand.

Let's start by understanding what we're building...
[Show architecture diagram on board]

Any questions before we dive in?

Tips:

Part 1: Foundation (15 minutes)

Timing:

Live Coding Approach:

# Type slowly, explain as you go

# "First, we import the Strands framework..."
from strands import Agent
from strands.models.anthropic import AnthropicModel

# "Now let's configure our model..."
# "Notice we're using Claude Sonnet 4.5 - great for complex tasks"
model = AnthropicModel(
    model="claude-sonnet-4-5-20250929",
    max_tokens=8000,  # "Enough for long scripts"
    temperature=0.7,  # "Balanced creativity"
)

# Continue with explanation...

Common Issues:

  1. API Key Not Found

    # Stop and help debug
    cat .env  # Check file exists
    python -c "import os; from dotenv import load_dotenv; load_dotenv(); print(os.getenv('ANTHROPIC_API_KEY'))"
  2. Import Errors

    # Usually missing sync
    uv sync
    # Or not in venv
    source .venv/bin/activate
  3. Model Name Wrong

    # Common mistake: using an outdated name or omitting the date suffix
    # ❌ model="claude-3-5-sonnet"
    # βœ… model="claude-sonnet-4-5-20250929"
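When several hands go up at once, a small helper can check keys faster than ad-hoc one-liners. This sketch is illustrative, not part of the repo; the variable names are the ones from the prerequisites email, and it checks the shell environment (export your `.env` or call `load_dotenv()` first):

```python
# Quick environment check for the workshop API keys.
# (Illustrative helper; variable names taken from the prerequisites email.)
import os

REQUIRED = ["ANTHROPIC_API_KEY", "TAVILY_API_KEY", "ELEVENLABS_API_KEY"]

def check_env(required=REQUIRED):
    """Return the names of any required variables that are missing or empty."""
    return [name for name in required if not os.getenv(name)]

missing = check_env()
if missing:
    print("❌ Missing:", ", ".join(missing))
else:
    print("βœ… All API keys present")
```

Running it on a participant's machine points straight at the missing key instead of a generic authentication traceback.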

Checkpoint Questions:

Part 2: Tool Integration (25 minutes)

Timing:

Teaching Tools:

Draw on whiteboard:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Agent  β”‚
β””β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”˜
     β”‚
     β”œβ”€β†’ Tool 1 (fetch_url)
     β”œβ”€β†’ Tool 2 (web_search)
     └─→ Tool 3 (custom)
          β”‚
          └─→ External API

Key Points to Emphasize:

  1. Tools extend agent capabilities
  2. Built-in vs custom tools
  3. Tool selection happens automatically
  4. Agent decides when to use tools

Live Demo Flow:

# "Watch what happens when we give the agent a URL..."
agent.execute("Create post about https://example.com")

# "See how it called fetch_url? The agent decided to do that."
# "Now watch when we don't give a URL..."
agent.execute("Create post about AI agents")

# "No tool call - it used training knowledge instead."
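If the "agent decides" point isn't landing, a toy dispatcher makes the branching concrete without the framework. This is purely illustrative (not the Strands API): it mimics how a task containing a URL triggers a tool call while a plain task does not.

```python
# Toy illustration (NOT the Strands API): tool selection driven by the task text.
import re

def choose_tool(task):
    """Return (tool_name, argument) if the task needs a tool, else None."""
    match = re.search(r"https?://\S+", task)
    if match:
        return ("fetch_url", match.group(0))
    return None  # no tool needed; answer from model knowledge

for task in ["Create post about https://example.com",
             "Create post about AI agents"]:
    picked = choose_tool(task)
    if picked:
        name, arg = picked
        print(f"{task!r} -> calls {name}({arg})")
    else:
        print(f"{task!r} -> no tool call")
```

The real agent uses the model to make this decision from the tool descriptions, but the observable behavior in the demo is the same: URL in, tool call out.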

Common Issues:

  1. Tools Not Found

    uv add strands-agents-tools
  2. Agent Doesn’t Use Tools

    • Prompt not clear about tool usage
    • Task doesn’t require tools
    • Show example that definitely needs tools
  3. ElevenLabs Authentication

    • Most common: wrong API key variable name
    • Check: echo $ELEVENLABS_API_KEY

Pacing Check: Should be at 40min. If behind, skip exercises.

Part 3: Advanced Features (30 minutes)

Timing:

Research Agent Strategy:

This is complex - break it down:

  1. β€œFirst, let’s understand WHY multi-turn matters”
  2. Show token limit problem
  3. Explain retry logic with diagram
  4. Run simple example
  5. Then show full version

Whiteboard:

Research Loop:

Query β†’ Search β†’ Analyze β†’ Done? ──No──→ Refine Query ──┐
          ↑                  β”‚                          β”‚
          β”‚                 Yes                         β”‚
          β”‚                  ↓                          β”‚
          β”‚               Report                        β”‚
          β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
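The whiteboard loop can also be shown as a toy Python function before opening the full agent. The callables here are stand-ins, not the repo's implementation; the shape to emphasize is the `max_iterations` budget that prevents a runaway loop.

```python
# Toy sketch of the research loop (stand-in callables, not the repo's code).
def research(query, search, is_done, refine, max_iterations=3):
    """Search, analyze, and refine the query until done or out of budget."""
    findings = []
    for _ in range(max_iterations):
        findings.append(search(query))
        if is_done(findings):
            break                      # "Yes" branch -> report
        query = refine(query, findings)  # "No" branch -> refined query
    return findings

# Minimal stand-ins so the loop can run in front of the room:
results = research(
    query="AI safety",
    search=lambda q: f"results for {q}",
    is_done=lambda findings: len(findings) >= 2,  # stop after two passes
    refine=lambda q, findings: q + " (refined)",
)
print(len(results))  # -> 2
```

This also sets up the "set max_iterations low for demo" tip below: the budget is a single parameter.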

Orchestrator Teaching:

Critical concepts:

  1. Coordination vs execution
  2. TODO-driven planning
  3. Dependency management
  4. Error recovery

Demo Script:

# "The orchestrator is the conductor of our orchestra..."
# "It decides which agents to run, in what order..."

# Show input
pipeline_input = PipelineInput(
    topic="AI Safety",
    priority="audio",  # "Notice: only audio, skip social"
)

# "Watch how it adapts the plan..."
result = orchestrator.execute(pipeline_input)

# "See? It skipped social media because we said audio priority"
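The adaptation the demo narrates can be mimicked with a toy planner. The step names and priority values here are assumptions for illustration, not the repo's API; the point is that the plan is data the orchestrator builds, not a fixed sequence.

```python
# Toy priority-driven planner (illustrative; not the repo's orchestrator).
def build_plan(priority):
    """Return the agent steps the orchestrator would schedule."""
    steps = ["research", "script"]       # always needed
    if priority in ("audio", "all"):
        steps.append("audio")
    if priority in ("social", "all"):
        steps.append("social")
    return steps

print(build_plan("audio"))  # -> ['research', 'script', 'audio']  (social skipped)
```

Drawing the returned list next to the whiteboard diagram makes "coordination vs execution" visible: the plan exists before any agent runs.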

Common Issues:

  1. Tavily Rate Limits

    • Free tier: 1000 searches/month
    • Add delays if hitting limits
    • Use demo account if needed
  2. Token Limits

    • Explain early
    • Show summarization strategy
    • Set max_iterations low for demo
  3. Long Execution Time

    • Full pipeline: 2-3 minutes
    • Start early so it runs during explanation
    • Have pre-run example ready as backup

Pacing Check: Should be at 70min. On track for finish!

Part 4: Wrap-up (20 minutes)

Timing:

Testing Demo:

# "Let's quickly verify everything works..."
python tests/test_minimal.py

# "Good! Now the full pipeline..."
python tests/test_full_pipeline.py

# "While this runs, let's discuss production..."

Production Checklist (show on screen):

βœ… Environment variables
βœ… Error handling
βœ… Logging
βœ… Monitoring
βœ… Rate limiting
βœ… Cost tracking
βœ… Health checks
βœ… Documentation
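If time allows, one checklist item can be made concrete. This retry helper is a minimal sketch of "error handling" (illustrative only; production code would also log failures and use exponential backoff rather than a fixed delay):

```python
# Minimal retry wrapper for flaky API calls (illustration, not the repo's code).
import time

def with_retries(fn, attempts=3, delay=0.0):
    """Call fn, retrying up to `attempts` times; re-raise the last error."""
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            time.sleep(delay)  # back off before the next attempt
    raise last_exc

# Demo: fails once, then succeeds.
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 2:
        raise RuntimeError("transient API error")
    return "ok"

print(with_retries(flaky))  # prints "ok" on the second attempt
```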

Q&A Strategy:

Closing Script:

Great work everyone! You've built a real production system.

Key takeaways:
1. Agents are powerful but need structure
2. Tools extend capabilities
3. Orchestration enables complexity
4. Production readiness matters

Next steps:
- Extend with your own agents
- Deploy to production
- Join our community
- Share what you build!

Thank you! I'll stick around for questions.

Handling Different Scenarios

If Running Behind (> 5 min behind schedule)

Priority cuts:

  1. Skip exercises (save 10min)
  2. Shorten research agent (save 5min)
  3. Skip test writing (save 5min)
  4. Demo only, no live coding (save 10min)

If Running Ahead (> 5 min ahead of schedule)

Extensions:

  1. Deep dive into prompt engineering
  2. Live debugging session
  3. Performance optimization demo
  4. Architecture discussion
  5. Advanced patterns

If API Issues

Backup strategies:

  1. Use pre-recorded outputs
  2. Switch to demo accounts
  3. Show pre-generated examples
  4. Focus on code structure vs execution

If Participants Struggle

Help strategies:

  1. Pair programming
  2. Share working code
  3. Debug common issues first
  4. Skip ahead and return later

Engagement Techniques

Keep It Interactive

Every 10 minutes, ask:

Show Enthusiasm

Handle Questions

Good question response:

"Great question! Let me show you..."
[Live demo or diagram]
"Does that help? Anyone else wondering the same thing?"

Defer when needed:

"Excellent question, but complex. Let's discuss after
 so we stay on track. Remind me!"

Encourage Experimentation

"Try changing the temperature to 0.9..."
"What happens if you modify the prompt?"
"Can you make it do X instead?"

Common Participant Questions

β€œWhy Strands over LangChain?”

Answer:

Good question! Strands focuses on:
1. Clean patterns (less magic)
2. Production-ready defaults
3. Better type safety
4. Simpler mental model

LangChain is great for experimentation.
Strands is great for production.

Both are valid choices depending on needs.

β€œHow much does this cost to run?”

Answer:

Let's break it down:
- One pipeline run: ~$0.35
- Per month (100 runs): ~$35
- With prompt caching (~90% savings): ~$3.50

Tips to reduce cost:
1. Use Claude Haiku for simple tasks
2. Enable prompt caching
3. Optimize prompt length
4. Cache research results

Show the benchmark slide...
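The arithmetic is worth checking live; it takes ten seconds and grounds the discussion (the $0.35/run figure is the workshop's own estimate):

```python
# Back-of-envelope check of the cost figures above.
cost_per_run = 0.35                             # workshop estimate per pipeline run
runs_per_month = 100
monthly = cost_per_run * runs_per_month         # $35/month
with_caching = round(monthly * (1 - 0.90), 2)   # ~90% savings from prompt caching
print(f"${monthly:.2f}/month, ${with_caching:.2f} with caching")
```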

β€œCan I use other models?”

Answer:

Absolutely! Strands supports:
- Anthropic (Claude)
- OpenAI (GPT-4, GPT-3.5)
- Ollama (local models)
- AWS Bedrock
- Azure OpenAI
- And more...

Just swap the model class:
[Show code example]

β€œHow do I deploy this?”

Answer:

Multiple options:
1. FastAPI server (easiest)
2. AWS Lambda (serverless)
3. Docker container (flexible)
4. Kubernetes (enterprise)

We cover basics in Step 6.
For production, see docs:
[Point to deployment guide]

β€œWhat about security?”

Answer:

Critical considerations:
1. Never commit API keys
2. Validate all inputs
3. Rate limit requests
4. Monitor for abuse
5. Use least-privilege access

See security best practices in docs.
Post-workshop, happy to discuss your specific needs.

Post-Workshop

Follow-up Email

Send within 24 hours:

Subject: Workshop Materials & Next Steps

Thanks for attending! Here are your resources:

πŸ“š Materials:
- Workshop repository: https://github.com/radixia/rooting-agentic-pipeline
- Slides: [URL]
- Recording: [URL]

πŸš€ Next Steps:
1. Complete exercises you missed
2. Extend with your own agents
3. Join community: [Discord/Slack]
4. Share what you build!

πŸ’¬ Questions?
Reply to this email or join our community.

Keep building!

Gather Feedback

Simple Google Form:

1. What worked well? (text)
2. What could improve? (text)
3. Pace: Too fast / Just right / Too slow (radio)
4. Difficulty: Too easy / Just right / Too hard (radio)
5. Would you recommend? (1-10 scale)
6. Additional comments (text)

Track Metrics

Monitor:

Use to improve next session.

Facilitator Self-Assessment

After each workshop, review:

βœ… Timing

βœ… Engagement

βœ… Technical

βœ… Content

Tips for Success

Preparation

Delivery

Troubleshooting

Energy

Resources for Facilitators

Technical

Teaching

Community

Version History


Questions about facilitation? Contact [email] or join facilitator community.

Good luck with your workshop! πŸŽ‰