Workshop Facilitator Guide
Overview
This guide helps instructors effectively deliver the Multi-Agent System Workshop. It includes timing strategies, common pitfalls, and facilitation tips.
Pre-Workshop Preparation (1 week before)
Prerequisites Email
Send participants this checklist 1 week before:
Subject: Workshop Preparation - Multi-Agent Systems
Hi [Name],
To get the most out of our workshop, please complete these steps before we meet:
☐ Install Python 3.11+ (https://www.python.org/downloads/)
☐ Install uv package manager (https://github.com/astral-sh/uv) or pip
☐ Sign up for API keys (instructions below)
☐ Clone the repository and test your setup
API Keys Needed:
- Anthropic (required): https://console.anthropic.com
- OpenAI (optional): https://platform.openai.com
- Tavily (required): https://tavily.com
- ElevenLabs (required): https://elevenlabs.io
Test your setup:
$ git clone https://github.com/radixia/rooting-agentic-pipeline.git
$ cd rooting-agentic-pipeline
$ uv sync
$ python -c "import strands; print('β
Setup complete!')"
See you at the workshop!
Room Setup
- Projector/screen for live coding
- Stable WiFi (critical for API calls)
- Power outlets for participants
- Whiteboard for diagrams
- Backup USB with installer files
- Example .env file ready (sample below)
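A sample .env to keep on the backup USB. ANTHROPIC_API_KEY and ELEVENLABS_API_KEY are the variable names the workshop code checks; OPENAI_API_KEY and TAVILY_API_KEY are the providers' conventional names, so verify them against the repo. All values here are placeholders:

# .env - placeholder values; replace with real keys
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...        # optional
TAVILY_API_KEY=tvly-...
ELEVENLABS_API_KEY=...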
Instructor Setup
# Create a clean workshop environment
git clone https://github.com/radixia/rooting-agentic-pipeline.git workshop-demo
cd workshop-demo
# Verify all tags
git tag -l
# Test each checkpoint
for tag in v0.1.0 v0.3.0 v0.4.0 v0.5.0 v0.7.0; do
  git checkout $tag
  python main.py  # Verify it works
done
# Return to main
git checkout main
Demo Accounts
Set up demo API accounts with budget limits:
- Anthropic: $10 limit
- OpenAI: $10 limit
- Tavily: Free tier
- ElevenLabs: Free tier
Share these with participants who have issues.
Workshop Delivery
Opening (5 minutes)
Script:
Welcome! Today we're building a production-ready multi-agent system.
By the end, you'll have:
- 5 working agents
- Tool integration patterns
- Orchestration knowledge
- Deployable code
The workshop is hands-on. We'll code together, but feel free to
experiment. If you get stuck, raise your hand.
Let's start by understanding what we're building...
[Show architecture diagram on board]
Any questions before we dive in?
Tips:
- Keep intro brief - people want to code
- Show end result demo if possible
- Set expectations for pacing
- Emphasize hands-on nature
Part 1: Foundation (15 minutes)
Timing:
- 00:00-05:00: Checkout v0.1.0, setup check
- 05:00-10:00: Walk through agent.py
- 10:00-12:00: Explain prompt engineering
- 12:00-15:00: Run agent, debug issues
Live Coding Approach:
# Type slowly, explain as you go
# "First, we import the Strands framework..."
from strands import Agent
from strands.models.anthropic import AnthropicModel
# "Now let's configure our model..."
# "Notice we're using Claude Sonnet 4.5 - great for complex tasks"
model = AnthropicModel(
    model_id="claude-sonnet-4-5-20250929",
    max_tokens=8000,              # "Enough for long scripts"
    params={"temperature": 0.7},  # "Balanced creativity"
)
# Continue with explanation...
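Once the model is configured, close the loop by constructing and invoking the agent. A minimal sketch, assuming the standard Strands Agent API (the system prompt here is illustrative, not the workshop's exact one):

# Wire the model into an agent and run it
agent = Agent(
    model=model,
    system_prompt="You are a helpful content-creation assistant.",  # illustrative
)
response = agent("Write a short intro about multi-agent systems.")
print(response)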
Common Issues:
- API Key Not Found

  # Stop and help debug
  cat .env  # Check the file exists
  python -c "import os; from dotenv import load_dotenv; load_dotenv(); print(os.getenv('ANTHROPIC_API_KEY'))"

- Import Errors

  # Usually a missing sync
  uv sync
  # Or not in the venv
  source .venv/bin/activate

- Model Name Wrong

  # Common mistake: omitting the date suffix
  # ❌ model_id="claude-3-5-sonnet"
  # ✅ model_id="claude-sonnet-4-5-20250929"
Checkpoint Questions:
- “Who has their agent running?”
- “What temperature did you choose and why?”
- “Anyone getting interesting results?”
Part 2: Tool Integration (25 minutes)
Timing:
- 15:00-20:00: Checkout v0.3.0, explain tools
- 20:00-25:00: Social agent with fetch_url
- 25:00-30:00: TTS agent with custom tool
- 30:00-35:00: Test and debug
- 35:00-40:00: Discussion and exercises
Teaching Tools:
Draw on whiteboard:
┌─────────┐
│  Agent  │
└────┬────┘
     │
     ├── Tool 1 (fetch_url)
     ├── Tool 2 (web_search)
     └── Tool 3 (custom)
           │
           └── External API
Key Points to Emphasize:
- Tools extend agent capabilities
- Built-in vs custom tools
- Tool selection happens automatically
- Agent decides when to use tools
Live Demo Flow:
# "Watch what happens when we give the agent a URL..."
agent.execute("Create post about https://example.com")
# "See how it called fetch_url? The agent decided to do that."
# "Now watch when we don't give a URL..."
agent.execute("Create post about AI agents")
# "No tool call - it used training knowledge instead."
Common Issues:
- Tools Not Found

  uv add strands-agents-tools

- Agent Doesn’t Use Tools
  - Prompt not clear about tool usage
  - Task doesn’t require tools
  - Show an example that definitely needs tools
- ElevenLabs Authentication
  - Most common: wrong API key variable name
  - Check:

    echo $ELEVENLABS_API_KEY
Pacing Check: Should be at 40min. If behind, skip exercises.
Part 3: Advanced Features (30 minutes)
Timing:
- 40:00-47:00: Research agent overview
- 47:00-52:00: Retry logic explanation
- 52:00-58:00: Run research agent
- 58:00-65:00: Orchestrator concept
- 65:00-70:00: Complete pipeline run
Research Agent Strategy:
This is complex - break it down:
- “First, let’s understand WHY multi-turn matters”
- Show token limit problem
- Explain retry logic with diagram
- Run simple example
- Then show full version
Whiteboard:
Research Loop:

  Query → Search → Analyze → Done?
    ↑                          │
    └──── Refine Query ←──No───┤
                              Yes
                               │
                               ▼
                             Report
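A code-level sketch helps participants map the diagram to the repo. Everything here is hypothetical structure (web_search, analyze, and write_report stand in for the real helpers); note max_iterations, the knob to keep low in the demo:

def research(query: str, max_iterations: int = 3) -> str:
    """Iteratively search, analyze, and refine until findings look complete."""
    findings = []
    for _ in range(max_iterations):            # keep low for the live demo
        results = web_search(query)            # hypothetical search helper
        analysis = analyze(results, findings)  # hypothetical LLM analysis step
        findings.append(analysis)
        if analysis.is_complete:               # Done? Yes -> exit the loop
            break
        query = analysis.refined_query         # No -> refine the query and retry
    return write_report(findings)              # hypothetical report-writing step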
Orchestrator Teaching:
Critical concepts:
- Coordination vs execution
- TODO-driven planning
- Dependency management
- Error recovery
Demo Script:
# "The orchestrator is the conductor of our orchestra..."
# "It decides which agents to run, in what order..."
# Show input
pipeline_input = PipelineInput(
    topic="AI Safety",
    priority="audio",  # "Notice: only audio, skip social"
)
# "Watch how it adapts the plan..."
result = orchestrator.execute(pipeline_input)
# "See? It skipped social media because we said audio priority"
Common Issues:
- Tavily Rate Limits
  - Free tier: 1000 searches/month
  - Add delays if hitting limits
  - Use demo account if needed
- Token Limits
  - Explain early
  - Show summarization strategy
  - Set max_iterations low for demo
- Long Execution Time
  - Full pipeline: 2-3 minutes
  - Start early so it runs during explanation
  - Have pre-run example ready as backup
Pacing Check: Should be at 70min. On track to finish on time!
Part 4: Wrap-up (20 minutes)
Timing:
- 70:00-75:00: Testing strategies
- 75:00-80:00: Production best practices
- 80:00-85:00: Q&A
- 85:00-90:00: Next steps and resources
Testing Demo:
# "Let's quickly verify everything works..."
python tests/test_minimal.py
# "Good! Now the full pipeline..."
python tests/test_full_pipeline.py
# "While this runs, let's discuss production..."
Production Checklist (show on screen):
✅ Environment variables
✅ Error handling
✅ Logging
✅ Monitoring
✅ Rate limiting
✅ Cost tracking
✅ Health checks
✅ Documentation
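To ground the "Rate limiting" item with one concrete example, a minimal retry-with-backoff sketch (illustrative; production code would typically use a library such as tenacity and catch the provider's specific rate-limit error):

import random
import time

def call_with_backoff(fn, max_retries=5):
    """Retry an API call with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:  # in real code, catch the provider's rate-limit error only
            if attempt == max_retries - 1:
                raise
            time.sleep(2 ** attempt + random.random())  # back off: ~1s, ~2s, ~4s, ...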
Q&A Strategy:
- Take 2-3 questions
- Defer complex questions to post-workshop
- Point to documentation for details
- Offer to help 1-on-1 after
Closing Script:
Great work everyone! You've built a real production system.
Key takeaways:
1. Agents are powerful but need structure
2. Tools extend capabilities
3. Orchestration enables complexity
4. Production readiness matters
Next steps:
- Extend with your own agents
- Deploy to production
- Join our community
- Share what you build!
Thank you! I'll stick around for questions.
Handling Different Scenarios
If Running Behind (more than 5 minutes)
Priority cuts:
- Skip exercises (save 10min)
- Shorten research agent (save 5min)
- Skip test writing (save 5min)
- Demo only, no live coding (save 10min)
If Running Ahead (more than 5 minutes)
Extensions:
- Deep dive into prompt engineering
- Live debugging session
- Performance optimization demo
- Architecture discussion
- Advanced patterns
If API Issues
Backup strategies:
- Use pre-recorded outputs
- Switch to demo accounts
- Show pre-generated examples
- Focus on code structure vs execution
If Participants Struggle
Help strategies:
- Pair programming
- Share working code
- Debug common issues first
- Skip ahead and return later
Engagement Techniques
Keep It Interactive
Every 10 minutes, ask:
- “Everyone following?”
- “Questions so far?”
- “Who’s gotten X working?”
Show Enthusiasm
- Celebrate when things work!
- Share your own debugging process
- Laugh at errors (theyβre learning moments)
- Show genuine excitement about the tech
Handle Questions
Good question response:
"Great question! Let me show you..."
[Live demo or diagram]
"Does that help? Anyone else wondering the same thing?"
Defer when needed:
"Excellent question, but complex. Let's discuss after
so we stay on track. Remind me!"
Encourage Experimentation
"Try changing the temperature to 0.9..."
"What happens if you modify the prompt?"
"Can you make it do X instead?"
Common Participant Questions
“Why Strands over LangChain?”
Answer:
Good question! Strands focuses on:
1. Clean patterns (less magic)
2. Production-ready defaults
3. Better type safety
4. Simpler mental model
LangChain is great for experimentation.
Strands is great for production.
Both are valid choices depending on needs.
“How much does this cost to run?”
Answer:
Let's break it down:
- One pipeline run: ~$0.35
- Per month (100 runs): ~$35
- With prompt caching: roughly 90% less, ~$3.50
Tips to reduce cost:
1. Use Claude Haiku for simple tasks
2. Enable prompt caching
3. Optimize prompt length
4. Cache research results
Show the benchmark slide...
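If someone wants to see the cheap-model swap from tip 1, it is a one-line change. A sketch (verify the Haiku model id against Anthropic's current model list):

# Cheaper model for simple tasks (e.g., social post drafts)
cheap_model = AnthropicModel(
    model_id="claude-3-5-haiku-20241022",  # check Anthropic's docs for the current id
    max_tokens=2000,
)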
“Can I use other models?”
Answer:
Absolutely! Strands supports:
- Anthropic (Claude)
- OpenAI (GPT-4, GPT-3.5)
- Ollama (local models)
- AWS Bedrock
- Azure OpenAI
- And more...
Just swap the model class:
[Show code example]
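For example, swapping to OpenAI might look like this, assuming Strands' OpenAIModel wrapper (check the docs for your installed version):

from strands.models.openai import OpenAIModel

model = OpenAIModel(
    model_id="gpt-4o",        # any OpenAI chat model id
    params={"temperature": 0.7},
)
agent = Agent(model=model)    # the rest of the agent code is unchanged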
“How do I deploy this?”
Answer:
Multiple options:
1. FastAPI server (easiest)
2. AWS Lambda (serverless)
3. Docker container (flexible)
4. Kubernetes (enterprise)
We cover basics in Step 6.
For production, see docs:
[Point to deployment guide]
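To make option 1 concrete, a minimal FastAPI wrapper sketch (run_pipeline is a stub standing in for the workshop's actual pipeline entry point):

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

def run_pipeline(topic: str, priority: str) -> str:
    """Stub standing in for the workshop's real pipeline entry point."""
    return f"ran pipeline for {topic} ({priority})"

class RunRequest(BaseModel):
    topic: str
    priority: str = "audio"

@app.post("/run")
def run(req: RunRequest):
    result = run_pipeline(req.topic, req.priority)
    return {"result": result}

# Start with: uvicorn server:app --port 8000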
“What about security?”
Answer:
Critical considerations:
1. Never commit API keys
2. Validate all inputs
3. Rate limit requests
4. Monitor for abuse
5. Use least-privilege access
See security best practices in docs.
Post-workshop, happy to discuss your specific needs.
Post-Workshop
Follow-up Email
Send within 24 hours:
Subject: Workshop Materials & Next Steps
Thanks for attending! Here are your resources:
📚 Materials:
- Workshop repository: https://github.com/radixia/rooting-agentic-pipeline
- Slides: [URL]
- Recording: [URL]
🚀 Next Steps:
1. Complete exercises you missed
2. Extend with your own agents
3. Join community: [Discord/Slack]
4. Share what you build!
💬 Questions?
Reply to this email or join our community.
Keep building!
Gather Feedback
Simple Google Form:
1. What worked well? (text)
2. What could improve? (text)
3. Pace: Too fast / Just right / Too slow (radio)
4. Difficulty: Too easy / Just right / Too hard (radio)
5. Would you recommend? (1-10 scale)
6. Additional comments (text)
Track Metrics
Monitor:
- Completion rate (who finished all steps)
- Common drop-off points
- Most asked questions
- Pace feedback
- Overall satisfaction
Use to improve next session.
Facilitator Self-Assessment
After each workshop, review:
✅ Timing
- Started on time?
- Stayed on schedule?
- Finished on time?
✅ Engagement
- Participants asking questions?
- Hands raised when stuck?
- Energized vs disengaged?
✅ Technical
- Demos worked smoothly?
- Handled issues well?
- Clear explanations?
✅ Content
- Covered all key concepts?
- Appropriate difficulty?
- Valuable takeaways?
Tips for Success
Preparation
- Run through entire workshop day before
- Test all code examples
- Prepare backup demos
- Know common issues cold
Delivery
- Make eye contact
- Speak clearly and pace yourself
- Check for understanding frequently
- Stay positive when things break
Troubleshooting
- Stay calm
- Acknowledge the issue
- Have backup plan ready
- Turn errors into teaching moments
Energy
- Take breaks when behind screen
- Stand and move around
- Vary voice tone and pace
- Show enthusiasm!
Resources for Facilitators
Technical
Teaching
Community
- Facilitator Slack: [URL]
- Monthly facilitator calls: [Schedule]
- Share experiences: [Forum]
Version History
- v1.0 (2024-10): Initial workshop
- Future: Updates based on feedback
Questions about facilitation? Contact [email] or join facilitator community.
Good luck with your workshop! 🎉