Workshop Troubleshooting Guide

Complete troubleshooting reference for common workshop issues.

Setup Issues

Python Version Too Old

Symptom:

ERROR: Python 3.11 or higher is required

Solution:

# Check current version
python --version

# Install Python 3.11+ from python.org
# Or use pyenv
pyenv install 3.11.0
pyenv local 3.11.0

# Verify
python --version  # Should show 3.11+

uv Not Installed

Symptom:

command not found: uv

Solution:

# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"

# Alternative: skip uv and install dependencies with pip
pip install strands-agents
pip install -r requirements.txt

Dependencies Won't Install

Symptom:

ERROR: Could not find a version that satisfies the requirement...

Solutions:

  1. Clear cache and retry:
uv cache clear
uv sync
  2. Use pip instead:
pip install -e .
  3. Check Python version (must be 3.11+):
python --version
  4. Install system dependencies (Linux):
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install -y python3-dev build-essential

# Fedora/RHEL
sudo dnf install python3-devel gcc

Virtual Environment Issues

Symptom:

ImportError: No module named 'strands'

Solution:

# Ensure venv is activated
source .venv/bin/activate  # macOS/Linux
.venv\Scripts\activate     # Windows

# Or create fresh venv
python -m venv .venv
source .venv/bin/activate
uv sync

API Key Problems

Keys Not Loading

Symptom:

ValueError: ANTHROPIC_API_KEY not found in environment

Debug Steps:

  1. Verify .env file exists:
ls -la .env
cat .env  # Check contents
  2. Check format (no spaces around =):
# ✅ Correct
ANTHROPIC_API_KEY=sk-ant-...

# ❌ Wrong
ANTHROPIC_API_KEY = sk-ant-...
ANTHROPIC_API_KEY= "sk-ant-..."
  3. Test loading:
from dotenv import load_dotenv
import os

load_dotenv()
print(os.getenv("ANTHROPIC_API_KEY"))  # Should print key
  4. Check .env location:
# Must be in project root
pwd  # Check current directory
ls .env  # Should be in same dir as main.py
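
The debug steps above can be rolled into one fail-fast helper (a sketch; `require_env` is a hypothetical name, and the dotenv import matches the snippets above):

```python
import os

try:
    from dotenv import load_dotenv  # python-dotenv, as used above
    load_dotenv()  # reads .env from the current working directory
except ImportError:
    pass  # fall back to variables already exported in the shell

def require_env(name: str) -> str:
    """Return the variable's value, or raise the error shown in the symptom."""
    value = os.getenv(name)
    if not value:
        raise ValueError(f"{name} not found in environment")
    return value.strip()  # guard against stray whitespace from .env
```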

Invalid API Key

Symptom:

AuthenticationError: Invalid API key

Solutions:

  1. Regenerate key at provider console
  2. Check for extra spaces:
# Remove spaces (GNU sed; on macOS use: sed -i '' 's/ //g' .env)
sed -i 's/ //g' .env
  3. Verify key format:
Anthropic: sk-ant-...
OpenAI: sk-...
Tavily: tvly-...
ElevenLabs: sk_...

Rate Limit Errors

Symptom:

RateLimitError: Rate limit exceeded

Solutions:

  1. Add delays:
import time
time.sleep(1)  # Wait between calls
  2. Use retry logic:
from app.utils.retry_utils import retry_with_backoff

@retry_with_backoff(max_retries=3, initial_delay=2.0)
def call_api():
    return agent.execute(prompt)
  3. Check usage limits in provider console

  4. Upgrade tier if on free plan
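
If your copy of the repo doesn't include app.utils.retry_utils, a minimal sketch of such a decorator (exponential backoff with a little jitter) might look like:

```python
import functools
import random
import time

def retry_with_backoff(max_retries: int = 3, initial_delay: float = 2.0):
    """Retry the wrapped function with exponential backoff and jitter."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            delay = initial_delay
            for attempt in range(max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_retries:
                        raise  # out of retries: surface the original error
                    time.sleep(delay + random.uniform(0, 0.1))  # jitter
                    delay *= 2  # exponential backoff
        return wrapper
    return decorator
```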

Import Errors

Module Not Found: strands

Symptom:

ModuleNotFoundError: No module named 'strands'

Solutions:

  1. Install package:
uv sync
# or
pip install strands-agents
  2. Activate venv:
source .venv/bin/activate
  3. Verify installation:
python -c "import strands; print(strands.__version__)"

Module Not Found: elevenlabs/tavily

Symptom:

ModuleNotFoundError: No module named 'elevenlabs'

Solution:

# Install missing package
uv add elevenlabs
uv add tavily-python

# Or with pip
pip install elevenlabs tavily-python

Import Error: Circular Import

Symptom:

ImportError: cannot import name 'X' from partially initialized module

Solution:

# Fix circular import by moving import inside function
def create_agent():
    from app.model import AgentInput  # ⬅ Import here
    # ... rest of code

Agent Execution Issues

Agent Returns Empty Output

Symptom:

result.output == ""

Debug Steps:

  1. Check prompt:
print(f"Prompt length: {len(system_prompt)}")
print(f"First 100 chars: {system_prompt[:100]}")
  2. Verify model config:
print(f"Max tokens: {model.max_tokens}")  # Should be > 100
  3. Check for errors in result:
print(f"Errors: {result.errors}")
print(f"Metadata: {result.metadata}")
  4. Test with simple prompt:
result = agent.execute("Say hello")
print(result.output)  # Should work
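
The four checks above can be bundled into one helper (a sketch; the result fields match those shown in this guide, but verify them against your version of the framework):

```python
def diagnose_result(result) -> list:
    """Collect likely causes of an empty output from a result object."""
    problems = []
    if not getattr(result, "output", ""):
        problems.append("output is empty")
    for err in getattr(result, "errors", None) or []:
        problems.append(f"error reported: {err}")
    if not problems:
        problems.append("no obvious issue; check the prompt and model config")
    return problems
```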

Token Limit Exceeded

Symptom:

AnthropicError: maximum context length exceeded

Solutions:

  1. Reduce max_tokens:
model = AnthropicModel(
    model="claude-sonnet-4-5-20250929",
    max_tokens=2000,  # ⬅ Reduce this
)
  2. Shorten input:
def truncate_input(text: str, max_chars: int = 4000) -> str:
    if len(text) > max_chars:
        return text[:max_chars] + "...[truncated]"
    return text

prompt = truncate_input(long_text)
  3. Use conversation summarization:
if len(conversation) > 5:
    # Summarize old messages
    conversation = [
        conversation[0],
        "...[previous messages summarized]...",
        *conversation[-2:],  # unpack, so the last messages aren't nested
    ]

Agent Takes Too Long

Symptom: Agent runs for >2 minutes

Solutions:

  1. Add timeout:
import signal

def timeout_handler(signum, frame):
    raise TimeoutError("Agent timeout")

signal.signal(signal.SIGALRM, timeout_handler)  # SIGALRM is Unix-only
signal.alarm(60)  # 60 second timeout

try:
    result = agent.execute(prompt)
finally:
    signal.alarm(0)  # Cancel alarm
  2. Reduce max_tokens:
max_tokens=2000  # Faster than 8000
  3. Use faster model:
model="claude-3-haiku-20240307"  # Much faster
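
The signal-based timeout above only works on macOS/Linux. A cross-platform sketch (the helper name is ours) uses a worker thread instead:

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FuturesTimeout

def run_with_timeout(fn, *args, timeout: float = 60.0, **kwargs):
    """Stop waiting for fn after `timeout` seconds (works on Windows too).

    Caveat: the worker thread may keep running in the background after a
    timeout; this only stops *waiting* for it.
    """
    executor = ThreadPoolExecutor(max_workers=1)
    try:
        future = executor.submit(fn, *args, **kwargs)
        try:
            return future.result(timeout=timeout)
        except FuturesTimeout:
            raise TimeoutError(f"call exceeded {timeout:.0f}s")
    finally:
        executor.shutdown(wait=False)
```

Usage would look like `run_with_timeout(agent.execute, prompt, timeout=60)`, assuming the `agent.execute` API shown elsewhere in this guide.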

Model Not Found

Symptom:

ValueError: Unknown model: claude-3-5-sonnet

Solution:

# Use full model name with date
model="claude-sonnet-4-5-20250929"  # ✅
# Not: model="claude-3-5-sonnet"    # ❌

Tool Problems

Tool Not Being Called

Symptom: Agent doesn't use tools even when appropriate

Solutions:

  1. Make prompt explicit:
## Instructions

ALWAYS use the fetch_url tool when the user provides a URL.
Do not skip this step.
  2. Verify tool is added:
print(f"Tools: {agent.tools}")  # Should show your tool
  3. Test tool directly:
from app.tts_agent.tools.elevenlabs_tool import text_to_speech
result = text_to_speech("Test", output_path="test.mp3")
print(result)
  4. Check tool description:
# Tool description must be clear; Field comes from pydantic
from pydantic import Field

@tool
def my_tool(param: str = Field(description="Clear description")):
    """
    Clear docstring that explains what the tool does.
    """
    ...

Tool Execution Fails

Symptom:

ToolExecutionError: Failed to execute tool

Debug Steps:

  1. Run tool standalone:
# Test outside agent
result = my_tool(test_input)
print(result)
  2. Add detailed logging:
import logging
logger = logging.getLogger(__name__)

@tool
def my_tool(param: str) -> str:
    logger.info(f"Tool called with: {param}")
    try:
        result = do_work(param)
        logger.info(f"Tool result: {result}")
        return result
    except Exception as e:
        logger.error(f"Tool failed: {e}", exc_info=True)
        raise
  3. Check API credentials (for API tools)

  4. Verify network connectivity:

curl https://api.elevenlabs.io/

ElevenLabs Audio Generation Fails

Symptom:

RuntimeError: Failed to generate speech

Solutions:

  1. Check API key:
import os
print(os.getenv("ELEVENLABS_API_KEY"))
  2. Verify voice ID:
# Default Rachel voice
voice_id = "21m00Tcm4TlvDq8ikWAM"
  3. Test with short text:
text_to_speech("Hello world", output_path="test.mp3")
  4. Check text length (max ~50k characters)

  5. Verify file permissions:

# Ensure can write to output directory
touch test.mp3 && rm test.mp3
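
The same permission probe from Python, mirroring the `touch && rm` check above (the helper name is ours):

```python
import os

def can_write(directory: str = ".") -> bool:
    """Return True if we can create and delete a file in `directory`."""
    probe = os.path.join(directory, ".write_probe")
    try:
        with open(probe, "w"):
            pass
        os.remove(probe)
        return True
    except OSError:
        return False
```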

Tavily Search Fails

Symptom:

RuntimeError: Search failed

Solutions:

  1. Check API key:
echo $TAVILY_API_KEY
  2. Verify query format:
# Query must be string
query = "test query"  # ✅
query = ["list"]      # ❌
  3. Check rate limits in the Tavily console
  4. Test API directly:
curl -X POST https://api.tavily.com/search \
  -H "Content-Type: application/json" \
  -d '{"api_key":"YOUR_KEY","query":"test"}'
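
The same request from Python using only the standard library (a sketch; the endpoint and payload shape are taken from the curl example above):

```python
import json
import urllib.request

def tavily_search(query: str, api_key: str,
                  url: str = "https://api.tavily.com/search") -> dict:
    """POST the same JSON body as the curl command above."""
    if not isinstance(query, str):
        raise TypeError("query must be a string, not a list")  # see step 2
    payload = json.dumps({"api_key": api_key, "query": query}).encode()
    request = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request, timeout=30) as response:
        return json.load(response)
```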

Performance Issues

Slow Agent Execution

Symptom: Agent calls take noticeably longer than expected

Solutions:

  1. Enable prompt caching:
model = AnthropicModel(
    model="claude-sonnet-4-5-20250929",
    cache_system_prompt=True,  # ⬅ Add this
)
  2. Use faster model:
# Haiku is 3x faster than Sonnet
model = AnthropicModel(
    model="claude-3-haiku-20240307"
)
  3. Reduce token generation:
max_tokens=1000  # Faster than 8000
  4. Parallel execution:
import asyncio

async def run_parallel():
    social, audio = await asyncio.gather(
        create_social_agent().execute_async(prompt),
        create_tts_agent().execute_async(prompt)
    )
    return social, audio

social, audio = asyncio.run(run_parallel())
  5. Skip research for testing:
# Use cached research
research = "Cached research results..."
# Instead of
research = execute_research(topic)

High API Costs

Symptom: Unexpected high bills

Solutions:

  1. Enable caching (90% savings):
cache_system_prompt=True
  2. Use cheaper models:
# Haiku is 10x cheaper than Sonnet
model="claude-3-haiku-20240307"
  3. Optimize prompts:
# Remove unnecessary details
system_prompt = optimize_prompt(system_prompt)
  4. Set usage limits in provider console

  5. Cache results:

from functools import lru_cache

@lru_cache(maxsize=100)
def cached_research(topic: str) -> str:
    return execute_research(topic)
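
lru_cache only lives for one process. To keep results across runs (handy during a workshop), a simple on-disk variant could look like this (all names here are ours, not part of the workshop code):

```python
import hashlib
import json
from pathlib import Path

def disk_cached(topic: str, research_fn, cache_dir: str = ".cache") -> str:
    """Call research_fn(topic) once, then serve the saved result from disk."""
    directory = Path(cache_dir)
    directory.mkdir(exist_ok=True)
    key = hashlib.sha256(topic.encode()).hexdigest()[:16]
    path = directory / f"{key}.json"
    if path.exists():
        return json.loads(path.read_text())
    result = research_fn(topic)
    path.write_text(json.dumps(result))
    return result
```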

Memory Issues

Symptom:

MemoryError: Unable to allocate memory

Solutions:

  1. Process in chunks:
for chunk in text_chunks:
    process_chunk(chunk)
  2. Clear conversation history:
if len(conversation) > 10:
    conversation = conversation[-5:]  # Keep last 5
  3. Use generators:
def stream_results():
    for item in large_dataset:
        yield process(item)

Git Issues

Detached HEAD State

Symptom:

You are in 'detached HEAD' state

Solution:

# Return to main branch
git checkout main

# Or create branch from current state
git checkout -b my-branch

Uncommitted Changes Block Checkout

Symptom:

error: Your local changes would be overwritten by checkout

Solutions:

  1. Stash changes:
git stash
git checkout v0.1.0
# Later: git stash pop
  2. Discard changes:
git checkout -- .
git checkout v0.1.0
  3. Commit changes:
git add .
git commit -m "WIP"
git checkout v0.1.0

Tag Not Found

Symptom:

error: pathspec 'v0.1.0' did not match any file(s) known to git

Solution:

# Fetch all tags
git fetch --tags

# List available tags
git tag -l

# Checkout specific tag
git checkout v0.1.0

Platform-Specific Issues

macOS: SSL Certificate Error

Symptom:

SSL: CERTIFICATE_VERIFY_FAILED

Solution:

# Install certificates
/Applications/Python\ 3.11/Install\ Certificates.command

# Or upgrade certifi
pip install --upgrade certifi

Windows: Path Issues

Symptom:

FileNotFoundError: app/script_agent/prompt.md

Solution:

# Use Path for cross-platform compatibility
from pathlib import Path

prompt_path = Path("app") / "script_agent" / "prompt.md"
with open(prompt_path, "r", encoding="utf-8") as f:
    prompt = f.read()

Windows: PowerShell Execution Policy

Symptom:

cannot be loaded because running scripts is disabled

Solution:

# Run PowerShell as Administrator
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser

Linux: Permission Denied

Symptom:

PermissionError: [Errno 13] Permission denied: 'output.mp3'

Solution:

# Fix permissions
chmod 755 .
chmod 666 output.mp3

# Or write to temp directory
output_path = "/tmp/output.mp3"

Advanced Debugging

Enable Maximum Logging

import logging
import sys

# Configure root logger
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.StreamHandler(sys.stdout),
        logging.FileHandler('debug.log')
    ]
)

# Enable HTTP logging
import http.client
http.client.HTTPConnection.debuglevel = 1

Inspect Network Calls

# Use mitmproxy to see API calls
pip install mitmproxy
mitmproxy -p 8080

# Configure proxy in code
import os
os.environ['HTTP_PROXY'] = 'http://localhost:8080'
os.environ['HTTPS_PROXY'] = 'http://localhost:8080'

Profile Performance

import cProfile
import pstats

# Profile code
profiler = cProfile.Profile()
profiler.enable()

# Run your code
result = agent.execute(prompt)

profiler.disable()

# Print stats
stats = pstats.Stats(profiler)
stats.sort_stats('cumulative')
stats.print_stats(20)  # Top 20 slowest
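
For a quick answer to "which step is slow?" without full profiling, a timing context manager (a sketch; the name is ours) is often enough:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label: str):
    """Print how long the enclosed block took."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        print(f"{label}: {elapsed:.2f}s")
```

Usage might look like `with timed("research"): result = agent.execute(prompt)`, assuming the `agent.execute` API shown elsewhere in this guide.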

Memory Profiling

from memory_profiler import profile

@profile
def my_function():
    # Your code
    pass

# Run with: python -m memory_profiler script.py

Still Stuck?

If none of these solutions work:

  1. Check GitHub Issues: https://github.com/[repo]/issues
  2. Ask in Discord: [Discord link]
  3. Email instructor: [email]
  4. Create minimal reproduction:
# Minimal example that shows the problem
from strands import Agent
from strands.models.anthropic import AnthropicModel

model = AnthropicModel(model="claude-sonnet-4-5-20250929")
agent = Agent(model=model, system_prompt="You are helpful")
result = agent.execute("Hello")
print(result.output)  # What goes wrong here?
  5. Include:
    • Python version: python --version
    • OS: uname -a (Linux/Mac) or systeminfo (Windows)
    • Package versions: pip list | grep strands
    • Full error traceback
    • Minimal reproduction code

Quick Fixes Checklist

When something goes wrong, try these in order:


Still having issues? Don't hesitate to ask for help! 🆘