Step 1: Building Your First Agent (15 minutes)
Code Location: code/v0.1.0/
Time: 00:00-15:00
Goal: Create a basic script generation agent using the Strands framework
Overview
In this step, you'll learn how to:
- Initialize a Strands agent
- Configure AI models (Anthropic Claude)
- Write effective prompts
- Run and test your agent
- Monitor execution metrics
Understanding the Strands Agent Framework
The Strands Agents framework provides a clean, production-ready pattern for building AI agents. Key concepts:
- Agent: The main execution unit that processes tasks
- Model: The LLM provider (Anthropic, OpenAI, etc.)
- Prompt: Instructions that guide the agent's behavior
- Tools: Functions the agent can call (covered in later steps)
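In code, these pieces fit together roughly as shown below. This is only a compact preview using the same constructor arguments that Step 1.2 builds up in full, and it assumes the execute() call used later in this step:

import os

from strands import Agent
from strands.models.anthropic import AnthropicModel

# The LLM provider
model = AnthropicModel(
    model=os.getenv("ANTHROPIC_MODEL_ID"),
    max_tokens=8000,
    temperature=0.7,
)

# The execution unit, guided by a system prompt
agent = Agent(model=model, system_prompt="You are a podcast script writer.")

# Process one task
result = agent.execute("Building AI Agents with Python")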
Architecture

Checkpoint: Setup
Before proceeding, ensure:
- Dependencies are installed (`uv sync`)
- `.env` file is configured with `ANTHROPIC_API_KEY`
- You are in the `code/v0.1.0/` directory
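Before moving on, you can sanity-check the environment with a few lines of Python. This is a minimal sketch; it assumes you load the `.env` file with the python-dotenv package, which this step otherwise doesn't use:

# Pre-flight check for the two variables the agent reads at startup
import os

from dotenv import load_dotenv  # assumption: python-dotenv is installed

load_dotenv()  # reads .env from the current directory
assert os.getenv("ANTHROPIC_API_KEY"), "ANTHROPIC_API_KEY is missing from .env"
assert os.getenv("ANTHROPIC_MODEL_ID"), "ANTHROPIC_MODEL_ID is missing from .env"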
Step 1.1: Understanding the Project Structure
Let's examine the basic structure:
app/
├── script_agent/
│   ├── agent.py     # Agent implementation
│   └── prompt.md    # Agent instructions
└── rooting_agent/   # (renamed to script_agent in later versions)
    ├── agent.py
    └── prompt.md
main.py              # Entry point
Step 1.2: Creating the Agent Module
File: app/script_agent/agent.py (or app/rooting_agent/agent.py at v0.1.0)
"""Script generation agent for podcast content."""
import os
from strands import Agent
from strands.models.anthropic import AnthropicModel
def create_script_agent() -> Agent:
    """
    Create and configure the script generation agent.

    Returns:
        Configured Agent instance
    """
    # 1. Initialize the AI model
    model = AnthropicModel(
        model=os.getenv("ANTHROPIC_MODEL_ID"),
        max_tokens=8000,
        temperature=0.7,
    )

    # 2. Load the prompt instructions
    with open("app/script_agent/prompt.md", "r") as f:
        system_prompt = f.read()

    # 3. Create the agent
    agent = Agent(
        model=model,
        system_prompt=system_prompt,
        enable_streaming=True,
    )

    return agent
Key Points to Discuss
Model Configuration:
- `model`: the specific Claude model version
- `max_tokens`: maximum response length (8000 tokens ≈ 6,000 words)
- `temperature`: creativity level (0.0 = deterministic, 1.0 = most creative)

Agent Configuration:
- `system_prompt`: core instructions for the agent
- `enable_streaming`: streams output in real time
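For example, if you want shorter, more repeatable drafts while experimenting, you can dial both knobs down. The values below are only illustrative; the constructor arguments are the same ones used in Step 1.2:

import os
from strands.models.anthropic import AnthropicModel

# Tighter configuration for shorter, more repeatable output
model = AnthropicModel(
    model=os.getenv("ANTHROPIC_MODEL_ID"),
    max_tokens=2000,     # roughly 1,500 words
    temperature=0.2,     # close to deterministic
)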
Step 1.3: Writing the Agent Prompt
File: app/script_agent/prompt.md
# Podcast Script Writer
You are an expert podcast script writer specializing in engaging,
conversational content.
## Your Role
Generate professional podcast scripts that:
- Are conversational and engaging
- Include natural transitions
- Have clear structure (intro, body, conclusion)
- Include speaker cues and timing notes
- Are approximately 10-15 minutes when read
## Script Format
Structure your scripts as follows:
### [INTRO - 1 minute]
Host: [Opening statement]
### [MAIN CONTENT - 8-12 minutes]
Host: [Content sections with natural flow]
### [CONCLUSION - 1-2 minutes]
Host: [Closing thoughts and call-to-action]
## Guidelines
1. **Conversational Tone**: Write as if speaking to a friend
2. **Natural Language**: Use contractions, questions, pauses
3. **Engaging Hooks**: Start strong to grab attention
4. **Clear Structure**: Logical flow between topics
5. **Practical Value**: Include actionable insights
6. **Time Awareness**: Keep pacing appropriate for audio
## Output
Provide only the script content. No meta-commentary or explanations.
Prompt Engineering Tips
Good prompts:
- Are specific about the task
- Include output format examples
- Set clear constraints
- Define success criteria
- Adopt a professional persona
Step 1.4: Creating the Main Entry Point
File: main.py
"""Main entry point for the rooting pipeline."""
import logging
from app.script_agent.agent import create_script_agent
# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

def main():
    """Execute the script generation agent."""
    # User input
    topic = "Building AI Agents with Python"
    logger.info(f"Generating podcast script for topic: {topic}")

    # Create and run agent
    agent = create_script_agent()
    result = agent.execute(topic)

    # Display results
    print("\n" + "="*80)
    print("GENERATED SCRIPT")
    print("="*80 + "\n")
    print(result.output)

    # Show metrics
    if hasattr(result, 'metadata'):
        print(f"\nInput tokens: {result.metadata.get('input_tokens', 0)}")
        print(f"Output tokens: {result.metadata.get('output_tokens', 0)}")

if __name__ == "__main__":
    main()
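If you'd rather pass the topic on the command line than hard-code it, a small tweak inside main() is enough. This is a sketch; the fallback string is just the example topic used above:

import sys

# Inside main(): take the topic from the first CLI argument, if one is given
topic = sys.argv[1] if len(sys.argv) > 1 else "Building AI Agents with Python"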
Step 1.5: Running Your Agent
# Navigate to the release folder
cd code/v0.1.0/
# Run the agent
python main.py
Expected Output:
2024-10-12 10:30:45 - __main__ - INFO - Generating podcast script for topic: Building AI Agents with Python
================================================================================
GENERATED SCRIPT
================================================================================
[INTRO - 1 minute]
Host: Hey there, tech enthusiasts! Welcome back to the show...
[Content continues...]
Input tokens: 245
Output tokens: 1523
Checkpoint: Verify Your Agent Works
- Agent runs without errors
- Script is generated successfully
- Output is properly formatted
- Metrics are displayed
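To make that checklist repeatable, you can wrap it in a tiny smoke test. This sketch assumes the execute()/output interface used in main.py and a configured API key; run it with pytest if you have it installed:

from app.script_agent.agent import create_script_agent

def test_generates_nonempty_script():
    """Smoke test: the agent returns some script text without raising."""
    agent = create_script_agent()
    result = agent.execute("Building AI Agents with Python")
    assert result.output and len(result.output) > 0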
Common Issues & Solutions
Issue: Import Error
ModuleNotFoundError: No module named 'strands'
Solution: Install dependencies
uv sync
# or
pip install -e .
Issue: API Key Error
AuthenticationError: Invalid API key
Solution: Check .env file
# Verify file exists
cat .env
# Should contain
ANTHROPIC_API_KEY=sk-ant-...
Issue: Model Not Found
ValueError: Unknown model: claude-3-5-sonnet
Solution: Use exact model name
model="claude-sonnet-4-5-20250929" # Include date suffix
Understanding the Agent Lifecycle
# 1. Initialization
agent = Agent(model=model, system_prompt=prompt)
# 2. Execution
result = agent.execute(user_input)
# 3. Result handling
output = result.output # Main response
metadata = result.metadata # Execution details
Exercise: Customize Your Agent
Try modifying the agent:
- Change temperature: Experiment with values 0.3, 0.7, 1.0
- Modify prompt: Add specific style requirements
- Different topic: Generate scripts for various subjects
Example Modification
# Make the agent more creative
model = AnthropicModel(
    model="claude-sonnet-4-5-20250929",
    max_tokens=8000,
    temperature=0.9,  # More creative (changed from 0.7)
)
Key Concepts Review
What You Learned
- Agent Initialization: How to create and configure agents
- Model Selection: Choosing and configuring LLM models
- Prompt Engineering: Writing effective system prompts
- Execution Flow: Running agents and handling results
- Error Handling: Common issues and solutions
Production Considerations
- Use environment variables for API keys
- Implement logging for debugging
- Set appropriate token limits
- Monitor API usage and costs
- Handle errors gracefully
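As one way to apply the last two points, wrap the agent call in a logging guard. This is a sketch: the broad except clause is a placeholder for the SDK's real error classes, which this step doesn't cover:

import logging

logger = logging.getLogger(__name__)

def run_agent_safely(agent, topic: str):
    """Run the agent, log failures, and return None instead of crashing."""
    try:
        result = agent.execute(topic)
        return result.output
    except Exception:  # placeholder: narrow this to the SDK's error types
        logger.exception("Script generation failed for topic: %s", topic)
        return None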
Advanced Topics (Time Permitting)
Adding Streaming Output
def stream_response(agent: Agent, prompt: str):
    """Stream agent response in real-time."""
    for chunk in agent.stream(prompt):
        print(chunk, end='', flush=True)
    print()  # Newline at end
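Called from main.py, the helper would be used like this (assuming the stream method shown above is available in your installed version of the framework):

from app.script_agent.agent import create_script_agent

agent = create_script_agent()
stream_response(agent, "Building AI Agents with Python")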
Custom Callbacks
from strands.callbacks import CallbackHandler

class MetricsCallback(CallbackHandler):
    """Track detailed execution metrics."""

    def on_agent_start(self, agent, input):
        print(f"Starting: {input[:50]}...")

    def on_agent_end(self, agent, output):
        print(f"Completed: {len(output)} chars")

# Use with agent
agent = Agent(
    model=model,
    system_prompt=prompt,
    callbacks=[MetricsCallback()]
)
Next Steps
You've successfully created your first agent! Now you understand:
- Basic agent architecture
- Model configuration
- Prompt engineering fundamentals
- Execution and result handling
Ready to add tools? Continue to Step 2: Social Media Agent with Built-in Tools
Additional Resources
Questions for Discussion
- When would you use higher vs lower temperature?
- How would you handle very long scripts (>8000 tokens)?
- What other use cases could this agent handle?
- How would you test the agent systematically?
Time Check: You should be at approximately 15 minutes. If ahead, experiment with customizations. If behind, move to Step 2.