Meta Ads Campaign Playbook
Complete workflow playbook for agents managing Meta advertising campaigns. Mandatory reading before creating, updating, or optimizing campaigns.
Meta Ad Campaign Management Playbook
Safe, Effective Automation for AI Agents
Version: 1.0
Last Updated: 2026-03-30
Purpose: Standing operating procedures for AI-managed Meta ad campaigns
Audience: Aria (AI agent) + Human oversight (Gilad)
Executive Summary
This document establishes disciplined, risk-aware practices for AI-managed Meta advertising campaigns. Meta's advertising platform is designed for careful, measured optimization, not aggressive automation. Overly rapid changes, excessive API calls, or unpredictable behavior can trigger:
- Account restrictions or bans
- Ad review friction (ads stuck in review, frequent rejections)
- Rate limiting (API throttling, temporary blocks)
- Degraded delivery performance (lower reach, higher costs)
- Learning phase resets (campaign performance drops)
- Trust score degradation (longer review times, stricter scrutiny)
Core Principle: Meta rewards stable, predictable behavior. Humans manage campaigns in measured steps over days and weeks. AI agents must do the same, or move even more cautiously.
Document Structure
- Meta's Official Constraints (confirmed requirements)
- Industry Best Practices (strong recommendations)
- Known Risk Patterns (common mistakes to avoid)
- Safe Automation Principles (how to behave)
- Campaign Workflow (step-by-step process)
- Aria's Operating Playbook (what I may/must/never do)
1. Meta's Official Constraints
1.1 Rate Limits (Confirmed)
Meta's API Rate Limiting:
- 200 API calls per hour per app (standard tier)
- 4,800 API calls per day per app
- Burst allowance: Short spikes tolerated, sustained high rates penalized
- Response headers indicate remaining quota (x-business-use-case-usage)
What This Means:
- Reading data (campaigns, ad sets, performance) consumes quota
- Writing data (creating, updating campaigns) consumes MORE quota
- Excessive polling or rapid edits will hit limits quickly
Agent Behavior:
- ✅ Check rate limit headers before every batch of calls
- ✅ Implement exponential backoff if approaching limits
- ❌ Never poll performance data more than once per hour
- ❌ Never make rapid-fire edits (e.g., updating budgets every few minutes)
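The header check above can be sketched as follows. This is a minimal illustration assuming the documented JSON shape of the x-business-use-case-usage header (a map from business ID to usage entries with percent-of-quota fields such as call_count); the helper names are illustrative, not part of any Meta SDK.

```python
import json

def remaining_quota_pct(headers: dict) -> float:
    """Return the lowest remaining-quota percentage reported in the
    x-business-use-case-usage header (100.0 if the header is absent)."""
    raw = headers.get("x-business-use-case-usage")
    if not raw:
        return 100.0
    usage = json.loads(raw)
    worst = 0.0
    for entries in usage.values():          # one entry list per business ID
        for entry in entries:
            # call_count / total_time / total_cputime are percent-of-quota used
            worst = max(worst,
                        entry.get("call_count", 0),
                        entry.get("total_time", 0),
                        entry.get("total_cputime", 0))
    return 100.0 - worst

def should_pause(headers: dict, buffer_pct: float = 20.0) -> bool:
    """True if less than buffer_pct of the quota remains (the 80% rule)."""
    return remaining_quota_pct(headers) < buffer_pct
```

A batch runner would call should_pause() before each group of API calls and sleep until the next hour when it returns True.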
1.2 Learning Phase (Confirmed)
Meta's Learning Phase:
- New campaigns/ad sets enter "Learning" phase
- Need ~50 optimization events (conversions, purchases, etc.) in 7 days to exit learning
- During learning: Delivery is less stable, costs may be higher
- Editing during learning RESETS the learning phase
Edits That Reset Learning:
- Changing targeting (audience, location, interests)
- Changing creative (new images, videos, copy)
- Changing optimization goal (conversions → traffic)
- Pausing for >7 days then resuming
- Major budget changes (>20% up or down)
Edits That DON'T Reset Learning:
- Minor budget adjustments (<20%)
- Bid cap changes (within reason)
- Schedule changes
- Adding more budget (if gradual)
Agent Behavior:
- ✅ Let campaigns run for AT LEAST 3-7 days before first edit
- ✅ Make one change at a time, then wait 24-48 hours to measure impact
- ❌ Never edit campaigns in the first 48 hours unless critical error
- ❌ Never make multiple edits in a single day
1.3 Ad Review Process (Confirmed)
Meta's Ad Review:
- All new ads reviewed before delivery (typically <24 hours, can be longer)
- Frequent edits trigger re-review
- Repeated rejections hurt account health
- Trust score: Accounts with clean history get faster approvals
Common Rejection Reasons:
- Misleading claims ("Lose 10 pounds in 3 days!")
- Before/after images (health/beauty)
- Excessive text in image (deprecated rule but still flagged sometimes)
- Prohibited content (weapons, adult, etc.)
- Landing page issues (broken link, mismatch with ad)
Agent Behavior:
- ✅ Review ad creative against Meta's policies BEFORE submission
- ✅ Test landing pages before launching ads
- ✅ If ad rejected, wait for human review before resubmitting
- ❌ Never auto-resubmit rejected ads without changes
- ❌ Never create >10 new ads per day (triggers review scrutiny)
1.4 Budget and Bid Constraints (Confirmed)
Meta's Budget Rules:
- Lifetime budget: Total amount for entire campaign duration
- Daily budget: Amount per day (Meta may spend up to 25% more on high-performing days)
- Minimum budgets: ~$1/day for most objectives, higher for some (e.g., $5/day for conversions)
Budget Change Limits:
- Increasing budget >20% in 24 hours = learning phase reset
- Decreasing budget >20% = may reduce delivery
- Best practice: Change budgets in 10-15% increments, max once per day
Agent Behavior:
- ✅ Budget changes no more than once per 24 hours
- ✅ Incremental changes (10-20% max per adjustment)
- ✅ Wait 48 hours to measure impact before next change
- ❌ Never change budgets during peak hours (results in unstable delivery)
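The budget rules above can be enforced with a simple guard that runs before any budget write. A minimal sketch; the function name and return shape are illustrative, not a Meta API.

```python
from datetime import datetime, timedelta
from typing import Optional, Tuple

MAX_CHANGE_PCT = 20.0             # larger swings risk a learning-phase reset
MIN_INTERVAL = timedelta(hours=24)

def validate_budget_change(current: float, proposed: float,
                           last_change: Optional[datetime],
                           now: datetime) -> Tuple[bool, str]:
    """Gate a proposed daily-budget change against the pacing rules above."""
    # Rule 1: at most one budget change per 24 hours
    if last_change is not None and now - last_change < MIN_INTERVAL:
        return False, "blocked: budget already changed within the last 24 hours"
    # Rule 2: cap the size of any single adjustment
    change_pct = abs(proposed - current) / current * 100
    if change_pct > MAX_CHANGE_PCT:
        return False, (f"blocked: {change_pct:.0f}% change exceeds "
                       f"{MAX_CHANGE_PCT:.0f}% limit")
    return True, "ok"
```

Any blocked change should be logged and, if still wanted, escalated to the human rather than retried.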
2. Industry Best Practices
2.1 Campaign Structure (Strong Recommendation)
Recommended Structure:
- Campaign: Overall objective (Conversions, Traffic, Awareness)
- Ad Set: Audience, budget, schedule (usually 1-3 ad sets per campaign)
- Ads: Creative variations (3-5 ads per ad set for testing)
Why This Matters:
- Too many ad sets = budget fragmentation (none get enough spend to optimize)
- Too few ads = no testing (miss opportunities)
- Sweet spot: 1-2 campaigns with 2-3 ad sets each and 3-5 ads per ad set (6-15 ads per campaign)
Agent Behavior:
- ✅ Start with 1 campaign, 2 ad sets, 3 ads each (6 total ads)
- ✅ Let run for 7 days before expanding
- ❌ Never launch >3 campaigns simultaneously in first week
- ❌ Never create >20 ads in a single campaign (dilutes data)
2.2 Testing and Optimization (Strong Recommendation)
Testing Best Practices:
- A/B test ONE variable at a time (audience vs. creative vs. placement)
- Statistical significance: Need ~100 conversions per variant to declare winner
- Time to significance: Usually 7-14 days minimum
- Winner criteria: 95% confidence + 20% performance lift
Common Mistakes:
- Testing too many variables at once (can't isolate cause)
- Declaring winner too early (random noise, not true signal)
- Changing tests mid-flight (invalidates results)
Agent Behavior:
- ✅ One test at a time (e.g., Audience A vs. Audience B)
- ✅ Run tests for minimum 7 days before evaluation
- ✅ Document test hypothesis, results, and decision
- ❌ Never change test parameters during test period
- ❌ Never declare winner with <100 conversions
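The winner criteria above can be checked mechanically. Below is one possible implementation using a standard two-proportion z-test (stdlib only), combined with the playbook's 100-conversion and 20%-lift guardrails; treat it as a sketch, not a substitute for a proper experimentation tool.

```python
import math

def is_significant_winner(conv_a: int, n_a: int, conv_b: int, n_b: int,
                          min_conversions: int = 100,
                          min_lift: float = 0.20,
                          confidence: float = 0.95) -> bool:
    """Two-proportion z-test plus the playbook's guardrails: each variant
    needs >=100 conversions and the winner needs a >=20% lift."""
    if min(conv_a, conv_b) < min_conversions:
        return False                      # not enough data to call it
    p_a, p_b = conv_a / n_a, conv_b / n_b
    lift = abs(p_a - p_b) / min(p_a, p_b)
    if lift < min_lift:
        return False                      # difference too small to matter
    # Pooled standard error for the difference in conversion rates
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    # Two-tailed p-value from the normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return p_value < (1 - confidence)
```

For example, 150 conversions on 5,000 impressions vs. 100 on 5,000 (a 50% lift) clears all three bars; 120 vs. 100 on 10,000 each does not reach significance.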
2.3 Performance Monitoring (Strong Recommendation)
Healthy Monitoring Cadence:
- First 48 hours: Check once daily (just to catch critical errors)
- Days 3-7: Check every 2-3 days
- After week 1: Check 2-3 times per week
- Mature campaigns: Weekly check-ins
Metrics to Monitor:
- Delivery: Impressions, reach, frequency
- Engagement: CTR (click-through rate), engagement rate
- Conversions: Cost per result, conversion rate, ROAS
- Account health: Frequency of ad rejections, review times
Agent Behavior:
- ✅ Automated daily reports (sent to human, not acted on)
- ✅ Performance checks via API: Max once per 6 hours
- ✅ Alert human if KPIs drop >30% suddenly
- ❌ Never poll API more than once per hour
- ❌ Never make optimization decisions based on <24 hours of data
2.4 Budget Scaling (Strong Recommendation)
Safe Scaling Strategy:
- Week 1: Launch with minimum viable budget ($10-20/day)
- Week 2: If performing, increase budget 20%
- Week 3: If still performing, increase another 20%
- Week 4+: Continue 20% weekly increases until diminishing returns
Why Gradual:
- Rapid scaling resets learning phase
- Meta's algorithm needs time to find optimal audience
- Too much too fast = wasted spend on low-quality impressions
Agent Behavior:
- ✅ Budget increases: Max 20% per week
- ✅ Wait 7 days between budget changes
- ✅ Scale only if performance meets targets (ROAS, CPA, etc.)
- ❌ Never double budget in one adjustment
- ❌ Never scale a campaign that's still in learning phase
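The 20%-per-week rule compounds, so scaling is faster than it looks. A small sketch of the resulting schedule (function name is illustrative):

```python
def scaling_schedule(start_budget: float, weeks: int,
                     weekly_increase: float = 0.20) -> list:
    """Planned daily budget for each week under the 20%-per-week rule."""
    return [round(start_budget * (1 + weekly_increase) ** w, 2)
            for w in range(weeks)]
```

Starting at $10/day, four weeks of compliant scaling yields $10.00, $12.00, $14.40, $17.28 per day; each step is only taken if the performance targets still hold.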
3. Known Risk Patterns
3.1 Account Restrictions (High Risk)
Behaviors That Trigger Restrictions:
- Creating >50 ads per day (bot-like behavior)
- Rapid budget changes (e.g., $10 → $1,000 in one day)
- High ad rejection rate (>20% of submitted ads rejected)
- Suspicious activity (e.g., changing payment method rapidly, sudden location changes)
- Policy violations (misleading claims, prohibited content)
Consequences:
- Temporary ad account suspension (24 hours to weeks)
- Permanent ban (very hard to reverse)
- All campaigns paused until resolved
- Lost momentum and data
Agent Behavior:
- ✅ Limit ad creation to <10 per day
- ✅ Review all creative against policies before submission
- ✅ If rejection occurs, pause and notify human immediately
- ❌ Never bulk-create ads (create in batches with delays)
- ❌ Never ignore ad rejections or policy warnings
3.2 Delivery Issues (Medium Risk)
Behaviors That Cause Delivery Problems:
- Audience too narrow (<1,000 people)
- Budget too low for objective (e.g., $5/day for high-value conversions)
- Too many ads competing in one ad set (diluted delivery)
- Frequent pausing/unpausing (disrupts delivery algorithm)
- Creative fatigue (same ad shown too many times to same people)
Symptoms:
- Low impressions (ad not showing)
- High frequency (same people seeing ad repeatedly)
- Rising CPM (cost per 1000 impressions)
- Declining CTR
Agent Behavior:
- ✅ Target audiences of 50K+ people
- ✅ Monitor frequency: Alert if >3 per week
- ✅ Refresh creative every 2-3 weeks
- ❌ Never pause/unpause campaigns multiple times per day
- ❌ Never target audiences <10K without human approval
3.3 Learning Phase Churn (Medium Risk)
What Is Churn:
- Campaign exits learning → Agent makes edit → Re-enters learning
- Repeat cycle → Campaign NEVER stabilizes → Poor performance persists
Common Causes:
- Tweaking budgets every day
- Changing targeting mid-campaign
- Swapping creative frequently
- "Optimizing" before campaign has enough data
Agent Behavior:
- ✅ Let campaigns stabilize (7 days minimum) before first edit
- ✅ One edit per week maximum during first month
- ✅ Document reason for every edit (data-driven, not guessing)
- ❌ Never edit campaigns in first 48 hours
- ❌ Never make multiple edits in the same week
4. Safe Automation Principles
4.1 Pacing and Throttling
API Call Budget:
- Reading performance: Max once per 6 hours
- Creating campaigns/ads: Max 5 per day
- Editing campaigns: Max 1 edit per campaign per 24 hours
- Bulk operations: Batch into groups of 5, delay 5 minutes between batches
Throttling Implementation:
- Check rate limit headers before every API call
- If >80% of hourly quota used, pause until next hour
- If rate limited (HTTP 429), exponential backoff: 1 min, 5 min, 15 min, 1 hour
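The backoff schedule above can be implemented as a thin wrapper around any request function. A sketch assuming the caller supplies a callable returning an object with a status_code attribute; the names are illustrative, and the sleep function is injectable so the wait can be tested or replaced.

```python
import time

# 1 min, 5 min, 15 min, 1 hour (in seconds), per the schedule above
BACKOFF_SCHEDULE = [60, 300, 900, 3600]

def call_with_backoff(make_request, max_attempts: int = 4, sleep=time.sleep):
    """Invoke make_request(), backing off on HTTP 429 per BACKOFF_SCHEDULE.

    make_request must return a response object with a .status_code attribute.
    Raises RuntimeError after exhausting all attempts."""
    for attempt in range(max_attempts):
        response = make_request()
        if response.status_code != 429:   # success or a non-rate-limit error
            return response
        if attempt < max_attempts - 1:
            sleep(BACKOFF_SCHEDULE[attempt])
    raise RuntimeError("still rate limited after full backoff schedule; alert human")
```

On the final RuntimeError, the agent should stop all API activity for the hour and notify the human rather than retry.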
4.2 Human-in-the-Loop Checkpoints
Mandatory Human Approval:
- Launching new campaigns (strategy, targeting, budget)
- Budget increases >30%
- Changing campaign objective
- Pausing profitable campaigns
- Responding to ad rejections
- Changing landing pages
- Any action after account warning/restriction
Optional Human Review:
- Routine performance reports (sent automatically)
- Budget adjustments <20%
- Adding new ad creative (if similar to approved creative)
- Pausing clearly non-performing ads
4.3 Fail-Safe Mechanisms
Automatic Safeguards:
- Daily spend cap: Never exceed 2x planned daily budget
- Performance floor: If ROAS drops below target, pause campaign and alert human
- Rejection limit: If 3+ ads rejected in 7 days, stop new submissions and alert
- Rate limit buffer: Stop API calls at 80% of quota (don't hit hard limit)
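These safeguards can be centralized in one check that runs after each performance pull. A simplified sketch; the ROAS floor here uses this section's simple "below target" trigger (the Risk Mitigation section refines it to 80% of target over 3 consecutive days), and the function name is illustrative.

```python
def failsafe_actions(daily_spend: float, planned_daily_budget: float,
                     roas: float, target_roas: float,
                     rejections_last_7d: int) -> list:
    """Evaluate the automatic safeguards; every returned action should
    also trigger an immediate alert to the human."""
    actions = []
    if daily_spend > 2 * planned_daily_budget:
        actions.append("halt spend: daily cap (2x planned budget) exceeded")
    if roas < target_roas:
        actions.append("pause campaign: ROAS below target")
    if rejections_last_7d >= 3:
        actions.append("stop new ad submissions: 3+ rejections in 7 days")
    return actions
```

An empty list means all safeguards passed and the agent takes no action; a non-empty list means act, log, and alert.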
5. Campaign Workflow
Phase 1: Strategy & Setup (Human-Led)
Actions:
- Human defines: Objective, budget, target audience, offer/creative direction
- Agent researches audience insights (if available via API)
- Agent drafts campaign structure (campaign → ad sets → ads)
- Human reviews and approves campaign plan
- Agent creates assets in Meta Ads Manager (but does NOT publish)
- Human reviews draft campaigns in Ads Manager UI
- Human approves launch β Agent sets campaigns live
Agent Role: Research, drafting, execution (after approval)
Human Role: Strategy, final approval
Phase 2: Launch & Monitoring (Agent-Led, Human-Supervised)
Actions:
- Agent launches campaigns (after human approval)
- Agent monitors for critical errors (first 24 hours):
- Ad rejections
- Zero impressions (delivery issue)
- Billing errors
- If critical error: Agent alerts human immediately
- If no errors: Agent sends daily summary report (no action taken)
- Human reviews reports, provides guidance if needed
Agent Role: Monitoring, error detection, reporting
Human Role: Strategic oversight, intervention if needed
Phase 3: Learning Period (Days 1-7) (Hands-Off)
Actions:
- Agent does NOT edit campaigns during learning phase
- Agent collects performance data daily
- Agent sends weekly summary report at Day 7:
- Performance vs. targets
- Observations (e.g., "Audience A outperforming Audience B")
- Recommended next steps (e.g., "Increase budget 20% on top performer")
- Human decides: Continue, optimize, or pause
Agent Role: Data collection, reporting, recommendations
Human Role: Decision-making
Phase 4: Optimization (Week 2+) (Collaborative)
Actions:
- Agent identifies optimization opportunities:
- Reallocate budget to top performers
- Pause underperformers
- Test new creative variations
- Expand/refine audiences
- Agent proposes changes (with data rationale)
- Human approves changes
- Agent executes approved changes
- Agent monitors impact for 7 days before next change
Agent Role: Analysis, recommendations, execution (after approval)
Human Role: Approval, strategic direction
Phase 5: Scaling (Month 2+) (Human-Led)
Actions:
- Human decides scaling strategy (budget increase, new audiences, new campaigns)
- Agent drafts scaling plan
- Human approves plan
- Agent executes gradually (e.g., 20% budget increases weekly)
- Agent monitors for diminishing returns
- If scaling fails (performance degrades): Agent alerts human, pauses scaling
Agent Role: Execution, monitoring, fail-safe triggers
Human Role: Strategy, approval, course correction
6. Aria's Operating Playbook
✅ What I MAY Do Autonomously
Monitoring & Reporting:
- Check campaign performance (max once per 6 hours via API)
- Generate daily/weekly performance reports
- Alert human to critical errors (rejections, delivery issues)
- Document observations and insights
Data Collection:
- Pull performance data from Meta Ads Manager
- Analyze trends (CTR, CPA, ROAS, frequency)
- Research audience insights
- Compare performance across ad sets/creatives
Drafting & Planning:
- Draft campaign structures (campaigns, ad sets, ads)
- Write ad copy variations
- Design creative briefs (for external designer or tools)
- Recommend optimization actions (with data rationale)
Execution (After Human Approval):
- Create campaigns/ad sets/ads in Meta Ads Manager
- Launch campaigns (after approval)
- Execute approved edits (budget changes, pausing ads, etc.)
- Implement approved tests (A/B tests)
What REQUIRES Human Approval
Strategic Decisions:
- Launching new campaigns
- Changing campaign objective
- Budget increases >20%
- Adding new audiences
- Changing landing pages
- Pausing profitable campaigns
After Errors:
- Responding to ad rejections
- Recovering from delivery issues
- Addressing account warnings
- Scaling after diminishing returns
Significant Changes:
- Editing campaigns during learning phase
- Making >1 change per week to a campaign
- Bulk changes (affecting >3 campaigns simultaneously)
❌ What I Must NEVER Do
Aggressive Automation:
- Poll API more than once per hour
- Create >10 ads per day
- Make multiple edits to same campaign in 24 hours
- Rapid budget changes (>20% per adjustment)
- Bulk operations without delays
Risky Actions:
- Launch campaigns without human approval
- Ignore ad rejections (never auto-resubmit)
- Edit campaigns in first 48 hours
- Change multiple variables simultaneously
- Override fail-safe mechanisms
Prohibited Behavior:
- Mislead human about performance
- Hide errors or issues
- Act without approval on strategic decisions
- Deplete entire monthly budget in one day
- Violate Meta's policies or terms
Pacing Rules
API Calls:
- Performance checks: Max 4x per day (every 6 hours)
- Campaign creation: Max 5 campaigns per day
- Ad creation: Max 10 ads per day
- Edits: Max 1 edit per campaign per 24 hours
- Rate limit buffer: Stop at 80% of hourly quota
Optimization Cadence:
- First 48 hours: No edits (critical errors only)
- Days 3-7: No edits (let learning complete)
- Week 2+: Max 1 edit per campaign per week
- Between edits: 7-day wait to measure impact
Reporting Cadence:
- Daily summary: Auto-sent, no action taken
- Weekly deep dive: Performance analysis + recommendations
- Immediate alerts: Critical errors only
Launch Checklist
Before Launching Any Campaign:
- ✅ Human approved strategy (objective, budget, audience)
- ✅ Ad creative reviewed against Meta policies
- ✅ Landing page tested (loads correctly, matches ad)
- ✅ Conversion tracking verified (pixel firing, events working)
- ✅ Budget and schedule set correctly
- ✅ Targeting parameters validated (audience size >50K)
- ✅ Human gave explicit "GO" signal
- ✅ Monitoring plan in place (what to watch, when to alert)
After Launch:
- ✅ Check for critical errors within 24 hours
- ✅ Confirm ads approved by Meta (not stuck in review)
- ✅ Verify delivery started (impressions >0)
- ✅ Send initial report to human
- ✅ Then hands off for 7 days (no edits)
Monitoring & Optimization Framework
Red Flags (Alert Human Immediately):
- Ad rejection
- Zero impressions after 24 hours
- Spend >2x daily budget in one day
- ROAS drops >50% suddenly
- Account warning or restriction
- Billing error
Yellow Flags (Include in Weekly Report):
- CTR declining >20%
- Frequency >3 per week
- CPA rising >30%
- Learning phase extended >14 days
Green Lights (Keep Running, Report Weekly):
- Hitting target ROAS
- Stable delivery
- Exiting learning phase
- Engagement healthy
Risk Mitigation
Account Health Monitoring:
- Track ad rejection rate: Alert if >10% of submissions rejected
- Monitor review times: If ads taking >48 hours to review, slow down
- Check for policy warnings in Ads Manager
- Never ignore Meta notifications (emails, in-app alerts)
Performance Safeguards:
- Daily spend cap: 2x planned daily budget
- ROAS floor: Pause if drops below 80% of target for 3 consecutive days
- Frequency ceiling: Alert if ad frequency >3 per week
- Budget burn rate: Alert if spending >80% of monthly budget in first 2 weeks
Operational Discipline:
- Document every action (what, when, why)
- Log all API calls (for debugging rate limit issues)
- Version control for ad creative and copy
- Maintain change log (audit trail)
References & Resources
Confirmed Meta Documentation
- Meta Business Help Center: Policies, best practices, troubleshooting
- Meta for Developers: API rate limits, technical specs
- Meta Blueprint: Free courses on advertising best practices
- Ads Manager Help: In-app guidance and FAQs
Industry Best Practices (Strong Recommendations)
- Agency playbooks: 7-day learning phase, 20% budget scaling
- Practitioner consensus: One change per week during optimization
- Developer guidelines: Respect rate limits, implement backoff
Anecdotal / Community-Reported (Use with Caution)
- "Editing during learning resets it" β Confirmed by Meta
- "Too many API calls triggers ban" β Plausible, but threshold unclear
- "New accounts get extra scrutiny" β Widely reported, not officially documented
Review & Updates
This document should be reviewed:
- Quarterly (or after major Meta platform changes)
- After any account restriction or issue
- After 10+ campaigns managed (incorporate learnings)
Updates should incorporate:
- New Meta policies or API changes
- Lessons learned from actual campaigns
- Human feedback on agent performance
Version History:
- v1.0 (2026-03-30): Initial playbook based on established best practices
Commitment
As Aria, I commit to:
- Reading this playbook before every Meta campaign action
- Following these rules even when it feels slow or conservative
- Asking for approval when uncertain (bias toward asking)
- Documenting actions for transparency and learning
- Alerting immediately to problems (never hide issues)
- Prioritizing account health over short-term optimization
- Respecting human authority on all strategic decisions
I understand that:
- Meta rewards stable, predictable behavior
- Aggressive automation creates risk
- Patience outperforms impatience in Meta advertising
- My role is to amplify human judgment, not replace it
When in doubt: Ask. Pause. Wait. Conservative > Aggressive.
END OF PLAYBOOK
Next Steps:
- Human reviews and approves this playbook
- Playbook added to workspace context (loaded before Meta work)
- First campaign executed following this framework
- Playbook refined based on real experience
Status: Ready for human review and approval.
This playbook is the authoritative source for Meta Ads campaign management.
All agents must follow these procedures to prevent account restrictions.