Here are six ways brands prevent Facebook's AI from working for them.
1. Assuming Broad Targeting Means "No Targeting"
Broad targeting can be beneficial because it gives Facebook's machine learning system the scale and diversity it needs to find the right people for high-performing ads. For brands spending $10K to $100K per month, my recommendation is to target an audience of 10M to 50M people. This range is the sweet spot: big enough to capture the benefits of broad targeting, but narrow enough to help FB quickly find your best customers.
2. Starving Campaigns of Data Volume
Underfunded FB ad campaigns often struggle to exit the learning phase. FB's machine learning system needs roughly 50 conversions per week to build an accurate customer model. I've found that brands with fewer than 50 conversions per week averaged 1.0x to 1.5x Return on Ad Spend (ROAS), whereas those with more conversions averaged 2.0x to 2.5x ROAS.
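A quick way to check whether a campaign is funded well enough to clear that threshold is to work backwards from your cost per acquisition. Here is a minimal sketch; the $40 CPA and $1,500 budget are hypothetical numbers for illustration, not figures from my data:

```python
# Back-of-the-envelope check: is the weekly budget large enough
# to reach ~50 conversions and exit the learning phase?
LEARNING_PHASE_CONVERSIONS = 50   # conversions needed per week

def min_weekly_budget(cpa: float) -> float:
    """Minimum weekly spend needed to hit ~50 conversions at a given CPA."""
    return LEARNING_PHASE_CONVERSIONS * cpa

cpa = 40.00               # assumed cost per acquisition, in dollars
weekly_budget = 1_500.00  # assumed current weekly budget, in dollars

required = min_weekly_budget(cpa)
print(f"Required weekly budget: ${required:,.0f}")  # $2,000
print(f"Current weekly budget:  ${weekly_budget:,.0f}")
if weekly_budget < required:
    print("Underfunded: likely to stay stuck in the learning phase.")
```

At a $40 CPA, a $1,500 weekly budget can only buy about 37 conversions, so the campaign never graduates from the learning phase.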
3. Depriving the Pixel of Quality, Usable Data
For FB's system to learn who your best customers are, you need a real-time data feedback loop between your website and your ad account. Use a premium server-to-server data connection like Popsixle to send rich data payloads with geolocation information, Facebook click IDs, and unique identifiers that yield high match rates. Keep in mind that the built-in Shopify CAPI connection is run by FB and drops the same data as the Facebook website pixel (60% of the data is dropped!).
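To make "rich data payloads" concrete, here is a minimal sketch of a server-side Purchase event sent through Facebook's Conversions API using Python's requests library. The pixel ID, access token, and customer values are all placeholders; a managed connector like Popsixle handles this plumbing for you, but the payload shape is the same:

```python
import hashlib
import time
import requests

PIXEL_ID = "YOUR_PIXEL_ID"        # placeholder
ACCESS_TOKEN = "YOUR_CAPI_TOKEN"  # placeholder

def sha256(value: str) -> str:
    """CAPI requires personal identifiers to be normalized and SHA-256 hashed."""
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

event = {
    "event_name": "Purchase",
    "event_time": int(time.time()),
    "action_source": "website",
    "event_source_url": "https://example-store.com/checkout/thank-you",
    "user_data": {
        # More identifiers = higher match rates.
        "em": [sha256("customer@example.com")],  # hashed email
        "client_ip_address": "203.0.113.7",      # enables geolocation matching
        "client_user_agent": "Mozilla/5.0",
        "fbc": "fb.1.1700000000000.AbCdEfGh",    # Facebook click ID cookie
        "fbp": "fb.1.1700000000000.1234567890",  # Facebook browser ID cookie
    },
    "custom_data": {"currency": "USD", "value": 129.00},
}

resp = requests.post(
    f"https://graph.facebook.com/v19.0/{PIXEL_ID}/events",
    params={"access_token": ACCESS_TOKEN},
    json={"data": [event]},
)
print(resp.status_code, resp.json())
```

Every extra identifier in user_data (hashed email, IP address, click IDs) gives FB another chance to match the event back to a real profile, and those identifiers are exactly what the weaker default connections drop.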
4. Running Single-Campaign, Single-Audience, Single-Creative Ads
There's a lot to gain from letting FB auto-optimize across your targets (Campaign Budget Optimization) and across your creatives (Dynamic Creative). For example, if you have audiences of 5M, 15M, and 50M people, put them in one campaign as separate ad sets and let FB's system learn from each one and shift budget automatically. And with a handful of versions of your visuals, ad text, headlines, and CTAs, you can have hundreds of creative permutations tested automatically.
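The permutation math adds up quickly. A short sketch with illustrative asset counts (five visuals, five texts, four headlines, three CTAs; these are example numbers, not FB limits):

```python
from itertools import product

# Illustrative asset counts for one Dynamic Creative ad
visuals   = [f"visual_{i}"   for i in range(1, 6)]   # 5 images/videos
texts     = [f"text_{i}"     for i in range(1, 6)]   # 5 primary texts
headlines = [f"headline_{i}" for i in range(1, 5)]   # 4 headlines
ctas      = ["SHOP_NOW", "LEARN_MORE", "GET_OFFER"]  # 3 calls to action

permutations = list(product(visuals, texts, headlines, ctas))
print(len(permutations))  # 5 * 5 * 4 * 3 = 300 combinations tested automatically
```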
5. Making Manual Optimizations
Because FB's in-platform reporting has historically been inaccurate, brands often turn to third-party measurement platforms like Google Analytics, TripleWhale, and Northbeam. While it's smart to have a second opinion on measurement, don't try to handle all of the campaign optimization yourself. Manual daily optimizations can consume an hour of analysis and still underperform FB's auto-optimizations.
6. Changing Campaigns Too Often
Once a campaign is live and out of the learning phase, avoid altering it. If you want to test a variation, duplicate the campaign and run the variation as a new contender. Only switch off the old campaign if the new one outperforms it. Though duplicating may risk competing with yourself in the auction, the potential downside is small compared to the guaranteed performance issues caused by resetting learnings on the original campaign.
Ultimately, the goal is to set up FB with the right data, right content, and right audiences so that its machine learning and AI systems work for you.