Understanding Meta's Ad Auction Algorithm: Why Diversification is Key

Discover how Meta's auction algorithm prioritizes ad winners and why it's crucial to diversify your testing strategies for optimal ad performance.


I had a chance to discuss the Facebook/Meta auction algorithm with one of the lead engineers who built it.

We discussed a few aspects of how the system rewards and penalizes ads, but the most interesting and unexpected thing he told me was this:

Meta's optimization system is singularly focused on picking winners and scaling them.

But not all head-to-head competitions have clear, definitive winners. That can create a lot of misguided "insights." Let me explain.

If you have 5 ad sets in a single campaign with a shared budget (campaign budget optimization), Meta wants to pick a winner and give it a disproportionate amount of the shared budget. You might assume that if one ad set is getting 95% of the budget, then it must be performing 10x better than all the other ad sets.

That's not the case.
Because Meta's system is so focused on picking winners, it might see that the winning ad set is only 5% better. But the reward for being 5% better is getting 10X more budget than its peers.
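Meta hasn't published the exact math, but you can reproduce the behavior with a toy winner-take-most allocator. In the Python sketch below, a softmax with a low "temperature" exaggerates small differences in estimated performance. The conversion rates and the temperature value are illustrative assumptions, not Meta's actual numbers.

```python
import numpy as np

# Hypothetical estimated conversion rates for 5 ad sets in one CBO campaign.
# Ad set 0 is only ~5% better than its peers (illustrative values).
est_cvr = np.array([0.0105, 0.0100, 0.0100, 0.0100, 0.0100])

def winner_take_most_budget(scores, temperature=0.0002):
    """Toy softmax allocator: a low temperature turns a small edge into a
    lopsided budget split. An illustration of winner-take-most behavior,
    not Meta's actual algorithm."""
    z = (scores - scores.max()) / temperature
    weights = np.exp(z)
    return weights / weights.sum()

for i, share in enumerate(winner_take_most_budget(est_cvr)):
    print(f"ad set {i}: {share:.1%} of budget")
# ad set 0: 75.3% of budget  <- ~5% better, yet ~12x the spend of each peer
# ad sets 1-4: ~6.2% each
```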

The same is true with creatives. You might have 10 versions of creative images in an ad campaign using Dynamic Creative. Because the system is set on picking winners quickly, some of those versions might only see 500-1,000 impressions before they're cut off from spend or rewarded handsomely with a bigger share of budget.

We all know that it takes 50 conversions to exit the learning phase, yet some of these decisions are being made before a single conversion occurs.
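To see why, plug an assumed conversion rate into a simple binomial model. The 0.2% impression-to-conversion rate below is my own illustrative assumption; the point is that at 500-1,000 impressions there's a real chance the ad hasn't produced a single conversion yet.

```python
# Assumed impression-to-conversion rate; real rates vary widely by vertical.
cvr = 0.002  # 0.2% of impressions convert (illustrative assumption)

def prob_zero_conversions(impressions, rate):
    """P(zero conversions) under a simple binomial model of independent impressions."""
    return (1 - rate) ** impressions

for n in (500, 1000):
    print(f"{n} impressions: expect {n * cvr:.1f} conversions, "
          f"P(zero) = {prob_zero_conversions(n, cvr):.0%}")
# 500 impressions: expect 1.0 conversions, P(zero) = 37%
# 1000 impressions: expect 2.0 conversions, P(zero) = 14%
```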

Now that you know this - how does this change the way you analyze winners?

It would be a huge mistake to go all in on a creative direction or audience type that's only 5% better just because the budget split made it look 1000% better.

Instead, always retest. Always iterate. Always remember that there's a hyper-focused learning engine at play. It will always find a winner. It will always go all-in on any version of an ad that has even a slight advantage.

My advice? Feed that system the widest possible diversity of options, the most distinctive audiences, the widest variety of creatives. In other words, take big swings.

Don't just test red-button vs blue-button.

The more diverse your test, the more likely you are to get a wide performance spread and a clear winner. That aligns Meta's budget allocations with your own learnings and lets you play a more active role in producing top-performing ad campaigns.
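Here's a quick way to see why big swings matter, using the same kind of toy binomial model as above. When the true rates are nearly identical, a fixed test budget frequently crowns the wrong winner; when the options are genuinely different, the best one surfaces almost every time. All the rates below are made-up illustration values, not benchmarks.

```python
import numpy as np

def pick_accuracy(true_cvrs, impressions=1000, trials=5000, seed=0):
    """Toy test: each option gets the same number of impressions, then the
    option with the most observed conversions 'wins'. Returns how often the
    true best option is picked. Illustrative only, not Meta's mechanism."""
    rng = np.random.default_rng(seed)
    true_cvrs = np.asarray(true_cvrs)
    best = true_cvrs.argmax()
    conversions = rng.binomial(impressions, true_cvrs,
                               size=(trials, len(true_cvrs)))
    return float((conversions.argmax(axis=1) == best).mean())

# Red-button vs blue-button: near-identical true rates (illustrative values).
print(pick_accuracy([0.0100, 0.0105]))          # frequently crowns the wrong winner
# Big swings: genuinely different concepts, wide spread (illustrative values).
print(pick_accuracy([0.0050, 0.0100, 0.0200]))  # almost always finds the true best
```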
