Facebook Ads

GEN12a: Testing, Optimizing, and Scaling (Part 1)

Testing, optimizing and scaling FB campaigns – this is THE biggest and most complicated topic to write about.

I always strive to keep things simple in my guides and tutorials, but FB is a complicated beast. There are simply too many variables for me to suggest a one-size-fits-all approach.

Campaign performance can be volatile and uncertain. You can launch the exact same campaign/adset/ad on different days and get different results.

Also, the FB algo is always changing – just because something works today doesn't mean it will work tomorrow. Any testing and optimization approach can become obsolete as soon as it's written.

So what I will aim to do is suggest some POSSIBLE ways to test and optimize FB camps. Your attitude shouldn’t be “hey I followed the set of instructions to the T – why am I not getting results?” but rather “let me try all your suggestions to maximize chances of finding something that will work”. Or, better yet, “let me use your suggestions as a starting point, and innovate to find my own testing and optimization approach based on them”.

I hereby introduce the Cardinal Rule of FB Testing: Do Your Own Testing (DYOT for short).

This rule applies to all paid traffic campaigns in general, but is especially applicable to FB due to its complexity and ever-changing algo.



Overall Testing, Optimization and Scaling Strategy

So we’ve set up a campaign to test 2-3 audiences, 2-3 ad angles per audience, and 1-3 ads per angle.

From here onwards, we have several main tasks to perform in terms of testing, optimization and scaling:

1) Pause non-performing ads/adsets.

2) Test more ads for current audiences.

3) Test more audiences to find more people to target.

4) Scale profitable audience+ad combinations to higher budgets.


We will cover each of these in detail, starting with the first 2 tasks in this post. But first, let's quickly go over how to find and display our data.



How to Display Stats

First of all, when you arrive in the Ads Manager interface (Business Tools > Ads Manager), the “Campaigns” tab will contain the stats of each campaign, the “Ad Sets” tab will contain the stats of each adset, and the “Ads” tab will contain the stats of each ad.

Now, the columns of data you're currently seeing are a "preset" of columns called "Performance", which is the default preset FB will display for us. Facebook has other presets of columns, which you can see when you click on "Columns":

As you can see, the default data preset “Performance” is selected. Feel free to select some of the other data presets to look at what columns are displayed as a result.

If you want to build your own custom data preset, and/or see a list of ALL the different data FB provides, click on “Columns” > “Customize Columns…”:

There are a million and a half kinds of data you can choose from – just scroll down to see all of them. For a description of what each one is, put your cursor on the data name.

So what you can do is select all the data you want to display in the columns, click and drag your selections on the right to put them in the order you’d like them to appear in the reporting, check “Save as preset” and give it a name, then click “Apply”.

To set this new custom preset data as default, click on “Columns” > “Set as Default”:

If you're not sure what data to choose for your custom preset, don't worry about it for now. Just use the existing presets in the "Columns" list until you feel the need to create a custom preset to see all the metrics that matter to you personally.

Furthermore, more data can be accessed by clicking on the "Breakdown" button:

You can explore all the options available – under “By Time”, “By Delivery” and “By Action”, and see how the breakdowns appear in your stats. For example if I go to “By Delivery” > “Age and Gender”:

I would see a breakdown of the campaigns or adsets or ads (depending on which tab you’re on), by age and gender, like this:

To NOT show the breakdown, just go to “Columns” > “Clear Breakdowns”:

Lastly – there are functions in the filters bar you can use to filter the data.

These are pretty self-explanatory so I won't go over them here. I'm sure you can figure them out.



Pausing Non-Performing Ads

Now that you know where and how to find and display all the data, let’s proceed to talk about what to look for when deciding which ads to pause.

First of all I want to put it on record that everyone has different criteria they use for cutting ads. I could go by statistical significance, but I’ve found that to require quite a lot more ad spend than I felt was worth the extra accuracy.

Cut too early and you risk cutting a good ad. Cut too late and you waste more money than you need to. It’s not easy to find a good balance. But for better or worse, below are my suggestions – feel free to use them and then DYOT to find your own groove.

-> DYOT <-

-The first thing I want to point out, which you've undoubtedly already noticed, is that Facebook will often favor ONE ad in an adset by sending most of the traffic to it. To decide which ad to "favor", Facebook uses past data to analyze the ads and predict which ad will likely get the best response, and also takes into consideration the actual engagement each ad is getting from users (i.e. ads that get clicked/liked/commented/shared the most will likely be shown more).

So is facebook’s decision of which ad is the best always correct? The answer is: Sometimes, but not always.

If you want to give each ad the best chance to succeed, instead of putting 3 different ads into an adset, just put one ad in each adset and duplicate it twice so you'd have 3 copies of the same ad in the adset. That way, each ad will have its own test budget (the daily budget you assign to the adset) and will receive enough traffic to give it a fair chance to shine or bomb.

That approach has its drawback though: You'd have 3 times as many adsets, which can require more budget. If you want to test ads in this way, you may want to assign a lower daily budget to each adset (unless you can afford to spend a lot on testing). e.g. Instead of having each adset spend 1x payout a day, you could spend 1/2 or 1/3 of the payout a day, but if you do that, be prepared to wait many days for the data to roll in.

Alternatively, just trust that ads that are NOT favored by facebook, that don’t get enough traffic to test them out, are losers and leave it at that. Yes, some ads may never get enough traffic to be tested properly, but you can always test more ads, and that way you can collect data faster / test more ads on the same budget.

Or, you can use a combined approach: Test 3 different ads in an adset, but when you see that one of the “neglected” ads seems to show promising stats but is “abandoned” by facebook, take it out and create a new adset for it to give it another chance.

I’ll leave it to you which ad testing approach you want to choose.

You may ask “why have 3 copies of the same ad in an adset”? @mation explained it very well in his post here (“Facebook Audiences and Pools” section): (https://stmforum.com/forum/showthrea…LES-(In-Depth)


Try to get some average Facebook ad KPIs (key performance indicators) for the type of ads you're promoting. Ask your affiliate manager, for example, or other affiliates that are promoting something similar. Some useful KPIs include (but are not limited to): Ad CTR, cost per click (CPC), cost per add to cart (for ecom), cost per lead (for lead gen), cost per purchase (for ecom), and ROAS (return on ad spend = total revenue / total cost).
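
In case it helps to see the arithmetic spelled out, below is a minimal sketch of how these KPIs are derived from raw campaign numbers. All the figures are made up purely for illustration.

```python
# Minimal sketch: deriving the KPIs above from raw campaign numbers.
# All figures below are made up for illustration only.

spend = 100.00        # total ad spend ($)
impressions = 20000   # times the ad was shown
link_clicks = 250     # clicks on the ad
add_to_carts = 20     # add-to-cart events (ecom)
purchases = 5         # purchases (ecom) or leads (lead gen)
revenue = 150.00      # total revenue generated ($)

ad_ctr = link_clicks / impressions       # ad CTR
cpc = spend / link_clicks                # cost per click
cost_per_atc = spend / add_to_carts      # cost per add to cart
cost_per_purchase = spend / purchases    # cost per purchase (or per lead)
roas = revenue / spend                   # return on ad spend

print(f"CTR {ad_ctr:.2%} | CPC ${cpc:.2f} | cost/ATC ${cost_per_atc:.2f} | "
      f"cost/purchase ${cost_per_purchase:.2f} | ROAS {roas:.2f}")
```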

If an ad has spent 2x payout without converting, I would cut it. If it has good-looking KPIs (other than cost per purchase / cost per lead, whichever we consider to be our “payout”), I MAY run it for another payout, but I wouldn’t run any ad past 3x payout without a conversion.

Also, having these KPIs can allow us to cut an ad earlier to save money.

Below is an example to illustrate what I’m talking about.

Example: If payout=$20, and AM tells me that on average every 5 clicks to the offer results in a conversion (i.e. 20% CR or conversion rate), then I’d know that each click to the offer can’t cost more than $4. Let’s say that after running the campaign for half a day or a day, I know that my lander CTR is 20% (1 in every 5 people that see the lander clicks to the offer). This means the ad CPC can’t be more than $0.80.

So if an ad has spent 2x payout without converting, but the CPC is quite a lot LESS than the $0.80 required to break even, I would consider letting it run to 3x payout before cutting it (of course if it makes a conversion by then I’d run it further to see). But if the ad has spent 2x payout without converting and the CPC is close to or more than $0.80, I would cut the ad.

In fact, if the ad’s CPC is over $0.80 – say $1+ – I wouldn’t even wait until the 2x payout in ad spend. I may cut it after 0.5x-1x payout.
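
To make the arithmetic above explicit, here's a small sketch that computes the break-even CPC from the payout, offer CR and lander CTR, and then applies the cut rules I just described. The exact thresholds (the 0.9 and 1.25 multipliers) are just one way of encoding "close to" and "well above" break-even, so treat them as placeholders and DYOT.

```python
# Break-even CPC from the example: payout $20, offer CR 20%, lander CTR 20%.
payout = 20.00       # $ earned per conversion
offer_cr = 0.20      # 1 in 5 clicks to the offer converts
lander_ctr = 0.20    # 1 in 5 lander visitors clicks through to the offer

max_cost_per_offer_click = payout * offer_cr            # $4.00
breakeven_cpc = max_cost_per_offer_click * lander_ctr   # $0.80

def cut_decision(spend, conversions, cpc):
    """Rough encoding of the cut rules above - a starting point, not gospel."""
    if conversions > 0:
        return "keep running and keep watching"
    if cpc > breakeven_cpc * 1.25 and spend >= 0.5 * payout:
        return "cut early - CPC well above break-even"
    if spend >= 3 * payout:
        return "cut - no conversion after 3x payout"
    if spend >= 2 * payout and cpc >= breakeven_cpc * 0.9:
        return "cut - 2x payout spent and CPC close to or above break-even"
    return "keep running"

print(f"Break-even CPC: ${breakeven_cpc:.2f}")
print(cut_decision(spend=40.00, conversions=0, cpc=0.45))  # CPC well under break-even -> keep
print(cut_decision(spend=40.00, conversions=0, cpc=0.85))  # CPC near break-even at 2x payout -> cut
print(cut_decision(spend=12.00, conversions=0, cpc=1.10))  # CPC way over break-even -> cut early
```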

One warning here though: Facebook needs some time to find the right audience pools to target, so especially in the beginning for a new ad, the CPC can go down and the CTR can go up. Depending on what kind of daily budget you’ve set up and how big the audience size is, it could take between hours to a couple of days for these metrics to stabilize. So at first try not to cut anything too early – as you gain experience you’ll have a better idea how long to wait before cutting a new ad. (And the bigger your target audience, the more testing FB would need to do before it can find the best pools.)


-So what if you can’t get these KPIs? Don’t fret – just run the ads for a while to collect the data yourself!

To continue with the example above: Run the campaign until you have at least 5 conversions. By looking at your affiliate network dashboard, you'll see how many clicks the offer got and how many conversions were made, so you can calculate the average CR. You'll also know your average lander CTR from your own tracking (20% in the example above). So based on the offer CR and the lander CTR, you can figure out the ad CPC required to break even.
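
Here's the same calculation in the other direction: deriving the offer CR and lander CTR from counts you'd pull off your affiliate network dashboard and tracker, then plugging them into the break-even formula from the earlier sketch. The raw counts below are made up for illustration.

```python
# Deriving the KPIs yourself - raw counts below are made up for illustration.
payout = 20.00       # $ per conversion
lander_visits = 125  # people who saw your lander (from your tracker)
offer_clicks = 25    # clicks the offer received (from your affiliate network dashboard)
conversions = 5      # conversions recorded (same dashboard)

offer_cr = conversions / offer_clicks       # 20% in this example
lander_ctr = offer_clicks / lander_visits   # 20% in this example
breakeven_cpc = payout * offer_cr * lander_ctr

print(f"Offer CR {offer_cr:.0%} | lander CTR {lander_ctr:.0%} | break-even CPC ${breakeven_cpc:.2f}")
```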

In general, after running a campaign for a few days, you’ll have a good idea on what “good” and “bad” numbers look like, which will allow you to cut any new ads you test more efficiently (we’ll talk about testing new ads later).


And CPC isn’t the only metric you can use to decide whether to cut or keep an ad. The ad can have a “ripple effect” throughout the rest of the funnel. For example, a convincing ad can decrease the cost per ATC and a less-convincing ad can increase the cost per ATC. So let’s say that after a while you know that only one in every two ATCs will result in a Purchase. If your “payout” is say $20, then your cost per ATC can’t exceed $10 by too much. Therefore if you have an ad that has spent $20 without a single ATC, then it’s probably time to cut it.
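
The same back-of-the-envelope logic can be applied to any intermediate funnel event. Below is a minimal sketch of the ATC example above; the ATC-to-purchase ratio is something you'd learn from your own data.

```python
# Cost-per-ATC check from the example above.
payout = 20.00              # $ per purchase
atc_to_purchase_rate = 0.5  # roughly 1 in 2 ATCs becomes a purchase (from your own data)

max_cost_per_atc = payout * atc_to_purchase_rate   # ~$10 to break even

spend = 20.00   # what the ad has spent so far
atcs = 0        # add-to-cart events so far

if atcs == 0:
    if spend >= 2 * max_cost_per_atc:
        print("Spent 2x the break-even cost per ATC without a single ATC - probably time to cut")
    else:
        print("No ATCs yet, but spend is still low - keep watching")
else:
    print(f"Cost per ATC so far: ${spend / atcs:.2f} (break-even is about ${max_cost_per_atc:.2f})")
```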

And ads are just ONE part of the entire funnel. The rest of your funnel needs to work well enough so that all the components together will result in profits.

For a lead gen campaign, your funnel could be ad > landing page > affiliate offer. In this case, your landing page to offer CTR would be an important metric, as is the offer conversion rate.

For an ecom store, your funnel could be ad > product page > add to cart (ATC) > purchase (you can track a lot more steps in between such as initiate checkout, enter payment details etc., by implementing FB pixel events). In this case, the CTR of each step will matter. Metrics such as cost per ATC and cost per purchase will be important.

And on top of all that, you need to be targeting the right audience with the right ads – which is why audience testing is so important. We will cover audience testing later in this post.

My point is that if the rest of your funnel is shit, you could be targeting the most relevant audience with the most skillfully-crafted ads, and still lose money. So make sure you either have a proven funnel in place, or are split-testing funnel components to continuously improve them. Funnel optimization is too big a topic to cover here – I may cover it in a future lesson or tutorial.


To cut an ad, I would suggest just pausing it instead of deleting it, because the data you've collected with the ad can be helpful in the future – at least you'd remember what DIDN'T work for that audience.

To pause an ad, simply find the ad in the “Ads” tab and toggle it to off/grey:

-And of course, if the ads in an adset all look hopeless, you can just pause out the adset. To pause an adset, just click on the “Ad Sets” tab, find the adset you want to pause, and toggle it to off/grey (same as for pausing ads described above).



Testing More Ads for Current Audiences

So after a few days of collecting data and cutting ads, you will see how each audience is responding to your ads. There are 3 common scenarios for each audience you've been testing:

1) The audience may be profitable with one or more ads.

2) The audience may be unprofitable, but may still reach profitability with more testing.

3) The audience may look hopeless, with all the ads at a heavy loss.


Below are some suggestions for each scenario.



For Each Profitable Audience

This is the most exciting thing to see. But now is not the time to be complacent!

Next, test more ads for the winning angle, and also test more angles.

To test more ads, just duplicate an adset that’s targeting that audience, add 3 new ads to it and delete copies of old ads from the new adset.

Or – if you like the “1 ad copied 3 times in each adset” approach better, you can do it that way instead.

I would suggest NOT to include a copy of the profitable ad in the new adset. I know I know – I come from a background of wanting to split-test to get more accurate test results too. But the profitable ad has already built a good track record in the eyes of Facebook, and putting it in the same adset as new ads wouldn't be fair to them. Again, DYOT, but I've observed a trend that Facebook tends to favor the older proven ad.

Plus, if you remember the “audience pools” concept mation talked about (mentioned above), then putting ads in the same adset wouldn’t be a fair split-test anyways, as each ad would/could be shown to a different audience pool.

Again – there’s the split-test option provided by facebook, and you can test to see how well it works for you. I just don’t use it because it takes too long to arrive at a statistically significant decision. Moreover, the split-test would just be targeting a specific audience pool anyway – so the knowledge that Ad A performs better than Ad B for a specific audience pool probably isn’t useful enough to be worth the money and time required for a fair split-test.

Reminder: As was brought up before, if you see an ad that looks promising, that facebook has decided to “neglect” by sending most of the traffic to a more “favored” ad, you may decide to test it in a new adset.



For Each Unprofitable-But-Not-Hopeless Audience

What if the audience isn't profitable for any of the ads you've tested so far, but the results aren't hopeless? i.e. There are ads that are close to breaking even. How close is close enough? That question is too subjective. But ask yourself the question, "is this audience too good to give up?" If you answer yes, then test more ads for it! (In the manner we covered above.)

Another important consideration is, how broad is the audience? The broader the audience, the more testing you may need to do to find something that will resonate with enough people to turn a profit, but the more profits you can stand to make once you DO crack the code! As long as you’ve done your research and you’re confident that the audience is likely to be a good match for what you’re promoting, keep testing more ads and more ad angles to see if you could get it to work.



For Each Seemingly Hopeless Audience

What if all the ads just completely bombed for the audience?

The boundary between unprofitable-but-not-hopeless and downright hopeless is not clear-cut. A lot depends on how solid your niche research and audience research is.

If you’ve done extensive research and you’re confident that the audience is a good fit for what you’re promoting, you may decide to test more ads and ad angles to make it work – especially if it’s a broader audience as mentioned above.

However, if the audience is on the small side, and you're not completely confident that it was a good fit in the first place, and you have a more promising audience (or audiences) you can focus on testing and scaling – then just pause the adsets for that audience, at least for now.



What If…

To cover all bases – what if all 2-3 audiences you tested were hopeless? *knocking on wood*

There are a few possibilities here as to why you got bad results:

-The audiences just aren’t a good fit for what you’re promoting. Solution: Test more audiences!

-The ads don’t resonate with the audiences. Solution: Test more ads and angles!

-The rest of your funnel is problematic – your landing page or the offer you’re promoting for example. Solution: Try another lander and/or another offer!

And how would you know which of these were the cause of bad performance? Again, this is where having some KPIs would really help.

If ads have high CPC / low CTR, then the problem is probably either the ads or the audience or both.

If lander-to-offer CTR is low, then the problem is the lander.

If the clicks-to-conversion rate for the offer is low, then the problem is the offer.

If you don't have any KPIs from anyone, then I would suggest testing each variable (ads, audiences, landers, offers) to get a wider range of data, which would give you a better idea of where the issue could lie.
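
If you like checklists, the diagnostic logic above can be summed up roughly as follows. The "expected" values are placeholders; plug in the KPIs you've been given or have collected yourself.

```python
# Rough diagnostic checklist based on the logic above.
# The "expected" thresholds are placeholders - use the KPIs you've collected or been given.

def diagnose(ad_cpc, breakeven_cpc, lander_to_offer_ctr, offer_cr,
             expected_lander_ctr=0.20, expected_offer_cr=0.20):
    problems = []
    if ad_cpc > breakeven_cpc:
        problems.append("ads and/or audience (CPC too high / CTR too low)")
    if lander_to_offer_ctr < expected_lander_ctr:
        problems.append("landing page (lander-to-offer CTR too low)")
    if offer_cr < expected_offer_cr:
        problems.append("offer (clicks-to-conversion rate too low)")
    return problems or ["no obvious culprit - re-check your research and keep testing"]

# Example: CPC is fine, lander is fine, but the offer barely converts.
print(diagnose(ad_cpc=0.60, breakeven_cpc=0.80,
               lander_to_offer_ctr=0.25, offer_cr=0.05))
```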

Also, if for any reason you skimped on the research process, that is probably why everything fell apart. Redo your research to identify your ideal audience, good ad angles etc. and try again.

It is common for a campaign to NOT work out in the beginning, especially if this is your first campaign, or your first campaign in a particular niche. Don’t get discouraged! Keep testing and you’ll find out more and more about the niche, suitable audiences, angles that work etc. – and this is how you “break into” and dominate a niche.



***************

In the next post I’ll cover custom and LAL audiences, as well as scaling. Please stay tuned!




Amy