Case Study: Facebook Advertising Drives $150k Kickstarter Launch For Apotheosis’ The Red Opera

CASE STUDY – Apotheosis Studios is a game development company based out of Colorado. A new project came across their table for a Dungeons & Dragons-style expansion. However, this was no ordinary project. The game was to accompany a heavy metal band’s latest album, with each song tied to a unique part of the campaign. Yes, a D&D campaign that comes with its own theme music, awesome!

While Apotheosis had success in the past with their launches and advertising, they wanted to bring in some heavy-duty professionals because this was such a novel idea. And that’s where Be Pro came in. This opportunity bubbled up through our friend Walter Greggs who runs a heavy metal internet radio show on ThatMetalStation. From pro to pro 🙂

By the time we started our conversations, this new game, The Red Opera, was already deep into development, to the point where they were only a month or so out from running a Kickstarter to fund the full fabrication of the game. Up to then, they had collected a decent number of emails from their website and were using Facebook lead ads so that people could opt in directly on Facebook.

So, right away, they were doing something right by building a list to hype up the “launch”, which was another reason we felt this would be a great case study opportunity. And it was, as you’ll read below.

Apotheosis wanted our help with advertising specifically. When we joined the project, they had already run a couple of generations of ads. Most importantly, they had gotten to the point where they had a tested and proven control ad that could predictably (and at an acceptable cost) generate a lead for the game launch. This meant they were ready to start scaling their ad spend.

In this case study, we are going to discuss two different phases of the launch. First, we will look at the evolution from the start of their lead generation ads, right up until the Kickstarter campaign where they had a solid control driving leads consistently at 90 cents each.

The second phase looks at the triple boost approach that, while not the only factor, helped them blow their $10,000 milestone out of the water and hit $150,000 in Kickstarter backing. Skip ahead to the second Facebook Lead Ads case study.

Case Study #1 – Developing A Proven Control Ad

From the start of July 2020 through August 31st, using Facebook Lead ads alone, they generated 3,063 leads at an average cost of $1.14 each. The crazy part: 1,697 of those leads, over half, came from their single winning control ad.

How did they work to develop an ad on which they could confidently spend over $1,500?

Testing and iterations. With plenty of time to collect data (aka spend money).

They did 7 full testing cycles trying to beat the control ad. Click image for full view.

Before they began to run Facebook ads, they had been building their email list for some time on other channels, including their website, which helped their ads start with extra momentum via lookalikes. More on that later.

Their very first ad test began on July 2nd, two full months out from the Kickstarter launch. Up to this point, they had been posting about the game organically on their Page and talking about it elsewhere, so this was not a completely cold start.

Between July 2nd and the 13th, they spent $50 driving clicks to their website, which resulted in 404 clicks at 12 cents each. There was no closed-loop reporting on this test, so while we know it helped their email list grow, we do not know by exactly how much. We can confidently say they spent $50 on ads and their list grew.

The audience for this first test was chosen with a data-driven approach: people interested in Dungeons & Dragons who ALSO had to be interested in Kickstarter. A well-informed audience to test against, given they knew they were launching on Kickstarter in September and that their offer was a D&D campaign expansion.

If we are looking for people to back a project in a month or so (the “fish”), we should be able to find those people within that audience selection (the “pond” we are fishing in).

To continue the fishing analogy, the “bait” used (aka the ad) was a visually dynamic image with a Benefit-Free approach. They also used audience-specific slang (“5E”) that their targets would recognize, to further build trust.

For those unfamiliar with Dungeons & Dragons, “5E” means the fifth edition of the game rules which is the modern D&D standard. Meaning, the target audience who plays D&D will be able to use this new Apotheosis game in their existing campaigns.

Oddly enough, this ad didn’t get a single comment but plenty of other engagement.

At the same time, on July 3rd they turned on their first proper Lead Ad which ran until the 9th. If you are unfamiliar, Facebook provides a special kind of ad that is an opt-in form. Except this form lives on Facebook directly, no website needed! When you keep people on Facebook and still get the opt-in, there is less resistance than driving someone to your website to submit a form. In other words, you can get better opt-in rates and costs AND you can do it all within the Facebook platform.

This test used a lookalike audience built from their email list thus far, about 1,600 opt-ins. Plenty of users for Facebook to find similar folks. There was no targeting other than being in the United States. They started running at $5/day and then quickly jumped to $20/day on the 5th based on the initial positive results.
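
For anyone who wants to reproduce this step, here is a minimal sketch of creating a lookalike audience with the Facebook Marketing API’s Python SDK (facebook_business). The token, account ID, and source audience ID are placeholders, and the exact spec (similarity-based, US only) is our assumption based on the targeting described above; this is not Apotheosis’ actual setup.

```python
# Hypothetical sketch: build a US lookalike from an existing custom
# audience (e.g., the uploaded email list). All IDs are placeholders.
import json

from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount

FacebookAdsApi.init(access_token="YOUR_ACCESS_TOKEN")

account = AdAccount("act_<AD_ACCOUNT_ID>")
lookalike = account.create_custom_audience(params={
    "name": "LAL - Email Opt-ins (US)",
    "subtype": "LOOKALIKE",
    "origin_audience_id": "<EMAIL_LIST_AUDIENCE_ID>",
    "lookalike_spec": json.dumps({
        "type": "similarity",  # the closest slice of the country
        "country": "US",       # matches the US-only targeting used here
    }),
})
print(lookalike["id"])  # use this ID in the ad set's targeting
```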

This first test drove 209 leads for about 95 dollars. Not bad for a week’s worth of ads!

As far as the creative, there was only one ad, and it used the same language as the traffic test running in parallel with this campaign.

Spoiler! This ended up being the control ad that drove 1,600+ leads for 90 cents each 👀

This specific campaign ended up holding the control ad until the launch. While the campaign was shut off during some of the future tests, this particular campaign “learned” how to spot a Lead well since it was given a clean source audience (lookalike). This means that throughout the rest of the tests, we were unable to beat the cost per lead of this one campaign/ad.

So, as you proceed, remember this campaign is trolling in the background for most of the time leading up to September. To make it easy, we will call this the “control campaign” when referring to this specific one later in the case study.

Looking at the timeline, this campaign was shut off and turned back on twice for other tests. All except 10 leads came from the control ad in this campaign.

While the traffic and Lead Ad campaigns were running and stable, they continued with two formal Tests on Facebook, spending about $20 a day. Both tested a cost cap for leads, and they ran in parallel.

Let’s quickly explain cost cap bidding. With this type of bidding, you set the maximum cost you are willing to pay for the campaign’s result. Here, it was the maximum they would pay for a Lead Ad submission (the campaign objective).

They wanted to test how effectively they could generate leads at a fixed cost. This meant they ran the same ad to the same audience but bid in the auction at two different levels. Specifically, they tested a 50 cent and $1 cost cap ad set to a lookalike list of the emails they had already collected for the launch.
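
In Marketing API terms, a test like this is just two ad sets that differ only in their cap. Here is a hedged sketch with the Python SDK; IDs are placeholders, money values are in cents, and some required setup (scheduling, the lead form itself) is omitted for brevity. This is an illustration of the technique, not their actual configuration.

```python
# Hypothetical sketch: two ad sets against the same lookalike,
# identical except for the cost cap (50 cents vs $1).
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount

FacebookAdsApi.init(access_token="YOUR_ACCESS_TOKEN")
account = AdAccount("act_<AD_ACCOUNT_ID>")

for cap_cents, label in [(50, "50c cap"), (100, "$1 cap")]:
    account.create_ad_set(params={
        "name": f"Leads - LAL - {label}",
        "campaign_id": "<CAMPAIGN_ID>",
        "optimization_goal": "LEAD_GENERATION",
        "billing_event": "IMPRESSIONS",
        "bid_strategy": "COST_CAP",
        "bid_amount": cap_cents,  # the cap: max average cost per lead
        "daily_budget": 1000,     # $10/day per ad set, as in the test
        "promoted_object": {"page_id": "<PAGE_ID>"},  # lead forms live on a Page
        "targeting": {
            "geo_locations": {"countries": ["US"]},
            "custom_audiences": [{"id": "<LOOKALIKE_AUDIENCE_ID>"}],
        },
        "status": "PAUSED",  # review in Ads Manager before launching
    })
```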

The first test, July 9th, 2020 through the 16th, showed that bidding no more than 50 cents was too cheap to reach enough people. Even with a 50 cent cost cap, the two leads it generated cost 86 cents each, and the ad set spent nowhere near its $10/day budget; the bid was too low to win enough auctions and spend money. The other ad set, with a $1 cost cap, drove 15 leads at $1.22, still not spending the full $10/day budget available.

The second test, July 10th, 2020 through the 17th, also tested the cost cap using the same budgets with another, similarly named lookalike list. These results corroborated that 50 cents was too cheap and that leads could be brought in for roughly a dollar each: 15 leads at 98 cents apiece with this specific lookalike.

Their formal Test ads used high-quality visuals in a carousel format pushing the free appeal. Both tests used this same creative.

Right away, from their very first iterations, they seeded the data with their best possible list of users. This allowed for a clean test because they knew the source for their lookalike was exactly the kind of people they wanted to attract.

A takeaway at this point: the sooner you can start collecting a list of emails/phone numbers/etc. for a launch, the better, because it allows you to run solid tests like this when you are ready to start advertising.

This was a smart early Test because they tested cost control using two different lists of target customers. Very clean. Even if there was overlap between the lists, the lookalikes would find different pockets of Facebook users. The fact that both tests generated similar results is important too. Knowing that leads could be generated predictably for about a dollar (with proper targeting) is important for marketing leadership so they can watch overall budgets and adjust for the key launch results.
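
To make that concrete, here is the kind of back-of-napkin math a predictable cost per lead unlocks. The ~$1 CPL is the figure from the cost cap tests above; the lead goal and runway are illustrative assumptions, not Apotheosis’ actual plan.

```python
# Back-of-napkin launch budgeting once cost per lead is predictable.
cost_per_lead = 1.00      # dollars, established by the tests above
lead_goal = 3000          # hypothetical size of the launch list
days_until_launch = 60    # two months of runway before the Kickstarter

budget_needed = cost_per_lead * lead_goal
daily_spend = budget_needed / days_until_launch
print(f"Total budget: ${budget_needed:,.0f} (~${daily_spend:.0f}/day)")
# -> Total budget: $3,000 (~$50/day)
```

For what it’s worth, the real totals landed in the same ballpark: $3,489.16 spent for 3,063 leads by the end of the launch push.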

:::inserts coin:::

:::presses start:::

Here Comes A New Challenger!!!!

Although this is more of a teamwork-based game of business than fighting.

At this point, Be Pro officially entered the conversation with Sarah, Apotheosis’s ad wizard, and Jamison, the founder. With a few tests under their belts and successful ones at that, we were able to guide them through scaling up the ad spend without tanking the costs.

Shortly after, they kicked the next test into gear, running from July 14th to the 24th. This time, they weren’t testing bids or costs; it was a creative test. Targeting the lookalike that produced the cheaper results in the initial tests (including the lookalike’s seed audience), they spent $20 a day for 10 days.

The main creative difference being tested was the previously winning “blue” image against the “control” red image. The text was the same except the words ‘get your’ were removed from the beginning. We have seen ‘your’ right at the front of an ad’s text negatively impact CPM, so this was a safe iteration to make from the initial control. The headline also changed from ‘coming soon’ to announcing the actual Kickstarter date. Again, this was primarily a creative test against a proven audience.

To ensure they were not bidding against themselves (because they were targeting the same audience in two different campaigns), they turned off the “control campaign” for this time period. A few days later, they turned the “control campaign” back on at $5/day without negatively impacting the test results. CPMs for both campaigns held within an acceptable range.

The “blue” ad used in the third test.

The “control” red visual was used for the ad in the third test.

After 10 days, the blue version had driven 126 leads at 99 cents each. The red had only driven 79 leads at 94 cents each: a little cheaper per lead, but not nearly the volume. Perhaps, since this audience had already seen the blue version earlier in the month (from the initial tests), the repetition helped the blue version win.

Engagement-wise, while the two performed similarly on leads, the blue version took the cake for every engagement type.

Except for shares. Something about the red version made it more likely to be shared. Maybe the badass sword helps with shareability compared to the blue version? In any case, while the lead volume was not better, the ad was still good enough to keep testing. If anything, perhaps they could have boosted it to cold audiences to gain more awareness.

Speaking of awareness…remember the original lead generation “control campaign” with the red ad? Since the blue version had won this test, they decided to add it to the “control campaign” and see if it would hold up.

This blue version was added to the “control campaign” on July 20th but was shut off on the 23rd. 

This contender ad did not reach a lot of people in comparison to the red control that had been running by itself since the 3rd. While it did drive 10 leads at 64 cents, technically beating the red control, the campaign/ad set was giving over 70x the impressions to the red version. The blue version wasn’t getting enough exposure, so they paused it.

This meant the red control ad, which ran at $20/day, was consistently pulling leads AND had survived its first head-to-head competition. So they bumped the “control campaign” up to $40/day while more tests were conducted starting on July 25th.

The next test ran at $50/day, since results were comfortable after the earlier budget increase to $40/day. Doubling a budget at this small scale is still safe; you can’t always do that and keep costs reasonable.

Since more people were going to see the ad due to the bigger budget, and it is a well-known reality of advertising that most people will NOT take the desired action, we recommended a cleanup mechanism. Because they were using Lead Ads, we recommended remarketing to people who opened the lead form but didn’t submit it. It’s a little-known, low-hanging-fruit audience we recommend with bigger Lead Ad campaigns. It works just like traditional website remarketing, where people hit a landing page, don’t opt in, and then see follow-up ads. Except all within the Facebook platform 😉
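
If the mechanics are fuzzy, the audience logic boils down to simple set subtraction. In practice Facebook assembles this engagement audience for you; the user IDs below are made up purely to illustrate the idea.

```python
# The remarketing pool: opened the lead form, but never submitted it.
# IDs are purely illustrative; Facebook builds this audience natively.
opened_form = {"user_a", "user_b", "user_c", "user_d"}
submitted_form = {"user_b", "user_d"}

remarketing_pool = opened_form - submitted_form
print(remarketing_pool)  # {'user_a', 'user_c'} -> abandoners to follow up with
```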

From July 29th through August 7th, the main campaign generated 291 leads at $1.79 each. The remarketing campaign cleaned up 18 extra leads at $3.33 each. The higher costs for the remarketed leads were not much of a bother, mainly because that campaign was only trolling along at $1 a day in the background.

For this test, we recommended they branch out to audiences beyond a lookalike, especially since the “control campaign” was already bidding on that audience and winning leads. If anything, it would show what a lead costs from a colder audience as they continued to scale. In addition to lookalikes, this test targeted Page Likers and their friends, which seemed like obvious targeting extensions.

The Page Likers only generated 2 Leads at $6.18 each. The higher costs were to be expected due to the smaller audience size compared to the lookalikes and friends. However, the lead conversion rate was weak given that nearly 400 people were reached. We usually see Page Likers as a source of cheap traffic and leads for active Pages like Apotheosis, so this was a little surprising. Perhaps most of these people had already been collected as leads in previous tests?

The friends of Page Likers drove 14 leads, but at an unsustainable $11.27 each. Again, the conversion rate was the challenge here, as 16,700 people were reached. You would expect way more leads with that many people reached. However, the data shows that, in this case, the first-degree circle of Page Likers’ friends wasn’t super interested. In the future, they could add an interest in Dungeons & Dragons and then enable targeting expansion. Hopefully, this would point Facebook to the RIGHT friends of Page Likers.

Fortunately, this test was using Campaign Budget Optimization, which meant most of the $50/day still went to the proven lookalike audience. This was because lookalikes were by far the largest of the three audiences, ahead of friends and Page Likers themselves; more chances to win the ad auction. The lookalike audience generated 275 leads at $1.27 each, about 25 cents more than previous tests with more than double the daily budget. Not too shabby!
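
For reference, what makes a campaign “CBO” is simply that the budget lives at the campaign level instead of on each ad set. Here is a hedged sketch with the Marketing API Python SDK; everything is a placeholder, and the objective name reflects the API as of this campaign’s 2020 timeframe.

```python
# Hypothetical sketch: a Campaign Budget Optimization campaign. The
# daily budget sits on the campaign, and Facebook shifts spend between
# the ad sets underneath it (lookalikes, Page Likers, friends).
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount

FacebookAdsApi.init(access_token="YOUR_ACCESS_TOKEN")
account = AdAccount("act_<AD_ACCOUNT_ID>")

campaign = account.create_campaign(params={
    "name": "Leads - LAL + Page Likers + Friends (CBO)",
    "objective": "LEAD_GENERATION",
    "daily_budget": 5000,  # $50/day in cents, shared across all ad sets
    "bid_strategy": "LOWEST_COST_WITHOUT_CAP",
    "special_ad_categories": [],
    "status": "PAUSED",
})
print(campaign["id"])
```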

As far as the ads themselves, we helped them re-use the proven red and blue visuals alongside some new ones, creating five new ads to test.

The Benefit angle won by a long shot, driving 200 leads at $1.59 each, with all but five from lookalikes. No Page Likers responded to this one.

The New angle drove 47 leads at $1.76 each, with all but five from lookalikes. No Page Likers responded to this one either.

The Social Proof angle used the tested “red” visual and drove 35 leads at $3 each. This was the only ad that Page Likers responded to for some reason.

The Fear of Missing Out (FOMO) angle pulled 8 leads from lookalikes only at $1.64 each. Perhaps the lookalikes were growing tired of this particular visual, as this was the fourth time using it with them.

The Commitment/Consistency angle drove only one lead from lookalikes, at $1.79. However, it also only reached 300 people (compared to the tens of thousands of impressions the other ads got), so it might not be a complete failure. Notice how this is the only ad to include the word ‘you’, which could explain the limited reach. This was also the first ad in the test to use a video, which achieved 16 ThruPlays at 11 cents each with 4 video completions.

Engagement-wise, the Benefit, New, and Social Proof angles drove most of the engagement. The most shareable ad was the Benefit angle with 14 of the 25 shares for the campaign. Perhaps the ‘feedback from fans’ piece in the text helped fuel all those shares?

As far as the lead ad remarketing, the whole campaign only reached 201 people, having started on July 16th (about 12 days before the previous ads went live). But the frequency was 27.5, which means those lead ad abandoners saw a follow-up ad nearly 28 times on average. The CPM was still reasonable at about $11, which is how 5,500 impressions got served up to those 200 people. Engagement was poor overall, which makes sense because people were going blind to these ads.
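
As a sanity check, frequency is just impressions divided by reach, and the reported numbers line up:

```python
# Frequency = impressions / reach; the reported ~27.5 checks out.
impressions = 5_500
reach = 201

frequency = impressions / reach
print(f"{frequency:.1f} impressions per person")  # ~27.4
```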

But! The campaign only spent $1 a day and still generated leads, wow!

The Commitment/Consistency angle performed best in the remarketing pulling 9 leads at $2.28 each. This visual was pulled from the sign up landing page on their website.

The Social Proof angle drove 3 leads at $3.36, nearly the same cost as the Social Proof ad in the main campaign.

The FOMO angle with the tested blue visual drove 3 leads at $4.24 each.

The Benefit angle with the tested red visual drove 3 leads at $4.16 each.

We did a second variant of Commitment/Consistency text using the red visual modified with a green Kickstarter launch blurb. This drove no leads while the other version drove 50% of the leads for this whole remarketing campaign. Wild! Goes to show that only the data can tell you how an ad will perform.

The New angle drove no leads. Probably because it wasn’t exactly new to this audience, and the headline is missing the cheeky tone we were trying for, in hindsight.

Let’s come up for air for a second.

By the start of August, they have collected about 500 leads from FB ads alone and are 30 days out from the start of the Kickstarter. They also have a “control campaign” running at $40/day with one control ad that will continue to collect leads. The goal will be to drive as many of these leads to Kickstarter in September as possible so they can hopefully fund the game’s production. By this point, the Kickstarter goal is still $10k. But we know it ends differently 🙂

The next test focused specifically on the Kickstarter launch video and how well it could pull leads by itself to hype up September 1st. Well, a modified version of the launch video, since the call to action of backing the project wasn’t relevant yet. The campaign launched shortly before the previous one ended, running August 3rd through the 14th at $40 daily. This meant that for about 4 days Apotheosis was running at $140/day, since the previous test was still live until the 7th and the “control campaign” was running too. This doubled as a secondary side-experiment in scaling the overall daily budget, and nothing blew up.

This test ran to a lookalike audience built from only the Facebook Lead Ad submitters as the source. A super clean list to build a lookalike from, and one we could feel confident testing a video against.

The video drove 148 leads at $2.60 each, one of the highest lead costs of any test so far. However, this lookalike audience also overlapped with the other lookalike audiences being tested, so we can expect this newer test lost more auctions to the older campaigns, driving up the costs. The CPM was a respectable $7.04 and the ads reached 31,399 people, so even if the costs were driven up, it at least wasn’t catastrophic.

Still, nearly 150 leads is nothing to ignore for about 10 days’ worth of ads. But it wasn’t enough to beat the existing “control campaign” running alongside this test.

Engagement-wise, this test also got a healthy amount of reactions, comments and shares. Also, because the video was only 15 seconds, the watch engagement was very high with over 1,200 completions. 

We are using the same control language as before, to keep it a cleaner test for the video itself. To watch the video, view the original post.

By this point, we are mid-August with half the month left for a big list growth push.

Between August 14th and 24th, they ran $40/day with 4 image ads plus the same video ad from the previous test. This time, they used an updated list of leads to refresh the lookalikes. This was a creative test to keep things fresh, as they’d been using the same couple of creatives this entire time.

This previously tested image (both in ads and website) won by driving 284 leads at $1.08! While not the absolute cheapest, it won by sheer volume.

This red gentleman drove 48 leads at $1.17 each.

The green night elf got the best cost per lead at $1.03 each, but only drove 36 leads.

Nobody cared about this guy who drove 0 leads out of the 345 people reached. Sorry pretty boy…

Still, this whole time the “control campaign” was running in the background pulling sub-dollar lead costs. Nothing had managed to beat it, and it was now running at $40/day and scooping up 40-50 leads a day!

The last week before the Kickstarter launch, they ran the final test. Since this was the last seven days, they did a countdown campaign. This kind of advanced campaign strategy is highly effective if you can swing it!

They used a single campaign with a $350 lifetime budget and seven ad sets (all with the same audience), each scheduled for one single day. The audience was the same lookalike we had been using this whole time. The creative was unique per day, and the ad language was the same control we had been using. If it ain’t broke, no sense in trying to fix it the last week before the Kickstarter launch!
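
Structurally, the countdown looks like this. A sketch in plain Python; the exact dates are our assumption based on the seven days leading into the September 1st launch, and the creative filenames are made up.

```python
# Sketch of the countdown structure: seven one-day ad set windows under
# one lifetime-budget campaign. Dates assumed from the Sept 1 launch.
from datetime import datetime, timedelta

campaign_lifetime_budget = 350.00  # dollars, as described above
first_day = datetime(2020, 8, 25)  # assumed first countdown day

for day in range(7):
    window_start = first_day + timedelta(days=day)
    window_end = window_start + timedelta(days=1)
    ad_set = {
        "name": f"Countdown: {7 - day} day(s) left",
        "start_time": window_start.isoformat(),
        "end_time": window_end.isoformat(),
        "audience": "<PROVEN_LOOKALIKE_ID>",         # same audience all week
        "creative": f"countdown_day_{day + 1}.jpg",  # unique image per day
    }
    print(ad_set["name"], "|", ad_set["start_time"], "->", ad_set["end_time"])
```

Spreading $350 over seven one-day windows is also how each day ends up spending close to $50.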

The first and last days finally beat the “control campaign” ad, with costs per lead of 66 and 89 cents respectively. These would be the ones to start with if another campaign were ever run for this game.

Each day spent close to $50 with the best lead costs happening on the first and last days. This high-performing-bookend behaviour is also commonly seen in online launches when the cart opens and closes.

This first countdown post had the best cost per lead driving 46 leads at 66 cents each. Perhaps because this was the first instance of genuine urgency and fear of missing out from people who had been seeing these ads the past month or so.

This red gentleman pulled his weight again with 51 leads at $1.16 each, the 2nd most leads of the countdown week posts.

Pulling 48 leads at $1.21, this midweek post was average performing, as were most blue images used in these tests. Perhaps the color psychology is off with blue and what we want them to do (opt-in).

Driving 35 leads at $1.34, this ad was also the middle of the pack performance-wise.

At the highest cost per lead, $1.98, this one drove the 2nd fewest leads: 28 total. The campaign audience was probably getting tired of seeing these.

This drove the fewest leads, only 26, at the 2nd highest cost of $1.74 each. Again, ad fatigue was definitely setting in. Especially since this was the same “control campaign” image that had been running the past 2 months except with a ‘2 Days Left’ overlay.

Boom! The high performing blue elf pulled the most leads again. 57 total at 89 cents a pop. Hot dang! Using this proven image on the last day was a smart move because we KNOW folks will opt in for the sheer real-life urgency of tomorrow.

One thing we forgot to mention: this entire time, we were excluding Lead Ad responders and email lists! No sense in showing ads to people who already opted in. That’s how you can Be Pro with your advertising 😉
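
In targeting-spec terms, that exclusion is one extra key. The audience IDs below are placeholders; this is just the shape of the spec, not their actual setup.

```python
# A targeting spec with the exclusion logic described above: prospect
# to the lookalike, but never to people who already opted in.
targeting = {
    "geo_locations": {"countries": ["US"]},
    "custom_audiences": [
        {"id": "<LOOKALIKE_AUDIENCE_ID>"},           # who we want to reach
    ],
    "excluded_custom_audiences": [
        {"id": "<LEAD_AD_SUBMITTERS_AUDIENCE_ID>"},  # already opted in on FB
        {"id": "<EMAIL_LIST_AUDIENCE_ID>"},          # already on the email list
    ],
}
```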

So there you have it. That is how Apotheosis put $3,489.16 into the Facebook ad vending machine and got 3,063 leads out.

If you recall from the very beginning, there is a second part to this case study. This first part is already well over four thousand words, so first off, pat yourself on the back for making it to the end. The second part is live for you to read!

Is writing tough? Not sure what to say? Read our Dynamic Content post which includes 13 bonus writing ideas!
