🛣️ Building Your First Model (Demo)
This page provides step-by-step instructions for building your first model in Recast using demo data. When we set up your account in the Recast app, we will give you access to a demo client that you can experiment with freely. Everything you need to get a demo model deployed in the app, both data and instructions, is provided here.
We believe working through this tutorial will make it much easier when it’s time to work on your own client data.
Getting Started
The first step in building a model is to gather all relevant marketing data. For demo purposes, we provide this dataset: fictional data designed to realistically simulate a simple model.
The next step is to get it into the specific format Recast requires (that format is outlined here). The CSV above is almost in the correct format; we just need to make slight modifications to the date column and remove an unnecessary column, as demonstrated in the video below. We highly recommend creating a repeatable process for cleaning your data to Recast’s specification so that each model refresh is easier; a sketch of a few such checks follows the script below.
If you want to run the code yourself, the script is below:
R Code
library(dplyr)
library(lubridate) # ymd() comes from lubridate

# Gather data from wherever it lives
raw_data <- readr::read_csv("demo_data.csv")

# Format data into Recast's clean_data specification (see specs)
clean_data <- raw_data %>%
  mutate(date = ymd(day), .before = everything(), .keep = "unused") %>%
  select(-total_sales)

# You will likely need a lot more steps than this,
# but this file now looks like it meets our requirements
head(clean_data)

# Let's write it to a csv so we can use it in the app later
readr::write_csv(clean_data, "demo_clean_data.csv")

# Now let's calculate the historical blended CPA for acquisitions
clean_data %>%
  rowwise() %>%
  mutate(total_spend = sum(c_across(facebook_prospecting:influencers_shifted))) %>%
  ungroup() %>%
  summarize(blended_cpa = sum(total_spend) / sum(acquisition))
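As an aside, the rowwise() pattern above can be slow on long datasets. A vectorized equivalent (a sketch using the same column names) replaces the per-row sum with rowSums() over across():

# Vectorized equivalent of the blended CPA calculation above
clean_data %>%
  mutate(total_spend = rowSums(across(facebook_prospecting:influencers_shifted))) %>%
  summarize(blended_cpa = sum(total_spend) / sum(acquisition))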
If you’d just like to skip ahead, you can download the clean_data CSV:
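To make the cleaning process repeatable, it can help to codify a few basic checks on the output. The sketch below is illustrative only: check_clean_data is a hypothetical helper, not part of Recast, it assumes every non-date column is numeric, and the linked spec remains the authoritative source for the requirements.

# Hypothetical helper with a few basic checks against the clean_data spec.
# Illustrative only -- see the linked formatting requirements for the
# authoritative rules.
check_clean_data <- function(df) {
  stopifnot(
    "first column must be named 'date'" = names(df)[1] == "date",
    "date column must be a Date" = inherits(df$date, "Date"),
    "dates must be daily with no gaps" = all(diff(sort(df$date)) == 1),
    "no missing values allowed" = !anyNA(df),
    "all non-date columns must be non-negative" = all(df[, -1] >= 0)
  )
  invisible(df)
}

check_clean_data(clean_data)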
Calculating Priors
The configuration options for setting priors are laid out in this Google Sheet. To begin, make a copy of the Google Sheet, rename it to “Demo Model Configuration”, and make sure to share it with [email protected].
The formatting requirements for the sheet are found here, while a more philosophical explanation of how to choose values for the sheet is found here.
Napkin Math
To start filling out the priors sheet, we will walk through an exercise we call the “Napkin Math,” which gives a good starting point for thinking about priors on the relevant performance metrics. Below are links to blank copies of our napkin math worksheets that you can use to follow along:
Note: this video uses $135 as the base CPA instead of $39.40 due to a mistake in the earlier calculation. The sheet with the proper base CPA can be found here.
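The worksheets themselves are authoritative, but the core arithmetic is simple enough to sketch. The example below is illustrative rather than the exact worksheet formulas; intercept_share is a hypothetical assumption about the fraction of acquisitions that would happen with zero marketing spend.

# Illustrative napkin math (not the exact worksheet formulas)
base_cpa <- 39.40        # the corrected base CPA from the note above
intercept_share <- 0.30  # hypothetical: 30% of acquisitions are organic

# If only (1 - intercept_share) of acquisitions are marketing-driven,
# the implied CPA on those acquisitions is higher than the blended figure
marketing_cpa <- base_cpa / (1 - intercept_share)
marketing_cpa  # ~56.29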
Filling out the priors sheet
Now that we have the napkin math, we’re ready to fill out our priors sheet.
The filled out sheet can be found here.
Note how we pull the CPA ranges and intercept percentages from the Napkin Math document.
The logic behind selecting the other values can be found in the Configuration Options page.
Launching your first run
In this video, we will walk through logging into the app for the first time and launching a run using the clean data and priors provided above:
(Note: "Launch Runs" has been renamed to "Manage Runs")
Some notes about model run time
The two primary determinants of model run time are the amount of historical data (we generally recommend 27 months) and the number of channels. A simple model like this demo should run in an hour or two, while a complex run with many channels and lots of data may take 6-8 hours. We generally recommend limiting a model to 30 channels to ensure model quality as well as reasonable run times.
Typically a model will show as "Launching" while it validates your inputs and calculates priors, which can take 5-10 minutes. Once the Stan code starts estimating the Bayesian model, the status switches to "Running", and the chance of failure after that point is significantly lower.
Examining the Results
In this video, we look at a run that has succeeded and show how to interact with the results.
Model Validation Steps
In the final video, we walk through launching each of the validation steps and show how to look at the results of those runs.
In the validation process we'll usually launch one each of the parameter recovery runs, holdout runs, and stability loops. We'll then assess each output for problems, make adjustments, and iterate as necessary. Further information about each dashboard output can be found on the Interpreting the Outputs page.
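As one concrete example, if you export the holdout-period actuals and model predictions, a quick accuracy check might look like the sketch below. The holdout data frame and its actual/predicted columns are hypothetical names for illustration, not a Recast export format.

# Hypothetical holdout accuracy check on exported results
library(dplyr)

holdout %>%
  summarize(
    mape = mean(abs(actual - predicted) / actual),  # mean absolute % error
    bias = sum(predicted - actual) / sum(actual)    # aggregate over/under-prediction
  )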