🪴 Improve your model

There are several ways to improve Recast’s model, both in the precision of its parameter estimates (e.g., a tighter range of ROI estimates for a given channel) and in its forecast accuracy.

They fall into the following categories:

  • Priors, or “starting points” for the model, informed by external information
  • Lift tests, i.e., experiments run outside of Recast
  • Spend experiments (e.g., “go-dark” periods)

Priors

During onboarding, the Recast team will work with you to set priors for your model. Priors are an important part of Bayesian models; they give the model an initial range of plausible values, which is refined as the model incorporates information from the data.
Setting priors too wide or too sharp can hurt your model’s performance. A range that starts too wide may remain too wide even after the model is estimated, while a prior that is too sharp encodes a strong starting opinion that can conflict with what the data say, leading to issues fitting a performant model.
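To make this concrete, here is a toy conjugate-normal update in Python (a minimal sketch with made-up numbers; Recast’s actual model is far richer than this). It shows both failure modes: a too-wide prior leaves the posterior no narrower than weak data allow, and a too-sharp prior overwhelms the data entirely.

```python
def normal_update(prior_mean, prior_sd, data_mean, data_sd):
    """Conjugate normal-normal update: the posterior is a
    precision-weighted average of the prior and the data."""
    w_prior, w_data = 1 / prior_sd**2, 1 / data_sd**2
    post_sd = (w_prior + w_data) ** -0.5
    post_mean = post_sd**2 * (w_prior * prior_mean + w_data * data_mean)
    return post_mean, post_sd

# Suppose the data alone can only locate a channel's ROI at 2.0 +/- 1.0.
print(normal_update(2.0, 1.0, 2.0, 1.0))   # sensible prior  -> (2.0, 0.71)
print(normal_update(2.0, 10.0, 2.0, 1.0))  # too wide        -> (2.0, 1.00): stays wide
print(normal_update(0.2, 0.05, 2.0, 1.0))  # too sharp/wrong -> (0.20, 0.05): data ignored
```

In a full model fit by MCMC, the too-sharp case is worse still: prior-data conflict can make the sampler itself struggle, which is the “issues fitting a performant model” mentioned above.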


Because we set priors based on a combination of business intuition and past data, there is room for human error. You can view some of the most important priors by clicking “Configure” on the top row of tabs on your Recast dashboard. If you think your time shifts, ROI/CPAs, intercept bounds, or saturation curves are captured inaccurately, contact [email protected].


Lift Tests

Recast can flexibly incorporate information about channel performance from tests run outside the platform. For example, the results of a holdout test or geo-lift test can be fed into Recast’s model to “pin” Recast’s estimates to a certain range of values.
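As a rough sketch of what “pinning” looks like (hypothetical numbers; the exact intake format is handled by the Recast team), a lift test readout with a point estimate and confidence interval can be translated into an informative prior:

```python
# Hypothetical geo-lift readout: 1,200 incremental conversions,
# with a 95% confidence interval of [800, 1600].
point, ci_low, ci_high = 1200, 800, 1600

# Back out the implied standard error: a 95% CI spans about 2 * 1.96 SEs.
se = (ci_high - ci_low) / (2 * 1.96)

print(f"prior on incrementality: Normal({point}, {se:.0f})")
# -> Normal(1200, 204): the model's estimate is "pinned" near this range.
```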

Lift tests have the advantage of improving precision not only for the channel that was tested, but for other channels as well, since a significant proportion of the uncertainty in Recast’s estimates of channel performance comes from correlation in spend between channels. It is common for Recast to be confident about the combined performance of Channel A plus Channel B, but less certain about how that performance splits between them. In that case, a lift test on Channel A improves the precision of Channel B’s estimate as well as Channel A’s.
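A minimal simulation of this mechanism (toy numbers, not Recast’s model): suppose the posterior pins down the sum of two channels’ ROIs tightly but is unsure how to split it between them. Conditioning on a lift test for Channel A then tightens Channel B as well.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws: A + B is well identified,
# but the split between the two channels is not.
total = rng.normal(3.0, 0.1, size=100_000)  # ROI_A + ROI_B
split = rng.normal(0.0, 1.0, size=100_000)  # ROI_A - ROI_B
roi_a = (total + split) / 2
roi_b = (total - split) / 2

print(round(roi_b.std(), 2))  # ~0.50: wide before any lift test

# A lift test measures Channel A directly, say ROI_A = 2.0 +/- 0.1.
# Keep only the draws consistent with the test (a crude rejection step).
keep = np.abs(roi_a - 2.0) < 0.1
print(round(roi_b[keep].std(), 2))  # ~0.12: Channel B tightens too
```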

Benefits

➕ Gold standard for assessing causality across industries

➕ Helps to reduce uncertainty in the Recast model

➕ Can be executed “sub-scale”*

*This means you can run a lift test on a sample of your population. For example, if you advertise on Facebook to 1M users, you could continue business-as-usual advertising for 90% of them and run the lift test entirely within the remaining 10%. Similarly, if you are trying out a new marketing channel or tactic, you could run it in just 5 markets and compare those markets against 5 similar control markets; a minimal sketch of that comparison follows the drawbacks list below. This lets you test the channel before investing in it across the entire country.

Drawbacks

➖ Cannot be “always-on”

➖ Requires costly set-up

➖ Longer analytics cycle time
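To make the geo-market version concrete, here is a minimal difference-in-differences calculation (hypothetical numbers; real geo-lift analyses use more careful market matching and inference):

```python
import numpy as np

# Hypothetical weekly sales (in $k) for 5 test markets, where the new
# channel runs, and 5 matched control markets, before and during the test.
pre_test  = np.array([102,  98, 110,  95, 101])
post_test = np.array([118, 112, 125, 109, 116])
pre_ctrl  = np.array([100,  97, 108,  96,  99])
post_ctrl = np.array([104, 100, 112,  98, 103])

# Difference-in-differences: the change in the test markets minus the
# change we'd have expected anyway, as measured in the controls.
lift = (post_test - pre_test).mean() - (post_ctrl - pre_ctrl).mean()
print(f"estimated incremental weekly sales per market: ${lift:.1f}k")  # ~$11.4k
```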


Spend experiments

One way to improve your model is to change the patterns of spend that the model observes. When spend in a channel is increased or decreased dramatically relative to past behavior, it introduces signal that makes the model’s estimates more precise. For example, if a channel is turned off and sales stay roughly the same, that is evidence the channel was not effective (had it been effective, sales would have fallen), and Recast’s model will revise its estimate of that channel’s performance downward.
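A back-of-the-envelope version of this logic (toy numbers; it ignores adstock, seasonality, and the other dynamics Recast’s full model accounts for): compare sales during the go-dark period to the pre-test baseline, and divide the drop by the spend that was removed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily sales: four weeks of baseline, then a two-week
# "go-dark" period in which one channel ($10k/day) is turned off.
baseline = rng.normal(100_000, 1_000, size=28)
go_dark  = rng.normal( 98_000, 1_000, size=14)  # sales barely move

daily_drop  = baseline.mean() - go_dark.mean()  # roughly $2k/day
implied_roi = daily_drop / 10_000

print(f"implied short-run ROI: {implied_roi:.2f}")  # ~0.2
# A small drop relative to the removed spend is evidence the channel
# was not driving much incremental revenue, and the model's estimate
# of its performance will be revised downward.
```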