🛠️ Onboarding FAQ

What does onboarding look like?

Our onboarding process includes:

  • A core team kickoff to align on model input needs and set expectations (1hr)
  • First data receipt and a business review to align on model setup (1hr)
  • Recast builds the model and runs a series of checks; ~2 weeks after data receipt, we review the first model output with the client and teach them how to understand and use it (1hr).
  • During this first month, we work with most clients to set up and automate a data pipeline to receive weekly data.
  • Each week thereafter, we rerun the model and provide updated weekly results.
  • During onboarding, we lead a series of 3-4 biweekly meetings to ensure customers can use the model output / dashboard effectively and correctly, and we support incorporating the model output into their business's decision-making processes (for example, we can help them get the data into a tool like Looker or Tableau if that works better for them than our dashboard).

Once the client is fully onboarded, we lead monthly ongoing meetings where we ensure success and discuss updating priors or incorporating new channels. We also lead ongoing Quarterly Business Reviews, where we discuss how they are using and getting value from our product overall.


What information aside from raw data outputs do you typically request from partners before beginning any modeling?

Business context and the results of any lift tests, which we can incorporate as priors, plus a calendar of product launches, promotional events, or pricing promotions where applicable.


Who owns the Data QA/Validation process to ensure accuracy before the modeling process?

We share the model inputs back to the client visually and set the expectation that they validate the data. We also get alerts for unexpected or erroneous-looking data (missing data, changed historical data, large unexpected shifts in spend patterns), which we communicate to the client.
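
As a rough sketch of what those alerts check for, here is a minimal, hypothetical example assuming weekly data arrives as a pandas DataFrame with a datetime `week` column and one spend column per channel; the function name and thresholds are illustrative, not Recast's actual tooling:

```python
import pandas as pd

def basic_input_checks(current: pd.DataFrame, previous: pd.DataFrame,
                       spend_cols: list[str], spike_factor: float = 3.0) -> list[str]:
    """Flag the issue types described above: missing weeks, changed
    historical data, and large unexpected swings in spend."""
    alerts = []

    # Missing data: every week between the first and last date should appear.
    expected = pd.date_range(current["week"].min(), current["week"].max(), freq="7D")
    missing = expected.difference(pd.DatetimeIndex(current["week"]))
    if len(missing) > 0:
        alerts.append(f"missing weeks: {list(missing.date)}")

    # Changing historical data: overlapping weeks should match last week's file.
    merged = previous.merge(current, on="week", suffixes=("_old", "_new"))
    for col in spend_cols:
        changed = merged[merged[f"{col}_old"] != merged[f"{col}_new"]]
        if not changed.empty:
            alerts.append(f"{col}: {len(changed)} historical weeks changed")

    # Unexpected spend patterns: latest week far above the trailing average.
    for col in spend_cols:
        trailing = current[col].iloc[:-1].tail(8).mean()
        latest = current[col].iloc[-1]
        if trailing > 0 and latest > spike_factor * trailing:
            alerts.append(f"{col}: latest spend {latest:,.0f} is over "
                          f"{spike_factor}x the trailing average")

    return alerts
```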


Do you have a process to ensure data standardization across multiple data sets?

We receive prepared data as a single dataset for each model and require that column headers remain consistent week to week.

We advise clients to use clear, consistent naming conventions to stay organized and facilitate decision-making, because the headers/names they choose will be used in their model output. Specifically, naming conventions should be set in such a way that they can be parsed for post-modeling aggregation of results on the client side, as in the sketch below.
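
For illustration, suppose a client adopts a hypothetical `<platform>_<tactic>_<geo>` convention (these names are examples, not a Recast requirement); model output keyed by those headers can then be rolled up programmatically:

```python
import pandas as pd

# Hypothetical model output keyed by column headers that follow a
# <platform>_<tactic>_<geo> naming convention.
results = pd.DataFrame({
    "variable": ["meta_prospecting_us", "meta_retargeting_us", "tiktok_prospecting_us"],
    "spend":    [120_000, 40_000, 60_000],
    "revenue":  [300_000, 90_000, 150_000],
})

# Because the names parse cleanly, post-modeling aggregation is one line away.
results[["platform", "tactic", "geo"]] = results["variable"].str.split("_", expand=True)

by_platform = results.groupby("platform")[["spend", "revenue"]].sum()
by_platform["roi"] = by_platform["revenue"] / by_platform["spend"]
print(by_platform)
```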


What are your requirements for data passing and is there a way to automate this process?

We can work with pretty much any form of automated data access. We have full S3 support, and by default we set up a shared S3 location and issue credentials for all new clients. If that doesn't work easily with your infrastructure, we can almost certainly work with whatever you have (direct connection to a data warehouse, SFTP, etc.).
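
For example, a weekly push to the shared S3 location might look like the following sketch; the bucket name, key, and filename are placeholders, and credentials come from whatever is issued during onboarding:

```python
import boto3

# Placeholders: the real bucket and path come from the shared S3 location
# set up during onboarding; credentials are read from the usual AWS
# sources (environment variables or ~/.aws/credentials).
local_file = "weekly_model_input.csv"
bucket = "recast-client-dropbox"            # hypothetical bucket name
key = "acme/inputs/weekly_model_input.csv"  # hypothetical key

s3 = boto3.client("s3")
s3.upload_file(local_file, bucket, key)
print(f"uploaded {local_file} to s3://{bucket}/{key}")
```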


How much historical data is required for initial modeling phase?

Ideally, we want at least 27 months of historical data. If you have less than that, we can discuss!


How does Recast configure the model for each client?

As part of your onboarding, the Recast team will work with you to configure the Recast platform to the specifics of your individual business. This gives the underlying simulation engine a starting point and “guard rails” about what’s realistic.

The goal of this process is to configure the Recast platform such that it includes all and only the simulations that are realistic for your business. Our data science team will partner with your internal marketing experts to get this balance right. Some questions we may ask are:

(Organic Demand) What are your expectations about your base demand?
E.g. "In the short term, what proportion of your sales would you expect would remain if you cut all media?"


(Channel Effects) Which channels generate demand and which capture existing demand?
E.g. "Which of your channels are structured like affiliate programs, where you pay based on the number of customer conversions?"
E.g. "Which of your channels are structured like branded search, where other media may drive additional spend into the branded search channel?"


(Time Shift Effects) The model will estimate the time-shift effect of each channel, but we want to provide some guidelines so the model doesn't need to run unrealistic simulations. For each channel, how many days after the media spend occurred do you expect to have realized the full impact of that spend?
E.g "For offline channels, we expect to have realized the full impact of spend between 7 and 30 days after spend; does that sound reasonable?"


These calls often go best when we can address the first of these questions, “what is the range of possible values for Brand's organic demand?”, asynchronously in advance. If you're not sure, though, that's fine too; just let us know and we'll include it on the call instead.

In the absence of media, some portion of your revenue would remain, which we are calling organic demand. For less well-established brands (industry challengers) with D2C as the primary distribution channel, the organic demand is typically 10%-50%. For other types of businesses, organic demand can be as high as 70%-90%. Therefore, even narrowing the range somewhat (like from 0%-100% to 10%-50%) helps us run more simulations that represent possible versions of the truth by eliminating some that are unrealistic: we don't want to "give the answers to the model", but we do want to focus the simulations to narrow in on Brand's true media ROIs.

In addition to the business information provided through the process above, Recast also configures each model to account for the client’s key holidays, promotional calendar, the results of their incrementality tests, and all relevant day-of-week / day-of-month trends.

Once we collect this information, we write it into the code and begin the model validation part of the process (~2 weeks).
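
Purely as an illustration (Recast's real configuration format is internal and not shown here), the answers to the questions above can be thought of as bounds like these:

```python
# Illustrative only: a sketch of how onboarding answers might be encoded
# as "guard rails" for the simulation engine. Channel names and numbers
# are hypothetical, drawn from the examples in the questions above.
model_config = {
    # (Organic Demand) share of sales expected to remain if all media stopped
    "organic_demand_share": {"low": 0.10, "high": 0.50},

    # (Channel Effects) channels that capture existing demand rather than generate it
    "demand_capture_channels": ["branded_search", "affiliate"],

    # (Time Shift Effects) days until the full impact of spend is realized
    "time_shift_days": {
        "paid_social": {"low": 1, "high": 14},
        "offline_tv":  {"low": 7, "high": 30},
    },
}
```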


🚚 Delivery and Timing

What lead time is involved in getting the first iteration of the model built from the time of initial kickoff?

About 3-4 weeks (and often faster).


How frequently do you provide results and can the delivery cadence be customized at all?

By default, we deliver fresh results weekly; we can discuss an alternate cadence if needed.


What do the readouts look like? Do you offer clients dashboards?

Yes, we offer a client dashboard with a standard set of views.


When should I schedule my refresh for?

Recast wants to refresh your model once a week. In order to do that, Recast needs comprehensive and complete data. Recast will set an automated schedule to ingest your data and launch a model on the day and time of your choice (it typically takes about 24 hours after a model run is launched before your dashboard will be updated). There are two choices to make when considering what day to refresh your model:

  1. What day of the week should the model run (refresh date)?
  2. What day of the week should we use data through (last data date)?

The refresh date should be the day before you want to review results. The last data date should be as close to the refresh date as possible, provided you have complete data for every channel. If, for example, you only get new TV numbers every Friday, the last data date should be a Friday, regardless of when the refresh date is.
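
As a concrete sketch of that rule, the last data date is simply the most recent day for which every channel has complete data; the dates and channel names here are hypothetical:

```python
from datetime import date

# Hypothetical: the latest date each channel has complete data through.
data_complete_through = {
    "paid_social": date(2024, 11, 11),
    "paid_search": date(2024, 11, 11),
    "tv":          date(2024, 11, 8),  # TV numbers only arrive on Fridays
}

# The last data date is the most recent day with complete data for EVERY channel.
last_data_date = min(data_complete_through.values())
print(f"use data through {last_data_date}")  # -> 2024-11-08
```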


How should we report spend for direct mail?

When it comes to direct mail, clients often ask how they should report the spend. Three possibilities present themselves:

  1. Report all spend on the day that the mail gets put in the mailbox (the “drop date”). If you sent out 100,000 mailers at a cost of $200,000, report the entire $200,000 on a single day (e.g. Nov. 4th).
  2. Report spend spread across estimated “in-home” dates. If you spent $200,000 on 100,000 mailers, the mailing company may estimate that 40% arrived in the mailbox on Nov. 7th, 30% on Nov. 8th, and 30% on Nov. 9th. Divide the spend over these three days.
  3. Report spend based on the mailing company’s estimated response timetable. Mail companies can often provide response curves that estimate how long it takes for customers to respond to direct mail, typically over a 30-120 day window. You could spread the spend using this curve that they provide.

At Recast, we recommend option 1 and recommend against option 3. We prefer option 1 because Recast has built strong infrastructure to handle the delay between when money is spent in a channel and when customers purchase because of the ad they saw. The “time shift” curve estimates the amount of time it takes to see return on ad spend; this captures the time it takes for the mail to be delivered, the time it takes for it to be picked up and read, and the time it takes for the customer to act. Option 1 is also generally the easiest to implement.

Option 2 separates the time it takes for mail to be delivered from the time it takes for it to be picked up, read, and acted on. There is nothing wrong with this in theory, except that it relies on “in-home date” estimates provided by a third party that may not be reliable. By letting Recast estimate the total time from drop date to return on ad spend, we cut out this potential source of bias from bad estimates.
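
Using the example figures above, the two acceptable reporting methods look like this in code (a sketch; the column names are illustrative):

```python
import pandas as pd

total_spend = 200_000  # 100,000 mailers, from the example above

# Method 1 (recommended): report all spend on the drop date.
method_1 = pd.DataFrame({
    "date": [pd.Timestamp("2024-11-04")],
    "direct_mail_spend": [total_spend],
})

# Method 2: spread spend across the mail house's estimated in-home dates.
in_home_weights = {
    pd.Timestamp("2024-11-07"): 0.4,
    pd.Timestamp("2024-11-08"): 0.3,
    pd.Timestamp("2024-11-09"): 0.3,
}
method_2 = pd.DataFrame({
    "date": list(in_home_weights),
    "direct_mail_spend": [total_spend * w for w in in_home_weights.values()],
})
```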

Option 3 attempts to estimate the total time-shift curve for us, so that the delay between “spend” and “return” is zero days. We consider this a significant source of bias and discourage this method. Direct mail companies compute these curves without knowledge of your company's other marketing efforts, which makes it impossible for them to estimate the true “incremental” effect of the spend. It may be a useful reference point against which to compare Recast’s results, but using it to determine when to place the spend can introduce significant bias.

Please be clear about which method you are using and be consistent over time. Recast will configure your model differently depending on the method, so please communicate it. If your reporting mixes methods inconsistently over time, the time-shift curves could lead to misattribution and poor results.