Recast Knowledge Base

Benchmark Model Review & Gap Identification


What is the Benchmark Model Review?

During the model build process, the Recast team works to build a model that maximizes both internal and external validity, as assessed by the Recast Model Checks. This means we end up with a model tuned toward theoretical rigor. However, a model focused purely on theoretical rigor could miss the mark on being the most valuable model for your business. This review call is our first step toward maximizing that value, while staying rooted in the modeling rigor for which you came to Recast.






How Benchmark Model Review Works

  1. Live on a call, your Recast delivery team will walk you through:

    1. The results of the Recast Model Checks

    2. Model Inferences regarding:

      1. Historical channel efficiency and contribution to your business

      2. Historical organic contribution to your business

    3. Results of the model’s forecast for the future

    4. Results of the model’s optimization of future spend

  2. After the call, you and your team will be granted access to the Recast platform, so you can dig in further yourselves.

  3. Homework: Your task coming out of this call is to identify the areas that do not align with your business context and could ultimately make the model less actionable. We refer to these as “Gaps.”

  4. Follow-Up Meeting: We usually allow one week for you to identify these gaps before a follow-up meeting, where you bring the gaps forward and we review them and gather any necessary clarifications.


How to Identify Gaps?

To ensure both parties are clear on the ask, we use a specific structure for defining gaps: each gap statement should have two clauses.

Where should you start?

Please see our suggested Gap Identification Checklist below, which should give you starting points once you receive access to the dashboard.

Insights

In the app, review each of these reports and identify any of the model inferences that surprise you based on what you know to be true about your business.

Overview - Last 365 Days

  1. Marketing Effectiveness

    1. Do any channels appear to be more efficient than makes sense for your business?

    2. Are there channels that are extremely certain or uncertain?

  2. Direct Contribution by Channel

    1. Do any individual channels dominate the marketing contribution?

    2. Do the Baseline and Spikes contributions make sense over the last 365 days?

Baseline

  1. Does the trend over time in the baseline make sense? Is it generally trending up or down?

  2. Is there a seasonal component of your business that isn’t well represented in the baseline?

Spikes

  1. Promotions

    1. Are there major promotions that aren’t accounted for here?

    2. Are there any promotions that have a bigger or smaller impact than you expected?

    3. Are there any promotions that had the opposite impact of what you expected?

  2. Anomalies

    1. Are there any anomalies that aren’t accounted for here that should be?

    2. Are there any anomalies that have a bigger or smaller impact than you expected?


Context Variables

  1. For the context variables included, is the direction of the effect aligned to your expectations?


Experiments

  1. For the experiments included, do they align closely enough with the results you got from them?

  2. For channels an experiment was applied to, does the experiment’s effect seem to fade too quickly?

Plans

  1. Open the “Default Plan”

  2. Review the Spend Breakdown

    1. Does it generally align with how you expect to spend?

    2. The spend is estimated using Recast’s “business as usual” approach, which assumes your future spend will follow the same general pattern as your historical spend.

  3. Review the Forecast Results

    1. How does the Forecast Result compare to your goal for the period?

    2. How does the Forecasted ROI compare to your expectations for marketing efficiency for the period?

  4. Recommendations

    1. Review the recommendations from the model. These are the nudges to your Default Plan’s budget that the model currently recommends.

    2. How do these compare with how you currently think about opportunity across your channel mix?

    3. Where the model is recommending you spend up, can you spend up?

    4. Where the model is recommending you spend down, are there any reasons why you wouldn’t spend down on those channels?


Note: During the iteration phase, we will not focus on running experiments within the model. An experiment refers to exploring open-ended or hypothetical scenarios, such as testing new assumptions or evaluating potential changes that could impact model outputs. These types of exploratory analyses can be revisited with your Impact team after Acceptance.