Advertising adstock is the carry-over effect of an advertisement on consumers over time. Finding the decay rate, or half-life, of advertising is a common question for advertisers who want to know how effectively advertising builds brand awareness and how that awareness decays over time.

Adstock is traditionally applied to TV advertising, and models are used to determine the best-fitting adstock rate of TV on sales or some other outcome (e.g. awareness). In most cases, however, one also has other mediums such as Radio, Print, Digital, Social, etc.

I took the Nonlinear Least Squares approach commonly used to solve for the optimal adstock rate of a single advertising medium, and augmented it to take in multiple variables.

My motivation for developing this multivariate approach is that modeling adstock rates for each advertisement medium independently may not be sufficient: multiple mediums affect the outcome, and they need to be accounted for collectively.

## Method

Let λ = adstock rate, and ε_t = error at time t. Then we can model Sales (or some outcome of advertising) as:

    Sales_t = Base + β · A_t + ε_t

where A_t is the adstocked advertising, defined recursively from the raw advertising x_t:

    A_t = x_t + λ · A_{t-1}

Now let's say there are three advertising mediums that we want to compute adstock rates for. In this multivariate scenario, the model becomes:

    Sales_t = Base + β_1 · A_{1,t} + β_2 · A_{2,t} + β_3 · A_{3,t} + ε_t,   where   A_{j,t} = x_{j,t} + λ_j · A_{j,t-1}

The goal is to find the optimal λ_j values for all mediums, using Nonlinear Least Squares. The intercept computed from the model can also be interpreted as the Base, i.e. the base level of sales (or outcome) if there were no advertising at all.
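To make the recursion concrete, here is a minimal sketch in R (using `stats::filter`, the same recursive filter used in the code later in this post) showing how a single burst of advertising decays geometrically, and how the adstock rate maps to a half-life:

```
# recursive adstock: y_t = x_t + rate * y_{t-1}
adstock <- function(x, rate = 0) {
  as.numeric(stats::filter(x = x, filter = rate, method = "recursive"))
}

# a one-week burst of 100 ad units with adstock rate 0.7
adstock(c(100, 0, 0, 0), rate = 0.7)
# -> 100.0 70.0 49.0 34.3

# implied half-life: weeks until the carried-over effect falls below 50%
log(0.5) / log(0.7)
# -> ~1.94 weeks
```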

## Example in R

For this example I generated sample data with 3 ad variables (each representing some advertisement medium) with 104 observations (representing roughly 2 years of weekly data). Then `sales` is generated from the base + ad variables w/ ad stocking, with added random noise.

FYI: If you aren't using `pacman` already, it is a great package management tool and I would highly recommend it.

```
# load packages (pacman installs and loads as needed)
pacman::p_load(minpack.lm)

# generate sample data
set.seed(2222)

# adstock transform: y_t = x_t + rate * y_{t-1}
adstock <- function(x, rate = 0) {
  return(as.numeric(stats::filter(x = x, filter = rate, method = "recursive")))
}

n_weeks = 104
base = 50
ad1 = sapply(rnorm(n_weeks, mean = 20, sd = 10), function(x) round(max(x, 0), 0))
ad2 = sapply(rnorm(n_weeks, mean = 20, sd = 10), function(x) round(max(x, 0), 0))
ad3 = sapply(rnorm(n_weeks, mean = 20, sd = 10), function(x) round(max(x, 0), 0))

# generate sales data from the base + ad variables w/ ad stocking, with random noise
# (true adstock rates 0.7 / 0.4 / 0.5 per the text; the noise scale is an assumption)
sales = base + adstock(ad1, 0.7) + adstock(ad2, 0.4) + adstock(ad3, 0.5) +
  rnorm(n_weeks, sd = 5)
```

I wrote a Multivariate Adstock Function in R, with special thanks to Angela Ju, whose code from this article I adopted and augmented. The equation above is implemented in the R function using `nls` to fit a nonlinear least squares model with the adstock transformation.

This function can take a `data.frame` of any number of column(s) (or advertisement mediums), and will calculate the optimal adstock rate for each column in the input data.
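The core trick is assembling the model formula dynamically, one term per medium. A minimal sketch of that step (the parameter names `b`, `c`, `d` and column names here are purely illustrative):

```
# hypothetical column and parameter names, for illustration only
cols   = c("ad1", "ad2", "ad3")
params = letters[2:(length(cols) + 1)]   # "b" "c" "d"
rates  = paste0("rate_", params)         # "rate_b" "rate_c" "rate_d"

# one "coef * adstock(column, rate)" term per medium, joined with " + "
terms = paste(paste0(params, " * adstock(", cols, ", ", rates, ")"),
              collapse = " + ")
as.formula(paste("Impact ~ a +", terms))
# Impact ~ a + b * adstock(ad1, rate_b) + c * adstock(ad2, rate_c) + d * adstock(ad3, rate_d)
```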


```
# multivariate adstock function
# (the rate_min/rate_max bounds and their defaults are an assumption)
AdstockRateMV <- function(Impact, Ads, maxiter = 100,
                          rate_min = 0, rate_max = 1) {
  # parameter names
  params = letters[2:(ncol(Ads) + 1)]
  # rate variable names
  rates = paste0("rate_", params)
  # create partial formula
  param_fm = paste(
    paste0(params, " * adstock(", colnames(Ads), ", ", rates, ")"),
    collapse = " + "
  )
  # create whole formula
  fm = as.formula(paste("Impact ~ a +", param_fm))
  # starting values for nls
  start = c(rep(1, length(params) + 1), rep(.1, length(rates)))
  names(start) = c("a", params, rates)
  # input data
  Data = cbind(Impact = Impact, Ads)
  # fit model
  modFit <- nls(fm, data = Data, start = start,
                control = nls.control(maxiter = maxiter, warnOnly = TRUE))
  # refit with bounded rates if any estimate falls outside [rate_min, rate_max]
  if(!all(summary(modFit)$coefficients[rates, 1] > rate_min) |
     !all(summary(modFit)$coefficients[rates, 1] < rate_max)){
    library(minpack.lm)
    lower = c(rep(-Inf, length(params) + 1), rep(rate_min, length(rates)))
    upper = c(rep(Inf, length(params) + 1), rep(rate_max, length(rates)))
    modFit <- nlsLM(fm, data = Data, start = start,
                    lower = lower, upper = upper,
                    control = nls.lm.control(maxiter = maxiter))
  }
  # model coefficients
  AdstockInt = round(summary(modFit)$coefficients["a", 1])
  AdstockCoef = round(summary(modFit)$coefficients[params, 1], 2)
  AdstockRate = round(summary(modFit)$coefficients[rates, 1], 2)
  # print formula with coefficients
  param_fm_coefs = paste(
    paste0(AdstockCoef, " * adstock(", colnames(Ads), ", ", AdstockRate, ")"),
    collapse = " + "
  )
  fm_coefs = as.formula(paste("Impact ~", AdstockInt, "+", param_fm_coefs))
  # rename rates with original variable names
  names(AdstockRate) = paste0("rate_", colnames(Ads))
  # calculate percent error
  mape = mean(abs((Impact - predict(modFit)) / Impact) * 100)
  # return outputs
  return(list(fm = fm_coefs, base = AdstockInt, rates = AdstockRate, mape = mape))
}
```

The function takes an `Impact` (a vector or single-column data frame of some advertising outcome), `Ads` (a data frame of advertisement variables), and `maxiter` (maximum # of iterations for convergence), and returns the adstock model formula, the base value, the adstock rate for each ad considered, and the mean absolute percentage error (MAPE) between the predicted outcome and the actual outcome.
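The MAPE reported here is the standard definition. A minimal sketch with made-up numbers:

```
# mean absolute percentage error between actual and predicted values
mape <- function(actual, predicted) {
  mean(abs((actual - predicted) / actual)) * 100
}

mape(c(100, 200), c(90, 210))
# -> 7.5  (errors of 10% and 5%, averaged)
```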

```
# adstock for ad1
Impact = sales
AdstockRateMV(Impact, data.frame(ad1))
```
```
## $fm
##
##
## $base
##  106
##
## $rates
## rate_ad1
##     0.78
##
## $mape
##  6.9729
```

For Ad 1, the model estimates base as 106 and adstock rate as 0.78.

```
# adstock for ad2
AdstockRateMV(Impact, data.frame(ad2))
```
```
## $fm
##
##
## $base
##  137
##
## $rates
## rate_ad2
##     0.59
##
## $mape
##  8.064316
```

For Ad 2, the model estimates base as 137 and adstock rate as 0.59.

```
# adstock for ad3
AdstockRateMV(Impact, data.frame(ad3))
```
```
## $fm
##
##
## $base
##  130
##
## $rates
## rate_ad3
##     0.61
##
## $mape
##  7.768505
```

For Ad 3, the model estimates base as 130 and adstock rate as 0.61.

However, the true parameters used to simulate the data were a base of 50 with rates of 0.7, 0.4, and 0.5. As noted earlier, modeling adstock for each medium independently may not be sufficient due to omitted-variable bias; the mediums should be considered together.
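The inflated base estimates above are a textbook symptom of omitted-variable bias: a regressor left out of the model gets absorbed into the intercept. A minimal sketch with ordinary linear regression (simulated data; all names here are hypothetical):

```
set.seed(1)
x1 = rnorm(100, mean = 20, sd = 5)
x2 = rnorm(100, mean = 20, sd = 5)
y  = 50 + x1 + x2 + rnorm(100)

# the full model recovers the true intercept (~50) ...
coef(lm(y ~ x1 + x2))["(Intercept)"]
# ... but omitting x2 pushes its average contribution (~20) into the intercept (~70)
coef(lm(y ~ x1))["(Intercept)"]
```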

Let us now compute the adstock rates for all three advertisement variables together in a multivariate model.

```
# multivariate adstock model
AdstockRateMV(Impact, data.frame(ad1, ad2, ad3))
```
```
## $fm
##
##
## $base
##  51
##
## $rates
## rate_ad1 rate_ad2 rate_ad3
##     0.70     0.37     0.53
##
## $mape
##  2.160336
```

The model estimates base as 51 and adstock rates as 0.7, 0.37, 0.53. With a MAPE of 2.16%, and in comparison to base of 50 and rates of 0.7, 0.4, 0.5, this is a fairly accurate estimate.

## Simulation

Now let's do a simulation, repeating the exercise on n = 100 datasets drawn from the same normal distributions.

```
# simulation: regenerate the data and refit the multivariate model
adstock_sim <- function() {
  base = 50
  ad1 = sapply(rnorm(n_weeks, mean = 20, sd = 10), function(x) round(max(x, 0), 0))
  ad2 = sapply(rnorm(n_weeks, mean = 20, sd = 10), function(x) round(max(x, 0), 0))
  ad3 = sapply(rnorm(n_weeks, mean = 20, sd = 10), function(x) round(max(x, 0), 0))
  # generate sales data from the base + ad variables w/ ad stocking, with random noise
  sales = base + adstock(ad1, 0.7) + adstock(ad2, 0.4) + adstock(ad3, 0.5) +
    rnorm(n_weeks, sd = 5)
  # fit model
  Impact = sales
  mod = AdstockRateMV(Impact, data.frame(ad1, ad2, ad3))
  return(c(base = mod$base, mod$rates, mape = mod$mape))
}

# replicate 100 times
mod_rep = replicate(n = 100, adstock_sim())
rowMeans(mod_rep)
```
```
##      base  rate_ad1  rate_ad2  rate_ad3      mape
## 50.180000  0.699000  0.400900  0.492400  2.099492
```

With a simulation of 100 samples, the model estimates the average base as 50 and average rates as 0.7, 0.4, 0.5, with a mean MAPE of 2.1%.

The caveat here is that simulations can be constructed to produce the expected results (as is certainly the case here). In practice, though, I believe this multivariate approach to adstock modeling provides a better representation of the adstock rates of different advertisement mediums than a univariate approach.