
Key Driver Analysis Techniques

Updated: Oct 29, 2019

Key Driver Analysis, or Relative Importance Analysis, is a family of regression/correlation-based techniques used to discover which of a set of independent variables causes the greatest fluctuations in a given dependent variable, i.e. which of them have the greatest impact in determining the value of the dependent.


In market research, the dependent variable is often a measure of overall satisfaction, whilst the independent variables are measures of other aspects of satisfaction, e.g. efficiency, value for money, customer service etc. The independents in this example are then often called Drivers of Satisfaction. By applying a suitable Key Driver technique, then ordering these variables by a measure of importance, a researcher can better understand where a company should focus its attention if it wants to see the greatest impact.
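As a concrete illustration, the simplest form of this analysis is a multiple regression of overall satisfaction on the attribute scores, reading importance off the standardised coefficients. The sketch below assumes a hypothetical survey file and column names.

```python
# A minimal sketch of the naive approach: regress overall satisfaction on the
# attribute scores and treat the standardised coefficients as importances.
# The file name and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("satisfaction_survey.csv")
attributes = ["efficiency", "value_for_money", "customer_service"]

X = StandardScaler().fit_transform(df[attributes])
y = StandardScaler().fit_transform(df[["overall_satisfaction"]]).ravel()

model = LinearRegression().fit(X, y)
for name, beta in zip(attributes, model.coef_):
    print(f"{name}: {beta:.3f}")   # larger |beta| suggests a stronger driver
```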


One of the main problems with such analysis is that the variables being used to predict the outcome are often highly correlated with one another (multi-collinearity). This can make importance values derived from simple regression/correlation analysis inaccurate, and hence less useful.
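To see why this matters, the short simulation below (with made-up coefficients) builds two near-identical predictors; an ordinary regression then splits their shared effect between them more or less arbitrarily, so the individual coefficients make poor importance measures.

```python
# A small simulation of the multi-collinearity problem: x1 and x2 are almost
# identical, so ordinary regression divides their shared effect between them
# arbitrarily and the individual coefficients become unreliable importances.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
x1 = rng.normal(size=1000)
x2 = x1 + 0.1 * rng.normal(size=1000)      # near-copy of x1 (heavy collinearity)
x3 = rng.normal(size=1000)
y = 0.5 * x1 + 0.5 * x2 + 0.5 * x3 + rng.normal(size=1000)

X = np.column_stack([x1, x2, x3])
print(LinearRegression().fit(X, y).coef_)  # x1/x2 coefficients may land far from
                                           # the true 0.5 each; x3 stays stable
```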


There are various methods to overcome this problem. Three of the most well-respected statistical techniques are:

  • Shapley Value Analysis

  • Kruskal's Relative Importance Analysis

  • Ridge Regression

The first two techniques can be thought of as running every possible regression model between the dependent and each possible subset of the independents, then averaging the importance values obtained in each case. In both cases, one importance value is returned for each independent variable.
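To make the averaging idea concrete, the sketch below implements a basic Shapley Value Analysis (the LMG decomposition of R²) on simulated data. It is an illustration of the general technique, not the JumpData tool's implementation, and since it fits 2^p models it only suits a modest number of independents.

```python
# Shapley Value Analysis sketch: each predictor's importance is its average
# marginal contribution to R^2 over every subset of the other predictors.
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.linear_model import LinearRegression

def r2(X, y, cols):
    """R^2 of a regression of y on the columns in `cols` (0 for the empty set)."""
    if not cols:
        return 0.0
    return LinearRegression().fit(X[:, cols], y).score(X[:, cols], y)

def shapley_importance(X, y):
    p = X.shape[1]
    importance = np.zeros(p)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        for size in range(p):
            # LMG weight for subsets of this size
            weight = factorial(size) * factorial(p - size - 1) / factorial(p)
            for subset in combinations(others, size):
                gain = r2(X, y, list(subset) + [j]) - r2(X, y, list(subset))
                importance[j] += weight * gain
    return 100 * importance / importance.sum()   # scale to percentages

# Example on simulated data; the returned importances sum to 100%.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = X @ np.array([0.6, 0.3, 0.1, 0.0]) + rng.normal(size=500)
print(shapley_importance(X, y))
```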


Ridge Regression, on the other hand, in effect penalises the importance values in an attempt to neutralise the effect of multi-collinearity. In this case, the importance values returned depend on the penalty factor, so it is usual to return a set of importance values and then choose the most appropriate one. A common way to help make that choice is to plot the importance values of the independents against the penalty factor and choose the penalty at which the graph appears to "flatten" (similar to a scree plot in factor analysis). Note that a penalty value of 0 is equivalent to ordinary linear regression.
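A sketch of that "flattening" plot, sometimes called a ridge trace, might look like the following. It uses simulated data and scikit-learn's Ridge with a penalty grid from 0 to 4, mirroring the default range mentioned later.

```python
# Ridge trace sketch: plot standardised coefficients against the penalty and
# look for where the curves flatten out. Penalty 0 reproduces ordinary OLS.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
X[:, 1] = X[:, 0] + 0.2 * rng.normal(size=500)    # deliberately collinear pair
y = X @ np.array([0.5, 0.5, 0.3, 0.1, 0.0]) + rng.normal(size=500)

X_std = StandardScaler().fit_transform(X)
penalties = np.linspace(0, 4, 41)
coefs = np.array([Ridge(alpha=a).fit(X_std, y).coef_ for a in penalties])

plt.plot(penalties, coefs)
plt.xlabel("Ridge penalty")
plt.ylabel("Standardised coefficient")
plt.title("Ridge trace")
plt.show()
```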


Once the importance values have been calculated, the researcher may wish to plot quadrant maps showing attribute importance against the mean score for each attribute. From such a map, one can judge the strengths and weaknesses of particular areas and identify which must be focussed upon to see the greatest increase in, for example, overall satisfaction.
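A quadrant map of this kind can be drawn as a scatter plot split at the average importance and average score; the values and labels below are hypothetical.

```python
# Quadrant map sketch: mean attribute score against derived importance, split
# at the averages so high-importance / low-score attributes stand out.
import numpy as np
import matplotlib.pyplot as plt

importance = np.array([32.0, 24.0, 18.0, 14.0, 12.0])   # % importance (hypothetical)
mean_score = np.array([7.1, 6.2, 8.4, 5.9, 7.8])          # mean attribute scores (hypothetical)
labels = ["Ind_1", "Ind_2", "Ind_3", "Ind_4", "Ind_5"]

plt.scatter(importance, mean_score)
for x, y, name in zip(importance, mean_score, labels):
    plt.annotate(name, (x, y))
plt.axvline(importance.mean(), linestyle="--")   # quadrant boundaries at the means
plt.axhline(mean_score.mean(), linestyle="--")
plt.xlabel("Derived importance (%)")
plt.ylabel("Mean score")
plt.title("Quadrant map")
plt.show()
```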


Derived Importance models should perhaps be renamed marginal resource allocation models, in that blindly reallocating all of one's resources on the basis of these results would often lead to problems.


Take for example, airline satisfaction. No-one would ever consider cutting safety standards to improve their food, although in a derived importance model, the standard of food would almost invariably come out as more important. This is because with respect to air travel, safety is assumed and hence safety standards do not generally influence people’s choice. However, the standard of food on the airline does vary between airlines and thus has a much greater influence on people’s satisfaction - and thus their choice of airline. Hence derived importance will demonstrate where one should allocate extra resources if one wishes to see the greatest impact on their satisfaction scores.


The JumpData Key Driver Tool will run all three of these procedures simultaneously. It returns a csv file containing the importance values for each of the independent variables from both Shapley Value and Kruskal's Relative Importance Analysis. It also returns sets of Ridge Regression coefficients for the penalties specified by the user (the default being a minimum penalty of 0 and a maximum of 4).


In each case, the output is scaled so that the importance values are percentages; for Shapley and Kruskal in particular, the raw values have little meaning in themselves, only relative to each other.
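The scaling itself is just a normalisation so that the importances sum to 100%, along the lines of the snippet below (raw values are hypothetical).

```python
# Rescale raw importance values so that they sum to 100%; only the relative
# sizes carry meaning for Shapley and Kruskal. Raw values are hypothetical.
raw = {"Ind_1": 0.042, "Ind_2": 0.113, "Ind_3": 0.078}
total = sum(raw.values())
pct = {name: 100 * value / total for name, value in raw.items()}
print(pct)   # percentages summing to 100
```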


The program also returns the base correlations, and their absolute values scaled to percentages. It also returns the R² for each of the Ridge Regressions (so the 'ordinary' R² is at penalty value 0).


Each of the techniques should produce similar results (for a suitable choice of penalty parameter). Nevertheless, various researchers prefer one technique to the others. We would suggest looking at each of them before making a decision (although we have tended to favour Kruskal's over the others in the past).


The tool can deal with large numbers of independent variables (although Key Driver results become more difficult to interpret when there are many of them) and is very fast: it executes in one or two seconds for fewer than 15 variables, and in under 5 minutes for 20 variables.


In the test data file, the R² for the base regression is 38.49%. Taking a Ridge penalty of 1.7, we then have (RidgeVal = 0 is ordinary linear regression):

Whilst drawing absolute cut-offs in this kind of analysis is somewhat fruitless, it is clear that the most important variables are Ind_5, Ind_3 and Ind_2 (these account for almost 44% of the Kruskal's total), with Ind_9 and Ind_11 also of note. It is also worth noting how the importance of Ind_2 declines relative to the 'ordinary' linear regression in each case, an example of how, when multi-collinearity is present, one can over-estimate the importance of some variables.



Author: Paul Brown, Director
