Traditional risk and performance assessment tools – standard deviation and the Sharpe index – can fail badly in analysing investments with asymmetric risks.

Carlos Heitor Campani

Investment banks assess risk and compare different investment vehicles using risk-adjusted performance indicators.

Risk is the possibility of a less-than-desired return and often reflects potential financial loss – be it an actual loss or a gain smaller than expected relative to a proper benchmark. All other things being equal, we prefer less risk to more, which means we are risk averse.

Standard deviation

The standard deviation of the distribution of expected returns is one of the most widely used risk assessment instruments. It measures the degree of dispersion of future possibilities: the larger the standard deviation, the further from the expected value the result can be.

At first glance, the association with risk seems obvious: the greater the degree of dispersion, the greater the potential loss and, therefore, the greater the risk.

However, this assessment tool does not distinguish between downside risk – undesired and worrisome – and upside risk – which brings hope and, eventually, the joy of achievement.

In symmetric distributions, the standard deviation reflects downside risk because downside and upside risk are equal. But history shows us that the returns of stocks and many other types of investment are far from symmetrical.

Missed opportunity

Some analysts use the standard deviation to the left, computing only the values below the average. This standard deviation to the left should replace the ordinary standard deviation in the denominator of the traditional Sharpe index. Investors then have a performance measure that adjusts the expected return according to the asset's (downside) risk.

Simple, but unfortunately this important adjustment is not always made.
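As a rough illustration, here is a minimal Python sketch of that adjustment for a discrete set of scenario returns with known probabilities. This is my own illustrative code, not the author's: the function names are assumptions, and I normalise by the probability of the below-average scenarios, the convention consistent with the figures quoted later in the article.

```python
import math

def left_std(returns, probs, reference):
    """Standard deviation of the outcomes below `reference`, measured from it.

    Normalised by the probability of the below-reference scenarios
    (an assumed convention; dividing by the total probability is also seen).
    """
    down = [(p, r - reference) for p, r in zip(probs, returns) if r < reference]
    if not down:
        return 0.0
    down_prob = sum(p for p, _ in down)
    return math.sqrt(sum(p * d ** 2 for p, d in down) / down_prob)

def downside_adjusted_sharpe(returns, probs, risk_free=0.0):
    """Sharpe-style ratio with the left standard deviation as the denominator."""
    mean = sum(p * r for p, r in zip(probs, returns))
    return (mean - risk_free) / left_std(returns, probs, reference=mean)
```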

Imagine you have to roll a dice and choose one of two possible bets: the Richards bet or the Jagger bet. The bets pay the following returns:

Dice face           Probability   Richards bet return   Jagger bet return
1, 2, 3             50.0%         -10%                  -10%
4, 5                33.3%         +30%                  +30%
6                   16.7%         +100%                 +300%
Expected return                   21.7%                 55.0%
Standard deviation                39.3%                 111.0%
Sharpe index                      0.55                  0.50
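The figures in the table can be checked with a short calculation. The Python sketch below is illustrative (the variable names are mine), using the stated probabilities and payoffs.

```python
import math

probs = [1/2, 1/3, 1/6]               # faces 1-3, faces 4-5, face 6
richards = [-0.10, 0.30, 1.00]        # Richards bet returns
jagger = [-0.10, 0.30, 3.00]          # Jagger bet returns

def summary(returns, probs, risk_free=0.0):
    mean = sum(p * r for p, r in zip(probs, returns))
    std = math.sqrt(sum(p * (r - mean) ** 2 for p, r in zip(probs, returns)))
    return mean, std, (mean - risk_free) / std

print(summary(richards, probs))   # ~(0.217, 0.393, 0.55)
print(summary(jagger, probs))     # ~(0.550, 1.110, 0.50)
```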

Note that everyone should choose the Jagger bet, regardless of their level of risk aversion or any other psychological factor. That is because if the dice shows 1, 2, 3, 4 or 5, both bets pay the same, but if you roll a 6, you win (much) more with the Jagger bet.

However, the higher upside risk of the Jagger bet penalises its Sharpe index, because the traditional standard deviation counts the +300% outcome as dispersion and inflates the risk measure used.

As a result, the Sharpe index would wrongly recommend the Richards bet (for convenience and without any loss of generality, the risk-free rate is taken as zero for the very short time it takes to roll the dice).

The obvious conclusion is that the standard deviation and the Sharpe index can lead to unforgivable investment decisions.

But what if we used the standard deviation to the left? We would still have a problem: the higher average of the Jagger bet causes its deviations on the left to be larger than those of the Richards bet, producing a standard deviation to the left of 52.8% for the Jagger bet against 31.7% for the Richards bet.

This is yet another inconsistent result, as the bad scenarios are the same for both bets. Why does this happen? The standard deviation to the left is calculated with respect to the average, which in turn is pulled up by the extreme positive value (which causes satisfaction).

In other words, scenarios of joy are still penalising the assessment, which is definitely not what you want.
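A short sketch reproduces these figures, measuring deviations from each bet's own average and normalising by the probability of the below-average scenarios (my assumed convention, chosen because it matches the 31.7% and 52.8% quoted above).

```python
import math

probs = [1/2, 1/3, 1/6]
bets = {"Richards": [-0.10, 0.30, 1.00], "Jagger": [-0.10, 0.30, 3.00]}

for name, rets in bets.items():
    mean = sum(p * r for p, r in zip(probs, rets))
    # Keep only the scenarios below this bet's own average.
    down = [(p, r - mean) for p, r in zip(probs, rets) if r < mean]
    down_prob = sum(p for p, _ in down)
    left_sd = math.sqrt(sum(p * d ** 2 for p, d in down) / down_prob)
    print(name, round(left_sd, 3))    # Richards ~0.317, Jagger ~0.528
```

The Jagger bet's larger average drags both the -10% and the +30% scenarios below it, which is exactly why its left deviation comes out larger despite identical bad outcomes.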

Separating the downside

Consistency must be pursued at the source of the problem: we want to penalise only unwanted results, not satisfactory ones. Therefore, we must use a reference that separates downside risk (a state of dissatisfaction) from upside risk (a state of satisfaction).

This depends fundamentally on the investor and their degree of risk aversion. I see three natural paths: using the absolute zero reference (actual gains and losses), the risk-free rate (losses and gains relating to the risk-free scenario), or an investment risk-adjusted rate (losses and gains relating to the risky comparable scenario).

There are arguments for each, some more conceptual and others more behavioural. Personally, I like to use the risk-free rate as a benchmark.

Going back to the choice between the Richards and Jagger bets, notice that a reference that does not depend on the average solves the problem. The same standard deviation to the left is found for both bets, which is consistent with the fact that they have identical results in the dissatisfaction scenario(s). With equal downside risk and a higher expected return, the Jagger bet earns the higher risk-adjusted score and is correctly identified as the better option.
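As an illustration of that fixed-reference version, the sketch below uses the risk-free rate (zero here, as in the dice example) as the threshold; the code and naming are mine, not the author's.

```python
import math

probs = [1/2, 1/3, 1/6]
bets = {"Richards": [-0.10, 0.30, 1.00], "Jagger": [-0.10, 0.30, 3.00]}
risk_free = 0.0   # taken as zero, as in the article

for name, rets in bets.items():
    mean = sum(p * r for p, r in zip(probs, rets))
    # Only the -10% scenario falls below the risk-free reference.
    down = [(p, r - risk_free) for p, r in zip(probs, rets) if r < risk_free]
    down_prob = sum(p for p, _ in down)
    left_sd = math.sqrt(sum(p * d ** 2 for p, d in down) / down_prob)
    print(name, round(left_sd, 2), round((mean - risk_free) / left_sd, 2))
# Both bets show the same 0.10 downside deviation; the adjusted index is
# about 2.17 for the Richards bet and 5.50 for the Jagger bet.
```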

In conclusion, the traditional risk and performance assessment tools – standard deviation and the Sharpe index – can fail badly in analysing investments with asymmetric risks. This can be rectified by employing standard deviation to the left of an appropriate fixed reference (such as the risk-free rate) as a risk measure and as the Sharpe index denominator.

Carlos Heitor Campani is professor of finance at the Coppead Graduate School of Business in Brazil. Follow him on Instagram and LinkedIn @carlosheitorcampani.
