Para bellum
Prepare for the worst risk
"Si vis pacem, para bellum", goes the old adage: if you want peace, prepare for war. In our case, war means the worst possible scenario our information assets could face. While probability distributions, loss exceedance curves, simulated scenarios, etc., are all great for the quants in the office, at the end of the day big, important decisions need to be supported by single numbers that can be easily compared to one another. In risk management, this number is the Value at Risk, or VaR. Fortunately, once you have any of those tools, the VaR follows easily.
Value at risk quantifies that worst-case scenario by telling us the level our losses will most likely not exceed, with a certain degree of confidence, over a definite period of time (as we have stressed throughout this series of articles, all measurements need to be time-framed to make sense). Thus a daily 1% VaR of $10 million means that the probability of losing more than ten million in a day is 1%, which is the same as saying that you are 99% confident that the losses will not exceed $10 million.
So we need to define the time period over which our VaR will be taken and, more importantly, what confidence we want or, rather, how extreme a worst-case scenario we want to analyze. Typical periods in the industry are a single day or week, but they can be anything, as long as your risky investment lasts that long. Since we are interested in extreme cases, confidence levels of 95% or 99%, which correspond to the extreme 5% and 1% cases, are not uncommon.
There are at least three workable ways to compute the value at risk:
Examining the distribution of the returns.
Simulation using historical values.
Using the loss exceedance curve.
We will focus on the first, because it illustrates the VaR definition, and the last, because it is more compatible with what we have done so far, since our research on risk has mostly revolved around computing the distribution of loss due to cybersecurity events, i.e., the loss exceedance curve. As a bonus, this will be the easiest method.
The normal distribution is perhaps the most popular one for modeling real-world situations and natural phenomena, and with good reason. It could be used, for example, to model the final value of a given portfolio over a one-year period. Suppose the mean return is 10%, with a standard deviation, i.e., volatility, of 30%. Thus the portfolio returns can be modeled by a normal random variable with mean 10 and standard deviation 30:
Knowing the probability distribution, which tells us probabilities of point values, we can find probabilities of ranges with the corresponding cumulative distribution function or CDF (which is nothing but the integral of the density, i.e., adding them up):
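To make that relationship concrete, here is a minimal sketch in Python, assuming the same normal model with mean 10 and volatility 30 (`NormalDist` is Python's standard-library normal distribution, standing in for whatever tool you use): numerically accumulating the density up to a point reproduces the CDF.

```python
from statistics import NormalDist

# The N(10, 30) model of portfolio returns from the text.
dist = NormalDist(mu=10, sigma=30)

def cdf_by_integration(x, lo=-200.0, steps=100_000):
    """Approximate the CDF at x by adding up the density
    (a midpoint Riemann sum) from far out in the left tail."""
    dx = (x - lo) / steps
    return sum(dist.pdf(lo + (i + 0.5) * dx) for i in range(steps)) * dx

# Adding up the density recovers the cumulative probability:
print(round(cdf_by_integration(-20), 4))  # same as round(dist.cdf(-20), 4)
```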
Notice how this is similar to the loss exceedance curve, only vertically reflected. In a cumulative probability plot the VaR is just the x-value corresponding to the probability we need. In this case it’s just the x-value for 1% probability, as shown in the graph.
We can use a simple spreadsheet, which is a tool we all probably already have, to put this model to work. In this case we will use the NORM.DIST function, since the distribution is normal. If we want to know the probability that the loss will be greater than, say, 20%, we can compute

=NORM.DIST(-20, 10, 30, 1)

i.e., around 15.8%. The 10 and 30 above are the distribution parameters, and the -20 is of course the value whose probability we need. Notice that it is negative, meaning a loss. The 1 means to make the computation cumulative (as opposed to 0, which just calculates the density function), i.e., to find the probability that the overall change in value will be less than or equal to -20.
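If a spreadsheet is not at hand, the same computation can be sketched with Python's standard library (`NormalDist` plays the role of NORM.DIST here; the parameters are the same ones assumed in the text):

```python
from statistics import NormalDist

# Spreadsheet equivalent: =NORM.DIST(-20, 10, 30, 1)
returns = NormalDist(mu=10, sigma=30)
p_loss_over_20 = returns.cdf(-20)  # P(change <= -20), i.e., a loss over 20%
print(round(p_loss_over_20, 3))    # 0.159, around 15.8%
```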
We can also use the inverse function so that, given a probability, we get the point at which this probability is attained. It is the same process as above, but backwards.
At what point is the 1% probability? More exactly, for which value V is it true that the probability that the final value is less than or equal to V is 1%? That’s just the 1% VaR:
This is the 1% quantile, or the first percentile of the distribution: the point below which 1% of the probability mass lies. Thus the Value at Risk in this example is 59.8% of what we invested. Had we invested $100 million, the VaR would be $59.8 million, and hence we know that the losses will not exceed that amount in 99% of the cases, only in that rare 1%. Notice that the VaR, being a single figure, does not tell us what the losses might actually be in that catastrophic 1%. But if we are ready to lose that much, we are halfway prepared for the metaphoric war.
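The inverse computation can be sketched the same way; `inv_cdf` is the standard library's counterpart of the spreadsheet's NORM.INV, with the same assumed parameters:

```python
from statistics import NormalDist

# Spreadsheet equivalent: =NORM.INV(0.01, 10, 30)
returns = NormalDist(mu=10, sigma=30)
quantile_1pct = returns.inv_cdf(0.01)  # point with 1% of probability below it
print(round(quantile_1pct, 1))         # -59.8, i.e., a 1% VaR of 59.8%
```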
The tail (or conditional) value at risk, TVaR (or CVaR) for short, tries to fill that void by giving us the expected value, or mean, in the catastrophe region, i.e., in case of a VaR breach. Recall that the mean of a distribution is a center of gravity of sorts: the point where we could "hold" the PDF in balance, and also the value around which numbers cluster if we repeatedly draw from such a distribution:
The TVaR is thus the expected value of the loss, given that the VaR has been surpassed. In terms of the above analogy, it is the center of gravity of the "catastrophe" region of the distribution plot:
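For our running normal example, that conditional mean can be estimated by simulation and, since the distribution is normal, checked against a known closed form (TVaR = μ − σ·φ(z_α)/α). A sketch, with the same assumed parameters:

```python
import random
from statistics import NormalDist, mean

dist = NormalDist(mu=10, sigma=30)   # same portfolio model as before
alpha = 0.01
var_1pct = dist.inv_cdf(alpha)       # the 1% quantile, about -59.8

# Monte Carlo: average only the outcomes that breach the VaR.
random.seed(42)
draws = [random.gauss(10, 30) for _ in range(200_000)]
tail = [x for x in draws if x <= var_1pct]
tvar_mc = mean(tail)

# Closed form for a normal: mu - sigma * pdf(z_alpha) / alpha.
z = NormalDist().inv_cdf(alpha)
tvar_exact = 10 - 30 * NormalDist().pdf(z) / alpha   # about -70
```

So in the catastrophic 1% of cases the expected loss is around 70% of the investment, noticeably worse than the 59.8% VaR threshold itself.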
In our case, since we are mainly interested in cybersecurity risk, which we quantify via Monte Carlo simulations, we can always re-run them and aggregate the results differently in order to obtain the density function and recreate the example above. But there is no need: the main result of our simulations was a loss exceedance curve, which already involves loss:
We can just use this curve to obtain the VaR, just as we did with the distribution's CDF. The graph is already cumulative, so there is no need to compute areas under the curve behind the scenes. We simply read off the loss in millions corresponding to the probability of the scenario we are interested in. In this particular graph, the 5% yearly VaR appears to be $500 million (recall that this graph has a logarithmic scale on the x-axis). The 1% VaR is not even visible here, but at least that tells us it must be beyond $1,000 million.
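Reading a VaR off a loss exceedance curve is just interpolation. The sketch below uses a hypothetical set of (loss, exceedance probability) points loosely resembling the curve above; both `var_from_lec` and the sample data are illustrative, not the actual simulation output.

```python
import math

# Hypothetical (loss in millions, probability of exceeding it) pairs,
# sorted by increasing loss; probabilities therefore decrease.
curve = [(1, 0.60), (10, 0.35), (100, 0.12), (500, 0.05), (1000, 0.02)]

def var_from_lec(curve, alpha):
    """Find the loss whose exceedance probability is alpha,
    interpolating linearly in log-loss (matching the log x-axis)."""
    for (l0, p0), (l1, p1) in zip(curve, curve[1:]):
        if p1 <= alpha <= p0:
            t = (p0 - alpha) / (p0 - p1)
            return math.exp(math.log(l0) + t * (math.log(l1) - math.log(l0)))
    raise ValueError("alpha is outside the range covered by the curve")

print(round(var_from_lec(curve, 0.05)))  # 500 (the 5% yearly VaR, in millions)
```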
Monitoring a short-term VaR, e.g., the daily VaR, can be a useful tool to evaluate the performance of risk management or the appropriateness of adopted risk policies, and it can also help us understand events from the past. Consider the following two VaR evolution-in-time graphs:
In the first one we see a steady, if slow, decline in VaR over the years, for all three methods with which it was computed. The orange line, labeled "normal" corresponds to the method we followed here. Also notice how the returns are almost always above their corresponding values-at-risk, save for a few breaches, which is to be expected.
In the image to the right there is an interesting moment around February 1994, when there is a sharp decrease in the VaR, after which it pretty much stays stable under the risk appetite line (dashed). This phenomenon is explained in Jorion's book as a response to a rise in interest rates at that moment, one just as sharp as the decrease in the VaR.
However, we shouldn't conclude that a decreasing VaR is enough to deem that the risk managers are doing a good job. Shying away from investments to keep the VaR low will, by symmetry, mean less chance of great returns:
So, the Value at Risk tells us, in a single number, the worst that might happen with an investment or any risky situation, and allows us to easily compare, for example, two investments or cybersecurity policies. However, its greatest strength is also where it falls short. This particular number, while it gives an upper bound for the losses, is unable to tell us anything else about what happens in that 1% of the cases. The TVaR tries to fill this void, but it is still just a single number, meaning that it inherits this same weakness.