Black Monday and The Rise of Quants and Value at Risk in Financial Markets


Black Monday

Looking back, when most people think of the '70s, rock and roll comes to mind. The financial analysts of that era, however, viewed the decade from an entirely different perspective.

The global energy crisis of the '70s pushed the United States into a deep recession, marked by high inflation alongside high unemployment. Perhaps that is why so many turned to music and produced remarkable work. Better times eventually arrived: by the late '70s and early '80s, once the US had taken the steps needed to repair its economy, growth was back on track, which naturally lifted the stock markets.

Starting in the early '80s, the market rose steadily until mid-1987, with the Dow reaching its then all-time high of 2,722 that August, a gain of roughly 44% over 1986. At the same time, observers noticed signs of a slowing economy, and talk turned to whether the US could manage a 'soft landing'. After the peak the market paused, and the following three months were filled with mixed emotions: during small corrections, some traders opened fresh long positions while others unwound existing ones, so the market made little headway in either direction.

October 1987 turned out to be a month the global financial markets would not forget, with drama and horror unfolding in several corners of the world during its second week. At home, economic worries were mounting; offshore, Iran attacked American supertankers near Kuwaiti oil ports. Together, these events made that October one to remember.

On 14 October 1987, the day of reckoning arrived: the Dow plummeted nearly 4%, at the time a historic single-day fall.

On Thursday, 15 October 1987, the Dow fell another 2.5%, leaving it roughly 12% below its August high. Elsewhere in the world, an American supertanker waiting outside Kuwait's oil port was hit by an Iranian Silkworm missile.

Fear and panic swept over the global financial markets due to these two events.

On 16 October 1987, a highly unexpected and powerful storm swept through London, with winds reaching 175 km/h. The consequences for the City were severe: power outages were widespread, most notably across the financial heart of the south. London's bourses closed in response, and the Dow showed its displeasure by dropping nearly 5%, triggering fears worldwide. Those anxieties only deepened when the US Treasury Secretary voiced grave doubts about the economy's prospects.

On Monday, 19 October 1987, Hong Kong's markets began suffering immense losses, which quickly spread to London and then to the US. The Dow wrote itself firmly into the history books with its greatest one-day percentage drop ever: a staggering 22.61%. Amid the chaos and despair, the day was aptly named Black Monday.

The financial world had never seen such drastic moves before. This was its first brush with what we would now call a 'Black Swan' event. After the dust settled, an entirely new kind of trader emerged on Wall Street – 'The Quants'.

– The rise of quants

The financial fallout of October 1987 was felt across the globe, prompting regulators to take a closer look at system-wide shocks and at firms' ability to assess risk. Until then, few firms had seriously evaluated their chances of surviving such a catastrophe, since the odds of one occurring seemed virtually non-existent. But happen it did.


Financial institutions commonly take positions across multiple regions, with multiple counterparties, and in different types of assets. Evaluating the combined risk of such a book is daunting, yet the business came to see it as essential. Enter the 'Quants': mathematically minded people with doctorates in fields such as physics, mathematics, statistics and finance, recruited to build ever more complex models that could track positions and gauge risk levels almost instantaneously. Risk management soon became a core function at banks and trading firms on Wall Street, where dedicated teams were formed to measure and monitor it.


When he was the CEO of JP Morgan, Dennis Weatherstone commissioned the '4:15 PM report': a single-page document that gave him an overall view of the bank's risk. The report landed on his desk each day precisely 15 minutes after market close, and it gained such a reputation that JP Morgan shared the methodology and its fundamentals with other banking institutions. The team behind it was eventually spun off by JP Morgan into a separate business entity, the RiskMetrics Group, which was later acquired by MSCI.


The report included a metric known as ‘Value at Risk’ (VaR), which provides an indication of the worst loss that could be suffered in the event of an extreme and unforeseen occurrence.


This chapter will provide you with insights into Value at Risk.


– Normal Distribution


At the heart of the Value at Risk (VaR) approach lies the concept of the normal distribution, which I won't discuss again here. We will look at a "quick and dirty" approach to calculating portfolio VaR, something I have used for a few years and found effective for an uncomplicated equity 'buy and hold' portfolio.


 Portfolio VaR helps us answer the following questions –


In the event of a black swan happening tomorrow, which portfolio would sustain the greatest loss?


What is the likelihood of experiencing the most serious loss?


Portfolio VaR answers both. The steps to calculate it are straightforward:


Identify the distribution of the portfolio returns

Plot the distribution to check whether the portfolio returns are normally distributed

Sort the portfolio returns from highest to lowest

Consider the first 95% of the sorted observations

The lowest value within this first 95% is the portfolio VaR

The average of the remaining 5% (the worst observations) gives the Cumulative VaR, or CVaR
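The steps above can be sketched in a few lines of Python. This is a minimal illustration on hypothetical randomly generated returns, not the chapter's actual portfolio data:

```python
import numpy as np

# Hypothetical daily portfolio returns in percent -- a stand-in for the
# chapter's 126 observations, which are not reproduced here.
rng = np.random.default_rng(42)
returns = rng.normal(loc=0.05, scale=1.0, size=126)

# Sort from highest to lowest.
sorted_returns = np.sort(returns)[::-1]

# Keep the first 95% of observations (120 of 126); the worst 5% form the tail.
cutoff = round(len(sorted_returns) * 0.95)

var_95 = sorted_returns[:cutoff].min()    # least value within the 95%
cvar_95 = sorted_returns[cutoff:].mean()  # average of the worst 5%

print(f"VaR (95%): {var_95:.2f}%")
print(f"CVaR:      {cvar_95:.2f}%")
```

The same logic applies whatever the data source; only the `returns` array changes.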


To make this concrete, let's take the portfolio we've been discussing and calculate its Value at Risk.


– Distribution of portfolio returns


In this section, we will focus on the first two steps in calculating portfolio VaR. We need to determine the distribution of portfolio returns, either through normalized or direct returns. Recalling that we computed normalized returns when we discussed the "equity curve", let's apply them here.

The ‘EQ Curve’ sheet contains the portfolio returns needed. I’ve copied them onto a separate sheet for working out the Value at Risk. This is what it looks like now –

At this time, let us determine the portfolio returns’ distribution by taking the following steps:


Step 1 – Use the '=Max()' and '=Min()' functions in Excel to find the maximum and minimum return in the given time series of portfolio returns.

Step 2 – Count the number of data points using the '=Count()' function.

We have 126 data points for this exercise, as it covers only the last six months. Ideally we would work with at least one year's worth of data, but our goal here is simply to demonstrate the concept.


Step 3 – Bin width


We must construct a “bin array” containing the frequency of returns, thus allowing us to appreciate the number of times a certain return has occurred in the past 126 days. To do this, we begin by calculating the bin width with the following formula:


Bin width = (Difference between max and min return) / 25


I have picked 25 bins, which suits the amount of data we have.


= (3.26% – (-2.82%))/25
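The arithmetic is easy to check in code (values in %, taken from the text):

```python
# Bin width from the chapter's max and min returns.
max_return = 3.26
min_return = -2.82
num_bins = 25

bin_width = (max_return - min_return) / num_bins
print(round(bin_width, 4))  # 0.2432, i.e. each bin is about 0.24% wide
```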




Step 4 – Build the bin array


We begin with the lowest return and keep adding the bin width to it. For instance, the lowest return is -2.82%, so the next value in the array is


= -2.82% + 0.2432%


= -2.58%


Once we reach the maximum return of 3.26%, here is what the table looks like –


(Add img)


And here is the full list –


(Add img)
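The bin array can also be generated programmatically; a small sketch (values in %, taken from the text):

```python
min_return, max_return = -2.82, 3.26
bin_width = (max_return - min_return) / 25

# Start at the minimum return and step by the bin width until the
# maximum return is reached.
bins = []
edge = min_return
while edge <= max_return + 1e-9:  # small tolerance for floating-point drift
    bins.append(round(edge, 2))
    edge += bin_width

print(bins[:3])   # [-2.82, -2.58, -2.33]
print(len(bins))  # 26 edges covering the full range
```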


I'll present the data first and then discuss it. We need to calculate how frequently returns fall within each bin of the array.


(Add img)


Using the '=Frequency()' function in Excel, I found that of the 126 return observations, just one had a return of -2.82%. There were none between -2.82% and -2.58%, 13 between 0.34% and 0.58%, and so on.


To work out the frequency, select all the cells alongside the bin array, type '=Frequency()' into the formula bar, and supply the data array and the bin array as inputs. This is what it looks like –


(Add img)


Be sure to press 'Ctrl + Shift + Enter' together, not just 'Enter', since Frequency is an array formula. This fills in the frequency of returns for every bin at once.
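Excel's '=Frequency()' has a direct analogue in most languages; here is a sketch using NumPy's histogram on stand-in data (the chapter's raw returns aren't reproduced here):

```python
import numpy as np

# Stand-in for the 126 daily portfolio returns (in %).
rng = np.random.default_rng(7)
returns = rng.normal(loc=0.05, scale=1.0, size=126)

# 25 equal-width bins between the min and max return, mirroring the bin array.
counts, edges = np.histogram(returns, bins=25)

# Every observation falls into exactly one bin.
print(counts.sum())  # 126
```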


Step 5 – Plot the distribution


It's quite straightforward: the bin array holds the return levels, and the frequency tells us how many times each has occurred. Plotting one against the other gives us the frequency distribution; we then simply inspect the chart to judge whether it resembles a bell curve.


I can easily plot the frequency distribution by selecting all the data and using a bar chart. This is what it looks like –

We can observe quite easily that the portfolio returns take on a bell-shaped curve, suggesting that it is likely to be normally distributed.
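Outside Excel, the same bar chart can be produced with matplotlib (stand-in data again; the output file name is arbitrary):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Stand-in for the 126 daily portfolio returns (in %).
rng = np.random.default_rng(7)
returns = rng.normal(loc=0.05, scale=1.0, size=126)

counts, edges = np.histogram(returns, bins=25)
centers = (edges[:-1] + edges[1:]) / 2

plt.bar(centers, counts, width=edges[1] - edges[0])
plt.xlabel("Daily return (%)")
plt.ylabel("Frequency")
plt.title("Frequency distribution of portfolio returns")
plt.savefig("frequency_distribution.png")
```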

– Value at Risk

Now that we have ascertained the returns are normally distributed, it is time to calculate the Value at Risk. The next step is simple: re-sort the portfolio returns in descending order, from highest to lowest.


(Add img)


I used Excel's sort function for this and am now ready to calculate the portfolio Value at Risk (VaR) and its Cumulative Value at Risk (CVaR). I will explain the rationale behind this computation shortly.


Portfolio VaR is the least value within the first 95% of our 126 observations: the first 120 of them, to be precise. This works out to -1.48%.


The Cumulative VaR (CVaR) is the average of the final 6 observations, i.e. the worst 5%.


The CVaR is calculated to be -2.39%.


At this point you probably have a number of questions, so here is a quick list of them along with the answers.


Why did we plot the frequency distribution of the portfolio?

We plotted it to check, visually, whether the portfolio returns are normally distributed.

Why should we check for normal distribution?

If the data we are studying is normally distributed, then the characteristics of normal distribution can be applied to it.

What properties do normally distributed data possess?

68% of the data falls within one standard deviation of the mean, 95% within two, and 99.7% within three.
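These percentages are easy to verify empirically; a quick simulation (not part of the chapter's workbook):

```python
import numpy as np

# Draw a large sample from a standard normal distribution.
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=1_000_000)

# Fraction of observations within 1, 2 and 3 standard deviations of the mean.
for k in (1, 2, 3):
    share = np.mean(np.abs(x) <= k)
    print(f"within {k} sd: {share:.1%}")
# roughly 68.3%, 95.4%, 99.7%
```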

What was the purpose of sorting the data?

We have ascertained that the data set is normally distributed. Thus, sorting it from highest to lowest allows us to view the returns in an organized fashion and focus on the worst case scenario.

What was the purpose of only including 95% of observations?

The normal distribution tells us that 95% of the data lies within two standard deviations. So on any given day, the portfolio's return will, with 95% likelihood, fall within this set of observations. The lowest value among these observations is therefore our worst-case potential loss, or Value at Risk.

What does the VaR of -1.48% indicate?

We can be 95% confident that the worst-case loss for this portfolio is -1.48%.

Couldn't the loss turn out to be worse than -1.48%?

It could. For such extreme events, CVaR provides the insight: in the worst 5% of cases, the portfolio's loss averages -2.39%.

Is it possible for the shortfall to be even worse than -2.39%?

It is possible, but highly unlikely.