Once the frequency and severity distributions for loss events have been determined, which of the following is an accurate description of the process to determine a full loss distribution for operational risk?
Answer : B
Once the frequency distribution has been determined (for example, using the binomial, Poisson or negative binomial distributions) and the severity distribution has also been determined (for example, using the lognormal, gamma or other functions), the loss distribution can be produced by a Monte Carlo simulation using successive drawings from each of these two distributions. It is assumed that severity and frequency are independent of each other. The result is a distribution of losses for operational risk, from which the Op Risk VaR can be determined using the appropriate percentile. Therefore Choice 'b' is the correct answer.
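The Monte Carlo process described above can be sketched as follows; the Poisson and lognormal parameters are illustrative assumptions, not values from the question:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative (assumed) parameters:
lam = 5.0               # Poisson frequency: on average 5 loss events per year
mu, sigma = 10.0, 1.5   # lognormal severity parameters

n_sims = 100_000
annual_losses = np.empty(n_sims)
for i in range(n_sims):
    n = rng.poisson(lam)                                  # draw the number of losses in the year
    annual_losses[i] = rng.lognormal(mu, sigma, n).sum()  # draw n severities and sum them

# Op Risk VaR as the 99.9th percentile of the simulated loss distribution
op_var = np.percentile(annual_losses, 99.9)
```

Each simulated year is one data point of the loss distribution; note that drawing the count and the severities separately embodies the frequency-severity independence assumption.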
When building an operational loss distribution by combining a loss frequency distribution and a loss severity distribution, it is assumed that:
I. The severity of losses is conditional upon the number of loss events
II. The frequency of losses is independent of the severity of the losses
III. Both the frequency and severity of loss events are dependent upon the state of internal controls in the bank
Answer : B
When an operational loss frequency distribution (which, for example, may be based upon a Poisson distribution) is combined with a loss severity distribution (for example, based upon a lognormal distribution), it is assumed that the frequency of losses and the severity of the losses are completely independent and do not impact each other. Therefore statement II is correct, and the others are not valid assumptions underlying the operational loss distribution.
Which of the following is not one of the 'three pillars' specified in the Basel accord:
Answer : C
The three pillars are minimum capital requirements, supervisory review and market discipline. National regulation is not a pillar described under the accord. Choice 'c' is the correct answer.
For a loan portfolio, unexpected losses are charged against:
Answer : B
Credit reserves are created in respect of expected losses, which are considered the cost of doing business. Unexpected losses are borne by economic credit capital, which is a part of economic capital. This question is a bit nuanced - 'economic capital' would generally be a good answer as well. However, taking a rather beady-eyed view of the terminology and distinguishing 'economic credit capital' (a subset of 'economic capital') from 'economic capital' itself, we can say that 'economic credit capital' is the more appropriate choice, as the question relates to credit losses.
Under the KMV Moody's approach to credit risk measurement, which of the following expressions describes the expected 'default point' value of assets at which the firm may be expected to default?
Answer : C
A situation where a firm has more liabilities than assets does not necessarily imply default, so long as the firm is able to pay its obligations when they come due. Therefore, short term debts have a greater bearing on a firm's default than longer term debt. However, this is not to say that merely having enough to pay off the short term debts (i.e. debts due within one year) is enough to avoid default. Over time, long term debt will also be turning into short term debt, and lenders may be unwilling to let the firm roll over its liabilities without considering the long term debt. The KMV approach considers the entire short term debt plus half of the long term debt as the critical value of assets below which default will be triggered. Therefore Choice 'c' is the correct answer.
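The default point calculation can be sketched as a one-line function; the debt figures in the example are purely illustrative:

```python
def kmv_default_point(short_term_debt: float, long_term_debt: float) -> float:
    """KMV/Moody's default point: all short-term debt plus half of long-term debt."""
    return short_term_debt + 0.5 * long_term_debt

# Example: short-term debt of 40 and long-term debt of 60
dp = kmv_default_point(40.0, 60.0)  # 40 + 0.5 * 60 = 70.0
```

If the market value of assets falls below this default point, the firm is expected to default under the KMV framework.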
Which of the following steps are required for computing the aggregate distribution for a UoM for operational risk once loss frequency and severity curves have been estimated:
1. Simulate number of losses based on the frequency distribution
2. Simulate the dollar value of the losses from the severity distribution
3. Simulate random number from the copula used to model dependence between the UoMs
4. Compute dependent losses from aggregate distribution curves
Answer : A
A recap would be in order here: calculating operational risk capital is a multi-step process.
First, we fit curves to estimate the parameters of our chosen distribution types for frequency (e.g., Poisson) and severity (e.g., lognormal). Note that these curves are fitted at the UoM level - which is the lowest level of granularity at which modeling is carried out. Since there are many UoMs, there are many frequency and severity distributions. However, what we are interested in is the loss distribution for the entire bank, from which the 99.9th percentile loss can be calculated. Getting from the multiple frequency and severity distributions to that single bank-wide distribution is a two step process:
- Step 1: Calculate the aggregate loss distribution for each UoM. Each loss distribution is based upon an underlying frequency and severity distribution.
- Step 2: Combine the multiple loss distributions after considering the dependence between the different UoMs. The 'dependence' recognizes that the various UoMs are not completely independent, i.e. the loss distributions are not simply additive: there is a sort of diversification benefit in the sense that not all types of losses can occur at once, and the joint probabilities of the different losses make the combined total less than the sum of the parts.
Step 1 requires simulating a number, say n, of losses that occur in a given year from the frequency distribution. Then n losses are drawn from the severity distribution, and the total loss for the year is the sum of these losses. This becomes one data point. This process of simulating the number of losses and then drawing that many losses is repeated a large number of times to get the aggregate loss distribution for the UoM.
Step 2 requires taking the different loss distributions from Step 1 and combining them considering the dependence between the events. The correlations between the losses are described by a 'copula', and combined together mathematically to get a single loss distribution for the entire bank. This allows the 99.9th percentile loss to be calculated.
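Step 2 can be sketched with a Gaussian copula joining two UoM loss distributions; the lognormal parameters and the correlation of 0.3 are illustrative assumptions, and the pre-simulated per-UoM losses here stand in for the output of Step 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims = 100_000

# Stand-ins for Step 1 output: simulated aggregate annual losses per UoM,
# sorted so that a rank maps directly to an empirical quantile.
uom1 = np.sort(rng.lognormal(10.0, 1.0, n_sims))
uom2 = np.sort(rng.lognormal(11.0, 1.5, n_sims))

# Gaussian copula with an assumed correlation of 0.3 between the two UoMs
rho = 0.3
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=n_sims)

# Ranks of the correlated normals act as empirical quantile indices
# (the normal CDF is monotone, so ranks preserve the dependence structure)
idx = z.argsort(axis=0).argsort(axis=0)
bank_losses = uom1[idx[:, 0]] + uom2[idx[:, 1]]

# 99.9th percentile loss for the combined (bank-wide) distribution
var_999 = np.percentile(bank_losses, 99.9)
```

Because the correlation is below 1, the 99.9th percentile of the combined distribution is less than the sum of the individual 99.9th percentiles - the diversification benefit described above.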
Which of the following statements are true in relation to Historical Simulation VaR?
I. Historical Simulation VaR assumes returns are normally distributed but have fat tails
II. It uses full revaluation, as opposed to delta or delta-gamma approximations
III. A correlation matrix is constructed using historical scenarios
IV. It particularly suits new products that may not have a long time series of historical data available
Answer : A
Historical Simulation VaR is conceptually very straightforward: actual prices as seen during the observation period (1 year, 2 years, or other) become the 'scenarios' forming the basis of the valuation of the portfolio. For each scenario, full revaluation is performed, and a P&L data set becomes available from which the desired loss quantile can be extracted.
Historical simulation is based upon actually observed prices over a selected historical period, therefore no distributional assumptions are required. The data is what it is, and the empirical data itself forms the distribution. Statement I is therefore not correct.
It uses full revaluation for each historical scenario, therefore statement II is correct.
Since the prices are taken from actual historical observations, a correlation matrix is not required at all. Statement III is therefore incorrect (it would be true for Monte Carlo and parametric VaR).
Historical simulation VaR suffers from the limitation that if enough representative data points are not available during the historical observation period from which the scenarios are drawn, the results will be inaccurate. This is likely to be the case for new products. Therefore Statement IV is incorrect.
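The historical simulation mechanics described above can be sketched as follows; the return history and positions are illustrative assumptions, and the linear portfolio makes 'full revaluation' under each scenario a simple dot product:

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed toy data: 500 daily historical returns for two assets
hist_returns = rng.normal(0.0, 0.01, size=(500, 2))

# Current dollar exposure to each asset
positions = np.array([1_000_000.0, 500_000.0])

# Each historical day is a scenario; the portfolio is revalued under each.
# No distributional assumption and no correlation matrix is needed -
# co-movements are embedded in the historical scenarios themselves.
pnl = hist_returns @ positions

# 1-day 99% VaR: the loss at the 1st percentile of the empirical P&L
var_99 = -np.percentile(pnl, 1)
```

With only 500 scenarios the tail estimate rests on a handful of observations, which illustrates why a short or unrepresentative history (as for a new product) makes the result unreliable.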