Published by Dave Grimshaw on
The CMI Mortality Projections Model has near-universal use in the UK for external communication of longevity assumptions. It is a deterministic model – producing a single estimate from a set of assumptions – and requires the user to input at least the Long Term Rate of Mortality Improvement, for which there is no default value, making it an opinion model.
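The deterministic structure can be sketched in a few lines. This is a simplified illustration of the general idea – current improvement rates blending towards a user-supplied long-term rate over an assumed convergence period – not the CMI Model's actual mechanism, which converges in a more sophisticated way by age and year of birth. All figures are hypothetical.

```python
def projected_improvement(year, initial_rate, long_term_rate, convergence_years=20):
    """Annual mortality improvement `year` years from now, blending
    linearly from the current rate to the user's long-term rate."""
    if year >= convergence_years:
        return long_term_rate
    w = year / convergence_years          # weight moves from 0 to 1
    return (1 - w) * initial_rate + w * long_term_rate

# There is no default long-term rate: the user must supply an opinion.
print(projected_improvement(0, 0.02, 0.015))   # 0.02
print(projected_improvement(20, 0.02, 0.015))  # 0.015
```

Note that the whole projection is pinned down once the assumptions are chosen – rerunning it produces the same single estimate, with no measure of uncertainty.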
“A key feature of cause of death models is that the sum of the projections by cause results in lower improvements than projecting all-causes mortality.”
Analysis of mortality by cause of death can be used to inform the parameterisation of the CMI Model; useful insights include the estimation of improvements due to reducing smoking prevalence, as described in a recent article in The Actuary by one of Barnett Waddingham’s specialist longevity actuaries.
However, cause of death models are open to a number of criticisms and challenges; for example, the volatility arising from changes to ICD coding rules. These can be addressed, but only by adding complexity to the model.
A key feature of cause of death models is that the sum of the projections by cause results in lower improvements than projecting all-causes mortality. This arises because the causes that have improved most rapidly – circulatory diseases in the UK in recent years – have less impact in the future as their proportion of deaths reduces. Whilst this feature has been levelled as a criticism of cause of death models, it could instead be considered a weakness of projections using all-causes mortality data, which implicitly assume that new drivers will arise in future to provide the same level of improvement as historical reductions in circulatory mortality.
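The arithmetic behind this feature can be seen in a two-cause sketch with hypothetical figures: circulatory mortality improving at 4% a year and all other causes at 1%. The aggregate improvement starts near the weighted average of the two rates but declines as the fast-improving cause shrinks to a smaller share of deaths.

```python
# Hypothetical two-cause example (illustrative figures, not real UK data):
# circulatory mortality improves at 4% a year, everything else at 1%.
circ_imp, other_imp = 0.04, 0.01
m_circ, m_other = 0.4, 0.6            # initial split of mortality by cause

rates = []
for year in range(30):
    total_before = m_circ + m_other
    m_circ *= 1 - circ_imp            # each cause improves at a constant rate
    m_other *= 1 - other_imp
    rates.append(1 - (m_circ + m_other) / total_before)

print(f"year  1 all-cause improvement: {rates[0]:.2%}")
print(f"year 30 all-cause improvement: {rates[-1]:.2%}")
```

Here the all-cause improvement falls from about 2.2% in year 1 to about 1.6% by year 30, even though each cause improves at a constant rate – which is why summing cause-level projections gives lower improvements than extrapolating the all-cause trend.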
A range of stochastic models is available, often used to provide a measure of the uncertainty around a best estimate projection, but many are also open to challenge. For example, simple random walk processes cannot distinguish short-term influences on mortality from long-term ones. As a result, many commonly used models may overstate volatility at the oldest ages – where short-term volatility from harsh winters and flu epidemics is high – and understate it at younger ages.
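A toy simulation (with purely illustrative volatilities) shows the mechanism: if annual variation is partly transient – winter and flu shocks that do not persist – but a plain random walk is calibrated to the total one-year variance, it treats every shock as permanent, so the uncertainty it projects over long horizons is inflated.

```python
import numpy as np

rng = np.random.default_rng(0)
years, sims = 30, 2000

# Illustrative volatilities for an "old age" population: large transient
# shocks (harsh winters, flu epidemics) on top of a fairly stable trend.
sigma_transient, sigma_trend = 0.02, 0.005

# "True" process: only trend shocks accumulate; transient shocks wash out.
trend = np.cumsum(rng.normal(0, sigma_trend, (sims, years)), axis=1)
truth = trend + rng.normal(0, sigma_transient, (sims, years))

# A random walk calibrated to the one-year variance treats ALL of it as
# persistent, so every shock accumulates over the projection.
sigma_rw = np.sqrt(sigma_transient**2 + sigma_trend**2)
rw = np.cumsum(rng.normal(0, sigma_rw, (sims, years)), axis=1)

print(f"30-year s.d., true process : {truth[:, -1].std():.3f}")
print(f"30-year s.d., random walk  : {rw[:, -1].std():.3f}")
```

With these figures the random walk's 30-year standard deviation is roughly three times that of the true process – the overstatement is worst exactly where short-term volatility dominates.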
It is important to use simple, easy-to-understand models where possible. One approach is to introduce a 'complexity budget' for longevity modelling and only 'spend' it where it adds most value. It is also important to recognise that any model is only as good as the data used to calibrate it. A number of concerns exist over UK population data, the data most commonly used in projection models, and it is essential to understand its weaknesses in order to avoid inaccurate projections.
“Imagine how much harder physics would be if electrons had feelings!” – Richard Feynman, the Nobel prize-winning physicist
This blog post was written by Dave Grimshaw and Jon Palin who discussed and compared a range of methods for projecting future improvements in mortality at the IFoA’s Life Conference on 20 November. The session was aimed at delegates without specialist longevity expertise, who need to assess proposed assumptions.