Have London's roads become more dangerous for cyclists?



Cities are dangerous places to cycle – and a sudden spike in deaths has brought a media furore. But do they mean that something on the streets has changed? Jody Aberdein and David Spiegelhalter investigate.

Between November 5th and November 13th, 2013, a total of six cyclists were killed whilst cycling in London. These six women and men, simply going about their daily business, had their lives cut short in their prime.

The reaction to such a terrible succession of events has been at times visceral. There has been heated debate about how safe London's streets are for cycling, with accusations of poor driving, poor cycling, poor infrastructure, and poor policy. Even the police commissioner for London, Sir Bernard Hogan-Howe, has expressed his doubts about the sanity of cycling in our capital city1. Ashok Sinha, chief executive of the London Cycling Campaign, noted that such a co-occurrence of fatalities was unprecedented, and demanded urgent action2.

The question remains: as an urban cyclist, should you hang up your hat and fluoro? Should you forsake two wheels for four, or more, in order to preserve life and limb? Many will have started their journeys to work this month with more trepidation than they did in November.

Undoubtedly cycling in an urban environment such as London is not without risk. Surely, though, with six deaths in less than a fortnight this danger must have increased? Has something new happened to make cycling a now life-threatening means of getting about? Let us try to answer this question by considering how unusual six sequential deaths might be.

Randomness is clumpy, and humans are excellent at pattern recognition

The issue with rare events which are subject to chance is that it is very hard as an observer to know whether there is an underlying pattern. We have, for a variety of reasons, a great propensity to attribute cause to chance occurrence. This tendency is never more active than when there is an emotive and high-stakes outcome. Fortunately, through statistical inquiry we can check if our gut reaction is correct, or if we are seeing a pattern in the vagaries of chance.

What do we know, and what can we infer about London cyclist deaths?

Data are available on road traffic accidents for the whole of the UK for the last several years from the Department for Transport3. From these we can see that there have been on average 0.6 cyclists killed per fortnight within the 32 London boroughs between 2005 and 2012. Six is a lot more than 0.6, so at first sight we might feel justifiably concerned. However, this simple average does not tell us anything about the variation from one month to the next. To get an idea of how evenly or otherwise they are spread out, we have plotted the number of deaths in each calendar fortnight. From Figure 1 it is apparent that some fortnights have more deaths than others, but even then in 8 years we have had only five fortnights with 3 deaths in them. Six deaths must mean something then, surely?
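The tally itself is simple bookkeeping: assign each fatality to a consecutive 14-day bin counted from a start date, then count per bin. A minimal sketch in Python, using a handful of made-up dates in place of the real Department for Transport records:

```python
from datetime import date
from collections import Counter

# Hypothetical fatality dates, standing in for the DfT's records
fatalities = [date(2005, 1, 9), date(2005, 1, 20), date(2005, 3, 2),
              date(2005, 3, 8), date(2005, 5, 30)]

start = date(2005, 1, 1)
# Assign each death to a consecutive 14-day bin counted from the start date
bins = Counter((d - start).days // 14 for d in fatalities)

# Number of calendar fortnights between 2005 and the end of 2012
n_fortnights = ((date(2012, 12, 31) - start).days // 14) + 1
mean_per_fortnight = sum(bins.values()) / n_fortnights
print(n_fortnights, mean_per_fortnight)
```

With the full data set, the same binning produces the fortnightly counts plotted in Figure 1.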

Figure 1.

London cyclist fatalities per fortnight, 2005–2013. Only 5 fortnightly periods have three cycle deaths in them; none have as many as six

We can get a good approximation to cycling deaths by fitting what is called a Poisson distribution. This is because the factors that lead to cyclist deaths, and the chance of an individual cyclist death, match quite closely a Poisson process – one in which events happen randomly, the timing of one not depending on the timing of another. Customers joining a queue at a shop form a Poisson process. Examine the frequencies of total deaths per calendar fortnight (Figure 2, right), and compare them with those expected from a Poisson model (Figure 2, left). As you can see, for both the model and in actuality, most fortnights have no cycling deaths, some have one or two, but a few have more. A formal test of how well the data match a theoretical Poisson distribution gives us no reason to question our assumptions, at least with the Department for Transport data (chi-squared goodness-of-fit test for deaths ∼ Pois(λ = 0.6), p = 0.44).
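The goodness-of-fit check can be reproduced in a few lines. The observed counts below are illustrative, not the exact Department for Transport tallies; the expected frequencies come from Pois(0.6), with fortnights of three or more deaths pooled into one cell, and one degree of freedom deducted because λ is estimated from the data:

```python
import numpy as np
from scipy.stats import poisson, chisquare

# Illustrative counts of fortnights with 0, 1, 2 and 3+ deaths
observed = np.array([115, 68, 21, 5])
n = observed.sum()
lam = 0.6

# Expected cell probabilities under Pois(0.6); the last cell pools 3+ deaths
p = np.append(poisson.pmf([0, 1, 2], lam), poisson.sf(2, lam))
expected = n * p

# ddof=1 because the rate lambda was estimated from the same data
stat, pval = chisquare(observed, expected, ddof=1)
print(round(pval, 2))
```

A large p-value, as here, gives no reason to reject the Poisson assumption.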

Figure 2.

Expected (left) and actual (right) deaths per fortnight from Poisson distribution model (red), and from Department for Transport data (blue)

You might look at that graph and wonder about the chance of six deaths, though. The chance of 3 or more deaths in a fortnight is already pretty small; 4 or 5 will surely be smaller, and 6 must be vanishing. Surely something must have changed, then? Or, as a statistician might say, the new spate of deaths calls into question our assumed model. Perhaps the rate of deaths has increased – that is, perhaps the risk has become greater?
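The single-fortnight tail probabilities are easy to compute directly, and they do shrink very quickly – though, as the next section argues, the chance for one fixed fortnight is not yet the right quantity to look at:

```python
from scipy.stats import poisson

lam = 0.6  # historical mean deaths per fortnight
for k in (3, 4, 5, 6):
    # P(at least k deaths in a single, fixed fortnight) under Pois(0.6)
    print(k, poisson.sf(k - 1, lam))
```

For six or more deaths in one fixed fortnight the probability is of the order of a few in a hundred thousand.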

We have to be careful not to be wise after the event

Let us think about the reason the question has been raised. It is because of a noticeable event. The problem is that there are a large number of other events, which by virtue of their benignity just go unnoticed. Unless we account for all events, however, we cannot easily get a handle on the true risk. How many newspaper stories have you read which led with ‘No cyclists killed in London in the last 14 days, Boris is jubilant'? (Boris, for non-Londoners, is the somewhat flamboyant mayor. Londoners have Boris bikes, a sharing system introduced under his reign; they also have Boris buses, and even potentially a Boris island and airport.) So in order to judge the rarity of a run of six deaths in a given time window, we must count all potential time windows of the same length in our given period of interest. Note that we must use not just calendar fortnights, which are sequential, but all fortnight periods. A period of 15 days starting on a Monday has two such fortnights within it: the one that starts on Monday, and the one that starts on the next day and runs from Tuesday.

Figure 3 shows a plot using exactly the same fatality data, but this time tallying up all deaths in a rolling 14-day period, that gently scrolls its way from January 1st, 2005 to December 17th, 2012, thus covering the whole 8 years. By accounting for all the 14-day time periods we still see that none in the past 8 years have exceeded 3 deaths. The number of periods with 3 deaths is, rather surprisingly, 52. Hence we see the importance of counting all periods.
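The rolling tally is a one-line convolution once the deaths are laid out as counts per calendar day. The sketch below uses simulated daily counts at the historical rate as a stand-in for the real fatality dates:

```python
import numpy as np

rng = np.random.default_rng(42)
days = 8 * 365
# Simulated daily death counts at the historical rate of 0.6 per 14 days;
# with the DfT data these would be actual counts per calendar day
daily = rng.poisson(0.6 / 14, size=days)

# Total deaths in every rolling 14-day window, not just calendar fortnights
window = np.convolve(daily, np.ones(14, dtype=int), mode="valid")
print(len(window), window.max())
```

Note how many more windows there are than calendar fortnights – which is exactly why counting all of them matters.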

Figure 3.

Cyclist fatalities in a rolling 14-day window, 2005–2013

We can use the Poisson assumption, together with some more complicated statistical techniques, to make an estimate of the chance of seeing six or more deaths in any 14-day period over 8 years. The calculation is harder than before: the counts in overlapping 14-day periods are not independent of one another, so we cannot simply multiply probabilities across periods. Instead we use a very good approximation to the distribution of the maximum count in any window, a so-called “scan statistic”4. The reference is to the original derivation of the method; the formulas from that paper can be found at http://understandinguncertainty.org/when-cluster-real-cluster.

It turns out that over 8 years the chance of our seeing six deaths in a 14-day period is around 2.5%. The usual level of “significance” in such analyses is set, arbitrarily, at 5%: an observation counts as significant if the chance of seeing something at least as extreme under the baseline assumption (here deaths being Poisson with mean 0.6) is less than 5%. Hence six deaths is a significant finding and should lead us to question our assumption. That is not to say that a city where cyclist deaths are Poisson with mean 0.6 could not from time to time throw up six in a fortnight. It is just that fewer than 1 in 40 such 8-year periods will do so.
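Readers without the scan-statistic formulas to hand can check the order of magnitude by simulation: generate 8 years of daily deaths at the historical rate many times over, and count how often any rolling fortnight reaches six. A sketch, with the number of simulations kept modest:

```python
import numpy as np

rng = np.random.default_rng(0)
days, rate = 8 * 365, 0.6 / 14   # 8 years at 0.6 deaths per fortnight
kernel = np.ones(14, dtype=int)

n_sim, hits = 5000, 0
for _ in range(n_sim):
    daily = rng.poisson(rate, size=days)
    # Does any rolling 14-day window in this simulated history hold 6+ deaths?
    if np.convolve(daily, kernel, mode="valid").max() >= 6:
        hits += 1

# A small probability, of the same few-percent order as the scan statistic
print(hits / n_sim)
```

The simulated proportion lands in the same few-percent region as the scan-statistic approximation, and well below the naive answer obtained by treating the overlapping windows as independent.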

Back in 2008, on a single day, July 10th, four people were murdered – all stabbed – in separate, unconnected incidents in London. There are about 170 murders a year in London; on most days there are no murders at all. Four on one day led to media headlines of the “London, Murder Capital” variety. But a similar analysis by David Spiegelhalter of the daily variation in murder figures (Significance, March 2009) found that four murders on one day could be expected about once every 3 years – so nothing terribly unusual had happened, and the newspaper headlines were merely announcing nothing very remarkable.
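That murder calculation can be redone in a couple of lines. Treating murders as a Poisson process at roughly 170 per year, the implied waiting time between four-murder days comes out at a few years – the same order as the figure quoted, and nothing very remarkable:

```python
from scipy.stats import poisson

murders_per_day = 170 / 365              # roughly 170 murders a year in London
p_day = poisson.sf(3, murders_per_day)   # P(4 or more murders on a given day)
return_period_years = 1 / (p_day * 365)  # expected years between such days
print(round(return_period_years, 1))
```
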

This case is different: six cycle deaths in two weeks is remarkable. There are numerous potential explanations for seeing so high a figure, including that our original model is wrong and that despite our checks the deaths are not Poisson distributed. Perhaps also there has been a change in the risk to cyclists, which may be due in turn to a change in traffic, a change in cyclist behaviour, or a change in infrastructure. Whatever the explanation, these data should give cause for concern to the transport authorities and prompt further examination of cycling safety in London.

A final word on cycling and risk

Although we have shown that perhaps we should be concerned about the recent increase in risk to cyclists in London, we would not wish to discourage cycling in our capital. In London there are around half a million cycle trips a day, or 7 million a fortnight. At the historical rate of 0.6 deaths per fortnight, that is one death on average every 12 million trips. It is in absolute terms a tiny chance.
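The arithmetic behind that figure is simply the fortnightly trip total divided by the fortnightly death rate:

```python
trips_per_fortnight = 500_000 * 14   # about half a million cycle trips a day
deaths_per_fortnight = 0.6           # historical average from the DfT data
trips_per_death = trips_per_fortnight / deaths_per_fortnight
print(f"{trips_per_death:,.0f}")     # roughly one death per 12 million trips
```
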

Set against it, the health benefits of cycling are so huge that overall it greatly reduces your chances of death, particularly from cardiovascular disease such as strokes and heart attacks5 – and this benefit applies to almost every cyclist. The real danger of our susceptibility to seeing patterns in randomness is that it may, perversely, lead us to make a decision that puts us at greater risk. Only through a thorough look at the risks can we avoid this.



Whilst cycling in London is still a safe thing to do, it could be safer. In a recent European comparison6 the risk per distance cycled in the UK, 22.4 deaths per billion kilometres, was found to be far higher than in the Netherlands, where the figure is 12.4. Holland is a society which has truly taken cycling to heart. Perhaps it shows.