## 1. Introduction

[2] It is intuitive that a warming (cooling) climate will lead to an increasing frequency of extreme high (low) temperature events and a concurrent reduction in the frequency of the opposite extreme. Because extreme events are, by definition, rare, statistical analysis of trends is difficult and has generally been limited to only moderately extreme events (*i.e.*, those in the top or bottom 10%) [*Intergovernmental Panel on Climate Change*, 2007]. For record-breaking temperatures, which represent the most extreme events, the detection of trends is further confounded by the inherent decrease, for a stationary climate, in the frequency of record-breaking events as the number of years considered increases.

[3] The probability (*P*_{n}) of breaking a record in a stationary time series of independent, identically distributed (i.i.d.) random variables is a simple, well-known function of the length of the series: *P*_{n} = 1/*n*, where *n* is the number of observations in the series to date [*Glick*, 1978]. Furthermore, the expected number of records in a series of *N* observations is the harmonic sum Σ_{*n*=1}^{*N*} 1/*n*. Thus, for a time series of 30 i.i.d. observations, 3.99 records would be expected; it would take an additional 52 observations (*i.e.*, *N* = 82) to increase the expected number of records by one. Previous studies have employed complex statistical analyses to simulate the behavior of record-breaking temperatures. *Redner and Petersen* [2006] and *Benestad* [2004] used Monte Carlo simulations of Gaussian time series to predict the probability of breaking a record and applied these predictions to real data. *Anderson and Kostinski* [2010] examined the variability of record temperatures by removing a mean trend and extracting the variance alone. *Wergen and Krug* [2010] identified a relationship between a drifting mean and the variance that governs the expected increase in the record rate. Several studies, including *Anderson and Kostinski* [2010] and *Benestad* [2004], exploit time reversibility to evaluate record temperature trends both forward and backward in time. *Meehl et al.* [2009] identified an excess of record high temperatures relative to record low temperatures over the United States since 1950, finding an approximately 2:1 ratio of record highs to record lows. Their ratio method, however, must aggregate data from a large number of stations to avoid division by zero in years when no low temperature records are broken.
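These record statistics can be verified with a short Monte Carlo sketch (the function names here are illustrative, not taken from the cited studies): the simulated mean number of records in a 30-observation i.i.d. series should approach the harmonic sum *H*_{30} ≈ 3.99.

```python
import random

def expected_records(N):
    """Expected number of records in N i.i.d. observations:
    the harmonic sum H_N = 1 + 1/2 + ... + 1/N."""
    return sum(1.0 / n for n in range(1, N + 1))

def count_records(series):
    """Count record highs in a series; the first value is always a record."""
    records, running_max = 0, float("-inf")
    for x in series:
        if x > running_max:
            records += 1
            running_max = x
    return records

random.seed(0)
trials, N = 20000, 30
simulated = sum(count_records([random.gauss(0.0, 1.0) for _ in range(N)])
                for _ in range(trials)) / trials
# simulated should be close to expected_records(30), about 3.99;
# expected_records(82) is about 4.99, i.e., one record more.
```

Because record probabilities are distribution-free for i.i.d. data, replacing `random.gauss` with any other continuous distribution leaves the expected counts unchanged.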

[4] For a stationary climate, temperatures are expected to exceed a fixed threshold at a constant rate over time; if the climate is changing, that rate will change. Any such trend, however, could result from changes in the mean, changes in the shape of the probability density function, or some combination of the two. Record-breaking events, in contrast, occur at a rate independent of the underlying probability density function, depending – for a stationary climate – only on the number of prior years [*Coumou and Rahmstorf*, 2012]. As the length of record increases, therefore, the probability of breaking a record decreases and the expected number of daily records set in a given year approaches zero, making it difficult to detect trends in the number of records due to climate change. This is especially true for records that are expected to decrease in number, such as minimum temperature records in a warming climate. Our method combines these two approaches by setting a threshold value with a known – and constant – probability of exceedance. The uniqueness of our study lies in its simplicity: by comparing annual numbers of record-breaking temperatures to an established baseline, trends are revealed with little effort or mathematical manipulation. Moreover, the method can be applied to individual stations as well as to various geographic groupings to identify trends.
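A minimal sketch of the baseline idea, under the simplifying assumption that the threshold for each calendar day is that day's maximum over a fixed baseline period (the function and variable names are hypothetical): for a stationary i.i.d. series, each day of a later year exceeds a 30-year baseline record with probability 1/31 by exchangeability, so the expected annual count of record days is constant at roughly 365/31 ≈ 12, whatever the distribution's shape.

```python
import random

def annual_exceedances(baseline, year):
    """Count calendar days on which `year` exceeds the baseline-period
    record (the maximum over all baseline years for that day)."""
    day_records = [max(vals) for vals in zip(*baseline)]
    return sum(1 for value, rec in zip(year, day_records) if value > rec)

random.seed(1)
n_base, n_days = 30, 365
# Stationary climate: every year drawn from the same distribution.
baseline = [[random.gauss(0.0, 1.0) for _ in range(n_days)]
            for _ in range(n_base)]
# Average the record-day count over many stationary "later" years;
# it should hover near the constant expectation 365/31, about 12.
mean_count = sum(
    annual_exceedances(baseline,
                       [random.gauss(0.0, 1.0) for _ in range(n_days)])
    for _ in range(200)) / 200
```

A warming trend would be added by shifting the mean of the later years upward; the annual exceedance count would then rise above this constant baseline expectation, which is the signal the method detects.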