Keywords:

  • malaria;
  • Ethiopia;
  • epidemic early warning system;
  • forecasting;
  • time series analysis

Summary

The aim of this study was to assess the accuracy of different methods of forecasting malaria incidence from historical morbidity patterns in areas with unstable transmission. We tested five methods using incidence data reported from health facilities in 20 areas in central and north-western Ethiopia. The accuracy of each method was determined by calculating errors resulting from the difference between observed incidence and corresponding forecasts obtained for prediction intervals of up to 12 months. Simple seasonal adjustment methods outperformed a statistically more advanced autoregressive integrated moving average method. In particular, a seasonal adjustment method that uses mean deviation of the last three observations from expected seasonal values consistently produced the best forecasts. Using 3 years' observation to generate forecasts with this method gave lower errors than shorter or longer periods. Incidence during the rainy months of June–August was the most predictable with this method. Forecasts for the normally dry months, particularly December–February, were less accurate. The study shows the limitations of forecasting incidence from historical morbidity patterns alone, and indicates the need for improved epidemic early warning by incorporating external predictors such as meteorological factors.


Introduction

Malaria affects mainly children in highly endemic areas where adults have (partial) immunity to the disease. In contrast, in areas of low endemicity, the disease may affect all age groups. In such areas, changes in weather conditions may lead to major epidemics.

In Ethiopia, such epidemics have from time to time inflicted high mortality on the largely non-immune population. A well-documented major epidemic in 1958 resulted in an estimated 3 million cases, of whom 150 000 died (Fontaine et al. 1961). Large-scale epidemics of this kind have recurred at irregular intervals of several years; for example, during the 1980s and 1990s, severe epidemics were recorded in 1981, 1988, 1991, 1992 and 1998. It had not been possible to forecast any of these events, especially in highland areas that were normally considered non-malarious or had very low transmission. For example, in 1988, a major epidemic affected most highland areas in the country following normal or below-normal transmission the previous year (manuscript in preparation). A similar epidemic in 1998 resulted in high mortality in highland areas where the disease had been absent for years.

In areas with unstable transmission, setting up systems for epidemic early warning has become essential. Quantifying the effect of epidemic-precipitating factors such as weather patterns, and using it for early warning, has been difficult in epidemic-prone areas where slight changes may trigger devastating epidemics. Currently there are efforts to develop early warning systems that use weather monitoring, climate forecasts and other factors (Thomson & Connor 2001). In some countries, epidemics have been associated with the occurrence of the weather phenomenon known as El Niño (Bouma & van der Kaay 1996; Bouma & Dye 1997). While predicting El Niño years is not a simple task, even such predictions would be too global in nature to be useful as an early warning for specific areas. Moreover, the magnitude and timing of the effects of El Niño on malaria incidence vary with geographical conditions.

Specific forecasts of incidence would help local health services prepare appropriately and take selective preventive measures in areas at risk of epidemics. In this study, we explore whether it is possible to forecast malaria incidence from the patterns of historical morbidity data alone (without external predictors) by making use of the correlation between successive observations, and we compare different methods of doing so in terms of the accuracy obtained. Our aim was to find out how much information can be extracted from past morbidity trends and patterns for predicting future incidence levels, and to identify the months or situations in which additional information is needed most. We used monthly incidence data collected in areas with unstable transmission in Ethiopia.

Study areas and data

We used historical monthly morbidity data from 20 areas in central and north-western Ethiopia (Table 1). Data sets from seven areas comprised microscopically confirmed Plasmodium falciparum cases seen at malaria laboratories. The data set from one area (Finote Selam) comprised combined monthly reports of P. falciparum cases from a number of health facilities reporting to the Sector Malaria Control Office. Data sets from the remaining 12 areas comprised unconfirmed clinical cases symptomatically diagnosed as malaria by trained health workers. We assume that fluctuations in the number of clinical malaria cases reflect proportional fluctuations in P. falciparum cases. There are P. vivax cases regularly reported in most areas of the country, but compared with P. falciparum, their number shows little seasonal fluctuation.

Table 1. The 20 study areas with associated period and number of monthly observations used (n). The data used in the analysis include confirmed Plasmodium falciparum cases (areas 1–8) and clinically diagnosed malaria cases (areas 9–20).

No  Area/locality         Type of health facility*  Period                   n
 1  Bahir Dar (urban)     ML                        July 90–May 99         119
 2  Finote Selam Sector   MDTPs                     October 86–August 95   107
 3  Nazret (urban)        ML                        September 94–March 99   55
 4  Nazret (rural)        ML                        September 94–March 99   55
 5  Debre Zeit (urban)    ML                        September 94–May 99     57
 6  Debre Zeit (rural)    ML                        September 94–May 99     57
 7  Zway (urban)          ML                        November 94–March 99    53
 8  Zway (rural)          ML                        November 94–March 99    53
 9  Ambo Meda             HS                        March 93–January 99     71
10  Yifag                 HS                        March 93–January 99     71
11  Chagni                HC                        July 93–February 99     68
12  Chireti               HS                        July 93–February 99     68
13  Chara                 HS                        July 93–February 99     68
14  Dangila               HC                        July 93–February 99     68
15  Tamuha                HS                        July 93–February 99     68
16  Estie                 HC                        March 93–January 99     71
17  Hamus Wenz            HS                        July 92–June 97         60
18  Debre Tabor           HC                        March 93–January 99     71
19  Wereta                HS                        July 92–May 98          71
20  Azena                 HS                        July 93–February 99     68

* ML = malaria laboratory; MDTPs = malaria detection and treatment posts; HS = health station; HC = health centre.

The monthly morbidity data have an approximately lognormal distribution, and the analysis was therefore based on log-transformed series. We calculated relative (log) incidence (RI) in order to bring morbidity data from all areas to the same scale. The RI for month t (denoted Yt) was calculated as:

Yt = ln(Zt)/A

where Zt is the number of cases in month t and A is the overall mean of the log-transformed series used for forecasts. The back-transformed number of cases is thus Zt = exp(A·Yt). The mean (A) differs for each series or sample.
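
To make the transformation concrete, here is a minimal sketch in Python. It assumes the RI definition implied by the back-transformation above (Yt = ln(Zt)/A), a series of strictly positive monthly counts, and function names of our own choosing:

    import numpy as np

    def relative_log_incidence(cases):
        # cases: 1-D array of strictly positive monthly case counts (Zt)
        log_cases = np.log(cases)
        A = log_cases.mean()            # overall mean of the log-transformed series
        return log_cases / A, A         # Yt = ln(Zt) / A, together with A

    def back_transform(y, A):
        # number of cases implied by a relative (log) incidence value: Zt = exp(A * Yt)
        return np.exp(A * np.asarray(y))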

Seasonal average

This method uses the historical average of each particular calendar month as a forecast for the same month in the future. In other words, the average of all observed RI values during the same calendar month in previous years will be a forecast value for all similar months in the future:

Ŷt+m = S(c)

where Ŷt+m is the forecast RI for month t + m, c is the calendar month of t + m, and S(c) is the average of all observed RI values for calendar month c during the observation period.
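
A minimal sketch of this forecast rule, assuming the RI values and the calendar month (1–12) of each observation are available as NumPy arrays; the helper names are ours:

    import numpy as np

    def seasonal_averages(y, months):
        # mean RI for each calendar month (1-12) observed in the series
        return {m: y[months == m].mean() for m in np.unique(months)}

    def seasonal_average_forecast(y, months, future_months):
        # forecast each future month by the historical average of that calendar month
        s = seasonal_averages(y, months)
        return np.array([s[m] for m in future_months])

    # example: forecast the next three calendar months of an observed series
    # seasonal_average_forecast(y, months, future_months=[6, 7, 8])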

Seasonal adjustment with last observation

In this method, the seasonal average was corrected using the deviation of the most recent observation from its expected seasonal value to generate forecasts for future months. The objective was to capture the trend in incidence during the most recent month:

Ŷt+m = S(c) + (Yt − S(ct))

where t is the last observed month and ct is its calendar month.

Seasonal adjustment with last three observations

In this method, the seasonal average was corrected using the mean deviation of the three most recent observations from their expected seasonal values to generate forecasts for future months. The objective was to capture the trend in incidence during the most recent months while reducing statistical variation:

Ŷt+m = S(c) + (1/3) Σi=0,1,2 (Yt−i − S(ct−i))

The subscript t − i denotes the month i lags before the (last) month t, and ct−i denotes its calendar month.
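
The two seasonal adjustment variants above differ only in how many recent deviations are averaged, so a single sketch with a parameter k (k = 1 for the previous method, k = 3 for this one) covers both; again the RI values and calendar months are assumed to be NumPy arrays and the function name is ours:

    import numpy as np

    def seasonal_adjustment_forecast(y, months, future_months, k=3):
        # seasonal average corrected by the mean deviation of the last k observations
        # from their expected seasonal values
        s = {m: y[months == m].mean() for m in np.unique(months)}
        recent_dev = np.mean([y[-(i + 1)] - s[months[-(i + 1)]] for i in range(k)])
        return np.array([s[m] + recent_dev for m in future_months])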

Autoregressive integrated moving average (ARIMA)

The autocorrelation pattern in each series at different time lags was used to develop ARIMA models (Box & Jenkins 1976). A single equation ARIMA model states how any value in a single time series is linearly related to its own past values through combining two processes: the autoregressive (AR) process which expresses Yt as a function of its past values, and the moving average (MA) process which expresses Yt as a function of past values of the error term e:

Yt = φ1Yt−1 + φ2Yt−2 + … + φpYt−p + θ1et−1 + θ2et−2 + … + θqet−q + et

where the φs and θs are the coefficients of the AR and MA processes, respectively, and p and q are the number of past values of Yt and the error term used, respectively.

Application of the ARIMA technique requires the series to be stationary (i.e. with constant mean and variance over time). A series with constant variance can be obtained by applying log and other types of transformation to the original series. A constant mean can be obtained by taking the first or higher order difference of the variable as necessary until the series becomes stationary.

Seasonal and lag-1 differencing of the RI series (which were already on a log-scale) resulted in stationary series. The Akaike Information Criterion was used to compare goodness-of-fit among ARIMA models.
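
A sketch of how such models could be fitted today with statsmodels' SARIMAX, imposing lag-1 and seasonal (lag-12) differencing and selecting by AIC. This is our illustration, not the software used in the study, and the search space shown is deliberately small:

    from statsmodels.tsa.statespace.sarimax import SARIMAX

    def fit_best_sarima(y, max_pq=1):
        # y: monthly RI series (e.g. a pandas Series); d = D = 1 imposes lag-1 and
        # seasonal differencing; the model with the lowest AIC is retained
        best = None
        for p in range(max_pq + 1):
            for q in range(max_pq + 1):
                for P in range(max_pq + 1):
                    for Q in range(max_pq + 1):
                        res = SARIMAX(y, order=(p, 1, q),
                                      seasonal_order=(P, 1, Q, 12)).fit(disp=False)
                        if best is None or res.aic < best.aic:
                            best = res
        return best

    # forecasts_on_ri_scale = fit_best_sarima(y).forecast(steps=12)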

Assessment of forecast accuracy

The last 12 observations in each area were used for validation of forecast accuracy of the different methods and are referred to as test observations. For each area and each method, we generated 12 predictions at prediction intervals of 1, 2, …, 12 months for each of the 12 test observations. All predictions were made using historical series of equal lengths, formed as subsets taken from the same series. For each prediction interval, the average forecast error was then calculated. For example, in each area, the average 1-month ahead forecast error was calculated from all the 1-month ahead forecast errors produced for each of the 12 test observations. The average forecast error at a prediction interval of m months (εm) was calculated as:

εm = (1/12) Σk=1..12 |Yt+m,k − Ŷt+m,k|

where Yt+m,k and Ŷt+m,k denote the observed and forecast values for month t + m in sample k. This error was again averaged over the 20 areas (by taking the arithmetic mean) to obtain the overall average error (ε̄m) of each method to forecast m months ahead.

We used observation periods of 30–48 consecutive months (depending on the data available) to generate forecasts for comparing accuracy among the different methods. We also tested forecast accuracy of the different methods by varying the observation period from 1 to 4 years. This was only possible for seven data sets with sufficient length.
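
The evaluation scheme can be sketched as a rolling exercise over the last 12 observations. Absolute error is used here as an assumption (the exact error formula is an inference from the text), and the forecast function signature matches the sketches given for the methods above:

    import numpy as np

    def average_forecast_errors(y, months, forecast_fn,
                                n_test=12, max_horizon=12, train_len=36):
        # for each of the last n_test observations and each horizon m, forecast that
        # observation from the train_len months ending m months earlier, then average
        # the absolute errors per horizon
        n = len(y)
        errors = {m: [] for m in range(1, max_horizon + 1)}
        for t in range(n - n_test, n):                 # indices of the test observations
            for m in range(1, max_horizon + 1):
                end = t - m + 1                        # last observed month is t - m
                if end - train_len < 0:
                    continue
                y_tr, mo_tr = y[end - train_len:end], months[end - train_len:end]
                pred = forecast_fn(y_tr, mo_tr, [months[t]])[0]
                errors[m].append(abs(y[t] - pred))
        return {m: float(np.mean(e)) for m, e in errors.items() if e}

    # example: average_forecast_errors(y, months, seasonal_adjustment_forecast)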

Results

Malaria transmission in most areas was highly variable from season to season and year to year. As an example, data from the Finote Selam area (one of the longest series) show clear seasonal and interannual variation in incidence (Figure 1).

Figure 1. Incidence of falciparum malaria reported from Finote Selam Sector during the period September 1986–August 1995, showing seasonal and year-to-year variability of transmission.

As was to be expected, forecast accuracy deteriorated as the prediction interval increased in the 20 study areas (Figure 2). The rate of deterioration, however, varied between methods: it was fastest for ARIMA and slowest for the seasonal average method. The method using seasonal adjustment with the mean deviation of the last three observations almost consistently produced the lowest forecast error. This was true for most of the series (not shown). Up to a prediction interval of about 9 months, the average forecast error of this method was 0.22, i.e. 22% of the log mean number of cases in each area (95% CI: 0.17–0.27). The statistical sophistication of the ARIMA method did not result in better forecasts than the simpler methods. In most cases, the ARIMA models optimized for each series had similar structures, mainly consisting of both non-seasonal and seasonal moving average or autoregressive parameters.

Figure 2. Accuracy of different forecasting methods calculated as average error of forecasts using 20 historical morbidity series. 30–48 months of observations were used to generate forecasts for each of the 20 areas. The errors are given on the relative (log) incidence (RI) scale. The methods are numbered in order of complexity.

The effect of using different series lengths in forecasting was assessed using the seven longest series, varying the lengths between 1 and 4 years. The overall average method performed particularly well for a short historical series (1 year) and prediction intervals of up to 4 months (Figure 3a). This is partly a statistical phenomenon (only seven series were used), as the overall average method using 1-year data from all 20 areas did not perform better than the seasonal adjustment method. Nevertheless, for the overall average method, the shortest period of 1 year performed better than longer periods. With the method using seasonal adjustment with the last three observations, the average forecast error was lowest when series lengths of 3 years were used (Figure 3b). This was also true for the other seasonal adjustment methods (not shown).

Figure 3. The effect of varying series length on forecast accuracy, for two of the methods: (a) overall average and (b) seasonal adjustment with last three observations (only areas with at least 6 years of observations were used; these include: areas 1, 2, 9, 10, 16, 18 and 19 as given in Table 1).

Figure 4 compares predicted and observed numbers of cases for 1- and 12-month ahead forecasts for all areas using the overall best method, seasonal adjustment with the last three observations. Overall, 1-month forecasts were better than 12-month forecasts (correlation coefficients between observed and predicted values were 0.50 and 0.45 at 1- and 12-month prediction intervals, respectively). Incidence appears to be less predictable during the dry season than during the wet season. The most predictable months were the wet months of June–August, as indicated by the correlation coefficients between observed and predicted values (r = 0.66 and 0.75 at 1- and 12-month prediction intervals, respectively). The correlation coefficient during the usual malaria months of September–November was 0.55 at the 1-month prediction interval. The least predictable months at the 1-month prediction interval were the normally dry months of December–February (r = 0.49). At the 12-month prediction interval, the periods September–November and December–February were the least predictable (r = 0.28 and 0.31, respectively).

Figure 4. Comparison of observed and predicted number of cases in the 20 study areas using seasonal adjustment with the last three observations. (a) One-month prediction interval and (b) 12-month prediction interval.

A practical method of calculating forecasts using the seasonal adjustment with last three observations in a malaria epidemic early warning system (in the absence of a better method) is suggested in the Appendix.

Discussion

In this study, different methods of forecasting malaria incidence from historical morbidity patterns were compared in areas with unstable transmission, to assess their potential use in epidemic early warning. The potential of time series techniques, especially the ARIMA method, in epidemiological studies, disease surveillance and outbreak forecasting has been explored in several studies (Helfenstein 1991; Allard 1998). In our study, methods using seasonal adjustment produced better forecasts of malaria incidence than the ARIMA method. Other studies have also indicated that the statistically advanced ARIMA models may fit the data very well, but in post-sample forecasting they are not robust enough to handle a possible change in the behaviour of the series (Makridakis et al. 1998).

Seasonal adjustment that takes account of the deviations of the last three observations from their seasonal averages gave the best forecasts of all the methods. This is because the method accommodates both seasonality and recent changes or trends at the same time. However, it gave only about a 20% improvement relative to the overall average method and about 10% compared with the seasonal average method.

There is always a trade-off between statistical accuracy and responsiveness to time trends. Very long series yield relatively accurate averages, but they capture trends poorly. In contrast, very short series capture recent trends well, but the averages they produce are less accurate than those from longer series. The methods used in this study differ in their sensitivity to the length of the series used to generate forecasts. For example, seasonal adjustment with the last three observations performed best when 3 years' observations were used, whereas the overall average method performed best with 1 year of data. The results indicate the need to balance short historical series (i.e. data closer to the prediction period) against series long enough to minimize random error.

There is little doubt that external forces contribute significantly to variations in incidence levels. Several studies have shown that severe malaria epidemics are associated with changes in meteorological variables such as those resulting from El Niño events (Bouma & van der Kaay 1996; Lindsay & Birley 1996; Bouma & Dye 1997; Kilian et al. 1999). On the other hand, the fact that consistently better forecasts were obtained at shorter prediction intervals indicates that the inherent pattern in the historical morbidity data also makes a contribution that may be considered in multivariate models. In a time series analysis conducted in Kenya, Hay et al. (2001) showed that in an area with unstable transmission, climate variables might contribute significantly to malaria epidemics, but that in areas with higher endemicity there is a between-year signal that is not related to climate and may rather result from the basic dynamics of the disease. The authors therefore suggest that in such cases, systems for epidemic early warning that ignore parasite and host population dynamics are unlikely to be sufficiently robust to capture super-annual variation in disease risk.

Figure 4 illustrates the goodness-of-fit of 1-month ahead forecasts compared with that of 12-month ahead forecasts, using the overall best seasonal adjustment method. As is clear from Figures 2 and 3b, 1-month ahead forecasts are much better than those at longer prediction intervals. Nevertheless, in terms of epidemic early warning, the accuracy of the forecasts is still not acceptable. For example, predictions for 100 observed cases may range approximately from 20 to 500 cases.
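
To see how an error on the RI scale translates into such a wide range of case numbers, here is a rough illustration under assumed values: a log-mean A of about 7 (roughly 1100 cases per month) and an error of about 0.23 on the RI scale, neither of which is stated in the text:

    import math

    A, ri_error = 7.0, 0.23            # assumed log-mean and RI-scale forecast error
    factor = math.exp(A * ri_error)    # multiplicative uncertainty, roughly 5-fold
    print(round(100 / factor), round(100 * factor))   # about 20 and 500 cases around 100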

The results of the present analysis show that simple methods such as seasonal adjustment perform as well as or better than the more advanced ARIMA method, although they are themselves not accurate enough to forecast abnormal incidence for use in an epidemic early warning system, especially during the dry season. The size of the forecast errors should also be interpreted cautiously, because health service data grossly underestimate the true number of cases, especially during epidemics. An abnormal increase in malaria brings more cases to health facilities than they can handle. Severe epidemics usually occur in remote rural areas far from health facilities, and during such periods health workers have to travel to those areas to distribute antimalarial drugs; such house-to-house treatment of cases is normally not part of the reported morbidity data. In areas with highly unstable transmission, the use of external predictor variables (e.g. temperature and rainfall) together with the past pattern of incidence would probably lead to more accurate predictions of epidemics. Development of such models is under investigation.

Acknowledgements

We would like to thank Abera Tadese, Lemma Regassa, Tesgera Biru and Sewunet Tegegne for their assistance in compiling the health facility data. We are grateful to the Ministry of Health of the Federal Democratic Republic of Ethiopia, Amhara Regional State Health Bureau, and Eastern Shoa Zone Health Department in Oromiya Regional State, Ethiopia, for providing the data used for analysis. Financial support for this study was provided by the Department of Public Health, and the Trust Fund of Erasmus University, Rotterdam, National Institute of Health Sciences (NIHES), The Netherlands, and the World Health Organization.

References

  • Allard R (1998) Use of time-series analysis in infectious disease surveillance. Bulletin of the World Health Organization 76, 327–333.
  • Bouma MJ & van der Kaay J (1996) The El Niño Southern Oscillation and the historic malaria epidemics on the Indian subcontinent and Sri Lanka: an early warning system for future epidemics? Tropical Medicine and International Health 1, 86–96.
  • Bouma MJ & Dye C (1997) Cycles of malaria associated with El Niño in Venezuela. Journal of the American Medical Association 278, 1772–1774.
  • Box GEP & Jenkins GM (1976) Time Series Analysis: Forecasting and Control, Revised Edition. Holden Day, San Francisco.
  • Fontaine RE, Najjar AE & Prince JS (1961) The 1958 malaria epidemic in Ethiopia. American Journal of Tropical Medicine and Hygiene 10, 795–803.
  • Hay SI, Rogers DJ, Shanks DG, Myers MF & Snow RW (2001) Malaria early warning in Kenya. Trends in Parasitology 17, 95–99.
  • Helfenstein U (1991) The use of transfer function models, intervention analysis and related time-series methods in epidemiology. International Journal of Epidemiology 20, 808–815.
  • Kilian AHD, Langi P, Talisuna A & Kabagambe G (1999) Rainfall pattern, El Niño and malaria in Uganda. Transactions of the Royal Society of Tropical Medicine and Hygiene 93, 22–23.
  • Lindsay SW & Birley MH (1996) Climate change and malaria transmission. Annals of Tropical Medicine and Parasitology 90, 573–588.
  • Makridakis S, Wheelwright SC & Hyndman RJ (1998) Forecasting: Methods and Applications, 3rd edn. John Wiley & Sons, Inc., New York.
  • Thomson MC & Connor SJ (2001) The development of malaria early warning systems for Africa. Trends in Parasitology 17, 438–445.

Appendix

We present below a practical suggestion for using the seasonal adjustment method that performed best in our study to forecast malaria incidence from historical morbidity patterns. This procedure may be used (or adapted to local situations) by malaria control programmes in their epidemic early warning systems in the absence of better methods of forecasting epidemics.

To generate forecasts, use the last 36 monthly malaria morbidity counts (e.g. clinically diagnosed malaria cases or microscopically confirmed P. falciparum cases) recorded at a health facility or group of health facilities in a specific area, which may be a selected sentinel surveillance site (based on our study, series longer than 3 years may not adequately reflect recent time effects, while shorter series are not accurate enough to provide seasonal averages).

Enter the data in a spreadsheet programme. In a new column, calculate the natural logarithm of each monthly observation.

In a separate column, for each calendar month, calculate the ‘expected’ seasonal average from the log-transformed series. For example, for the month of July, the mean of the three log-transformed July observations during the three previous years will be the expected seasonal average for July.

Now you have all the necessary information to forecast incidence in the future. If the last month for which you have observations is July, and you need to generate forecasts for the next 2 months (August and September), proceed as follows:

Starting from July backwards, calculate the difference between each of the last three observations and their respective seasonal averages.

Take the mean of the three differences (or deviations).

To generate a forecast (on a log-scale) for August, add the mean deviation obtained above to the expected seasonal average for August. To generate a forecast for September, add the same mean deviation obtained above to the expected seasonal average for September.

Now that you have forecasts on a log scale, you need to back-transform them to the original scale (i.e. number of cases). To do this, take the exponential of each of the (log-scale) forecasts. Finally, the forecasts obtained should always be interpreted with caution, as a high degree of accuracy cannot be guaranteed.
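
The same spreadsheet procedure can be expressed as a short script. The sketch below follows the steps above (natural logs, three-year seasonal averages, mean deviation of the last three months, back-transformation); the function name is ours and strictly positive monthly counts are assumed:

    import math

    def forecast_next_months(cases, last_month, horizons=(1, 2)):
        # cases: the last 36 monthly counts (all > 0), oldest first, ending in calendar
        # month `last_month` (1 = January ... 12 = December)
        assert len(cases) == 36
        logs = [math.log(c) for c in cases]                                    # natural logs
        months = [((last_month - 1 - i) % 12) + 1 for i in range(35, -1, -1)]  # calendar month of each count
        seasonal = {m: sum(l for l, mm in zip(logs, months) if mm == m) / 3
                    for m in range(1, 13)}                                     # 'expected' seasonal averages
        mean_dev = sum(logs[-(i + 1)] - seasonal[months[-(i + 1)]]
                       for i in range(3)) / 3                                  # mean deviation of last 3 months
        return {h: math.exp(seasonal[((last_month - 1 + h) % 12) + 1] + mean_dev)
                for h in horizons}                                             # back-transformed forecasts

    # example: counts end in July (month 7); forecast August and September
    # forecast_next_months(counts, last_month=7, horizons=(1, 2))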