A data-driven approach to the CUSUM algorithm for temporal aberrant event detection using interactive web applications.
Aberrations in disease incidence (defined as higher than expected disease counts that could be detected through statistical models (1)) may provide an early warning of an outbreak or indicate the start of a seasonal increase in incidence. For this purpose, Public Health Ontario (PHO) has been using the Early Aberration Reporting System (EARS) (2) method, which was developed by the US Centers for Disease Control and Prevention. While they deemed it useful, staff at PHO found the algorithm generated many false positive alerts, particularly among low incidence diseases and those with high seasonality.
The published literature on statistical techniques for surveillance of diseases over time indicates that many methods of aberrant event detection are possible. These are generally classified into the following categories: regression techniques, time series methodology and statistical process control (SPC) methods, many of which are available in the surveillance (3,4) package in R (5). The purpose of each is to detect important changes in the underlying process as soon as they occur. Two reviews of prospective statistical surveillance in public health are provided by Sonesson and Bock (6) and Unkel et al. (7); both focus on maximizing the statistical properties of the detection methods.
Recent developments in interactive data visualization and statistical modelling through web-based applications (e.g., the Shiny (8) package in R) enable users to interact directly with the statistical algorithms, data and results. Application of this technology to disease surveillance allows staff involved in the communicable disease program areas to visualize the results of different statistical detection methods as well as the effects of parameters chosen for each method, for example, adjusting for seasonal variation. This permits the user to balance the sensitivity and specificity of the algorithm from a program perspective as well as a statistical perspective.
An on-going research study (9) at PHO has indicated the value of using SPC in detecting aberrant events. Methods in SPC have been widely adopted for disease monitoring; one of the most popular is the cumulative sum (CUSUM) control chart, originally proposed for industrial quality control data (10). Compared with EARS, the CUSUM technique is more sensitive to moderate changes in the average event rate and to slow rises in case counts (11,12). In the SPC literature, data processes are said to be "in control" when no aberrant events (or outbreaks) are expected, and the system will "alert" when an aberrant event is detected.
The successful application of CUSUM methodology in the research project, staff's concerns about the number of false positive warnings from the existing EARS program, and the ability to visualize the SPC methods created an opportunity to develop temporal aberrant event detection methods with improved sensitivity and specificity, using Ontario's reportable infectious disease data from iPHIS and representative laboratory testing data from the PHO Laboratory (PHOL). The specific question the current study asked was: "Does the prospective application of customized user-defined, disease-specific CUSUM algorithms, based on historical patterns, have improved sensitivity and specificity compared to the existing approach?"
PHO staff working in the communicable diseases area selected seven diseases for development of customized aberrant event detection algorithms: cyclosporiasis, giardiasis, influenza A, influenza B, mumps, pertussis and invasive pneumococcal disease (IPD). The seven diseases were suggested by the three operational teams within the organization that conduct surveillance and manage outbreaks of reportable diseases at the provincial level, with the aim of improving the efficiency of current practice given the high number of false positives. To assess the statistical properties and seasonal trends of each disease, historical data were obtained from either iPHIS or PHOL. The time period of historical data was individually determined for each disease and considered factors such as changes in reporting practices over time and/or changes in the use of diagnostic tests. The shortest historical period was four years (influenza A and B) while the longest was 14 years (pertussis). The choice of data source (iPHIS versus PHOL) was made on the basis of timeliness and completeness. For example, PHOL was the source for influenza data due to its timeliness, while iPHIS was the source for giardiasis because iPHIS data are more complete, with most of the testing done in private laboratories instead of at PHOL. The data were analyzed in order to assess seasonal trends, effects of day of the week, level of aggregation over time (daily versus weekly counts or rates), and level of aggregation geographically (Ontario-wide versus by region). Based on these analyses, decisions were made on the statistical models to use as well as the need to adjust for seasonality. For diseases without seasonality adjustment, a baseline period (when there were deemed to be no outbreaks) was chosen, and the mean and standard deviation of case counts were calculated for that period.
Algorithm optimization via interactive web application
Using an application built with the Shiny package in the statistical program R, the data on each disease were analyzed with regression models to determine the expected counts. CUSUM analyses were done on the difference between observed and expected counts (for a more detailed mathematical description of the models, see Appendix A). When a web browser opens the URL of the Shiny web application, the Shiny server generates a dynamic web page with graphs of the data and visualizations of the CUSUM alerts produced by executing the underlying R code (see Figure 1). Program content experts (users) can change the parameters of the CUSUM (notably the correction factor, k, and threshold, h; see Appendix A) through the web interface. The R program then automatically re-runs the analysis and provides instant feedback to the user by generating a revised web page reflecting the changes in the CUSUM values and alerts.
For the seven selected reportable diseases, seasonality, time trend and day of week effects were modelled on historical data using Poisson or Negative Binomial regression via the interactive web applications. Table 1 shows the model parameters applied to each disease. Using baseline values and initial parameters for the CUSUM, the data were visualized on the website; see Figure 1 for screenshots of the web application for influenza A as an example. Different options, such as data source, spatial aggregation, time aggregation, seasonality and, most importantly, the CUSUM parameters h and k, were also provided to users through the web interfaces.
With the application, users were able to see when and how the system alerts changed when different parameters were used. Each user defined an ideal number of alerts and when they should occur based on their experience, expertise and professional opinion on the organization's ability to respond, and tuned the parameters accordingly. This task was performed separately by three to four users for each disease, and decisions on the final parameters for each CUSUM algorithm were reached by consensus agreement.
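The tuning task can be illustrated with a minimal sketch. The example below is in Python rather than the study's R/Shiny code, and the residual series and parameter grid are invented; it shows only how varying k and h changes the number of alerts raised on the same historical series.

```python
def cusum_alerts(residuals, k, h):
    """Count alerts from a one-sided CUSUM that resets to 0 after each alert."""
    s, alerts = 0.0, 0
    for r in residuals:
        s = max(0.0, s + r - k)  # accumulate excess over the correction factor k
        if s > h:                # threshold h triggers an alert
            alerts += 1
            s = 0.0              # reset after the alert
    return alerts

# Invented observed-minus-expected residuals for illustration
residuals = [0, 1, -1, 2, 3, 4, 0, -2, 1, 5, 6, -1, 0]

# Sweep the parameter values a user might try in the web interface
for k in (1.0, 1.5, 2.0):
    for h in (2.0, 3.0):
        print(f"k={k}, h={h}: {cusum_alerts(residuals, k, h)} alerts")
```

Larger values of k and h make the chart less reactive (fewer alerts); the consensus values in Table 1 reflect exactly this trade-off.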
Comparison with current practices
Once the CUSUM detection algorithm was optimized for each disease through the modelling of historical data, the algorithm was applied prospectively for a testing period (ranging from 3 to 12 months). The variation in testing period depended on when the algorithm for the disease was created. While each disease algorithm was tested for a minimum of three months, staff collected additional data until all seven diseases had three months of data collected. Concurrently, for diseases where the purpose of the algorithm was outbreak detection (i.e., cyclosporiasis, giardiasis, mumps, pertussis and IPD), the EARS algorithm was also run routinely. The CUSUM and EARS models read the same data (PHOL or iPHIS) and the number of alerts was compared. Due to the small numbers of events, only Ontario-wide analyses provided stable expected case counts (the expected number of cases when no outbreaks are expected) for each disease. Program staff investigated all alerts as per their usual protocol and designated each alert as useful or a false positive. Due to resource constraints, staff could not be blinded to the source of the alert. Both the EARS and CUSUM algorithms were automated in R for prospective routine analysis and comparison using a dynamic reporting system built with the R package knitr. (13,14)
For influenza A and B, the main goal was to detect the start of the influenza season and predict the peak in seasonal incidence for each type. The traditional method of detecting the onset of the influenza season is to establish when percentage of respiratory samples testing positive for influenza (percent positivity) rises above 5% for two consecutive weeks. This week was compared to the alert date for the CUSUM models. The predicted peak (a range based on average time since seasonal start alert to peak based on historical data) was declared when the alert occurred. Staff then waited for disease activity to peak and the date of the actual peak was compared to the predicted range of peak activity. Peak activity was defined as a non-weekend, non-holiday period having the highest percent positivity for influenza A and B. Weekend and holiday dates were excluded due to the small number of samples being tested on those dates, which would result in unstable estimates.
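The traditional season-start rule described above can be written as a short sketch. This Python example is illustrative (the function name, defaults and weekly data are ours, not the paper's): it returns the first week that closes a run of two consecutive weeks at or above 5% positivity.

```python
def season_start(percent_positive, threshold=5.0, run=2):
    """Index of the first week completing `run` consecutive weeks >= threshold, else None."""
    consecutive = 0
    for i, p in enumerate(percent_positive):
        consecutive = consecutive + 1 if p >= threshold else 0
        if consecutive >= run:
            return i
    return None

# Invented weekly percent-positivity values for illustration
weekly_pct = [1.2, 2.8, 4.9, 5.3, 6.7, 12.0]
print(season_start(weekly_pct))  # prints 4: week 4 closes the two-week run
```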
Disease-specific algorithms were built with the choice of aggregation by time (monitoring by day versus week), seasonality, time trend and day of week effects. Algorithms for each disease were based on either individual case reported date (the date public health was first notified about the case) for algorithms based on iPHIS data or the date the specimen was received at PHOL for algorithms based on laboratory data. Table 1 provides the collective decision made for each disease.
Aberration detection with the purpose of provincial outbreak detection and start of the season
For cyclosporiasis, giardiasis, mumps, pertussis and IPD, the final algorithms were run prospectively for each disease using the automated dynamic reporting system and the results were monitored. Compared to the results of EARS obtained over a test period of a few months, the CUSUM models performed better in terms of both sensitivity and specificity for all diseases except giardiasis (see Table 2). For four of the five diseases, there were fewer false positives and the algorithm successfully detected all known outbreaks that occurred during the time period of review. For cyclosporiasis, an outbreak was detected using the CUSUM approach and all flags, except one, clustered around reported outbreak cases (see Figure 2a). Neither CUSUM nor EARS detected a giardiasis outbreak because no known provincial outbreaks occurred during the time period of review. However, the flags generated by the CUSUM approach in September and October corresponded to an increase above expected in the number of giardiasis cases reported, indicating a higher than expected seasonal rise (see Figure 2b).
Aberration detection with the purpose of detecting the start of a known seasonal increase and predicting the peak in activity
Table 3 summarizes the results for influenza A and B. In predicting the start of the season by week, the CUSUM alerts for influenza A and B occurred one week and four weeks earlier, respectively, compared to those generated using the traditional method of percent positivity remaining at or above 5% for two consecutive weeks. There is no traditional method to signal the start of the season using daily data, as indicated by "NA" in Table 3.
In terms of predicting the peak for influenza A, for the past four seasons, the CUSUM alerts on the weekly data tended to occur 8 weeks before the peak while the CUSUM alerts on the daily data alerted 6 weeks before the peak on average. The date for the peak was predicted using the range of time from CUSUM alerts to peak activity, with 6 weeks (by day) and 8 weeks (by week) as midpoints. In the fall of 2014, both CUSUM ("by week" and "by day") alerts occurred at the same time (November 27 in the week of November 23). As a result, the predicted range for the "by day" algorithm was closer to the actual peak than the "by week" algorithm. For influenza B, the predicted peak of the season was 9 weeks (by day) and 10 weeks (by week) after the CUSUM alerts. The actual peak during the test period was within the predicted range for the peak for both the by day and by week algorithms.
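The peak-prediction arithmetic described above amounts to adding the historical average alert-to-peak lag to the alert date and placing a range around that midpoint. A hedged Python sketch follows; the 6-week lag matches the "by day" average reported above, but the one-week half-width for the range is our illustrative assumption, not the study's exact historical value.

```python
from datetime import date, timedelta

def predict_peak(alert_date, avg_lag_weeks, half_width_days):
    """Predicted peak midpoint and range: alert date plus the average historical lag."""
    midpoint = alert_date + timedelta(weeks=avg_lag_weeks)
    return (midpoint - timedelta(days=half_width_days),
            midpoint,
            midpoint + timedelta(days=half_width_days))

# Influenza A "by day" alert of 27-Nov-14, assumed 6-week lag and +/- 1 week range
low, mid, high = predict_peak(date(2014, 11, 27), avg_lag_weeks=6, half_width_days=7)
print(low, mid, high)
```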
This paper describes the use of an interactive web-based application which allowed disease content experts to interact with reportable disease data using appropriate statistical models and develop optimized aberration detection models by tuning specific parameters of the statistical algorithm. By combining statistical methods and expertise in disease-specific epidemiology, the algorithms operate more effectively than programs such as EARS which take a standard approach to all diseases.
The improved function of the alerts was observed in the detection of provincial outbreaks for three of the five diseases (cyclosporiasis, pertussis and mumps). No provincial outbreaks occurred during the study period for either giardiasis or IPD. The performance of the CUSUM algorithm for giardiasis was felt to coincide with the seasonal rise in rates during the summer. While this may be valuable in its own right, it did not specifically detect outbreaks and thus, according to our predetermined definitions, the CUSUM flags were listed as false positives.
The CUSUM algorithms for influenza A provided earlier flags for the start of the seasonal influenza than the traditional method. The predicted peak in activity was found to be later than the actual peak, more so for the weekly than the daily algorithm. The finding of earlier alerts among the weekly aggregate counts for influenza A (compared to daily counts) in the historical data was unexpected. This could be due to the decreased variation during the baseline period leading to a lower threshold, allowing the algorithm to pick up slight increases sooner. In the fall of 2014, the rate of influenza positive testing rose rapidly, causing both the weekly and daily alerts to occur at the same time. The rapid increase in percentage positivity resulted in an earlier peak in influenza A activity. This pattern and its impact on predictions of peak activity will be monitored in the future.
Although not part of the original study protocol, the interactive web-based application also generated discussion among the program staff about exactly what was an acceptable variation from the normal baseline level. The positive flags for giardiasis based on an aberrant seasonal increase led to a discussion about the threshold for such an alert and potentially adjusting the algorithm for an increased seasonal effect and/or updating baseline data. Having such discussions helped clarify operational decisions on appropriate follow-up and when alerts needed to be acted upon.
Additional discussions with staff indicated that this development process led to a better understanding of the statistical models being used, thresholds for action being defined and user acceptability of the results. Attempts to expand the algorithms to include regional analyses were not possible in many cases due to small numbers of cases in some regions.
Limitations and next steps
We recognize a number of limitations. This method was only applied to seven diseases for a limited period. Ongoing algorithm assessment and updating baseline data will continue and work is proceeding on expanding the approach to other diseases. In addition, aggregation to the level of Ontario may mask aberrant activities in certain areas.
For next steps, we will continue to monitor the performance of these algorithms and develop new ones for other reported diseases. We will also explore spatial as well as temporal modelling. Meyer et al. (15) provided a series of methods that model infectious diseases in both space and time, also implemented in the surveillance (3,4) package in R. However, such methods are not currently feasible for Ontario because they require detailed location and infection period information that is not available in iPHIS. A spatiotemporal surveillance system tailored to the spatial features of Ontario's data and postal code geographies would further public health surveillance efforts to detect unusual patterns in the occurrence of diseases or other adverse health events so that timely action can be taken.
We have developed a method that improved both sensitivity and specificity of aberration detection using historical data via interactive web applications, which could be easily adapted to develop algorithms for other diseases by other public health organizations. A strength of this method is its subjective component, relying on disease-specific expertise as part of the input to tailor the algorithms to fit the context and needs of the organization developing them.
APPENDIX A. STATISTICAL MODEL
In this study, the cumulative sum (CUSUM) method was applied to the difference between the observed and expected values. These expected values were calculated by applying a regression model to account for linearly increasing or decreasing trends over time, seasonality, day of the week effects, temporal correlation and population size. The values varied for each disease. Letting Y_t and E_t be the observed and expected counts for day/week t, we then modelled the observed data using either a Poisson or Negative Binomial regression as follows:
\[ Y_t \sim \mathrm{NegBinom}(\lambda_t, \gamma) \ \text{or} \ \mathrm{Poisson}(\lambda_t), \qquad \log(\lambda_t) = \log E_t + X_t\beta + U_t, \qquad U_t \sim N(\phi U_{t-1}, \sigma^2) \tag{1} \]
Here λ_t is the modelled expected count for day/week t, which depends on covariates X_t (including seasonality and other time-varying effects) and on E_t, a population offset. U_t is a temporally autocorrelated effect that follows an autoregressive model with scale parameter φ and variance σ². The parameter γ accounts for overdispersion in the data when the variance is larger than the mean of the traditional Poisson model. The system alerts when the CUSUM exceeds a threshold value (h):
\[ S_t = \max\left\{0,\; S_{t-1} + \left(Y_t - E_t\exp(X_t\beta + U_t)\right) - k\right\} > h \tag{2} \]
Equation 2 is a classic CUSUM algorithm in which the CUSUM value S_t is non-negative, h is an alert-triggering threshold and k is a correction factor that pulls the CUSUM value back toward 0 at each step. Together these two parameters determine the two average run length (ARL) quantities used in the process control literature. ARL0 is the average time between two (false) alerts when the system is in control, i.e., no aberrations; ARL1 is the average time the algorithm takes to alert when an aberration is present. ARL1 is inversely related to sensitivity, while ARL0 is analogous to specificity. An ideal system therefore has a large ARL0 (high specificity) when in control and a small ARL1 (high sensitivity) when aberrations occur.
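The relationship between (k, h) and ARL0 can be explored by simulation. The rough Monte Carlo sketch below is ours, not the study's code: it draws in-control standardized residuals from N(0, 1) and records how long a one-sided CUSUM takes to raise a false alert; the parameter values are illustrative.

```python
import random

def run_length(k, h, rng, max_t=100_000):
    """Time steps until the first alert on simulated in-control N(0,1) residuals."""
    s, t = 0.0, 0
    while t < max_t:
        t += 1
        s = max(0.0, s + rng.gauss(0.0, 1.0) - k)
        if s > h:
            return t
    return max_t

rng = random.Random(42)  # fixed seed for reproducibility
arl0 = sum(run_length(k=0.5, h=4.0, rng=rng) for _ in range(200)) / 200
print(f"estimated ARL0: about {arl0:.0f} in-control time steps between false alerts")
```

Raising h (or k) lengthens ARL0 but also lengthens ARL1; this is the trade-off the users balanced interactively.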
A unique property of CUSUM is that once an alert is generated, i.e., the CUSUM value exceeds the threshold h, the CUSUM value is reset to its minimum value of 0 to restart the monitoring process. Engineering industries use CUSUM to monitor machines on the production line; once a signal is found, the machine is taken off the line for repair and maintenance. A CUSUM algorithm that does not reset after an alert will continue to alert even after the system being monitored has returned to its normal/baseline state, decreasing the specificity of the algorithm and making it less useful for disease monitoring.
The above method models the expected count at each time point with covariates and monitors the difference between the observed and expected count using CUSUM; however, to maximize ARL0 and minimize ARL1 (and thus maximize both sensitivity and specificity), we need to find the appropriate h and k for each disease selected in this study.
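As a concrete illustration of the monitoring step, the sketch below implements the observed-minus-expected CUSUM with the reset-after-alert behaviour described above. It is a hedged Python rendering (the study's implementation was in R), and the counts, expected values and parameters are invented.

```python
def cusum_monitor(observed, expected, k, h):
    """Return (time, CUSUM value, alert) triples; resets to 0 after each alert."""
    s, out = 0.0, []
    for t, (y, lam) in enumerate(zip(observed, expected)):
        s = max(0.0, s + (y - lam) - k)  # accumulate observed-minus-expected excess
        alert = s > h
        out.append((t, round(s, 2), alert))
        if alert:
            s = 0.0  # restart monitoring once an alert has been raised
    return out

observed = [3, 2, 4, 9, 12, 3, 2]               # invented daily case counts
expected = [3.0, 3.2, 3.1, 3.0, 3.3, 3.1, 3.0]  # invented model-based expected counts
for t, s, alert in cusum_monitor(observed, expected, k=1.5, h=3.0):
    print(t, s, "ALERT" if alert else "")
```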
(1.) Lawson AB, Kleinman K (Eds.). Spatial and Syndromic Surveillance for Public Health. Hoboken, NJ: John Wiley & Sons, 2005.
(2.) Hutwagner L, Thompson W, Seeman GM, Treadwell T. The bioterrorism preparedness and response Early Aberration Reporting System (EARS). J Urban Health 2003; 80(2 Suppl 1):i89-96. PMID: 12791783. doi: 10.1007/PL00022319.
(3.) Höhle M, Meyer S, Paul M. Surveillance: Temporal and Spatio-Temporal Modeling and Monitoring of Epidemic Phenomena. R Package Version 1.8-3, 2015. Available at: http://CRAN.R-project.org/package=surveillance (Accessed July 6, 2015).
(4.) Salmon M, Schumacher D, Höhle M. Monitoring count time series in R: Aberration detection in public health surveillance. J Stat Softw. Accepted.
(5.) R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing, 2015. Available at: http://www.R-project.org/ (Accessed May 5, 2015).
(6.) Sonesson C, Bock D. A review and discussion of prospective statistical surveillance in public health. J R Stat Soc Ser A Stat Soc 2003; 166(1):5-21. doi: 10.1111/1467-985X.00256.
(7.) Unkel S, Farrington CP, Garthwaite PH, Robertson C, Andrews H. Statistical methods for the prospective detection of infectious disease outbreaks: A review. J R Stat Soc Ser A Stat Soc 2012; 175(1):49-82. doi: 10.1111/j.1467-985X.2011.00714.x.
(8.) Chang W, Cheng J, Allaire JJ, Xie YH, McPherson J. Shiny: Web Application Framework for R. R Package Version 0.11.1, 2015. Available at: http://CRAN.R-project.org/package=shiny (Accessed May 5, 2015).
(9.) Johnson IL. Development and integration of alerting algorithms and response protocols for public health surveillance of respiratory infections. Canadian Institutes of Health Research Grant No.: 259886. Available at: http://webapps.cihr-irsc.gc.ca/cris/detail_e?pResearchId=5960875&p_version=CRIS&p_language=E&p_session_id=1333597 (Accessed May 18, 2016).
(10.) Woodall WH. The design of CUSUM quality control charts. J Qual Technol 1986; 18(2):99-102. doi: 10.2307/2988154.
(11.) Fricker RD Jr, Hegler BL, Dunfee DA. Comparing syndromic surveillance detection methods: EARS' versus a CUSUM-based methodology. Stat Med 2008; 27(17):3407-29. PMID: 18240128. doi: 10.1002/sim.v27:17.
(12.) Fricker RD Jr. Introduction to Statistical Methods for Biosurveillance With an Emphasis on Syndromic Surveillance. Cambridge, UK: Cambridge University Press, 2013.
(13.) Xie YH. Knitr: A General-Purpose Package for Dynamic Report Generation in R. R Package Version 1.10.5, 2015. Available at: https://cran.r-project.org/web/packages/knitr/index.html (Accessed May 8, 2015).
(14.) Stodden V, Leisch F, Peng RD (Eds.). Implementing Reproducible Computational Research. London, England: Chapman and Hall, 2014.
(15.) Meyer S, Held L, Höhle M. Spatio-temporal analysis of epidemic phenomena using the R package surveillance. J Stat Softw. Accepted.
Received: July 27, 2015
Accepted: October 30, 2015
Ye Li, PhD, [1,2] Michael Whelan, MSc, Leigh Hobbs, MPH, Wen Qi Fan, MSc, Cecilia Fung, MSc, Kenny Wong, MPH, Alex Marchand-Austin, MSc, Tina Badiani, MHSc, Ian Johnson, MD [1,2]
[1.] Public Health Ontario, Toronto, ON
[2.] Dalla Lana School of Public Health, University of Toronto, Toronto, ON
Correspondence: Ye Li, PhD, Public Health Ontario, Suite 300, 480 University Avenue, Toronto, ON M5G 1V2, Tel: 647-260-7479, E-mail: Lennon.firstname.lastname@example.org
Acknowledgements: The authors acknowledge the contributions of Stephen Moore, Victoria Ng, Ryan Walton, Emily Karas, Ellen Chan and Jill Fediurek.
Conflict of Interest: None to declare.
Table 1. Summary of parameters for each disease

| Disease | Period of historical data | Data source | Frequency | Seasonality | Day of week effect | Linear time trend | Model | CUSUM parameters | Purpose |
|---|---|---|---|---|---|---|---|---|---|
| Giardiasis | Jan. 2009-Dec. 2014 | iPHIS | Daily | Yes | Yes | No | Poisson regression | h = 2.0, k = 0.99 | Outbreak detection |
| Cyclosporiasis | Jan. 2009-Dec. 2014 | iPHIS | Daily | Yes | Yes | No | Poisson regression | h = 3.0, k = 1.5 | Outbreak detection |
| Influenza A | Sept. 2010-Aug. 2013 | PHOL | Daily (D), Weekly (W) | No | No | No | CUSUM on % positivity | h = 3.5 (D), 2.5 (W); k = 2.8 (D), 2.0 (W) | Start of season detection |
| Influenza B | Sept. 2010-Aug. 2013 | PHOL | Daily (D), Weekly (W) | No | No | No | CUSUM on % positivity | h = 2.0 (D), 3.0 (W); k = 1.5 (D), 2.0 (W) | Start of season detection |
| IPD | Jan. 2010-Dec. 2014 | iPHIS | Weekly | Yes | No | Yes | Negative binomial with temporal correlation | h = 3.0, k = 1.5 | Outbreak detection |
| Mumps | Jan. 2006-Jan. 2013 | iPHIS | Weekly | No | No | No | CUSUM on count | h = 3.0, k = 1.5 | Start of season detection |
| Pertussis | Jan. 2000-May 2014 | iPHIS | Weekly | No | No | No | CUSUM on count | h = 2.5, k = 1.5 | Start of season detection |

Note: IPD = Invasive pneumococcal disease.

Table 2. Comparison of EARS and CUSUM algorithms in detecting aberrations

| Disease | CUSUM flags | Useful CUSUM flags | False positive CUSUM flags | EARS flags | Useful EARS flags | False positive EARS flags | Trial period |
|---|---|---|---|---|---|---|---|
| Cyclosporiasis | 8 | 7 | 1 | 34 | 4 | 30 | 06/2014-05/2015 |
| Giardiasis | 11 | 0 | 11 | 14 | 0 | 14 | 06/2014-05/2015 |
| Pertussis | 7 | 7 | 0 | 21 | 13 | 8 | 04/2015-06/2015 |
| Mumps | 3 | 3 | 0 | 6 | 5 | 1 | 04/2015-06/2015 |
| IPD | 0 | 0 | 0 | 16 | 0 | 16 | 04/2015-06/2015 |

Note: IPD = Invasive pneumococcal disease.

Table 3. Comparison of traditional methods and CUSUM in predicting the start of the influenza season and peak in activity: Ontario, November 2014-May 2015

| Influenza type and monitoring interval | Start of season using traditional approach | Start of season using CUSUM (CUSUM alerts) | Predicted date for the peak (average time to peak) | Predicted date range for the peak | Actual date/week of the peak |
|---|---|---|---|---|---|
| Influenza A by day | NA* | 27-Nov-14 | 7-Jan-15 | Jan 1 to Jan 16 | 31-Dec-14 |
| Influenza B by day | NA* | 11-Feb-15 | 22-Apr-15 | Apr 12 to May 6 | 14-Apr-15 |
| Influenza A by week | 30-Nov-14 (W) | 23-Nov-14 (W) | 01-Feb-15 (W) | Jan 18 to Feb 7 | 28-Dec-14 (W) |
| Influenza B by week | 15-Mar-15 (W) | 08-Feb-15 (W) | 18-Apr-15 (W) | Apr 5 to May 9 | 12-Apr-15 (W) |

Note: (W) indicates a week period, with the date shown as the Sunday at the start of the week. * Traditional data at the daily frequency are not available.
Publication: Canadian Journal of Public Health, January 1, 2016.