The Updating Sequential Probability Ratio Test (USPRT) developed by MaCurdy et al. (2009, Updating sequential probability ratio test for real-time surveillance of vaccine safety, unpublished working paper) has been used by the U.S. Food and Drug Administration for near real-time surveillance of flu vaccine safety since 2008. It was the first method developed to account for data delay in pharmacovigilance studies. However, the current implementation rests on the strong assumption that the clinical and reporting delays do not vary from previous years; when this assumption fails, the USPRT procedure may suffer size distortion. The goal of this article is to investigate numerically, through extensive simulations, how robust the detection probabilities of the USPRT method are to misspecification of the clinical and reporting delay distributions. We find that if the delay distribution used to calibrate the critical bound is longer than the delay distribution in the data-generating process, the rate of false signaling increases, and vice versa; this is an inherent property of any real-time testing procedure. However, the distortion created by misspecifying the reporting delay distribution appears small compared with the overall power generated by an elevated adverse event rate. Because the size distortion is unevenly distributed across the interim tests, the effect of misspecifying the delay distributions is more prominent in the median time to signal. In summary, although a misspecified delay distribution induces size distortion, it does not erode the overall power.
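The mechanism behind the size distortion can be illustrated with a stylized sequential test. The sketch below is not the USPRT of MaCurdy et al.; it is a minimal stand-in that shares the relevant feature: a Poisson log-likelihood-ratio statistic is monitored at weekly interim tests, and the expected reported count is computed from an *assumed* reporting-delay distribution while the data are generated under the *true* one. All names, the geometric delay model, and all parameter values (`W`, `lam0`, `rr_test`, the geometric parameters) are hypothetical choices for illustration; the qualitative prediction is that a calibration delay longer than the true delay inflates the false-signal rate, and a shorter one deflates it.

```python
import math
import random


def poisson(lam, rng):
    """Knuth's multiplicative algorithm for a Poisson(lam) draw (fine for small lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1


def season_max_llr(W, lam0, rr_test, rr_true, p_true, p_assumed, rng):
    """Maximum interim log-likelihood-ratio statistic over one surveillance season.

    Events occurring in week w are first reported after a Geometric(p_true)
    delay; the expected reported count in the statistic is computed from the
    *assumed* delay distribution Geometric(p_assumed).
    """
    arrive = [0] * (W + 1)  # arrive[t]: counts first reported in week t
    for w in range(1, W + 1):
        for _ in range(poisson(lam0 * rr_true, rng)):
            d = 0
            while rng.random() > p_true:  # geometric reporting delay in weeks
                d += 1
            if w + d <= W:
                arrive[w + d] += 1
    n, best = 0, -math.inf
    for t in range(1, W + 1):  # interim test at the end of each week t
        n += arrive[t]
        # expected cumulative reported count by week t under the null rate,
        # using the assumed delay CDF P(delay <= k) = 1 - (1 - p)**(k + 1)
        e = lam0 * sum(1 - (1 - p_assumed) ** (t - w + 1)
                       for w in range(1, t + 1))
        best = max(best, n * math.log(rr_test) - e * (rr_test - 1))
    return best


def false_signal_rate(p_true, p_assumed, W=20, lam0=5.0, rr_test=2.0,
                      alpha=0.05, reps=2000, seed=0):
    """Empirical false-signal rate when the bound is calibrated under p_assumed
    but the data-generating process reports with delay parameter p_true."""
    rng = random.Random(seed)
    # calibrate the critical bound so the size is alpha when the assumed
    # delay distribution is correct
    null = sorted(season_max_llr(W, lam0, rr_test, 1.0,
                                 p_assumed, p_assumed, rng)
                  for _ in range(reps))
    c = null[int((1 - alpha) * reps)]
    # evaluate under the true delay distribution, still at the null rate
    hits = sum(season_max_llr(W, lam0, rr_test, 1.0,
                              p_true, p_assumed, rng) > c
               for _ in range(reps))
    return hits / reps


# calibration delay longer than truth -> anti-conservative; shorter -> conservative
rate_long = false_signal_rate(p_true=0.5, p_assumed=0.2)   # assumed delays too long
rate_match = false_signal_rate(p_true=0.5, p_assumed=0.5)  # correctly specified
rate_short = false_signal_rate(p_true=0.5, p_assumed=0.8)  # assumed delays too short
```

Under these illustrative settings, `rate_long` exceeds the nominal level while `rate_short` falls below it, reproducing the directional effect described above for this toy model.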