Automating regression testing for evolving GUI software

Authors

  • Atif Memon,

    Corresponding author
    1. Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742, U.S.A.
    2. Department of Computer Science, University of Maryland, College Park, MD 20742, U.S.A.
  • Adithya Nagarajan,

    1. Department of Computer Science, University of Maryland, College Park, MD 20742, U.S.A.
  • Qing Xie

    1. Department of Computer Science, University of Maryland, College Park, MD 20742, U.S.A.

  • A preliminary report of this work appeared in the Proceedings of the International Conference on Software Maintenance [1].

Abstract

With the widespread deployment of broadband connections worldwide, software development and maintenance are increasingly performed by multiple engineers, often working around the clock to maximize code churn rates. To ensure rapid quality assurance of such software, techniques such as ‘nightly/daily building and smoke testing’ have become widespread, since they often reveal bugs early in the software development process. During these builds, a development version of the software is checked out from the source-code repository tree, compiled, linked, and (re)tested with the goal of (re)validating its basic functionality. Although successful for conventional software, smoke tests are difficult to develop and automatically re-run for software that has a graphical user interface (GUI). In this paper, we describe a framework called DART (Daily Automated Regression Tester) that addresses the need for frequent and automated re-testing of GUI software. The key to our success is automation: DART automates everything from structural GUI analysis, smoke-test-case generation, and test-oracle creation to code instrumentation, test execution, coverage evaluation, and the regeneration and re-execution of test cases. Together with the operating system's task scheduler, DART can execute frequently, with little input from the developer/tester, to re-test the GUI software. We provide results of experiments showing the time taken and memory required for GUI analysis, test-case and test-oracle generation, and test execution. We empirically compare the relative costs of employing different levels of detail in the GUI test oracle. We also show the events and statements covered by the smoke test cases. Copyright © 2005 John Wiley & Sons, Ltd.
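The pipeline the abstract describes — model the GUI's event structure, derive short event sequences as smoke tests, and check each step against a test oracle — can be sketched as follows. This is an illustrative toy, not DART's actual representation: the event names, application state, and oracle format are invented for the example.

```python
# Hypothetical event-flow model: each GUI event maps to the events
# that may legally follow it (e.g. after 'open_menu' one can 'click_save').
EVENT_FLOW = {
    "open_menu":  ["click_save", "click_exit"],
    "click_save": ["open_menu"],
    "click_exit": [],
}

# Oracle: expected partial application state after each event (hypothetical).
ORACLE = {
    "open_menu":  {"menu_open": True},
    "click_save": {"saved": True},
    "click_exit": {"running": False},
}

def generate_smoke_tests(event_flow):
    """Enumerate all legal length-2 event sequences.

    Short sequences are typical for smoke tests: they exercise each
    event and each pairwise interaction cheaply, which suits a
    nightly-build time budget.
    """
    return [[first, nxt]
            for first, followers in event_flow.items()
            for nxt in followers]

def run_test(test, oracle):
    """Replay an event sequence on a simulated application and compare
    the state after each event with the oracle's expected values."""
    state = {"menu_open": False, "saved": False, "running": True}
    for event in test:
        # Simulated application under test.
        if event == "open_menu":
            state["menu_open"] = True
        elif event == "click_save":
            state["saved"] = True
        elif event == "click_exit":
            state["running"] = False
        # Oracle comparison: every expected key must match.
        for key, value in oracle.get(event, {}).items():
            if state[key] != value:
                return False
    return True

if __name__ == "__main__":
    for test in generate_smoke_tests(EVENT_FLOW):
        verdict = "PASS" if run_test(test, ORACLE) else "FAIL"
        print(" -> ".join(test), verdict)
```

Scheduling such a script nightly via the operating system's task scheduler (cron on Unix, Task Scheduler on Windows) is what makes the cycle run with little input from the developer/tester.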