Automated testing is getting more attention from those building and using web applications and content management systems. Open source projects (like Behat and CasperJS) are making it easier to write tests for websites and run them from typical development servers or your local computer. Still, we have not made automated testing a habit. Writing tests for a project may seem like extra work instead of extra value. Project owners may ask for justification to spend the time needed to implement and support tests. For these reasons, there is a case to be made for automated testing.
The first case for automated testing is the one against manual testing.
Without a specific test script, manual testing can amount to little more than clicking around a website. While this can surface problems, it is not reliably repeatable. By the time you find an error, you may have forgotten important details about how to reproduce it. Each time the test is performed, steps may be done in a different order, so they fail to expose an issue. Further, the results of this kind of testing are not stored in a way that can serve as a reference for others. Unless the tester keeps meticulous notes, it may be hard to determine when a bug first occurred and how it corresponds to a code release or database state. At minimum, testing a basic user story on a website by hand takes a few minutes; for complex CMS features, it can take a significant fraction of an hour. For a developer, manually testing the functionality directly related to a code change is non-trivial, and fully regression testing an application for every code change is often not feasible. In short, manual testing is slow, tedious, and unreliable.
The Business Case
On several projects, we have made the case to business stakeholders for the value of automated tests. In some cases, we proposed writing tests to cover the core user stories. In others, we set up a testing framework with a friendly syntax that helps less technical users read and write tests. For project owners, the key benefit of an automated testing system is improved stability. Test results become a constant indicator that validates the success criteria for a project, and the tests provide an active defense against regression bugs that tax development effort. Sometimes important releases are held up by the fear of introducing a bug. Automated tests let you blanket an application with test cases related to its key value propositions. With only limited manual testing, you are relying on end users to catch the edge cases, which makes for a frustrating experience for those users.
The Developer Case
Testing is often treated as the step that follows development, which suggests that tests are written and used only after coding is complete. I would argue that development is an iterative process of trial and error, of coding and testing. When working on CMSes and integrated systems, developers may spend almost as much time testing the application as writing code. Testing is a critical part of development, and automated tests should be seen as a development tool that can improve efficiency on complex projects. The test-driven development paradigm makes this case. To the extent possible, tests that cover the requirements of a feature should be written before the feature is developed. When the tests pass, the feature is complete. The tests get committed with the feature and continue to ensure that it doesn't break. With this approach, developers can eliminate some of the tedious testing work that accompanies development. Coding still requires hacking and trying to break (and fix) things, but the repeatable tasks can be scripted away.
As you start using an automated testing system, the key questions are:
- What tests are written?
- When are tests written?
- Who writes the tests?
- How are tests used?
Ideally, you write tests to cover all of the user stories or other significant features. In practice, however, the most important thing is to get started. As you develop, write tests for anything you wouldn't want to break later. Write tests for regression bugs that do pop up. Write a few broad tests that can act as a sanity check for a number of features (the site is live, a visitor can log in and view a piece of content).
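As a sketch, a broad sanity check like that could be expressed in the Behat/Gherkin style mentioned earlier. The URLs, credentials, and page text here are illustrative, and the steps assume generic browser-driving step definitions such as those shipped with Behat's MinkExtension:

```gherkin
Feature: Site sanity check
  A single broad scenario covering several core features at once.

  Scenario: The site is live and a visitor can log in and view content
    Given I am on "/"
    Then the response status code should be 200
    When I go to "/user/login"
    And I fill in "Username" with "test_editor"
    And I fill in "Password" with "example-password"
    And I press "Log in"
    Then I should see "My account"
    When I go to "/about"
    Then I should see "About us"
```

A scenario like this is exactly the kind of "friendly syntax" test that less technical stakeholders can read, and even edit, without touching the underlying step definitions.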
There are different kinds of tests -- unit, behavioral/functional, visual -- each suited to validating different kinds of code and functionality. Ideally, a test system blends several types of tests. For web applications, behavioral tests are great for verifying end-user expectations. There is an art to writing good tests.
Often it’s important to test for an expected success and one or more expected failures. When testing features through web pages generated by an application, there are many opportunities for false positives and negatives. A passing test case is not always an indication that the function executed as expected.
Tests can be written at any point. As discussed in the developer case, there are reasons for writing tests even before the code is in place to make them pass. Automated tests can be a useful tool for the development team. However, it’s also possible to write test scripts after development is complete to act as an ongoing check against regressions.
Likewise, anyone can write tests. Often developers or technical QA teams are responsible, but less technical stakeholders can also create and run tests. Once the tests are in place, it’s important to use them, maybe even obsessively! A good practice is to run the automated test suite at several points: by developers before a code change is integrated, by the deployment system after code is integrated and deployed, and again when a release candidate is being prepared. Each run is an opportunity to catch problems before end users are affected, and as close as possible to the point where the problem was introduced.
There are many cases to be made for implementing an automated testing system. We can do more to make automated testing a habit for development teams and a standard part of the toolbox.