Tuesday, 6 May 2014
Automated Testing Lifecycle
This paper focuses on automated testing for comprehensive regression testing of changes in a Blaise
Internet instrument. Like the overall software development lifecycle (SDLC), test automation has its own lifecycle, consisting of the following six steps:
1. Level of Effort Analysis: The decision whether to automate testing is based on analyzing the field
length of the project, system stability, available resources, test environment, tool compatibility, and other factors.
2. Test Tool Acquisition: A test tool can be developed or purchased after comparing and evaluating tools on the market.
3. Automated Testing Introduction Process: An overall test process and strategy needs to be in place before introducing automated testing for any new project. For example, documented system changes must be passed to testing to update the version of suites of scripts and expected results.
4. Test Planning, Design, and Development: This phase includes setting up a preliminary schedule and test environment guidelines, and developing the automated scripts. The Testing Team leads this phase of the work with staff equipped with a second desktop PC for developing and tuning the automated scripts. This phase includes close collaboration with the developer and/or tool tech support to help manage how the tool integrates with the system.
5. Execution of Tests: The test environment needs to be in place as planned. The automated scripts must have dedicated PCs on which to run. Dedicated PCs for automated testing are totally independent of development PCs, so that they can provide “unpolluted” platforms on which to test. Thus, they should be as close to the end user configuration as possible. It may be desirable for testing platforms to have increased processing speed and memory. This consideration has to be weighed against testing on a system that is identical to the end user system. The tester will execute automated scripts and provide test results for evaluation.
6. Test Program Review and Assessment: Test program review and assessment activities need to be
conducted throughout the testing lifecycle, to allow for maintenance of scripts and continuous process improvement.
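The execution and review steps above can be sketched in miniature: run each automated script, compare its actual result with the stored expected result, and collect pass/fail outcomes for assessment. The script names and the run_script stand-in below are hypothetical; a real harness would invoke the test tool itself.

```python
# Minimal sketch of steps 5-6: execute each automated script, compare
# actual output with the version-controlled expected result, and
# collect pass/fail results for the review step.

def run_script(name, inputs):
    """Hypothetical stand-in for executing one automated test script."""
    # Placeholder logic; a real implementation would drive the instrument.
    return sum(inputs)

def run_suite(suite):
    """Run every script in the suite and report pass/fail per script."""
    results = {}
    for name, (inputs, expected) in suite.items():
        actual = run_script(name, inputs)
        results[name] = "PASS" if actual == expected else "FAIL"
    return results

suite = {
    "roster_totals": ([1, 2, 3], 6),   # expected result matches
    "skip_pattern":  ([10, 5], 14),    # deliberately stale expectation -> FAIL
}
print(run_suite(suite))
```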
Significant Benefits of Test Automation
Once it has been determined that automation is appropriate for a project, the team can look forward to the following benefits:
1. Increased depth and breadth of regression testing.
2. Elimination of long, repeatable manual test cases. (Although automation doesn’t eliminate
manual testing, it does replace lengthy repeatable test cases so that testers can focus on areas that need closer attention.)
3. Reduction of the testing schedule.
4. Unattended testing. Automated tests can run unattended and overnight.
5. Improved quality of the test effort.
6. Improved assessment of system performance. Besides allowing for greater consistency and test
coverage, automation can be used to ensure that the system’s performance requirements meet or
exceed user expectations.
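Benefit 4 above, unattended testing, usually amounts to a small wrapper: run the suite overnight, write a timestamped log for the tester to review in the morning, and return a failure signal a scheduler can act on. The run_suite results below are hypothetical stand-ins for the real scripts.

```python
# Sketch of an unattended overnight run: log results with a timestamp
# and report overall success so a scheduler (cron / Task Scheduler)
# can flag failures. The suite results here are hypothetical.
import datetime

def run_suite():
    # Placeholder: a real wrapper would launch the automated scripts here.
    return {"login_flow": True, "roster_totals": True}

def overnight_run(log_path="nightly.log"):
    results = run_suite()
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    lines = [f"{stamp} {name}: {'PASS' if ok else 'FAIL'}"
             for name, ok in results.items()]
    with open(log_path, "w") as log:
        log.write("\n".join(lines) + "\n")
    return all(results.values())
```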
False Expectations for Automated Testing
After considering the automated testing lifecycle, the benefits of automation must be weighed against some common false expectations, discussed below.
Automation can be implemented at any time
Automated testing works best on a long-term project with a relatively stable instrument that needs a specified set of regression tests, with predictable results, run on a regular basis. If the project tries to implement automation prematurely, while the system is still being developed, the maintenance required to keep the suites of scripts running and providing useful feedback about software issues will increase.
Automation can replace manual testing
Automated test tools should be viewed as enhancements to manual testing; they will not develop a test plan, design and create test cases and procedures, and execute the tests. The test team will never achieve 100% test automation of an application, because it is not possible to test all combinations and
permutations of all inputs. An experienced tester will certainly need to execute tests during exploratory testing that were not predicted when writing test cases.
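The impossibility of testing all combinations of inputs is easy to quantify: the input space grows multiplicatively with each question. A quick illustration (the question and choice counts are made up for the example):

```python
# Why 100% automation is unreachable: with just 10 questions of 5
# answer choices each, exhaustive testing would need 5**10 distinct
# paths -- far more cases than any suite can cover.
from itertools import product

questions, choices = 10, 5
total_paths = choices ** questions
print(total_paths)  # 9765625 distinct answer combinations

# Even enumerating only 2 questions already yields 25 cases:
small = list(product(range(choices), repeat=2))
print(len(small))
```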
Automation is easy
Vendors sell automated tools by exaggerating their ease of use, usually describing them as
“record/playback” tools. Automation is actually more complicated than that, because recorded test scripts must be enhanced manually with code to make them robust, reusable, and maintainable. To be able to modify the scripts, the tester must be trained in, and become an expert on, the tool’s built-in scripting language.
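The contrast between a raw recording and a hand-enhanced script can be sketched as follows. The gui object and its methods are hypothetical stand-ins for a tool's scripting API (TestPartner uses its own built-in language); the point is the structure, not the specific calls.

```python
# A recorded script is a flat list of hard-coded actions, e.g.:
#   gui.type("txtUser", "jsmith"); gui.click("btnLogin"); ...
# The hand-enhanced version is parameterized, reusable, and
# self-checking. All names here are hypothetical.

def login(gui, user, password):
    """Enhanced, reusable version of a recorded login sequence."""
    gui.type("txtUser", user)
    gui.type("txtPassword", password)
    gui.click("btnLogin")
    if not gui.exists("lblWelcome"):   # robustness check added by hand
        raise RuntimeError(f"login failed for {user}")

class FakeGui:
    """Minimal stub so the sketch can run outside a GUI tool."""
    def __init__(self):
        self.log = []
    def type(self, ctl, text):
        self.log.append(("type", ctl, text))
    def click(self, ctl):
        self.log.append(("click", ctl))
    def exists(self, ctl):
        return True

gui = FakeGui()
login(gui, "jsmith", "secret")
print(gui.log)
```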
One tool does it all
Currently, no single test tool will fulfill all testing requirements for most projects. Different tools are used for regression, file comparison, load, and other aspects of testing.
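The file-comparison role mentioned above, for example, amounts to checking a run's output against a baseline from a known-good run. A minimal sketch using the standard library's difflib (a real project would likely use a dedicated comparison tool):

```python
# Compare a test run's output lines against a baseline; an empty
# diff means the outputs match. The sample data is invented.
import difflib

def compare_output(baseline_lines, actual_lines):
    """Return a unified diff; an empty list means the files match."""
    return list(difflib.unified_diff(
        baseline_lines, actual_lines,
        fromfile="expected", tofile="actual", lineterm=""))

baseline = ["case=1001 income=52000", "case=1002 income=61000"]
actual   = ["case=1001 income=52000", "case=1002 income=60000"]
diff = compare_output(baseline, actual)
print("\n".join(diff) if diff else "outputs match")
```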
Immediate test effort reduction
In addition to the learning curve associated with applying automated testing to a project, test automation requires that the team pay careful attention to automated test procedure design and development. The automated test effort can be viewed as its own software development life cycle, complete with the planning and coordination issues that come along with a development effort. Time is invested up front to organize, plan, script, and debug.
Selecting an Automated Tool
There are many tools on the market that support automation. TestPartner is the testing tool we selected to automate our regression testing process. Like most automated test tools, TestPartner can create
scripts automatically using its record facility. When recording your actions, the responses of the application you work with are translated into TestPartner scripts. TestPartner can record just
keystrokes, mouse moves, and clicks, but it works best at an object level, identifying objects by name.
Your actions are translated into simple commands. TestPartner lets you quickly record and execute test scripts. The tester can then modify the test scripts to include hand-coded programming that cannot be
recorded, both to enhance the scripts and to make them easier to maintain. All automated test tools require
some hand-coding to make scripts robust, even though tools are generally marketed as “record and playback.”
TestPartner is sometimes unable to uniquely identify an object on the screen, because Windows does not supply a unique attribute for the control.
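When several controls share the same name, a script has to fall back on an ordinal (index) or some other attribute to pick the right one. The control list and helper below are hypothetical illustrations, not TestPartner's API:

```python
# Disambiguating controls that share a name: fall back to a 0-based
# index among the matches. All names and attributes are invented.

def find_control(controls, name, index=0):
    """Return the index-th control whose name matches (0-based)."""
    matches = [c for c in controls if c["name"] == name]
    if not matches:
        raise LookupError(f"no control named {name!r}")
    return matches[index]

screen = [
    {"name": "txtAmount", "id": 1},   # two controls share one name
    {"name": "txtAmount", "id": 2},
    {"name": "btnNext",   "id": 3},
]
print(find_control(screen, "txtAmount", index=1)["id"])  # picks the second
```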