Test Automation Framework Design
When test automation is taken to its highest level, tests can be started with the push of a button, left running overnight unattended, and the results published the next morning. This kind of automation
clearly requires a system that makes creating, executing and
maintaining test cases easy. The system should provide some core functionality
(e.g. monitoring and reporting) and allow itself to be extended so that new kinds of
tests can be created.
Characteristics of a good Test Automation Framework
1) The framework MUST execute test cases automatically. This includes, for example, verifying results, handling errors and reporting results.
2) The framework MUST be easy to use without programming skills.
3) The framework MUST be easily maintainable.
4) The framework MUST be able to execute tests unattended.
5) It MUST be possible to start and stop test execution manually.
6) It SHOULD be possible to start test execution automatically at a predefined time.
7) Non-fatal errors caused by the SUT or the test environment MUST be handled gracefully without stopping test execution.
8) Every executed test case MUST be assigned either a Pass or a Fail status, and failed test cases SHOULD have a short error message explaining the failure.
9) Test execution MUST be logged.
10) A test report MUST be created automatically. The test report SHOULD be published automatically.
11) The framework MUST have coding and naming conventions.
12) The framework MUST be adequately documented.
High-Level Framework Design
Components of an Automation Framework Architecture
Test Design System
Test design system is used for creating new test
cases and editing existing ones. It is test engineers’ and domain experts’
playground and must be easy to use with minimal training and without
programming skills.
The created test data can be stored in flat files
or databases. A simple solution is to use an existing tool such as a spreadsheet
program or an HTML editor as the test design system. A special test design tool made
just for this purpose would of course have its benefits, but building something
like that is out of reach for most automation projects.
Test Monitoring System
The test monitoring system is used for controlling test
execution and checking test results.
It should have at least the capabilities listed
below; a minimal command-line sketch of such a tool follows the list.
• Starting test execution manually.
• Stopping test execution.
• Monitoring test execution while tests are running.
• Viewing test logs while tests are running and
afterwards.
• Viewing test report.
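As an illustration only, and not based on any existing tool, the sketch below shows a tiny command-line monitor in Python. The file names driver.py, test_run.pid and test_log.txt are assumptions made for this example.

import argparse
import os
import signal
import subprocess
import sys

PID_FILE = "test_run.pid"   # assumed location of the running execution's process id
LOG_FILE = "test_log.txt"   # assumed location of the common test log

def start():
    """Launch the driver script unattended and remember its process id."""
    process = subprocess.Popen([sys.executable, "driver.py"])  # driver.py is hypothetical
    with open(PID_FILE, "w") as f:
        f.write(str(process.pid))
    print(f"Test execution started (pid {process.pid}).")

def stop():
    """Stop a running test execution."""
    with open(PID_FILE) as f:
        pid = int(f.read())
    os.kill(pid, signal.SIGTERM)
    print(f"Sent stop signal to pid {pid}.")

def show_log():
    """Show the test log collected so far."""
    with open(LOG_FILE, encoding="utf-8") as f:
        print(f.read())

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Minimal test monitoring sketch")
    parser.add_argument("command", choices=["start", "stop", "log"])
    args = parser.parse_args()
    {"start": start, "stop": stop, "log": show_log}[args.command]()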
Test Execution System
The test execution system is the core of the framework.
Its main components are
a) driver scripts,
b) the test library,
c) the test data parser and
d) other utilities like the logger and the report generator.
a) Driver Scripts
Test execution is controlled by driver scripts which
are started using the test monitoring system. Driver scripts are pretty short
and simple because they mainly use services provided by test libraries and
other reusable modules.
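For illustration, a driver script could look like the sketch below. It assumes the hypothetical helper modules sketched in the following sections (test_data_parser, test_library, test_logger and report_generator) and a tab-separated data file called test_data.tsv; all of these names are assumptions, not part of any real framework.

from test_data_parser import parse_test_data   # hypothetical module, sketched below
from test_library import verify_addition       # hypothetical module, sketched below
from test_logger import log                    # hypothetical module, sketched below
from report_generator import write_report      # hypothetical module, sketched below

def main():
    results = []
    for case in parse_test_data("test_data.tsv"):      # externalised test data
        log(f"Starting test case: {case.name}")
        try:
            verify_addition(*case.arguments)            # delegate the real work to the library
            status, message = "PASS", ""
        except Exception as error:                      # non-fatal errors must not stop the run
            status, message = "FAIL", str(error)
        log(f"{case.name}: {status} {message}".strip())
        results.append((case.name, status, message))
    write_report(results)                               # report is created automatically

if __name__ == "__main__":
    main()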
b) Test Libraries
Test libraries contain all those reusable functions
and modules which are used in testing or in activities supporting it. Testing
is mainly interacting with the tested system and checking that it behaves
correctly. Supporting activities are often related to setting up the test
environment and include tasks like copying files, initializing databases and
installing software. Functions in the test library can do their tasks independently
but they can also use other functions or even external tools.
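A small sketch of what such a library could contain is shown below (saved e.g. as test_library.py, a name assumed for this example). The functions and the trivial addition "system under test" are purely illustrative.

import shutil

def copy_test_files(source, destination):
    """Supporting task: set up the test environment by copying files."""
    shutil.copytree(source, destination, dirs_exist_ok=True)

def add(a, b):
    """A trivial stand-in for the tested system."""
    return a + b

def verify_addition(a, b, expected):
    """Interact with the 'tested system' and check that it behaves correctly."""
    result = add(int(a), int(b))
    if result != int(expected):
        raise AssertionError(f"add({a}, {b}) returned {result}, expected {expected}")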
c) Test Data Parser
The test data parser makes test data easily available to driver scripts. Its
task is to process the test data and return it to the driver script in easy-to-use
test data containers. The role of the parser may seem small, but in
reality it is the heart of the whole test execution system.
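A sketch of such a parser is shown below (saved e.g. as test_data_parser.py, a name assumed for this example). It assumes tab-separated test data whose first column is the test case name and whose remaining columns are arguments.

from dataclasses import dataclass
from typing import List

@dataclass
class TestCase:
    """Easy-to-use test data container handed to driver scripts."""
    name: str
    arguments: List[str]

def parse_test_data(path):
    """Read tab-separated test data and return a list of TestCase containers."""
    cases = []
    with open(path, encoding="utf-8") as f:
        lines = f.read().splitlines()
    for line in lines[1:]:                    # skip the header row
        if not line.strip():
            continue                          # ignore empty lines
        cells = line.split("\t")
        cases.append(TestCase(name=cells[0], arguments=cells[1:]))
    return cases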
d) Logger
An easy and modular way to fulfil the logging requirements is to have a
common logger component with the required features. Both the driver script and the
test functions use the same logger, which in turn writes messages to the common test
log.
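A minimal sketch of such a logger is shown below (saved e.g. as test_logger.py; the file name and message format are assumptions for this example).

import datetime

LOG_FILE = "test_log.txt"   # assumed name of the common test log

def log(message, level="INFO"):
    """Append a timestamped message to the common test log."""
    timestamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(f"{timestamp} | {level:5} | {message}\n")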
e) Report Generator
The reporting requirements of the framework can be
summarized by saying that a concise report must be created and published.
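The sketch below shows one possible report generator (saved e.g. as report_generator.py, a name assumed for this example). The plain-text format is an assumption; the same data could equally well be rendered to HTML and published on a web server.

def write_report(results, path="test_report.txt"):
    """Write a concise summary of (name, status, message) tuples to a file."""
    passed = sum(1 for _, status, _ in results if status == "PASS")
    failed = len(results) - passed
    with open(path, "w", encoding="utf-8") as f:
        f.write(f"Total: {len(results)}  Pass: {passed}  Fail: {failed}\n\n")
        for name, status, message in results:
            line = f"{status:4}  {name}"
            if message:
                line += f"  ({message})"
            f.write(line + "\n")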
Working of a Data-Driven Framework
Simple test scripts have test data embedded in
them. This leads to the problem that when test data needs to be updated, the actual
script code must be changed. This kind of reuse is also problematic because one
particular change in the tested system may require updating all scripts. Because of
these problems, embedding test data into scripts is clearly not a viable solution
when building larger test automation frameworks. A better approach is to read
the test data from external data sources and execute tests based on it. This approach
is called data-driven testing.
Step 1: Editing and Storing Test Data
Because data-driven test data is tabular, it is
natural to use spreadsheet programs to edit it. Another benefit is that
spreadsheet programs are often used for simple test management tasks, and in
that case there is less need to store the same data in different places or
copy it back and forth.
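For illustration only, a few rows of such test data for a trivial addition test might look like this, with columns separated by tab characters (the columns and values are invented for this example and match the hypothetical test_data.tsv file used in the earlier sketches):

Test Case	A	B	Expected
Add two positives	2	3	5
Add two negatives	-2	-3	-5
Add zero	4	0	4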
Step 2: Processing Test Data
Implementing a script for parsing data-driven test
data can be surprisingly easy with modern scripting languages. The test data is
first read into a variable and then split into lines so that the header row is
excluded. The data lines are then processed one by one: each line is split into cells
at the tab character, and the cells' data is assigned to variables, making the test data
available in the script for actual testing.
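A literal sketch of these steps, assuming the tab-separated file from Step 1 is called test_data.tsv (an assumed name), could look like this:

with open("test_data.tsv", encoding="utf-8") as f:
    data = f.read()                      # test data read into a variable
lines = data.splitlines()[1:]            # split into lines, header row excluded
for line in lines:
    cells = line.split("\t")             # line split into cells at the tab character
    name, a, b, expected = cells         # cells assigned to variables for the actual testing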
Step 3: Test Execution
The test scripts are invoked based on the data
sheet and then executed. In short, the test scripts act on the application
under test and carry out the test cases.
Step 4: Logging and Reporting
The logger writes the data to a file after each
step of execution, and after the test run is complete, the reporting mechanism
generates reports with the test statuses.