4. Objectives of Functional Test Automation
5. Introduction to Functional Test Automation
5.1. Terms and Definitions
5.2.4. User Acceptance Test
5.2.5. Deployment Readiness Test
5.3.1. Developing Test Cases for Automation
5.3.2. Handling Application Errors and Exceptions
5.4. Roles and Responsibilities
6. Developing A Functional Test Strategy
7. Creating a Functional Test Plan
8. Performing Application Requirements Analysis
9. Writing Test Cases for Automation
10.1.1. Classification of Data Types
11. Evaluating Test Tools
12. Writing Test Scripts
13. Developing Automation Frameworks
13.1. Data-Driven Frameworks
13.1.2. Test Case Script
13.1.3. Business Component Function Script
13.1.4. Common Subroutine Function Script
13.1.5. User-Defined Function Script
13.2. Keyword-driven Frameworks
13.2.4. User-Defined Function Libraries
13.3. Tool-specific Frameworks
13.3.1. Mercury Quality Center with TestDirector and QuickTest Professional
13.3.2. Desktop Certification Automation
13.5. Business Components
13.7. Test Sets (Test Lab)
14. Functional Test Results Analysis
15. Defect Tracking and Resolution
16. Status Reporting and Test Metrics
17. FTA Methodology Delivery Process
18.1.1. Mercury Quality Center with TestDirector
18.1.2. Mercury Test Director
18.1.3. Windows Script Host VBScript
18.2.2. WinRunner script (note: structure applies to all coding, e.g., QTP)
18.2.3. DDT Action datasheet (Excel)
18.2.4. Keyword (framework) Reference Guide examples
18.3. Links to Other Resources
Prepared By | Company/Group | Contact Information
Greg Annen | | Greg@BlueOpal.com

Version | Date | Description | Approval and Date
1.0 | Dec 2005 | Draft of Methodology document | GJA
1.0 | Jan 6 2006 | Content Updates, all sections | GJA, 01/06/06
1.0 | Jan 13 2006 | Content changes to sections: TEST LEVELS, DEVELOPING A FUNCTIONAL TEST STRATEGY, CREATING A FUNCTIONAL TEST PLAN, PERFORMING APPLICATION REQUIREMENTS ANALYSIS, WRITING TEST CASES FOR AUTOMATION, WRITING TEST SCRIPTS, TOOL-SPECIFIC FRAMEWORKS, and FTA METHODOLOGY DELIVERY PROCESS. | GJA, 01/13/06
1.1 | May 1 2008 | Highlighted some key points | GJA, 05/01/08
1.2 | June 3, 2008 | Added to Terms and Definitions | GJA, 06/03/08
This document provides an overview of the concepts, processes and terms encountered in developing a comprehensive methodology for functional test automation. It is a living document, structured to allow collaborative input as knowledge is gathered in the field and refined by test solution architects.
4. Objectives of Functional Test Automation
Functional testing is a process to ensure that applications work as they should -- that they do what knowledgeable users expect them to do. Functional tests:
- Capture user requirements for business processes in a meaningful way
- Give both users and developers confidence that business processes meet those requirements
- Enable QA teams to verify that the software enabling those processes is ready for release
Simply stated, functional tests tell whether the completed application is doing the right things.
Today's enterprises must conduct thorough functional testing of their applications to ensure that all business processes are fully available to users. Rigorous functional testing is also critical to successful application development and deployment. This climate presents a challenge for developers, QA teams, and IT managers: speed up testing processes and increase accuracy and completeness, without exceeding already tight budgets.
Why automate? Manual testing processes take too long to execute, provide incomplete functional test coverage, and introduce a higher risk of manual errors and unreproducible results. In practice, automated testing means programming the current manual testing process to run on its own. At a minimum, such a process includes:
- Detailed test cases, including predictable, expected results, which have been developed from business process functional specifications and application design documentation
- A standalone test environment, including a test database that can be restored to a known state, such that all test cases can be repeated each time there are modifications made to the application
Automation is the key to improving the speed, accuracy, and flexibility of the software testing process, enabling companies to find and fix more defects earlier in the SDLC. By automating key elements of functional testing, companies can meet aggressive release schedules, test more thoroughly and reliably, verify that business processes function as required, and generate increased revenue and customer satisfaction.
5. Introduction to Functional Test Automation
An application is a strong candidate for functional test automation if it:
- requires multiple or frequent builds/patches/fixes
- needs to be tested on numerous hardware or software configurations
- deals with large or complex sets of data
- supports many concurrent users
In addition, automation is worth considering if repetitive tasks, such as data loading and system configuration, are involved, or if the application needs to meet a specific service-level agreement (SLA).
A functional testing tool is developed or purchased to support the automation effort. The typical use of a test tool is to automate regression tests, a database of detailed, repeatable test cases that are run each time there is a change to the application under test to ensure that this change does not produce unintended consequences. Within the tool, test steps are captured in the form of scripts. These can be individual scripts which test specific aspects of application functionality, or they can be functions which are reused as callable test steps.
Like the application under test, an automated test script is a program. Test automation can be thought of as writing software to test other software. Automated testing tools are actually development environments specialized for creating testing programs. Thus, to be effective, all automated test script development must be subject to the same rules and standards that apply to every software development project. Making effective use of any automated test tool requires at least one trained, technical person: in other words, a developer. Using record and playback techniques to generate scripts is not effective for creating repeatable, maintainable tests; such techniques are often just an easy way to create throwaway test suites.
Wikipedia defines a test case as "a set of conditions or variables under which a tester will determine if a requirement or use case upon an application is partially or fully satisfied. In order to fully test that all the requirements of an application are met, there must be at least one test case for each requirement. ... Written test cases should include a description of the functionality to be tested, and the preparation required to ensure that the test can be conducted; there is a known input and an expected output, which is worked out before the test is executed."
Often confused with test cases, test scripts are lines of code used mainly in automation tools.
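The definition above can be captured as a simple record. The sketch below is illustrative Python with hypothetical field names (no particular tool's schema is implied): a written test case names its requirement, its preparation, a known input, and an expected output worked out before execution.

```python
# Illustrative test-case record; field names are invented for this sketch
test_case = {
    "id": "TC-001",
    "requirement": "REQ-LOGIN-01",
    "description": "Valid credentials open the home page",
    "preparation": "Test user exists; application is at the login page",
    "input": {"user": "qa_user", "password": "secret"},
    "expected_output": "Home page displayed",
}

def is_complete(tc):
    """There must be at least one test case per requirement, with a
    known input and an expected output defined before execution."""
    required = ("requirement", "description", "input", "expected_output")
    return all(tc.get(field) for field in required)
```

A completeness check like `is_complete` can be run over a whole test repository before automation work begins, so gaps are found at design time rather than during script development.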
The software validation process can be broken down into five distinct test levels:
5.2.1. Unit Test
Validates the detailed design by demonstrating compliance with requirements specifications and validating the code logic. Unit testing focuses on all program branches, and exercises every program statement at least once. All decision boundary conditions are tested. Several types of testing associated with unit test include: logic testing, limit testing, error processing, initialization, code coverage, component/string testing and regression testing. A Development Lead is accountable for ensuring the completeness of testing at the unit test level.
5.2.2. Integration Test
Validates the detailed requirements and technical design by ensuring that, once assembled as a whole, all of the logical components of a specified system or application function and interact as designed. Integration testing can include regression testing, feature and function testing, configuration testing, data conversion, and installation testing.
5.2.3. System Test
Validates the completed applications and systems in a production-compatible environment by verifying end-to-end functionality and performance. Additional system testing might include: business process, data integrity, performance, security, recovery and regression testing.
5.2.4. User Acceptance Test
Validates that requirements have been met from a business user perspective and uncovers any gaps between the client expectations and the actual deliverables being produced. It also serves to confirm that the system or application developed fits correctly within the business flow. The test results must demonstrate the usability, performance, security and data integrity of the application(s). This testing is usually performed by knowledgeable business users.
5.2.5. Deployment Readiness Test
Validates that the installation media containing the completed technical solution can be added to the Business User's environment using the prescribed deployment method. Members of the Development, QA, Infrastructure and Technical Support teams are typically involved in this effort.
Test automation efforts can fail by trying to do too much. Every automation tool has its learning curve and specific usage requirements, so it pays to start simple and build on each small success. Build acceptance tests, for example, are excellent candidates for initial automation efforts: they are run frequently and their aim is breadth of functionality, not depth. First, get one test to run to completion. Then, use this test as a model to build up your test suite. Finally, verify that all tests in the suite run to completion within the test execution framework.
One critical goal of automation is to develop robust test suites in which all test steps in a test case can be executed without tester intervention, while detailed information is captured about errors encountered in the application under test.
5.3.1. Developing Test Cases for Automation
For any level of testing, it is first necessary to define the requirements and objectives of a test, before writing any test plans, test cases, or test scripts. The next step is to define the actions and application components to be included in the test cases and, ultimately, in the automated test script. Developing test cases for automation is a discipline: the automation framework, test scripts, and test data typically reside in separate repositories, which are linked together by a framework during execution of the test steps. The test case itself deals with application objects (windows and controls), not specific data; it can also include conditional steps if required by the test objectives.
5.3.2. Handling Application Errors and Exceptions
A common problem that prevents truly unattended testing is the occurrence of cascading failures. When one test fails, the application is left in an unexpected state: for example, an unexpected dialog window pops up displaying an error message, and subsequent test steps can't be run while the error dialog is present. An error recovery system is the solution to this problem. It automatically records the error and restores the application and test environment to a known "base state", allowing successive tests to run reliably. Cascading failures are avoided and unattended testing executes to completion. After each test case, a recovery system verifies that the application is in the expected base state; if not, it will reset it.
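The recovery behavior described above can be sketched as a small harness. This is an illustrative Python outline with invented names (`App`, `at_base_state`, `reset_to_base_state`), not the API of any real tool:

```python
class App:
    """Stand-in for the application under test."""
    def __init__(self):
        self.state = "base"
    def at_base_state(self):
        return self.state == "base"
    def reset_to_base_state(self):
        self.state = "base"

def tc_edit_document(app):
    app.state = "editing"        # test leaves the app mid-workflow

def tc_unexpected_dialog(app):
    app.state = "error_dialog"   # an unexpected dialog pops up
    raise RuntimeError("unexpected dialog")

def run_suite(test_cases, app, log):
    """After each test case, verify the base state and reset it if
    needed, so one failure cannot cascade into the following tests."""
    for tc in test_cases:
        try:
            tc(app)
            log.append((tc.__name__, "pass"))
        except Exception as exc:          # record the error, keep running
            log.append((tc.__name__, f"fail: {exc}"))
        finally:
            if not app.at_base_state():   # recovery system check
                app.reset_to_base_state()
    return log
```

The key design point is the `finally` block: recovery runs whether the test case passed, failed, or raised, which is what allows unattended execution to continue to completion.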
5.4. Roles and Responsibilities
Matrix of typical QA tasks and owners.
6. Developing A Functional Test Strategy
An effective functional testing strategy optimizes the QA effort to minimize risk. No matter how much testing is invested, some risk always remains; the decision to release the software is therefore tied to the acceptable level of risk.
To implement a risk-based strategy, determine the minimum testing effort that must be invested in order to maximize risk reduction. The basic methodology used can be described in the following steps.
1. Identify business-critical functionalities that would prevent a user from using the software if a defect were encountered. Such a defect would be of high severity: for example, a login page for a Web application that does not work. Efficient ways to gather this list of functionalities include surveying the user community, asking a business domain expert, and assembling statistics from logs of a previous version of the application. Since risk increases with the frequency of use, the most-used features will be the riskiest ones.
2. Design and then assign test cases to each of the functionalities listed in Step 1.
3. Size (in hours or minutes) the QA effort required to run the test cases identified in Step 2.
4. Sort test cases in ascending order of effort so you have the test case with the minimum effort first.
5. Start executing test cases in the order established in Step 4 until you run out of time.
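Steps 3 through 5 amount to a simple greedy schedule: sort by effort, cheapest first, then execute until the time budget is spent. A Python sketch with made-up test case names and sizes:

```python
def plan_run(test_cases, budget_minutes):
    """test_cases: (name, effort_minutes) pairs from Steps 1-3.
    Returns the cases executed cheapest-first within the budget,
    and the total time spent."""
    executed, spent = [], 0
    for name, effort in sorted(test_cases, key=lambda tc: tc[1]):
        if spent + effort > budget_minutes:
            break          # everything after this costs at least as much
        executed.append(name)
        spent += effort
    return executed, spent

# Hypothetical sized test cases and a 20-minute budget
executed, spent = plan_run([("login", 5), ("report", 30), ("search", 10)], 20)
```

Because the list is sorted ascending by effort, stopping at the first case that exceeds the remaining budget is safe: every later case costs at least as much.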
Ideally, you always want to lower the risk in the shortest period of time in order to release versions more aggressively. One way to shorten your QA cycle yet retain the same confidence level in the software is to automate the minimum QA effort with functional testing tools. Let's say you want to implement two new features in an application. You have the choice of implementing the two features in the same version, or implementing each feature in two successive versions. From a QA standpoint, producing two successive versions with the same confidence level has a huge impact on the workload unless you automate the test cases identified in Step 4 of the above methodology.
The figure above shows that lowering the risk to 50 percent can be achieved in a shorter period of time if testing prioritization has been done with risk in mind. "Zone A" represents the test cases prioritized in Step 1.
Once the automation process is in place and confidence has been established in the metrics produced, the scope of coverage can be expanded beyond risk-based testing to include additional functionalities and datasets.
7. Creating a Functional Test Plan
One of the first steps to creating a comprehensive automation test strategy is the creation of the application test plan for automation. An application test plan should contain a set of optimized test cases with maximum test coverage of all critical application functions. It should be executed using a tool that easily adapts to changing data and requirements. Consider the exact intent of the test plan and determine if you can create simple, effective test cases which would lead to more reliable automation. It is not acceptable to simply write all the automation scripts directly from the manual test scripts. This has the same inherent limitation as doing record/replay for every script: the test is potentially unreliable.
8. Performing Application Requirements Analysis
9. Writing Test Cases for Automation
Design test cases for automation to be modular. Instead of using one test case to perform multiple functions or exercise multiple business processes, break the tests into separate functions or business process components. This ensures focus on the intended functionality or business process.
Design test cases to be generic in terms of process and repeatable in terms of data. This will allow easier translation to test scripts, whatever the tool used to implement and execute the tests.
10. Managing Test Data
An important goal of functional testing is to allow the test to be repeated with the same result, yet varied to allow problem diagnosis. Without this, it is hard to communicate problems to coders, and it can become difficult to have confidence in the QA team's results. Good data allows detailed diagnosis, effective reporting, and repeatable test steps. It fosters confidence in the results obtained from test execution and iteration.
10.1.1. Classification of Data Types
In the process of testing a system, many references are made to "The Data" or "Data Problems". Although it is perhaps simpler to discuss data in these terms, it is useful to be able to classify the data according to the way it is used. The following broad categories allow data to be handled and discussed more easily.
- Environmental data: tells the system about its technical environment. It includes communications addresses, directory trees and paths, and environmental variables. For example, the current date and time can be seen as environmental data.
- Setup data: communicates the business rules. Typically, setup data causes different functionality to apply to otherwise similar data.
- Input data: the information entered during daily operation of business functions. Accounts, products, orders, actions, and documents can all be input data. For the purposes of testing, this category of data can itself be split into two types:
  - Fixed input data is available before the start of the test, and forms a major component of the test conditions.
  - Consumable input data represents the test input that is consumed as the test executes.
It is also revealing to categorize the data being used in a business process as:
- Transitional data: exists only within an application, during processing of input data. Transitional data is not seen outside the system, but its state can be inferred from actions that the system has taken. Typically held in internal system variables, it is temporary and is lost at the end of processing.
- Output data: the end result of processing input data and events. It generally has a correspondence with the input data, and includes not only files, transmissions, reports and database updates, but can also include test measurements. A subset of the output data is generally compared with the expected results at the end of test execution. As such, it does not directly influence the quality of the tests but is used to evaluate pass/fail criteria.
11. Evaluating Test Tools
What should be "under the hood"?
- "Scriptless" representation of automated tests: testers should be able to visualize each step in the business process, and view and edit test cases intuitively
- Integrated data tables: testers should have the ability to pump large volumes of data through the system quickly, manipulate the data sets, perform calculations, and quickly create hundreds of test iterations and permutations with minimal effort
- Clear, concise reporting: reports should provide specifics about where application failures occurred and what test data was used; provide application screen shots for every step to highlight any discrepancies; and provide detailed explanations of each verification point's pass or failure
- Integration with requirements coverage and defect management tools
12. Writing Test Scripts
Design test scripts for automation to be modular. Instead of using one test script to perform multiple functions, break the tests into separate functions. This can help focus on the business process expressed by the functionality being tested.
Design test scripts to be generic in terms of process and repeatable in terms of data. Read test data from a separate source: keep the scripts free of test data so that when you do have to change the data, you only have to maintain the data, not the scripts.
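A data-driven script of this kind can be sketched as follows. The Python below assumes a hypothetical CSV layout with `input` and `expected` columns, and uses a stand-in function for the script's generic logic; only the data source would change between runs.

```python
import csv
import io

def run_data_driven(script, data_source):
    """Run one iteration of the generic script per data row; the
    script itself stays free of test data."""
    results = []
    for row in csv.DictReader(data_source):
        actual = script(row["input"])
        results.append((row["input"], actual == row["expected"]))
    return results

# Stand-in for the automated script's logic under test
double = lambda s: str(int(s) * 2)

# External data source (would normally be a CSV file on disk)
rows = io.StringIO("input,expected\n2,4\n3,6\n5,11\n")
results = run_data_driven(double, rows)
```

Changing or extending the test data now means editing only the data source; the script itself is untouched, which is exactly the maintenance property the guideline above is after.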
13. Developing Automation Frameworks
Test automation has undergone several stages of evolution, both in the development of marketable test tool technologies and in the development of test automation processes and frameworks within individual QA organizations. The typical path followed is described below:
- Record and Playback: monitoring of an active user session, recording user inputs related to objects encountered in the user interface, and storing all steps and input data in a procedural script. This method is useful in learning how to use a test tool, but the scripts produced are difficult to maintain after the application under test changes and do not produce reliable, consistent results.
- Test Script Modularity: creating small, independent scripts that represent modules, sections, and functions of the application under test; then combining them in a hierarchical fashion to construct larger tests. This represents the first step toward creating reusable test assets.
- Test Library Architecture: dividing the application under test into procedures and functions (also known as objects and methods, depending on your implementation language) instead of a series of unique scripts. This requires the creation of library files that represent modules, sections, and functions of the application under test. These files, often referred to as function libraries, are then called directly from within test case scripts. Thus, as elements of the application change, only the common library components which reference them must be changed, not multiple test scripts with hard-coded references which might be difficult to locate and validate.
- Data-Driven Testing: reading input and output values from data files or tables into the variables used in recorded or manually coded test scripts. These scripts include navigation through the application and logging of test status. This abstraction of data from the test script logic allows testers with limited knowledge of the test tool to focus on developing, executing and maintaining larger and more complex sets of test data. This increase in organizational efficiency fosters enhanced test coverage with shorter test cycles.
- Keyword-Driven Testing: including test step functionality in the data-driven process by using data tables and keywords to trigger test events. Test steps are expressed as Object - Action - Expected Result. The difference between data-driven and keyword-driven testing is that each line of data in a keyword script includes a reference that tells the framework what to do with the test data on that line. The keyword attached to the test step generally maps to a call to a library function using parameters read in from the data file or table. One major benefit is the improved maintainability of the test scripts: by fully modularizing automation of each step, it's easier to accommodate any user interface changes in the application under test.
As noted earlier, one of the challenges facing test automation is to speed up testing processes while increasing the accuracy and completeness of tests. The evolution of test automation frameworks has been driven by accepting this challenge.
13.1. Data-Driven Frameworks
This type of functional test automation framework abstracts the data layer from the test script logic. Ideally, only data used as inputs to test objects and outputs from test events would need to change from one iteration to the next. The types of test scripts used in this architecture are described below.
13.1.1. Driver Script
- Performs initialization of the test environment (as required)
- Calls each Test Case Script in the order specified by the Test Plan
- Controls the flow of test set execution
13.1.2. Test Case Script
- Executes application test case logic using Business Component Function scripts
- Loads test data inputs (function parameters) from data files and tables
- Evaluates actual results against the expected results loaded from data files and tables
13.1.3. Business Component Function Script
- Exercises specific business process functions within an application
- Issues a return code to indicate result or exception
- Uses parameter (input) data derived from data files and tables
13.1.4. Common Subroutine Function Script
- Performs application-specific tasks required by two or more business component functions
- Issues a return code to indicate result or exception
- Uses parameter (input) data derived from data files and tables
13.1.5. User-Defined Function Script
- Contains logic for generic, application-specific, and screen-access functions
- Can include code for test environment initialization, debugging and results logging
In this architectural model, the "Business Component" and "Common Subroutine" function scripts invoke "User-Defined Functions" to perform navigation. The "Test Case" script would call these two scripts, and the "Driver" script would call this "Test Case" script the number of times required to execute Test Cases of this kind. In each case, the only change between iterations is in the data contained in the files that are read and processed by the "Business Function" and "Subroutine" scripts.
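The layering described above can be sketched in miniature. The Python below uses invented function names to stand in for the four script types; only the data rows would change between iterations.

```python
def navigate_to(screen):                 # User-Defined Function
    """Generic screen-access helper (hypothetical)."""
    return f"at:{screen}"

def create_order(item, qty):             # Business Component Function
    """Exercises one business process; issues a return code."""
    navigate_to("order_entry")
    return "OK" if qty > 0 else "ERR"

def tc_create_order(data_row):           # Test Case Script
    """Evaluates the actual result against the expected result
    loaded from the data row."""
    actual = create_order(data_row["item"], data_row["qty"])
    return actual == data_row["expected"]

def driver(test_cases, data_rows):       # Driver Script
    """Controls the flow: runs each test case over each data row."""
    return [tc(row) for tc in test_cases for row in data_rows]
```

Note how the test case never touches concrete data directly: it receives a row, and the driver decides how many iterations to run, mirroring the division of responsibility in the framework.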
13.2. Keyword-driven Frameworks
This type of framework builds on the data-driven framework by including business component functionality in the data tables. Keywords are used within each test step to trigger specific actions performed on application objects.
13.2.1. Driver Script
- Governs test execution workflow
- Performs initialization of the test environment (as required)
- Calls the application-specific Action ("Controller") Script, passing to it the names of the business process test cases. These test cases can be stored in spreadsheets, delimited text files, or database records.
13.2.2. Action ("Controller") Script
- Acts as the "controller" for test case execution
- Reads and processes the business process test case name received from the Driver Script
- Matches on keywords contained in the input dataset
- Builds a list of parameters from values included with the test data record
- Calls "Utility" scripts associated with the keywords, passing the created list of parameters
13.2.3. Utility Scripts
- Process the list of input parameters received from the Action Script
- Perform specific tasks (e.g. press a key or button, enter data, verify data), calling "User-Defined Functions" as required
- Record any errors encountered during test case execution to a Test Report (e.g. data sheet, table, test tool UI)
- Return to the Action Script, passing a result code for processing status (e.g. pass, fail, incomplete, error)
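The keyword dispatch at the heart of this framework can be sketched as follows. This is illustrative Python with hypothetical keywords and utility functions, not any tool's actual API:

```python
# Hypothetical utility scripts, one per keyword
def press(target):
    """Press a key or button identified by name."""
    return "pass"

def enter(target, value):
    """Enter data into a control; empty input is treated as a failure."""
    return "pass" if value else "fail"

KEYWORDS = {"press": press, "enter": enter}  # keyword -> utility function

def run_steps(steps, report):
    """Action script: match each row's keyword, pass its parameter
    list to the utility function, and record the result code."""
    for row in steps:
        action = KEYWORDS[row["keyword"]]
        status = action(*row["params"])
        report.append((row["keyword"], status))
    return report
```

Because each step is just a keyword plus parameters, test cases can be written and maintained as data rows by testers with no scripting knowledge, while UI changes are absorbed inside the utility functions.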
13.2.4. User-Defined Function Libraries
� Contain code for general and application-specific functions
� May be called by any of the above script-types in order to perform specific tasks
� Can contain business rules
13.3. Tool-specific Frameworks
13.3.1. Mercury Quality Center with TestDirector and QuickTest Professional
Business Process Testing uses a role-based model, allowing collaboration between non-technical Subject Matter Experts and QA Engineers versed in QuickTest Pro. Business process tests are composed of business components. The information in the business component's outer layer -- the description, status, and implementation requirements, together with the steps that make up the component -- is defined in Quality Center by the SME, who then runs and analyzes the associated tests and test sets. A QuickTest Engineer populates a shared repository with the different objects in the application being tested and encapsulates all activities and scripted steps into operations, essentially using function libraries in a keyword-based automation framework.
When QuickTest Professional is connected to a
In addition to creating and maintaining the object repository, the QuickTest Engineer defines a set of elements that comprise an Application Area, created in QuickTest Professional and containing all of the settings and resources required to create the content of a business component. These include all the objects from the application under test contained in the shared object repository, and the user-defined operations contained in function library files.
Each business component can be associated with a specific Application Area, or can share an Application Area with other components. Application area settings are automatically inherited by the business components that are based on that application area.
An application area includes:
- Resources: resource settings include associated library files and the shared object repository.
- Add-Ins: the add-ins associated with the first business component in a business process test (inherited from the application area used by the component) are automatically loaded in QuickTest Professional.
- Windows-Based Applications: if you are creating a business component to test a Windows-based application, you must specify the application on which the business component can run. Other environments are supported by the appropriate QuickTest Add-In.
- Recovery Scenarios: activated during execution of a business component test when an unexpected event occurs.
The picture below illustrates the workflow (Roles and Activities) encountered in Business Process Testing with Mercury Quality Center integrated with the QuickTest Professional automated functional testing tool.
13.3.2. Desktop Certification Automation
In an application certification run, the steps for dealing with the application under test include:
1. Install Application.
2. Reboot (Optional).
3. Post Install Step.
4. Analyze Workstation.
5. Test and Leave Application.
6. Perform Tests (Go To Top).
7. Close Application.
8. Scan.
9. Uninstall Application.
10. Analyze Workstation.
11. Perform Interoperability Tests.
Implementing test automation is most often an evolutionary process, making it easier for a QA organization to assimilate the necessary learning curve. Some of the types of automation development, current and future, are described below:
Ad-Hoc
- Scripting developed in reactionary mode to test a single issue or fix
- Test case steps are part of each Action script: high maintenance, low reusability
- Contains some of the necessary data inputs stored in QTP script's datasheet but not full data-driven implementation
Data-Driven
- Scripts are an assembly of function calls
- Data for test cases read in from external source (e.g., Excel spreadsheet)
- Results can be captured externally per script execution (i.e., spreadsheet, database, TD)
Keyword-Driven
- Test cases are expressed as sequence of keyword-prompted actions
- A Driver script runs Actions which call functions as prompted by keywords
- No scripting knowledge necessary for developing and maintaining test cases (unless new functionality)
Model-Driven
- Descriptive programming is used to respond to dynamic applications (e.g., websites)
- Actually, this is a method which can be used within other solution types
- Regular expressions used to define objects
- Custom functions used to enhance workflow capabilities
3rd-Party:
- Similar to keyword-driven but controlled using a third-party tool
- Begins with high-level test requirements:
- Business Requirements defined
- Application Areas (shared resources) defined
- Business Components defined and grouped under Application Areas
- Test steps defined
- Tests can be defined as Scripted Components (QTP scripts with Expert Mode)
- Business Process Tests and Scripted Components are cataloged under Test Plan
- Test Runs are organized from Test Plan components and executed from Test Lab
- Test Runs can be scheduled and/or executed on remote test machines (with QTP)
- Defects can be generated automatically or entered manually per incident
- Dashboard available for interactive status monitoring
Intelligent Query-Driven
- Agile
- Object Oriented
- Constructed as a layered framework
- Test data is compiled as required using data-mining techniques
Each type of framework has its own unique advantages and disadvantages.
Comparison of Automation Types: Test Coverage and Maintenance Level
Test Coverage: in functional testing, a measurement of the extent to which the business requirements of an application are verified during test execution.
Maintenance Level: the amount of effort (time and staff) required to keep test assets up to date with changes and additions contained in releases of the applications under test. It includes tasks such as creating and updating test cases, test scripts, function libraries, and object repositories, and debugging test code.
Not every type is required in this progression. The implementation path is dependent on such factors as project timelines, resource allocation, tool selection, and QA organization maturity level.
The most significant ROI is provided by the automation development model which has the greatest degree of test coverage with the least amount of maintenance.
Functional Test Execution
14. Functional Test Results Analysis
15. Defect Tracking and Resolution
16. Status Reporting and Test Metrics
17. FTA Methodology Delivery Process
18.1.1. Mercury Quality Center with TestDirector
18.1.3. Windows Script Host VBScript
18.2.2. WinRunner script (note: structure applies to all coding, e.g., QTP)
Standard script header that defines what the script does, any required parameters, special notes, return value, and change log. Each change to the script should be logged in the script header.

##########################################################
#     Script
#     --------------------------------------------------
#     Word_File_Close
#
#     Description
#     --------------------------------------------------
#     This script verifies that Word can close a file
#
#     Parameters
#     --------------------------------------------------
#     No Parameters
#
#     Notes
#     --------------------------------------------------
#     Microsoft Office scripts are accompanied by specific
#     application files containing application macros
#     invoked by WinRunner scripts.
#
#     Return Value
#     --------------------------------------------------
#     Returns Status Code
#
#     Author              Date          Change
#     --------------------------------------------------
#     Lars Nargren        8/29/01       Creation
##########################################################

Assign the test_id variable that is used by the results reporting functions to identify the test case.

static test_id = "Word File Close";

Define any variables that are to be used by the script. This allows easy maintenance by keeping all script data in one place.

#####################################
#   VARIABLES (should be static)    #
#####################################
static status, exp_res, act_res, msg;
static file_name = "WordFileOpen";
static file_path = FILE_LOCAL_SOURCE_DIR & file_name;

Start the log entry in the detail results file for this test case.

# Begin logging test case detail
test_case_detail_start(test_id);

Perform the script setup. In this example Microsoft Word is loaded using the word_load function. The new document is then closed by invoking the appropriate Word macro. The application is then checked to ensure it is in the correct initial state. If it is not, the test status is set to COULD_NOT_TEST and logged, and control is returned to the batch script that called the script.

#---------SCRIPT SETUP---------
# Start MS Word and verify initial state (Word running with no open documents)
word_load();
word_macro_run (WORD_FILE_CLOSE);
if (win_exists("Microsoft Word - No Open Documents") != E_OK)
{
      # If Word is not there, or a document is open, then abort the test
      status = COULD_NOT_TEST;
      msg = "Word not in initial state";
      test_case_result_log (test_id, status, msg);
      treturn (status);
}

Perform the action. In this example a new document is opened using the appropriate Word macro. The newly opened document is then closed (the objective of the test in this case is to verify that Word can close a document).

#---------SCRIPT ACTION---------
# Run Word open file macro and input file name
word_macro_run (WORD_FILE_OPEN);
set_window("Test Input", 10);
edit_set ("input", file_path);
button_press ("OK");
# Close file
word_macro_run (WORD_FILE_CLOSE);

Perform the verification. First, the verification variables (standard across all test scripts) are initialized. Then, in this example, Microsoft Word is checked to see if there are any open documents. If there are, the test status is set to FAIL, and the actual result message is specified to describe the failure.

#---------SCRIPT VERIFICATION---------
status = PASS;
exp_res = "Document was closed successfully";
act_res = exp_res;
# Make sure file was closed
rc = set_window ("Microsoft Word - No Open Documents", 10);
if (rc != E_OK)
{
      status = FAIL;
      act_res = "Document was not closed";
}

Write the test case results to the detail and summary files, and to TestDirector (if applicable) and the WinRunner report. Return the application to its initial state (in this example, Word is closed), then return the test status to the batch script.

# Log results to log files
test_case_result_log (test_id, status, act_res);
# Log results to WR/TD report
tl_step (test_id, status, act_res);
# Return application to initial state
word_macro_run (WORD_QUIT);
# Return test status
treturn (status);
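The setup/action/verification/logging/cleanup structure of the sample script above is tool-agnostic. The sketch below restates the same flow in Python against a hypothetical stand-in for the application; every helper name here is invented for illustration and does not correspond to any WinRunner or Word API.

```python
# Language-neutral sketch of the script structure above:
# setup -> action -> verification -> logging -> cleanup,
# using the same status codes as the WinRunner sample.

PASS, FAIL, COULD_NOT_TEST = "PASS", "FAIL", "COULD_NOT_TEST"

def run_test_case(app, test_id="Word File Close"):
    log = []
    # ---- SETUP: verify the application is in its initial state ----
    if not app.in_initial_state():
        log.append((test_id, COULD_NOT_TEST, "not in initial state"))
        return COULD_NOT_TEST, log
    # ---- ACTION: open a document, then close it ----
    app.open_document("WordFileOpen")
    app.close_document()
    # ---- VERIFICATION: confirm no documents remain open ----
    status, act_res = PASS, "Document was closed successfully"
    if app.open_documents:
        status, act_res = FAIL, "Document was not closed"
    # ---- LOGGING and CLEANUP ----
    log.append((test_id, status, act_res))
    app.quit()
    return status, log

class FakeWord:
    """Hypothetical stand-in for the application under test."""
    def __init__(self): self.open_documents = []
    def in_initial_state(self): return not self.open_documents
    def open_document(self, name): self.open_documents.append(name)
    def close_document(self): self.open_documents.pop()
    def quit(self): self.open_documents = []

status, log = run_test_case(FakeWord())
print(status)
```

Keeping every script to this fixed shape is what makes results reporting and batch execution uniform across the suite.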
18.2.3. DDT Action datasheet (Excel)
Sample 1: DDT Template used for data-driven test automation
Sample 2: operations, functions, and parameters used with the DDT Template
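The mechanics behind a DDT datasheet can be sketched as a small replay loop: each row names an operation and its parameters, and the engine dispatches every row in order. The column names and operations below are hypothetical, not taken from the actual template.

```python
# Sketch: a data-driven engine replaying datasheet rows. Each row
# carries an operation name plus parameters; the engine looks the
# operation up and calls it. Columns and operations are invented.
import csv, io

datasheet = io.StringIO(
    "operation,target,value\n"
    "set_text,username,greg\n"
    "set_text,password,secret\n"
    "press_button,OK,\n"
)

executed = []
def set_text(target, value): executed.append(("set_text", target, value))
def press_button(target, value): executed.append(("press", target))

OPERATIONS = {"set_text": set_text, "press_button": press_button}

# Replay every row of the datasheet in order
for row in csv.DictReader(datasheet):
    OPERATIONS[row["operation"]](row["target"], row["value"])

print(len(executed))
```

New test cases become new rows in the datasheet; the script itself does not change.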
18.2.4. Keyword (framework) Reference Guide examples
Function Name / Keywords

BackPage - Moves back a page in the browser using the keyboard "backspace" key.

Parameters | Parameter Values | Description
--- | --- | ---

ClickImage - Clicks on the specified image.

Parameters | Parameter Values | Description
Param1 | ImageName | The name recorded in the Object Repository for the image.

CloseAllBrowsers - Closes all active browsers on the workstation.

Parameters | Parameter Values | Description
--- | --- | ---

CloseWindow - Closes the specified browser window.

Parameters | Parameter Values | Description
Param1 | BrowserObj | Identifies the Browser that should be closed. Ex.: Browser("ChildBrowser")

EndState - Closes the main APPL browser window.

Parameters | Parameter Values | Description
--- | --- | ---
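A keyword-driven framework turns a reference guide like the one above into a dispatch table: each keyword maps to a function, and Param1 is passed through. The sketch below simulates the browser actions in memory; a real implementation would call the test tool's API instead.

```python
# Sketch of a keyword-driven executor for the keywords documented
# above. Browser state is simulated for illustration only.

state = {"open_browsers": ["Main", "ChildBrowser"], "actions": []}

def BackPage(param=None):
    state["actions"].append("back")          # simulate Backspace keypress

def ClickImage(image_name):
    state["actions"].append("click:" + image_name)

def CloseAllBrowsers(param=None):
    state["open_browsers"].clear()

def CloseWindow(browser_obj):
    state["open_browsers"].remove(browser_obj)

KEYWORDS = {"BackPage": BackPage, "ClickImage": ClickImage,
            "CloseAllBrowsers": CloseAllBrowsers, "CloseWindow": CloseWindow}

# A test step table: (keyword, Param1), as it might come from a datasheet
steps = [("ClickImage", "logo"), ("CloseWindow", "ChildBrowser")]
for keyword, param in steps:
    KEYWORDS[keyword](param)

print(state["open_browsers"])
```

Because test designers only write keyword rows, the function libraries can be maintained separately by the automation engineers.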
18.3. Links to Other Resources
Software Quality Engineering's test-related articles and info:
http://www.stickyminds.com