
Advanced Programming C++

Final presentation for my C++ course:

Advanced Programming C++

Software Testing Techniques

(Paper written in Klagenfurt, Austria, for the Software Quality course)

Note: the English needs correcting!! ... someday I'll get to it... ;)

Department of Informatics-Systems,

Alpen-Adria University of Klagenfurt.

Francisco Javier Riveros Escobar,


Abstract – Software has grown considerably in recent years, and software development processes have been modified accordingly to obtain a quality product; but quality is a complex and multi-faceted concept that can be described from diverse perspectives. Testing has therefore occupied from 25% to 50% of the total budget in many software development projects. Testing software well has always been challenging, but the process is fairly well understood. The intention of testing is always to find defects; it is a destructive process, meant to show that something is incorrect, which demands an appropriate attitude from the tester. Many techniques are specifically designed to identify faults, and we often evaluate testing techniques on the basis of their ability to detect faults. In this paper we examine and describe related techniques and activities with both these purposes in mind, and provide a balanced view of testing. We follow the classification of the major testing techniques by criteria, and we look more deeply at object-oriented testing, where we examine the types of testing and the issues raised by the object-oriented paradigm.

Index Terms

Software Testing, Black-box Testing, White-box Testing, Gray-box Testing, Object-Oriented Testing, Class Testing, Software Test Data Generation

I. INTRODUCTION

A problem that recurs throughout the history of software is ensuring that software systems work as expected according to the requirements of the clients and target customers [1]. This is where testing comes in, with its different techniques and methods that are applied to different kinds of software. The design and implementation of error-free programs is a difficult task for everyone who works in this area. In real life, software must show quality and must be robust, because the growing economy demands it. If we look at definitions of testing, we find different opinions, such as:

“Testing is the process of executing a program with the intent to certify its quality” (Mills)

“Testing is the process of executing a program with the intent of finding failures/faults” (Myers)

“Testing is the process of exercising the software to detect bugs and to verify that it satisfies specified functional & non-functional requirements”. [3]

These definitions share a common pattern, "the intent of finding errors", and we prefer this definition:

“Testing is the process of executing a program with the intent of finding errors” [2].

This definition is closest to our approach, because we must start from the assumption that the program contains errors: no one is perfect, we all make mistakes or omissions, and in most projects deadlines weigh on everyone involved, so under pressure to meet the dates we are all the more likely to make mistakes. Testing is therefore important because it substantially contributes to ensuring that a software application does everything it is supposed to do [5]. It is also important because the complexity of our software and the size of its user base increase every day; one well-known example is the Y2K problem, which more rigorous techniques might have found earlier.

Later the concept of quality assurance appears, where testing helps ensure that a product meets requirements; testing is a necessary but insufficient part of any quality assurance process, but it is not quality assurance itself. The mission of QA is to prevent problems in the first place as well as to remove those defects that do creep into the product. For example, here is one definition of SQA (Software Quality Assurance):

“SQA is the planned and systematic set of activities that ensure that software process and products conform to requirements, standards, and procedures. ‘Processes’ include all activities involved in designing, coding, testing and maintaining; ‘products’ include software, associated data, documentation, and all supporting and reporting paperwork.” [6]

This definition shows us how QA is a concept different from testing, one that can contribute to improving quality in the development process. With the different new techniques and methods we want to improve quality, but these techniques require that testers and developers work more closely together, since we do not have only one technique for development; we have many that must be used together. Each test selection technique has its own strengths and weaknesses, and an effective testing methodology must use several different techniques together.

There are many approaches to testing; two of them are widely used and referred to:

1. Verification, Validation & Testing (V,V&T)

2. The V- Model

In the first approach, software testing focuses on two separate issues:

· Verification (Static Testing)

As defined by IEEE/ANSI [7], verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. It is the process of evaluating, reviewing, inspecting, and doing desk checks of work products such as requirement specifications, design specifications, and code. Verification thus tries to answer the question: Have we built the system right?

· Validation (Dynamic Testing)

As defined by IEEE/ANSI [7], validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements. The question for validation is: Have we built the right system?

Verification and validation are complementary [8]; Table 1 summarizes what V&V contribute to the process:

Validation: Am I building the right product?
Verification: Am I building the product right?

Validation: Determining if the system complies with the requirements, performs the functions for which it is intended, and meets the organization's goals and user needs. It is traditional and is performed at the end of the project.
Verification: The review of interim work steps and interim deliverables during a project to ensure they are acceptable; determining if the system is consistent, adheres to standards, uses reliable techniques and prudent practices, and performs the selected functions in the correct manner.

Validation: Am I accessing the right data (in terms of the data required to satisfy the requirement)?
Verification: Am I accessing the data right (in the right place, in the right way)?

Validation: High-level activity.
Verification: Low-level activity.

Validation: Performed after a work product is produced, against established criteria, ensuring that the product integrates correctly into the environment.
Verification: Performed during development on key artifacts, through walkthroughs, reviews and inspections, mentor feedback, training, checklists, and standards.

Validation: Determination of correctness of the final software product by a development project with respect to the user needs and requirements.
Verification: Demonstration of consistency, completeness, and correctness of the software at each stage and between each stage of the development life cycle.

Table 1. - Summary Validation and Verification (V&V) [9]

Separately we have the V-Model, which represents the software development life cycle; the V-model is the state of the art taught in practically every course on testing [10]. The V-model shows the various stages in development and testing and the relationships between them, as we can see in Figure 1, where the V-model maps the types of test to each stage of development:


Figure 1. - A typical development life-cycle V-Model.

The V-model splits the testing process into levels, where in both branches testing goes together with system implementation. The left side of the V represents the flow in which the system specifications are defined. The right side of the V represents the testing stream, in which the systems are tested in conjunction with the corresponding level of specification. Note that the flow of abstraction in testing is the reverse of the flow in implementation, where the custom is to start from high abstractions and move towards more and more concrete details [10]. Testing small units in this way makes it easier to find and fix defects than testing large units or entities. The V-model brings out two important points: testing a smaller part before putting it into a larger system is a good approach, and testing efforts can start with planning as soon as the higher-level requirements have been identified.

Finally, there is one question that always comes up: if we can't test everything, what can we do? This question usually confronts project leaders and testers; the answer is to manage and reduce risk by carrying out a risk analysis of the application. We can also prioritize tests to focus on the main areas of risk, so that we spend our time on the core of the software. There is another question related to testing: what do we do when the software is ready and on the market, or when should testing stop? The answer is to continue testing, because it is better that we find the errors than the users do; this is why most software companies release updates and patches to fix bugs and improve some areas. Since we can never simply declare a program error-free, the better way is to keep trying to find imperfections rather than just confirm that the program works correctly for some set of input data.

With this introduction we should know that there is no silver bullet for testing and that no single technique alone is satisfactory for our purposes, as has been pointed out by many leading researchers such as [11].

The structure of the paper is as follows: the next section gives the classification of software testing techniques. Section III presents the dynamic testing techniques: black-box, white-box, and gray-box techniques. Section IV covers object-oriented testing, including intra-class and inter-class testing. Section V describes software test data generation, and the last section concludes.

II. THE CLASSIFICATION OF SOFTWARE TESTING TECHNIQUES

A classification would allow testers to compare and evaluate testing techniques more easily when choosing the testing strategy for a software development effort, because each development needs different techniques and methods that work together during testing. These techniques can be classified according to the following viewpoints [12]:

(1) Does the technique require us to execute the software? If so, the technique is dynamic testing; if not, the technique is static testing.

(2) Does the technique require examining the source code in dynamic testing? If so, the technique is white-box testing; if not, the technique is black-box testing. We will look at these techniques in more detail below.

(3) Does the technique require examining the syntax of the source in static testing? If so, the technique is syntactic testing; if not, the technique is semantic testing.

(4) How does the technique select the test data? Test data is selected depending on whether the technique refers to the function or the structure of the software, leading respectively to functional testing and structural testing, whereas in random testing test data is selected according to the way in which the software is operated.

(5) What type of test data does the technique generate? In deterministic testing, test data are predetermined by a selective choice according to the adopted criteria. In random testing, test data are generated according to a probability distribution defined on the input domain.

Figure 2, extracted from [12], presents a framework for surveying software testing techniques based on earlier classification and evaluation work. It shows the tree of software testing techniques from which, depending on our type of software, we select one branch and the corresponding techniques. As an example, in the next section we explain the dynamic testing branch, with black-box or specification-based testing, white-box or program-based testing, and gray-box testing.

III. DYNAMIC TESTING TECHNIQUES

The principal choice in testing is between two divergent forms of test case coverage:

· Specification-based testing or black-box testing

· Program-based testing or white-box testing

A. Specification based Testing or black box testing.

According to IEEE standard [1, 2], black-box testing, also known as functional testing, is:

1. “Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions;

2. “Testing conducted to evaluate the compliance of a system or component with specified functional requirements.”

Testing that exercises most parts of the system by invoking various system calls through the user interface is called black-box testing. For example, if the user adds a record to the application's database by entering data via the user interface, various layers, such as the database layer, the user-interface layer, and the business-logic layer, are executed. The black-box tester validates the correct behavior only by viewing the output of the user interface.

In specification-based testing or black-box testing, the goal is to produce a test suite on the basis of a specification; the existence of a formal specification or model introduces the possibility of automating test generation, thus making test generation more efficient. Specification-based testing is performed on software and other types of systems to improve quality by verifying the intended functionality of the system, all of this done through reference documents that might be formal specifications, user manuals, advertisements, or even third-party documents [13]. Paradigmatic cases include:

testing against contractual specifications, testing dominated by traceability to written specifications, and user documentation testing. A newer specification-based approach, bounded exhaustive testing (BET), seeks to automate the testing process by exhaustively testing the software for all inputs within some bound [15]. The current state of the practice is informal specification, and thus informal determination of coverage of the specification is the norm. For example, tests can be cross-referenced with portions of the design document [18], and a test management tool can make sure that all parts of the design document are covered. One example of this technique was its use in flight software development for the Hubble Space Telescope mission [14]. Some authors call specification-based testing black-box testing [16], because in black-box testing the tester only knows what the software is supposed to do and cannot look in the box to see how it operates. We type a certain input and obtain a certain output, and the tester does not know how or why it happens, just that it does.

Figure 2: A framework for surveying software testing techniques [12].

Black-box testing is therefore defined as test case selection based on analysis of the specification of a component without reference to its internal workings; it concentrates on the business function and can be used throughout the testing cycle. Black-box testing is applicable to all levels of software testing: unit, integration, functional, system, and acceptance.

Why do we need black box test techniques?

Because exhaustive testing is not possible, due to the constraints of time, money, and resources, we must create a subset of tests. We should also focus on areas of likely risk where mistakes may occur, and this is where we use the black-box techniques.

According to BS7925-2 [17] (Standard for Software Component Testing), the black-box techniques are listed below:

· Equivalence partitioning

This is a software testing technique whose goals are:

1. To reduce the number of test cases to a necessary minimum.

2. To select the right test cases to cover all possible scenarios.

Although in rare cases equivalence partitioning is also applied to the outputs of a software component, typically it is applied to the inputs of the component under test. The equivalence partitions are usually derived from the specification of the component's behaviour. An input has certain ranges which are valid and other ranges which are invalid. This is best explained with the following example of a function which has a parameter "month" of a date. The valid range for the month is 1 to 12, standing for January to December. This valid range is called a partition. In this example there are two further partitions of invalid ranges: the first invalid partition would be <= 0 and the second invalid partition would be >= 13.


Figure 3. – Example equivalence partitioning.
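To make this concrete, here is a minimal C++ sketch of one test case per partition; isValidMonth is a hypothetical component under test, invented for illustration and not taken from any cited source.

#include <cassert>

// Hypothetical component under test: validates the "month" input parameter.
bool isValidMonth(int month) {
    return month >= 1 && month <= 12;
}

int main() {
    // One representative test case per equivalence partition:
    assert(isValidMonth(6)  == true);   // valid partition: 1..12
    assert(isValidMonth(-3) == false);  // invalid partition: <= 0
    assert(isValidMonth(15) == false);  // invalid partition: >= 13
    return 0;
}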

· Boundary value analysis

This technique focuses on the boundaries of the input and output equivalence classes, because errors tend to congregate at the boundaries; focusing testing on these areas increases the probability of detecting errors.

Boundary value testing is a variation of the equivalence class partitioning technique which focuses on the bounds of each equivalence class, for example on, above, and below each class boundary. Rather than selecting an arbitrary test point within an equivalence class, boundary value analysis selects one or more test cases to challenge each edge. The focus is on both the input space (input equivalence classes) and the output space (output equivalence classes); output equivalence classes, and therefore their boundary value tests, are more difficult to define.
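Continuing the hypothetical month example from above, a boundary value test set targets the edges of the valid class rather than arbitrary interior points; this is only a sketch of the idea.

#include <cassert>

// Same hypothetical function as in the partitioning sketch above.
bool isValidMonth(int month) { return month >= 1 && month <= 12; }

int main() {
    // Boundary value analysis: test on and immediately around each
    // boundary of the valid class [1, 12].
    assert(!isValidMonth(0));   // just below the lower bound
    assert( isValidMonth(1));   // on the lower bound
    assert( isValidMonth(12));  // on the upper bound
    assert(!isValidMonth(13));  // just above the upper bound
    return 0;
}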

· State transition

State transition testing is a technique in which the states of a system are first identified, and test cases are then written to exercise the triggers or stimuli that cause a transition from one state to another. The tests can be designed using a finite-state diagram or an equivalent table.
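As an illustrative sketch, consider a hypothetical two-state door controller in C++; the states, events, and transition function are invented for this example, and one test case is written per row of the state table, including the null transitions.

#include <cassert>

// Hypothetical system under test: a simple door with two states.
enum class State { Closed, Open };
enum class Event { OpenDoor, CloseDoor };

// Transition function derived from a finite-state diagram.
State next(State s, Event e) {
    if (s == State::Closed && e == Event::OpenDoor)  return State::Open;
    if (s == State::Open   && e == Event::CloseDoor) return State::Closed;
    return s;  // all other stimuli leave the state unchanged
}

int main() {
    // One test case per transition in the state table:
    assert(next(State::Closed, Event::OpenDoor)  == State::Open);
    assert(next(State::Open,   Event::CloseDoor) == State::Closed);
    // Null transitions (stimuli that should not change the state):
    assert(next(State::Closed, Event::CloseDoor) == State::Closed);
    assert(next(State::Open,   Event::OpenDoor)  == State::Open);
    return 0;
}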

· Cause & effect graphing (CEG)

Cause-effect graphing is basically a hardware testing technique adapted to software testing by Elmendorf [21] and further developed by others. The CEG technique is a black-box method, i.e., it considers only the desired external behaviour of a system. The technique analyses the specification to produce a graphical model of causes and their effects, expressed as Boolean conditions in the form of a decision table. In CEG analysis, first identify the causes, effects, and constraints in the (natural-language) specification. Second, construct a CEG as a combinational logic network consisting of nodes, called causes and effects, arcs with Boolean operators (and, or, not) between causes and effects, and constraints. Finally, trace this graph to build a decision table, which may subsequently be converted into use cases and, eventually, test cases.
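A small hedged example: assuming a hypothetical specification in which a discount (the effect) is granted only when the customer is a member AND the order exceeds 100 (the causes), the decision table traced from the graph maps directly onto test cases.

#include <cassert>

// Hypothetical specification: the effect "discount granted" occurs when
// cause 1 (customer is a member) holds AND cause 2 (order exceeds 100) holds.
bool discountGranted(bool isMember, double orderTotal) {
    return isMember && orderTotal > 100.0;
}

int main() {
    // Decision table traced from the cause-effect graph; each column
    // of the table becomes one test case.
    //                  cause1  cause2  -> effect
    assert(discountGranted(true,  150.0) == true);   //   T      T       T
    assert(discountGranted(true,   50.0) == false);  //   T      F       F
    assert(discountGranted(false, 150.0) == false);  //   F      T       F
    assert(discountGranted(false,  50.0) == false);  //   F      F       F
    return 0;
}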

· Syntax testing

Syntax testing is a technique in which a syntax command generator generates test cases based on the syntax rules of the system. Both valid and invalid values are created. It is a data-driven black-box testing technique for testing input data to language processors, such as string processors and compilers. Test cases are developed based on rigid data definitions.

The valid inputs are described in Backus-Naur Form (BNF) notation. The main advantage of syntax testing is that it leaves no room for misunderstandings about valid and invalid data; specification problems become apparent when employing this technique.
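As a sketch, suppose a hypothetical input processor accepts integer literals described by the BNF rule <integer> ::= [ "-" ] <digit> { <digit> }; valid test cases follow the grammar, and invalid ones are produced by mutating it (omitting, inserting, or misplacing tokens).

#include <cassert>
#include <cctype>
#include <string>

// Hypothetical input processor under test: accepts strings matching the
// BNF rule  <integer> ::= [ "-" ] <digit> { <digit> }
bool isIntegerLiteral(const std::string& s) {
    std::size_t i = (!s.empty() && s[0] == '-') ? 1 : 0;
    if (i == s.size()) return false;  // nothing after the optional sign
    for (; i < s.size(); ++i)
        if (!std::isdigit(static_cast<unsigned char>(s[i]))) return false;
    return true;
}

int main() {
    // Valid cases derived directly from the grammar rules:
    assert(isIntegerLiteral("7"));
    assert(isIntegerLiteral("-42"));
    // Invalid cases created by mutating the grammar:
    assert(!isIntegerLiteral(""));     // empty input
    assert(!isIntegerLiteral("-"));    // sign with no digits
    assert(!isIntegerLiteral("4-2"));  // misplaced sign
    assert(!isIntegerLiteral("12a"));  // foreign character
    return 0;
}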

· Random testing

Random testing is a technique in which a program or system is tested by selecting at random some subset of all possible input values. It is not an optimal testing technique, because it has a low probability of detecting many defects. It does, however, sometimes uncover defects that standardized testing techniques might not. It should, therefore, be considered an add-on testing technique.
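A minimal sketch of random testing in C++, assuming a hypothetical mySort component and using std::sort as a trusted oracle; a fixed seed keeps any failure reproducible.

#include <algorithm>
#include <cassert>
#include <cstdlib>
#include <vector>

// Hypothetical component under test; std::sort serves as the trusted oracle.
std::vector<int> mySort(std::vector<int> v) {
    std::sort(v.begin(), v.end());  // stand-in for the real implementation
    return v;
}

int main() {
    std::srand(12345);  // fixed seed so failures are reproducible
    for (int run = 0; run < 1000; ++run) {
        // Draw a random subset of the input domain:
        std::vector<int> input(std::rand() % 20);
        for (int& x : input) x = std::rand() % 100 - 50;

        std::vector<int> expected = input;
        std::sort(expected.begin(), expected.end());
        assert(mySort(input) == expected);  // compare against the oracle
    }
    return 0;
}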

By itself, black box testing is not the most effective way to test. In order to design and implement the most effective strategy for thoroughly investigating the correct functioning of an application, the testing team must have a certain degree of knowledge of the system's internals, such as its major architectural components. Such knowledge enables the testing team to design better tests and perform more effective defect diagnosis. Testing a system or application by directly targeting the various modules and layers of the system is referred to as gray-box testing.

B. Program-based testing or white box testing.

White-box testing, or structural testing, is one in which test conditions are designed by examining paths of logic. The tester examines the internal structure of the program or system. Test data are driven by examining the logic of the program or system, without concern for the program or system requirements. The tester has knowledge of the internal program structure and logic, just as a mechanic knows the inner workings of an automobile. Specific examples in this category include basis path analysis, statement coverage, branch coverage, condition coverage, and branch/condition coverage.

Program-based testing implies inspection of the source code of the program and selection of test cases that together cover the program, as opposed to its specification. Various criteria have therefore been proposed for determining whether the program has been covered: whether all statements, branches, control-flow paths, or data-flow paths have been executed. In practice, some intermediate measure like essential branch coverage or feasible data-flow path coverage is most likely to be used, since the number of possibilities might otherwise be infinite or at least unfeasibly large. This technique has been extensively studied; note, however, that program-based testing tells us nothing about whether the program meets its intended functionality.

Program-based testing is also called white-box testing, where test case selection is based on analysis of the internal structure of a component. In dynamic testing, white-box testing means testing a running program while looking inside the box, examining the code, and watching it as it runs. It is testing as if with X-ray glasses, and is therefore also called "glass box testing" or "structural testing", because you can see and use the underlying structure of the code to design and run your tests. In summary, white-box testing:

§ Focuses on lines of code.

§ Looks at specific conditions.

§ Looks at the mechanics of the application.

§ Is useful in the early stages of testing.

Why do we need white box techniques?

Because we need to give testing code a formal structure, and these techniques enable us to measure how much of a component has been tested. For example, a component with fewer than 100 lines of code can have on the order of 100,000,000,000,000 possible paths; at 1,000 tests per second, testing every path would still take around 3,170 years. Consequently we need to plan and design effective test cases, and doing so requires knowledge of the programming language used, the databases used, the operating system(s) used, and ideally of the code itself. It is hard work, but if we design and select good test cases and appropriate techniques, it pays off.

According to BS7925-2 [17], the white-box techniques are listed below:

§ Statement Testing

"A test case design technique for a component in which test cases are designed to execute statements." Here the test cases are designed and run with the intention of executing every statement in a component. For example:

a;
if (b) {
    c;
}
d;

Table 2: Example Code.

Any test case with b true will achieve full statement coverage; note that full statement coverage can be achieved without ever exercising the case where b is false.

§ Branch /Decision Testing

This technique is used to execute all the branches the code may take based on the decisions made. In Table 2, 100% statement coverage requires one test case (b true), but 100% branch/decision coverage requires two test cases (b = true and b = false).

§ Branch Condition Testing

This technique of condition testing is based upon an analysis of the conditional control flow within the component.

Condition coverage reports the true or false outcome of each Boolean sub-expression, separated by logical-and and logical-or if they occur. Condition coverage measures the sub-expressions independently of each other.

This metric is similar to decision coverage but has better sensitivity to the control flow.

However, full condition coverage does not guarantee full decision coverage. For example, consider the following C++/Java fragment.

bool f(bool e) { return false; }
bool a[2] = { false, false };
if (f(a && b)) ...
if (a[int(a && b)]) ...
if ((a && b) ? false : false) ...

Table 3: Example C++/ Java

All three of the if-statements above branch false regardless of the values of a and b. However, if you exercise this code with a and b having all possible combinations of values, condition coverage reports full coverage.

§ Branch Condition Combination Testing

Multiple condition coverage reports whether every possible combination of Boolean sub-expressions occurs. As with condition coverage, the sub-expressions are separated by logical-and and logical-or, when present. The test cases required for full multiple condition coverage of a condition are given by the logical operator truth table for the condition.

For languages with short circuit operators such as C, C++, and Java, an advantage of multiple condition coverage is that it requires very thorough testing. For these languages, multiple condition coverage is very similar to condition coverage (explained above).

A disadvantage of this metric is that it can be tedious to determine the minimum set of test cases required, especially for very complex Boolean expressions. An additional disadvantage of this metric is that the number of test cases required could vary substantially among conditions that have similar complexity. For example, consider the following two C/C++/Java conditions.

a && b && (c || (d && e))
((a || b) && (c || d)) && e

Table 4: Example code C/C++/Java conditions.

To achieve full multiple condition coverage, the first condition requires 6 test cases while the second requires 11. Both conditions have the same number of operands and operators. The test cases are listed below in Tables 5 and 6.

   a && b && (c || (d && e))
1. F    -     -     -    -
2. T    F     -     -    -
3. T    T     F     F    -
4. T    T     F     T    F
5. T    T     F     T    T
6. T    T     T     -    -
   Table 5: Test Case 1



    ((a || b) && (c || d)) && e
 1.   F    F      -    -      -
 2.   F    T      F    F      -  
 3.   F    T      F    T      F
 4.   F    T      F    T      T
 5.   F    T      T    -      F
 6.   F    T      T    -      T
 7.   T    -      F    F      -
 8.   T    -      F    T      F
 9.   T    -      F    T      T
10.   T    -      T    -      F
11.   T    -      T    -      T

Table 6 : Test Case 2


As with condition coverage, multiple condition coverage does not include decision coverage.

For languages without short circuit operators such as Visual Basic and Pascal, multiple condition coverage is effectively path coverage for logical expressions, with the same advantages and disadvantages. Consider the following Visual Basic code fragment.

If a And b Then
...

Table 7: Example code

Multiple condition coverage requires four test cases, for each of the combinations of a and b both true and false. As with path coverage each additional logical operator doubles the number of test cases required.


§ Modified Condition Decision Testing

This metric, also known as MC/DC or MCDC, requires enough test cases to verify that every condition can affect the result of its encompassing decision [22]. This metric was created at Boeing and is required for aviation software by RTCA/DO-178B [23].

For C, C++ and Java, this metric requires exactly the same test cases as condition/decision coverage. Modified condition/decision coverage was designed for languages containing logical operators that do not short-circuit. The short circuit logical operators in C, C++ and Java only evaluate conditions when their result can affect the encompassing decision. The paper that defines this metric [22] section "3.3 Extensions for short-circuit operators" says "[use of short-circuit operators] forces relaxation of the requirement that all conditions are held fixed while the condition of interest is varied." The only programming language referenced by this paper is Ada, which has logical operators that do not short circuit as well as those that do.
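As a hedged illustration of the core idea (not the full DO-178B procedure), the decision (a && b) can be covered by three test cases, where each pair of cases differs in exactly one condition and the decision outcome changes with it:

#include <cassert>

// Hypothetical decision under test.
bool decision(bool a, bool b) { return a && b; }

int main() {
    // MC/DC for (a && b) needs only three test cases; comparing the
    // baseline to each other case, exactly one condition flips and the
    // decision outcome flips too, showing that each condition
    // independently affects the result.
    assert(decision(true,  true)  == true);   // baseline: T T -> T
    assert(decision(false, true)  == false);  // flipping a flips the outcome
    assert(decision(true,  false) == false);  // flipping b flips the outcome
    // (Multiple condition coverage would also require the fourth case F F.)
    return 0;
}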

§ Linear Code Sequence & Jump (LCSAJs)

This variation of path coverage considers only sub-paths that can easily be represented in the program source code, without requiring a flow graph [24]. An LCSAJ is a sequence of source code lines executed in sequence. This "linear" sequence can contain decisions as long as the control flow actually continues from one line to the next at run-time. Sub-paths are constructed by concatenating LCSAJs. Researchers refer to the coverage ratio of paths of length n LCSAJs as the test effectiveness ratio (TER) n+2.

The advantage of this metric is that it is more thorough than decision coverage yet avoids the exponential difficulty of path coverage. The disadvantage is that it does not avoid infeasible paths.

§ Data Flow Testing.

Data flow testing is a testing technique based on the observation that values associated with variables can effect program execution [25]. Data flow testing not only explores program control flows but also pays attention to how a variable is defined and used at different places along control flows, which could lead to more efficient and targeted test suites than pure control-flow-based test suites.

Data flow testing criteria are based on data flow information, i.e., variable definitions and uses. A variable v is defined by a statement if the execution of the statement updates the value associated with v; for example, assignments define the variables occurring on their left-hand sides, and parameters passed by value define the actual parameters. A variable v is used by a statement S if the effect of statement S depends on the current value of v; for example, assignments use all variables occurring in the expression on the right-hand side, and conditional statements use all variables occurring in the corresponding conditional expressions. A procedure return is a use not only of the variables in the explicitly returned expression (if any) but also of anything that is implicitly made available through references or pointers. The use of a variable can be classified as a c-use (computation use) or a p-use (predicate use). C-use coverage ensures that if a variable is defined (assigned a value) and later used within a computation that is not part of a conditional expression, at least one path between this def-use pair is executed; p-use coverage ensures that if a variable is defined and later used within a conditional expression, this def-use pair is executed at least once causing the surrounding decision to evaluate true, and once causing it to evaluate false.

Test paths of a program are selected based on the def-use chains (or definition-use chains) and use-def chains (or use-definition chains) of variables, where a use-def chain is a data structure that consists of a use, U, of a variable and all the definitions, D, of that variable that can reach that use without any other intervening definitions. A definition can take many forms, but is generally taken to mean the assignment of some value to a variable (which is different from the use of the term that refers to the language construct involving a data type and allocating storage). Likewise, a def-use chain consists of a definition, D, of a variable and all the uses, U, reachable from that definition without any other intervening definitions.
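The following C++ sketch, invented for illustration, annotates the definitions, c-uses, and p-uses of a single variable and shows test data that exercise the def-use pairs along both branches.

// A sketch (not from the cited sources) annotating definitions and uses
// of the variable x along the control flow.
#include <iostream>

int f(int input) {
    int x = input;   // def(x): x is assigned a value
    if (x > 10) {    // p-use(x): x appears in a predicate
        x = x - 10;  // c-use(x) in the expression, then a new def(x)
    }
    return x * 2;    // c-use(x): x feeds a computation
}

int main() {
    // All-uses style test data: one input exercising the def-use pairs
    // through the true branch, one through the false branch.
    std::cout << f(15) << "\n";  // def(x) -> p-use (true)  -> c-use
    std::cout << f(5)  << "\n";  // def(x) -> p-use (false) -> c-use
    return 0;
}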

C. Gray box testing

Black-box testing focuses on the program’s functionality against the specification. White-box testing focuses on the paths of logic. Gray-box testing is a combination of black- and white-box testing. The tester studies the requirements specifications and communicates with the developer to understand the internal structure of the system. The motivation is to clear up ambiguous specifications and “read between the lines” to design implied tests. One example of the use of gray-box testing is when it appears to the tester that certain functionality seems to be reused throughout an application. If the tester communicates with the developer and understands the internal design and architecture, many tests will be eliminated, because it may be possible to test the functionality only once. Another example is when the syntax of a command consists of seven possible parameters that can be entered in any order, as follows:

Command parm1, parm2, parm3, parm4, parm5, parm6, parm7

In theory, a tester would have to create 7!, or 5,040, tests. The problem is compounded further if some of the parameters are optional. If the tester uses gray-box testing, by talking with the developer and understanding the parser algorithm, and if each parameter is independent, only seven tests may be required.

Some authors use the term gray-box testing when the test strategy is based partly on the internals of the product [26]. The concept is simple: if you know something about how the product works on the inside, you can test it better from the outside. This is not to be confused with white-box testing, which attempts to cover the internals of the product in detail. In gray-box mode, you are testing from the outside of the product, just as you do with black box, but your testing choices are informed by your knowledge of how the underlying components operate and interact.

Gray-box testing is especially important with Web and Internet applications, because the Internet is built around loosely integrated components that connect via relatively well-defined interfaces. Unless you understand the architecture of the Net, your testing will be skin deep. Hung Nguyen's Testing Applications on the Web (2000) is a good example of gray-box test strategy applied to the Web.

IV. OBJECT ORIENTED TESTING

A. Introduction

As discussed previously, producing error-free software is a difficult task, and new paradigms such as the object-oriented paradigm create a need for new testing techniques. Object-oriented programs are structured differently from non-object-oriented programs: concepts such as objects, classes, inheritance, and polymorphism appear, posing a new challenge for software testing, since these programs have different program models and a different distribution of bugs. Powerful features such as encapsulation and inheritance introduce new problems into the area of software verification and validation. Object-oriented programs have a simple relationship between the behavior of units and the system: units are objects, and the system is composed of communicating objects (inter-class testing), where the interaction between two classes occurs in the form of a message passed from a run-time instance of one class, commonly referred to as an object, to another object. Some combinations of object states involved in a message send could cause unexpected reactions that a developer may have overlooked during implementation. In particular, object-oriented software presents new classes of faults that require new testing techniques [17]. Despite the widespread use of the object-oriented paradigm, many of the new problems related to testing object-oriented systems have not been adequately investigated. We will therefore examine how testing is possible given the challenge of this new paradigm.

B. Intra-class Testing

Intra-class testing deals with classes in isolation and includes testing of abstract classes, selection of test cases from the ancestors' test suites, and testing the state-dependent behavior of the class under test. Major problems in class testing derive from the presence of instance variables and their effects on the behavior of methods defined in the class. A given method can produce an erroneous result or a correct one depending on the values of the receiver's instance variables when the method is invoked.

Certain techniques in intra-class testing use data flow testing. These techniques can be applied both to individual methods in a class and to methods in a class that interact through messages, but they do not consider the data flow interactions that arise when users of a class invoke sequences of methods in an arbitrary order. Newer approaches to class testing therefore support data flow testing for data flow interactions within a class. These approaches define three levels of data flow class testing: (1) intra-method testing, which tests individual class methods; (2) inter-method testing, which tests methods in a class that interact through procedure calls; and (3) intra-class testing, which tests sequences of calls to methods. To identify def-use pairs for these three levels of testing, the class is represented as a single-entry, single-exit program, a class control flow graph is constructed for this program, and interprocedural data flow analysis is then performed on that graph.
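As a minimal sketch of these three levels, consider a hypothetical Counter class in C++; the def-use pairs on the instance variable value span method boundaries, so they are only exercised by sequences of calls, which is exactly what intra-class testing targets.

#include <cassert>

// Hypothetical class illustrating the three levels of data flow testing.
class Counter {
    int value = 0;  // instance variable: defined and used by several methods
public:
    void reset() { value = 0; }          // def(value)
    void add(int n) { value += n; }      // c-use(value) followed by def(value)
    int  get() const { return value; }   // c-use(value)
};

int main() {
    Counter c;
    // Intra-method testing exercises each method alone; inter-method and
    // intra-class testing exercise def-use pairs across method sequences,
    // e.g. the definition in add() reaching the use in get():
    c.add(5);
    assert(c.get() == 5);  // def in add()   -> use in get()
    c.reset();
    assert(c.get() == 0);  // def in reset() -> use in get()
    return 0;
}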

C. Inter-class Testing.

Inter-class testing applies to clusters of classes to be tested incrementally, considering class interactions, polymorphic calls, and exception handling.

The following assumptions are made for inter-class testing:

1. We assume a testing oracle exists that can verify the correctness of test case execution.

2. Classes are well-defined. A well-defined class is one that has the limits of all its methods defined clearly and appropriately implemented. This implies that the correct specifications for the classes are available.

3. Class hierarchies are available for examination.

4. A list of the possible states of each object is available so that we know the states to test.

5. Valid arguments exist for the constructor of an object that allows an object to be put into each of its states.

6. In the testing process for each individual class, the tester has progressed through testing individual methods, including intra-class integration testing, and has used "stubs" to avoid inter-class testing. The tester is now at the point where there are no "stubs" in place; all classes are fully implemented and available for testing.

7. Testing will be done based on the interaction of two objects at a time; additional object interactions can be tested incrementally.

8. Global values will be treated in the same manner as parameters.

These assumptions are not intended to be restrictive, but are instead normal processes or pieces of the testing process that should be available in an object oriented software development project.

We will now look at a few approaches to inter-class testing. One technique is based on the idea that the execution of a method is affected by the instance variables used by the method [28], and thus by the methods that determine the values of those variables. It therefore identifies pairs of methods that define and use the same instance variable and then tries to select a feasible sequence of method invocations that contains the identified definition and use in the correct order. The identified sequences represent the test cases for the target class.

This technique consists of three main steps:

Data flow analysis. This phase, described previously, computes def-use associations, that is, ordered pairs of statements in which the first statement defines and the second statement uses the same instance variable. The def-use associations are identified by data-flow analysis of the whole class, focusing on instance variables only.

Symbolic execution. This phase computes conditions related to path executions and variable definitions. For every path within each method, it identifies the conditions associated with the execution of that path, the relationship between input and output values of the method with respect to that path, and the set of variables defined along the path. This information is obtained by applying well-tried symbolic execution techniques to the method's code [27]. Cycles are treated by either fixing an upper bound on the number of loop iterations or adding suitable invariants.

Automated deduction. This phase produces complete sequences of method invocations that exercise the def-use associations identified during the first phase. Such sequences are built incrementally by applying automated deduction techniques to the method preconditions and postconditions output by the second phase.

This technique first automatically produces test case specifications and then generates feasible test cases for the produced specifications.

Another technique used for inter-class testing [29] uses an AORD (augmented object relationship diagram), in which the use of state diagrams is limited to message sequence testing and heuristics are used to reduce the number of solutions. The AORD consists of an object relationship diagram augmented with the call graph information for each method in the class and the class state diagram in object-chart notation. An algorithm uses this representation to generate the pairwise integration test order for a system of classes. The technique also generates the sets of methods that need to be retested in the presence of inheritance; since this set may be too large for exhaustive inter-class integration testing, a state-dependent heuristic further reduces the number of state-class interactions that need to be tested.

The testing process is approximately as follows:

The testing process begins at the subsystems rooted at a primary class. For each primary class, the nodes are first traversed recursively based on the aggregation relationship. If any member attribute is itself a composite class, the classes representing its member attributes are tested recursively. Upon reaching the leaf nodes (in the ORD) of this aggregate hierarchy, each class is unit tested, if not already tested, and intra-class test suites are created using "stubs" for messages sent to other classes. The client-server relations between classes at this level, if any, are then tested. After this, any classes derived from this leaf-level class are identified. If there are none, the process proceeds up the hierarchy. If the leaf-level class has derived classes, the classes that are new member attributes in the derived class are tested recursively, and the set of messages between the member attributes of the parent class and the derived class that need to be retested in the context of the derived class is computed.

On the other hand, one major problem in inter-class integration testing is the order in which classes are tested. This test order [30], referred to as the inter-class test order, is important for several reasons. First, it affects the order in which classes are developed. Second, it determines the use of test stubs and drivers for classes and the preparation of test cases. Third, it determines the order in which inter-class faults are detected.

D. Issues in the Testing of Object-oriented Software.

The two concepts with the greatest impact on our testing strategies are information hiding and encapsulation [20]. With encapsulation we have the following problems: 1. the basic testable unit will no longer be the subprogram, and 2. we will have to modify our strategies for integration testing. In the object-oriented paradigm, a class can encapsulate many subprograms, the same subprogram may find itself encapsulated in a number of different classes, and those subprograms work in conjunction with the other items encapsulated within the same object. This means that, in an object-oriented environment, attempting to test a subprogram in isolation is virtually meaningless, and the impact on testing is very high because the net of relations between classes can be huge; for example, it is difficult to claim that each operation-method combination can be tested. Another problem arises when a user of an object is denied access to the underlying implementation. For example, consider a list of objects with an interface providing the basic operations on the list, e.g., add, delete, and length. Suppose the list is supposed to be ordered and we add an object or item: how can we be assured that the specific item was actually added? And how can we be assured that the item was added in the correct position? Because we cannot directly inspect the underlying implementation, we must seek some other strategy.
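One such strategy is to check postconditions indirectly through public observers. The following C++ sketch is hypothetical: the at() observer is assumed to exist (or to be added for testability) precisely because the private representation cannot be inspected directly.

#include <cassert>

// Hypothetical ordered list whose representation is hidden from the tester.
class OrderedList {
    int items[100];  // underlying implementation: not inspectable from outside
    int count = 0;
public:
    void add(int x) {  // inserts x keeping the list ordered
        int i = count++;
        for (; i > 0 && items[i - 1] > x; --i) items[i] = items[i - 1];
        items[i] = x;
    }
    int length() const { return count; }
    int at(int i) const { return items[i]; }  // assumed read-only observer
};

int main() {
    // Since items[] cannot be inspected, the add() postconditions are
    // checked indirectly through the public observers length() and at():
    OrderedList l;
    l.add(30); l.add(10); l.add(20);
    assert(l.length() == 3);             // the item was actually added
    for (int i = 1; i < l.length(); ++i)
        assert(l.at(i - 1) <= l.at(i));  // and sits in the correct position
    return 0;
}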

V. SOFTWARE TEST DATA GENERATION

Test data generation in program testing is the process of identifying a set of test data which satisfies a given testing criterion. Most existing test data generators use symbolic evaluation to derive test data. However, for practical programs this technique frequently requires complex algebraic manipulations, especially in the presence of arrays [31].

In this context, a test data generator is a tool which assists a programmer in the generation of test data for a program. There are three types of test data generators: path-wise test data generators, data specification generators, and random test data generators. For example, the basic operation of a path-wise generator consists of the following steps: program control flow graph construction, path selection, and test data generation. The path selector automatically identifies a set of paths (e.g., a near-minimal set of paths) satisfying the selected testing criterion. Once a set of test paths is determined, then for every path in this set the test generator derives input data that results in the execution of the selected path. Most path-wise test data generators use symbolic evaluation to derive input data. Symbolic evaluation involves executing a program using symbolic values of variables instead of actual values. Once a path is selected, symbolic evaluation is used to generate a path constraint, which consists of a set of equalities and inequalities on the program's input variables; this path constraint must be satisfied for the path to be traversed.
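As a small hedged illustration of a path constraint (the program is hypothetical, not taken from the cited generators):

// A sketch of a path constraint derived by symbolic evaluation.
int g(int x, int y) {
    if (x > y)           // branch predicate 1
        if (x + y > 10)  // branch predicate 2
            return 1;    // target path ends here
    return 0;
}
// Symbolic evaluation of the path reaching "return 1" with symbolic
// inputs (X, Y) yields the path constraint { X > Y, X + Y > 10 };
// any solution, e.g. x = 9, y = 4, is test data traversing that path.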

In this section we present an alternative approach to test data generation [32], referred to as the dynamic approach, which is based on actual execution of the program under test, dynamic data flow analysis, and function minimization methods. The test data are developed using actual values of the input variables. When the program is executed on some input data, the program execution flow is monitored. If, during program execution, an undesirable execution flow is observed at some branch, then a real-valued function is associated with this branch. This function is positive when the branch predicate is false and negative when the branch predicate is true. Function minimization search algorithms are used to automatically locate values of input variables for which the function becomes negative. In addition, dynamic data flow analysis is used to determine the input variables responsible for the undesirable program behavior, leading to a significant speed-up of the search process. In this approach, arrays and dynamic data structures can be handled precisely, because during program execution all variable values, including array indexes and pointers, are known; as a result, the effectiveness of the process of test data generation can be significantly improved.
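The following C++ sketch, invented here under simplifying assumptions (a single numeric input and a monotone branch function), shows the flavor of the search: exploratory moves probe a direction, pattern moves amplify it, and the search stops once the branch function becomes non-positive. A real implementation would also probe the negative direction and handle non-improving moves.

#include <iostream>

// Hypothetical goal: make the branch predicate (x * 2 >= target) true.
// Following the dynamic approach, the branch function F(x) = target - x * 2
// is positive while the predicate is false and <= 0 once it becomes true.
double branchFunction(double x, double target) { return target - x * 2; }

int main() {
    double x = 0, target = 50;
    double step = 1;
    while (branchFunction(x, target) > 0) {
        // Exploratory move: does increasing x improve the branch function?
        if (branchFunction(x + step, target) < branchFunction(x, target)) {
            x += step;
            step *= 2;  // pattern move: amplify a successful direction
        } else {
            step = 1;   // reset the step and probe again
        }
    }
    std::cout << "input x = " << x << " makes the branch predicate true\n";
    return 0;
}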

A. Basic Concepts

A flow graph of a program Q is a directed graph G = (N, A, s, e) where 1) N is a set of nodes, 2) A is a binary relation on N (a subset of N × N), referred to as the set of edges, and 3) s and e are, respectively, the unique entry node and the unique exit node, s, e ∈ N.

For the sake of simplicity, we consider a subset of structured Pascal-like programming language constructs, namely sequencing, if-then-else, and while statements. A node in N corresponds to the smallest single-entry, single-exit executable part of a statement in Q that cannot be further decomposed; such a part is referred to as an instruction. A single instruction corresponds to an assignment statement, an input or output statement, or the condition part of an if-then-else or while statement, in which case it is called a test instruction.

An edge corresponds to a possible transfer of control from instruction ni to instruction nj. For instance, (2, 3), (6, 7), and (6, 8) are edges in the program of Table 8. An edge (ni, nj) is called a branch if ni is a test instruction. Each branch in the control flow graph can be labelled by a predicate, referred to as a branch predicate, describing the conditions under which the branch will be traversed. For example, in the program of Table 8, branch (5, 6) is labelled "i <= high" and branch (6, 7) is labelled "max < A[i]".

 

      var
        A: array[1..100] of integer;
        low, high, step: integer;
        min, max: integer;
        i: integer;
      begin
1       input(low, high, step, A);
2       min := A[low];
3       max := A[low];
4       i := low + step;
5       while i <= high do
        begin
6,7       if max < A[i] then max := A[i];
8,9       if min > A[i] then min := A[i];
10        i := i + step;
        end;
11      output(min, max);
      end;

Table 8. - A sample program

B. Example

To understand what test data generation is, we will work through a simple example illustrating the basic approach. Consider the program above, which is supposed to determine the minimum and maximum values of selected elements of array A (determined by input variables low, step, and high).
Given the following path P = <1, 2, 3, 4, 5, 6, 8, 10, 5, 6, 8, 9, 10, 5, 11>, the goal of test data generation is to find a program input x (i.e., values for the input variables low, high, step, A[1], A[2], ..., A[100]) which will cause P to be traversed. In the first step, all input variables receive initial values; suppose the following values have been assigned: low = 39, high = 93, step = 12, A[1] = 1, A[2] = 2, ..., A[100] = 100. The program is executed on this input, and the successful subpath P1 = <1, 2, 3, 4, 5, 6> of P is traversed; the violation occurs at branch (6, 8). Let F1(x) be a branch function of branch (6, 8), where values of F1(x) can be evaluated by computing the "A[i] - max" expression at 6. Note that the branch predicate of branch (6, 8) is max >= A[i]. Our first subgoal is to find a program input x such that F1(x) <= 0, subject to the constraint that P1 is traversed on x.

Suppose that the input variables are ordered in the following way: A[1], A[2], ..., A[100], high, step, low. The direct search starts with input variable A[1]. In the exploratory move, A[1] is increased by one while the remaining variables are kept constant. Note that because A[1] is declared as an integer, the minimal increment of A[1] is one. The program is executed on this input and the value of F1(x) is evaluated at 6; the value of branch function F1(x) is unchanged. A[1] is then decreased by one, but the value of F1(x) is still unchanged. The next input variable, A[2], is selected. The exploratory moves for A[2], A[3], ..., A[38] do not cause any change in the value of F1(x). The increment of A[39] by one causes a decrement in the value of F1(x); this step defines a promising direction for the search. Suppose that the value of A[39] is increased by 100 in the pattern move, i.e., A[39] = 139. On this input, subpath P1 is traversed and branch (6, 8) is taken. As a result, this input is the solution to the first subgoal.

When the program is executed on this input, the successful subpath P2 = <1, 2, 3, 4, 5, 6, 8> of P is traversed; the violation occurs at branch (8, 10). Let F2(x) be a branch function of branch (8, 10), where values of F2(x) can be evaluated by computing the "min - A[i]" expression at 8. The second subgoal is to find a program input x such that F2(x) <= 0, subject to the constraint that P2 is traversed on x.

Suppose that the input variables are ordered in the same manner as for the first subgoal. Exploratory moves for A[1], ..., A[38] do not cause any improvement in the value of branch function F2(x). The decrement of A[39] by one causes a decrement in the value of F2(x); this step defines a direction for the search. By performing a series of exploratory and pattern moves on A[39] and reducing the size of the pattern move, the value A[39] = 51 can be found. This input is the solution to the second subgoal, i.e., subpath P2 is traversed and branch (8, 10) is taken.
When the program is executed on this input, the successful subpath P3 = <1, 2, 3, 4, 5, 6, 8, 10, 5, 6> of P is traversed. The violation occurs on the second execution of branch (6, 8). Let F3(x) be a branch function of branch (6, 8), where values of F3(x) can be evaluated by the "A[i] - max" expression. The third subgoal is to find a program input x such that F3(x) <= 0, subject to the constraint that P3 is traversed on x. Exploratory moves for A[1], ..., A[62] do not cause any improvement in the value of branch function F3(x). Observe that the exploratory moves for A[39] cause a constraint violation, i.e., P3 is not traversed. Only the decrement of A[63] causes an improvement of branch function F3(x). Suppose that the value of A[63] is decreased by 100 in the pattern move, i.e., A[63] = -37. This move finds the solution to the third subgoal.

When the program is executed on this input, the successful subpath P4 = <1, 2, 3, 4, 5, 6, 8, 10, 5, 6, 8, 9, 10, 5> of P is traversed. The violation occurs at branch (5, 11). Let F4(x) be a branch function of branch (5, 11), where values of F4(x) can be evaluated by the "high - i" expression. The fourth subgoal is to find a program input x such that F4(x) <= 0, subject to the constraint that P4 is traversed on x. In the first step, all elements of array A are tried out.

However, none of the exploratory moves for array elements A[1], ..., A[100] cause an improvement in the value of branch function F4(x). Only the decrement of input variable high causes an improvement of branch function F4(x). This exploratory move defines a direction for the change of the variable high. A sequence of pattern moves then determines the final value high = 67. The solution high = 67, A[39] = 51, A[63] = -37 (with the rest of the input variables keeping their initial values) to the fourth subgoal is also the solution to the test generation problem.

VI. CONCLUSION

As discussed above, there is no silver bullet testing approach, nor a single technique that applies specially to every type of program; we have to choose a strategy that combines testing techniques. Software quality is an important component in the life of software development, and many different testing techniques, methods, and tools can be used and integrated to achieve our purpose: the quality of our software. We surveyed the main concepts and testing techniques and discussed object-oriented testing, which is not so different from testing procedural software, although the new paradigm brings its own challenges, such as encapsulation and inheritance, for which we have to find clever new management and testing approaches. We also saw how software test data generation lets us do testing in several ways, as the process of identifying a set of test data which satisfies a given testing criterion. But as with all testing techniques there is no simple approach for a single criterion, since testing, as we saw, is a complicated task, and accomplishing its goals requires sustained effort.

REFERENCES

[1] Jeff Tian, "Software Quality Engineering: Testing, Quality Assurance, and Quantifiable Improvement", Wiley, 2005.

[2] Glenford J. Myers, "The Art of Software Testing", John Wiley & Sons, Second Edition, 2004.

[3] "ISEB Foundation Certificate in Software Testing", presentation, SIM Group Ltd., SQS Group AG, 2002.

[4] Bertolino, A., "An Overview of Automated Software Testing", Journal of Systems and Software, 1991.

[5] John D. McGregor, David A. Sykes, "A Practical Guide to Testing Object-Oriented Software", Addison-Wesley, March 2001.

[6] John Frankovisch, Dong Pan, "What Is Not Tracked Is Not Done", Department of Computer Science, University of Calgary, Canada.

[7] IEEE Standard for Software Test Documentation.

[8] Kit, E., "Software Testing in the Real World: Improving the Process", Addison-Wesley, 1995.

[9] Test Notes, blog (http://geekswithblogs.net/srkprasad/archive/2004/11/30/16490.aspx).

[10] Maaret Pyhäjärvi, Kristian Rautiainen, Juha Itkonen, "Increasing Understanding of the Modern Testing Perspective in Software Product Development Projects", IEEE, 2003.

[11] Hamlet, D., "Special Section on Software Testing", Communications of the ACM, 1988, 31, (6), pp. 662-667.

[12] Huey-Der Chu, "An Evaluation Scheme of Software Testing Techniques", Communications of the ACM, pp. 3-4.

[13] Cem Kaner, www.testingeducation.org, 2004.

[14] Nzinga Tull, "Applications of Specification Based Testing in Flight Software Development for Hubble Space Telescope Mission Operations", IEEE, 13/12/2005.

[15] "Practical Techniques for Automated Specification Based Testing", University of Virginia, Massachusetts Institute of Technology.

[16] Rob Hierons, "Specification Based Testing and Model Based Testing", Brunel University, 2004.

[17] BS7925-2, Standard for Software Component Testing.

[18] Thomas J. Ostrand, Ron Sigal, Elaine Weyuker, "Design for a Tool to Manage Specification-Based Testing", Workshop on Software Testing, Banff, Canada, July 1987, pp. 41-50.

[19] John D. McGregor, David A. Sykes, "A Practical Guide to Testing Object-Oriented Software", Addison-Wesley, March 2001.

[20] Edward V. Berard, "Issues in the Testing of Object-Oriented Software", The Object Agency.

[21] Elmendorf, W. R., "Cause-Effect Graphs in Functional Testing", TR-00.2487, IBM, Poughkeepsie, N.Y., November 1973.

[22] John Joseph Chilenski, Steven P. Miller, "Applicability of Modified Condition/Decision Coverage to Software Testing", Software Engineering Journal, September 1994, Vol. 9, No. 5, pp. 193-200.

[23] RTCA/DO-178B, "Software Considerations in Airborne Systems and Equipment Certification", RTCA (http://www.rtca.org/), December 1992, pp. 31, 74.

[24] Woodward, M.R., Hedley, D., Hennell, M.A., "Experience with Path Analysis and Testing of Programs", IEEE Transactions on Software Engineering, Vol. SE-6, No. 3, pp. 278-286, May 1980.

[25] P. G. Frankl, E. J. Weyuker, "An Applicable Family of Data Flow Testing Criteria", IEEE Transactions on Software Engineering, Vol. 14, No. 10, pp. 1483-1498, October 1988.

[26] Cem Kaner, James Bach, Bret Pettichord, "Lessons Learned in Software Testing", 2nd Edition, 2002, p. 289.

[27] A. Coen-Porisini, G. Denaro, C. Ghezzi, M. Pezzè, "Using Symbolic Execution for Verifying Safety Critical Systems", Proceedings of the 9th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE-01), ACM Software Engineering Notes, NY, September 2001.

[28] Vincenzo Martena, Alessandro Orso, Mauro Pezzè, "Interclass Testing of Object Oriented Software", Proceedings of the Eighth IEEE International Conference on Engineering of Complex Computer Systems (ICECCS'02), 2002.

[29] Amit Paradkar, "Inter-Class Testing of O-O Software in the Presence of Polymorphism", NASA Grant NAG-1-983, Centre for Advanced Studies, IBM Canada.

[30] Kuo-Chung Tai, Fonda J. Daniels, "Test Order for Inter-Class Integration Testing of Object-Oriented Software", IEEE, 1997.

[31] Roger Ferguson, Bogdan Korel, "The Chaining Approach for Software Test Data Generation", Lawrence Technological University, Illinois Institute of Technology, ACM, 1996.

[32] Bogdan Korel, "Automated Software Test Data Generation", IEEE Transactions on Software Engineering, Vol. 16, No. 8, August 1990.