Friday, August 24, 2012

requirements testing

Overview



Requirements seem to be ephemeral. They flit in and out of projects, they are capricious, intractable, unpredictable and sometimes invisible. When gathering requirements we are searching for all of the criteria for a system's success. We throw out a net and try to capture all these criteria. Using Blitzing, Rapid Application Development (RAD), Joint Application Development (JAD), Quality Function Deployment (QFD), interviewing, apprenticing, data analysis and many other techniques, we try to snare all of the requirements in our net.
  • Abstract
We accept that testing the software is an integral part of building a system. However, if the software is based on inaccurate requirements, then despite well written code, the software will be unsatisfactory.
The newspapers are full of stories about catastrophic software failures. What the stories don't say is that most of the defects can be traced back to wrong, missing, vague or incomplete requirements. We have learnt the lesson of testing software. Now we have to learn to implement a system of testing the requirements before building a software solution.
  • The Quality Gateway    
    As soon as we have a single requirement in our net we can start testing. The aim is to trap requirements-related defects as early as they can be identified. We prevent incorrect requirements from being incorporated in the design and implementation where they will be more difficult and expensive to find and correct.
    To pass through the quality gateway and be included in the requirements specification, a requirement must pass a number of tests. These tests are concerned with ensuring that the requirements are accurate, and do not cause problems by being unsuitable for the design and implementation stages later in the project.
      

  • Make The Requirement Measurable 
    Christopher Alexander describes setting up a quality measure for each requirement.
    "The idea is for each requirement to have a quality measure that makes it possible to divide all solutions to the requirement into two classes: those for which we agree that they fit the requirement and those for which we agree that they do not fit the requirement."
    In other words, if we specify a quality measure for a requirement, we mean that any solution that meets this measure will be acceptable. Of course it is also true to say that any solution that does not meet the measure will not be acceptable.
    The quality measures will be used to test the new system against the requirements. The remainder of this paper describes how to arrive at a quality measure that is acceptable to all the stakeholders.

  • Quantifiable Requirements  
    Consider a requirement that says "The system must respond quickly to customer enquiries". First we need to find a property of this requirement that provides us with a scale for measurement within the context. Let's say that we agree that we will measure the response using minutes. To find the quality measure we ask: "under what circumstances would the system fail to meet this requirement?" The stakeholders review the context of the system and decide that they would consider it a failure if a customer has to wait longer than three minutes for a response to his enquiry. Thus "three minutes" becomes the quality measure for this requirement.
    Any solution to the requirement is tested against the quality measure. If the solution makes a customer wait for longer than three minutes then it does not fit the requirement. So far so good: we have defined a quantifiable quality measure. But specifying the quality measure is not always so straightforward. What about requirements that do not have an obvious scale?
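    To make this concrete, here is a minimal sketch (in Python) of how such a quality measure can be turned into an automated check. The handle_enquiry function and its behaviour are invented stand-ins for the real system; only the three-minute limit comes from the example above.

import time

MAX_RESPONSE_SECONDS = 3 * 60  # the agreed quality measure: three minutes

def handle_enquiry(enquiry):
    """Hypothetical stand-in for the real customer enquiry service."""
    time.sleep(0.1)  # simulate some processing work
    return "response to " + enquiry

def test_enquiry_response_time():
    start = time.monotonic()
    handle_enquiry("order status")
    elapsed = time.monotonic() - start
    # The solution fits the requirement only if the measured response time
    # stays within the agreed quality measure.
    assert elapsed <= MAX_RESPONSE_SECONDS, "response exceeded the three-minute measure"

if __name__ == "__main__":
    test_enquiry_response_time()
    print("quality measure met")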

  • Non-quantifiable Requirements   
    Suppose a requirement is "The automated interfaces of the system must be easy to learn". There is no obvious measurement scale for "easy to learn". However if we investigate the meaning of the requirement within the particular context, we can set communicable limits for measuring the requirement.
    Again we can make use of the question: "What is considered a failure to meet this requirement?" Perhaps the stakeholders agree that there will often be novice users, and the stakeholders want novices to be productive within half an hour. We can define the quality measure to say "a novice user must be able to learn to successfully complete a customer order transaction within 30 minutes of first using the system". This becomes a quality measure provided a group of experts within this context is able to test whether the solution does or does not meet the requirement.
    An attempt to define the quality measure for a requirement helps to rationalise fuzzy requirements. Something like "the system must provide good value" is an example of a requirement that everyone would agree with, but each person has his own meaning. By investigating the scale that must be used to measure "good value" we identify the diverse meanings.
    Sometimes by causing the stakeholders to think about the requirement we can define an agreed quality measure. In other cases we discover that there is no agreement on a quality measure. Then we substitute this vague requirement with several requirements, each with its own quality measure.
    Requirements Test 1
    Does each requirement have a quality measure that can be used to test whether any solution meets the requirement?

  • Coherency and Consistency    
    When a poet writes a poem he intends that it should trigger off rich and diverse visions for everyone who reads it. The requirements engineer has the opposite intention: he would like each requirement to be understood in the same way by every person who reads it. In practice many requirements specifications are more like poetry, and are open to any interpretation that seems reasonable to the reader. This subjectivity means that many systems are built to satisfy the wrong interpretation of the requirement. The obvious solution to this problem is to specify the requirement in such a way that it can be understood in only one way.
    For example, in a requirements specification that I assessed, I found the term "viewer" in many parts of the specification. My analysis identified six different meanings for the term, depending on the context of its use. This kind of requirements defect always causes problems during design and/or implementation. If you are lucky, a developer will realise that there is inconsistency, but will have to re-investigate the requirement. This almost always causes a ripple effect that extends to other parts of the product. If you are not lucky, the designer will choose the meaning that makes most sense to him and implement that one. Any stakeholder who does not agree with that meaning then considers that the system does not meet the requirement.

    Requirements Test 2
    Does the specification contain a definition of the meaning of every essential subject matter term within the specification?
    I point you in the direction of abstract data modelling principles [7] which provide many guidelines for naming subject matter and for defining the meaning of that subject matter. As a result of doing the necessary analysis, the term "viewer" could be defined as follows:
    Viewer
    A person who lives in the area which receives transmission of television programmes from our channel.
    Relevant attributes are:
    Viewer Name
    Viewer Address
    Viewer Age Range
    Viewer Sex
    Viewer Salary Range
    Viewer Occupation Type
    Viewer Socio-Economic Ranking
    When the allowable values for each of the attributes are defined it provides data that can be used to test the implementation.
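    As a sketch of how the defined attributes and their allowable values become test data, the fragment below (Python) checks a viewer record against the definition. The attribute names follow the definition above; the allowable values themselves are invented for illustration.

# Allowable values for some Viewer attributes (illustrative only; the real
# values would come from the requirements specification).
ALLOWABLE_VALUES = {
    "viewer_age_range": {"16-24", "25-34", "35-54", "55+"},
    "viewer_sex": {"female", "male"},
    "viewer_salary_range": {"low", "medium", "high"},
    "viewer_socio_economic_ranking": set(range(1, 11)),  # assumed 1-10 scale
}

def validate_viewer(viewer):
    """Return the attributes whose values fall outside the defined sets."""
    return [attribute for attribute, allowed in ALLOWABLE_VALUES.items()
            if viewer.get(attribute) not in allowed]

# A record produced by the implementation can now be tested against the definition.
sample = {"viewer_age_range": "25-34", "viewer_sex": "female",
          "viewer_salary_range": "medium", "viewer_socio_economic_ranking": 4}
assert validate_viewer(sample) == []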
    Defining the meaning of "viewer" has addressed one part of the coherency problem. We also have to be sure that every use of the term "viewer" is consistent with the meaning that has been defined.

    Requirements Test 3

    Is every reference to a defined term consistent with its definition?

  • Completeness   
    We want to be sure that the requirements specification contains all the requirements that are known about. While we know that there will be evolutionary changes and additions, we would like to restrict those changes to new requirements, and not have to play "catch-up" with requirements that we should have known about in the first place. Thus we want to avoid omitting requirements just because we did not think of asking the right questions. If we have set a context for our project, then we can test whether the context is accurate. We can also test whether we have considered all the likely requirements within that context.
    The context defines the problem that we are trying to solve. The context includes all the requirements that we must eventually meet: it contains anything that we have to build, or anything we have to change. Naturally if our software is going to change the way people do their jobs, then those jobs must be within the context of study. The most common defect is to limit the context to the part of the system that will eventually be automated. The result of this restricted view is that nobody correctly understands the organisation's culture and way of working. Consequently there is a misfit between the eventual computer system and the rest of the business system and the people that it is intended to help.

    Requirements Test 4
    Is the context of the requirements wide enough to cover everything we need to understand?
    Of course this is easy to say, but we still have to be able to test whether or not the context is large enough to include the complete business system, not just the software. ("Business" in this sense means not just a commercial business, but whatever activity - scientific, engineering, artistic - the organisation is engaged in.) We do this test by observing the questions asked by the systems analysts: Are they considering the parts of the system that will be external to the software? Are questions being asked that relate to people or systems that are shown as being outside the context? Are any of the interfaces around the boundary of the context being changed?
    Another test for completeness is to question whether we have captured all the requirements that are currently known. The obstacle is that our source of requirements is people. And every person views the world differently according to his own job and his own idea of what is important, or what is wrong with the current system. It helps to consider the types of requirements that we are searching for:
  • Conscious Requirements
Problems that the new system must solve
  • Unconscious Requirements
Already solved by the current system
  • Undreamed of Requirements
Would be a requirement if we knew it was possible or could imagine it
Conscious requirements are easier to discover because they are uppermost in the stakeholders' minds. Unconscious requirements are more difficult to discover. If a problem is already satisfactorily solved by the current system then it is less likely to be mentioned as a requirement for a new system. Other unconscious requirements are often those relating to legal, governmental and cultural issues. Undreamed of requirements are even more difficult to discover. These are the ones that surface after the new system has been in use for a while: "I didn't know that it was possible, otherwise I would have asked for it."

Requirements Test 5
Have we asked the stakeholders about conscious, unconscious and undreamed of requirements?
Requirements engineering experience with other systems helps to discover missing requirements. The idea is to compare your current requirements specification with specifications for similar systems. For instance, suppose that a previous specification has a requirement related to the risk of damage to property. It makes sense to ask whether our current system has any requirements of that type, or anything similar. It is quite possible, indeed quite probable, that you will discover unconscious and undreamed of requirements by looking at other specifications.
We have distilled experience from many projects and built a generic requirements template that can be used to test for missing requirement types. I urge you to look through the template and use it to stimulate questions about requirements that otherwise would have been missed. Similarly, you can build your own template by distilling your own requirements specifications, and thus uncover most of the questions that need to be asked.
Another aid in discovering unconscious and undreamed of requirements is to build models and prototypes to show people different views of the requirements. Most important of all is to remember that each stakeholder is an individual person. Human communication skills are the best aid to complete requirements.

Requirements Test 5 (enlarged)
Have we asked the stakeholders about conscious, unconscious and undreamed of requirements? Can you show that a modelling effort has taken place to discover the unconscious requirements? Can you demonstrate that brainstorming or similar efforts have taken place to find the undreamed of requirements?


  • Relevance 
    When we cast out the requirements gathering net and encourage people to tell us all their requirements, we take a risk. Along with all the requirements that are relevant to our context we are likely to pick up impostors. These irrelevant requirements are often the result of a stakeholder not understanding the goals of the project. In this case people, especially if they have had bad experiences with another system, are prone to include requirements "just in case we need it". Another reason for irrelevancy is personal bias. If a stakeholder is particularly interested or affected by a subject then he might think of it as a requirement even if it is irrelevant to this system.
    Requirements Test 6
    Is every requirement in the specification relevant to this system?
    To test for relevance, check the requirement against the stated goals for the system. Does this requirement contribute to those goals? If we exclude this requirement then will it prevent us from meeting those goals? Is the requirement concerned with subject matter that is within the context of our study? Are there any other requirements that are dependent on this requirement? Some irrelevant requirements are not really requirements at all; instead, they are solutions.

  • Requirement or Solution? 
    When one of your stakeholders tells you he wants a graphic user interface and a mouse, he is presenting you with a solution not a requirement. He has seen other systems with graphic user interfaces, and he wants what he considers to be the most up-to-date solution. Or perhaps he thinks that designing the system is part of his role. Or maybe he has a real requirement that he has mentally solved by use of a graphic interface. When solutions are mistaken for requirements then the real requirement is often missed. Also the eventual solution is not as good as it could be because the designer is not free to consider all possible ways of meeting the requirements.
    Requirements Test 7
    Does the specification contain solutions masquerading as requirements?
    It is not always easy to tell the difference between a requirement and a solution. Sometimes there is a piece of technology within the context and the stakeholders have stated that the new system must use this technology. Things like: "the new system must be written in COBOL because that is the only language our programmers know", "the new system must use the existing warehouse layout because we don't want to make structural changes" are really requirements because they are genuine constraints that exist within the context of the problem.
    For each requirement ask "Why is this a requirement?" Is it there because of a genuine constraint? Is it there because it is needed? Or is it the solution to a perceived problem? If the "requirement" includes a piece of technology, and it could be implemented by another technology, then unless the specified technology is a genuine constraint, the "requirement" is really a solution.

  • Stakeholder Value 
    There are two factors that affect the value that stakeholders place on a requirement: the grumpiness caused by bad performance, and the happiness caused by good performance. Failure to provide a perfect solution to some requirements will produce mild annoyance. Failure to meet other requirements will cause the whole system to be a failure. If we understand the value that the stakeholders put on each requirement, we can use that information to determine design priorities.
     
    Requirements Test 8
    Is the stakeholder value defined for each requirement?
    Pardee suggests that we use scales from 1 to 5 to specify the reward for good performance and the penalty for bad performance. If a requirement is absolutely vital to the success of the system then it has a penalty of 5 and a reward of 5. A requirement that would be nice to have but is not really vital might have a penalty of 1 and a reward of 3. The overall value or importance that the stakeholders place on a requirement is the sum of penalty and reward: in the first case a value of 10, in the second a value of 4.
    The point of defining stakeholder value is to discover how the stakeholders really feel about the requirements. We can use this knowledge to make prioritisation and trade-off decisions when the time comes to design the system.
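    A minimal sketch of this scheme in Python follows; the two requirement names are invented, while the 1-to-5 scales and the penalty-plus-reward sum follow Pardee's suggestion as described above.

def stakeholder_value(penalty, reward):
    """Overall stakeholder value = penalty for bad performance + reward for good."""
    for score in (penalty, reward):
        if not 1 <= score <= 5:
            raise ValueError("penalty and reward must be on a 1-to-5 scale")
    return penalty + reward

# Illustrative requirements (names invented for the example).
requirements = {
    "respond to a customer enquiry within 3 minutes": stakeholder_value(penalty=5, reward=5),
    "novice completes an order within 30 minutes": stakeholder_value(penalty=1, reward=3),
}

# Higher values indicate higher design priority: 10 for the first, 4 for the second.
for name, value in sorted(requirements.items(), key=lambda item: -item[1]):
    print(value, name)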

  • Traceability
    We want to be able to prove that the system that we build meets each one of the specified requirements. We need to identify each requirement so that we can trace its progress through detailed analysis, design and eventual implementation. Each stage of system development shapes, repartitions and organises the requirements to bring them closer to the form of the new system. To insure against loss or corruption, we need to be able to map the original requirements to the solution for testing purposes.

    Requirements Test 9

    Is each requirement uniquely identifiable?
    In the micro spec in Figure 1 we see that each requirement must have a unique identifier. We find the best way of doing this is simply to assign a number to each requirement. The only significance of the number is that it is that requirement's identifier. We have seen schemes where the requirements are numbered according to type or value or whatever, but these make it difficult to manage changes. It is far better to avoid hybrid numbering schemes and to use the number purely as an identifier. Other facts about the requirement are then recorded as part of the requirements micro spec.

  • Order In A Disorderly World
    We have considered each requirement as a separately identifiable, measurable entity. Now we need to consider the connections between requirements and to understand the effect of one requirement on others. This means we need a way of dealing with a large number of requirements and the complex connections between them. Rather than trying to tackle everything simultaneously, we need a way of dividing the requirements into manageable groups. Once that is done we can consider the connections in two phases: the internal connections between the requirements in each group; and then the connections between the groups. It reduces the complexity of the task if our grouping of requirements is done in a way that minimises the connections between the groups.
    Events or use cases provide us with a convenient way of grouping the requirements [8, 4, 11]. The event/use case is a happening that causes the system to respond. The system's response is to satisfy all of the requirements that are connected to that event/use case. In other words, if we could string together all the requirements that respond to one event/use case, we would have a mini-system responding to that event. By grouping the requirements that respond to an event/use case, we arrive at groups with strong internal connections. Moreover, the events/use cases within our context provide us with a very natural way of collecting our requirements.
    The event/use case is a collection of all the requirements that respond to the same happening. The n-to-n relationship between Event/Use Case and Requirement indicates that while a number of Requirements fulfil one Event/Use Case, any Requirement could also contribute to other Events/Use Cases. The model also shows us that one Requirement might have more than one Potential Solution but it will only have one Chosen Solution.
    The event/use case provides us with a number of small, minimally-connected systems. We can use the event/use case partitioning throughout the development of the system. We can analyse the requirements for one event/use case, design the solution for the event/use case and implement the solution. Each requirement has a unique identifier. Each event/use case has a name and number. We keep track of which requirements are connected to which events/use cases using a requirements tool or spreadsheet. If there is a change to a requirement we can identify all the parts of the system that are affected.
    Requirements Test 10

    Is each requirement tagged to all parts of the system where it is used? For any change to requirements, can you identify all parts of the system where this change has an effect?
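    One simple way to keep such a mapping is a many-to-many table held in a spreadsheet or requirements tool. The sketch below shows the idea in Python; the requirement and event/use case identifiers are invented.

from collections import defaultdict

# Many-to-many mapping: which requirements respond to which events/use cases.
requirement_to_events = {
    "R101": {"UC-03 record customer order", "UC-07 amend order"},
    "R102": {"UC-03 record customer order"},
    "R103": {"UC-09 produce sales report"},
}

# Invert the mapping so we can also ask which requirements an event depends on.
event_to_requirements = defaultdict(set)
for requirement, events in requirement_to_events.items():
    for event in events:
        event_to_requirements[event].add(requirement)

def impact_of_change(requirement_id):
    """All events/use cases (and hence parts of the system) affected by a change."""
    return requirement_to_events.get(requirement_id, set())

# Changing R101 means re-examining and re-testing both of its use cases.
print(sorted(impact_of_change("R101")))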

  • Conclusions
    The requirements specification must contain all the requirements that are to be solved by our system. The specification should objectively specify everything our system must do and the conditions under which it must perform. Management of the number and complexity of the requirements is one part of the task.

    The most challenging aspect of requirements gathering is communicating with the people who are supplying the requirements. If we have a consistent way of recording requirements we make it possible for the stakeholders to participate in the requirements process. As soon as we make a requirement visible we can start testing it and asking the stakeholders detailed questions. We can apply a variety of tests to ensure that each requirement is relevant, and that everyone has the same understanding of its meaning. We can ask the stakeholders to define the relative value of requirements. We can define a quality measure for each requirement, and we can use that quality measure to test the eventual solutions.

    Testing starts at the beginning of the project, not at the end of the coding. We apply tests to assure the quality of the requirements. Then the later stages of the project can concentrate on testing for good design and good code. The advantage of this approach is that we minimise expensive rework by discovering, or preventing, requirements-related defects early in the project's life.

Wednesday, August 8, 2012

black box testing techniques

 

Firstly, let us understand the meaning of Black Box Testing. The term 'Black Box' refers to the software under test, which is treated as a black box: the system's internal structure and source code are not examined at all. Testing is done from the customer's viewpoint. The test engineer engaged in black box testing knows only the set of inputs and expected outputs, and is unaware of how the software transforms those inputs into outputs.

Types of Black Box Testing Techniques:
The following techniques are used to perform black box testing:
1) Boundary Value Analysis (BVA)
2) Equivalence Class Testing

3) Decision Table based testing

4) Cause-Effect Graphing Technique

1) Boundary Value Analysis (BVA):
This technique builds on the observation that defect density tends to be higher near the boundaries of the input domain. This happens for the following reasons:
a) Programmers often struggle to decide whether to use the <= operator or the < operator when writing comparisons.
b) Differing terminating conditions of for-loops, while-loops and repeat-loops can introduce defects at or around the boundary conditions.

c) The requirements themselves may not be clearly understood, especially around the boundaries, so even a correctly coded program can behave incorrectly near those boundaries.
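The fragment below is a small sketch (in Python) of boundary value analysis for a single numeric input; the valid range of 1 to 100 is invented purely for illustration.

def boundary_values(minimum, maximum):
    """Pick values on, just inside and just outside each boundary of a valid range."""
    return [
        minimum - 1,  # just below the lower boundary (invalid)
        minimum,      # on the lower boundary
        minimum + 1,  # just above the lower boundary
        maximum - 1,  # just below the upper boundary
        maximum,      # on the upper boundary
        maximum + 1,  # just above the upper boundary (invalid)
    ]

# Example: a quantity field that must accept values from 1 to 100.
print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]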

2) Equivalence Class Testing:
The use of equivalence classes as the basis for functional testing is appropriate in situations such as:

a) When exhaustive testing is desired.

b) When there is a strong need to avoid redundancy.

BVA does not handle the above well, as its tables of test cases contain massive redundancy. In equivalence class testing, the input and output domains are divided into a finite number of equivalence classes.
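A minimal sketch of equivalence class testing for the same kind of quantity field; the classes and the accepts_quantity stand-in are assumptions made for illustration.

# One representative per class is enough, because every value in a class is
# assumed to be processed the same way by the component under test.
equivalence_classes = {
    "valid quantity (1-100)": 50,
    "below the valid range": -5,
    "above the valid range": 250,
    "non-numeric input": "ten",
}

def accepts_quantity(value):
    """Hypothetical stand-in for the component under test."""
    return isinstance(value, int) and 1 <= value <= 100

for class_name, representative in equivalence_classes.items():
    print(class_name, "->", "accepted" if accepts_quantity(representative) else "rejected")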


3) Decision Table Based Testing:

Decision tables are a precise and compact way to model complicated logic. Out of all the functional testing methods, the ones based on decision tables are the most rigorous due to the reason that the decision tables enforce logical rigour.

Decision tables are ideal for describing situations in which a number of combinations of actions are taken under varying sets of conditions.
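A small sketch of a decision table expressed directly in code; the order-handling conditions and actions are invented to show the structure, and each rule in the table becomes at least one test case.

# Each rule maps a combination of conditions to the expected action.
# Conditions: (account is valid, order is within credit limit)
decision_table = {
    (True, True): "accept order",
    (True, False): "refer to credit control",
    (False, True): "reject order",
    (False, False): "reject order",
}

def decide(valid_account, within_credit_limit):
    """Hypothetical stand-in for the ordering logic under test."""
    if not valid_account:
        return "reject order"
    return "accept order" if within_credit_limit else "refer to credit control"

# Every rule in the table becomes at least one test case.
for conditions, expected_action in decision_table.items():
    assert decide(*conditions) == expected_action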


4) Cause-Effect Graphing Technique:
This is basically a hardware testing technique adapted to software testing. It considers only the desired external behaviour of a system, and it aids in selecting test cases that logically relate causes (inputs) to effects (outputs).

A “Cause” represents a distinct input condition that brings about an internal change in the system. An “Effect” represents an output condition, a system transformation or a state resulting from a combination of causes.
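To illustrate the idea, the sketch below models causes as Boolean inputs and effects as Boolean outputs and derives one test case per combination of causes; the login-related causes and effects are invented.

from itertools import product

def expected_effects(valid_user, valid_password):
    """Logical relationships linking causes (inputs) to effects (outputs)."""
    granted = valid_user and valid_password
    return {"grant_access": granted, "show_error_message": not granted}

# Derive one test case per combination of causes, together with its expected effects.
test_cases = [((user, password), expected_effects(user, password))
              for user, password in product([True, False], repeat=2)]

for causes, effects in test_cases:
    print("causes:", causes, "-> expected effects:", effects)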

fault, error & failure



Fault: A condition that causes the software to fail to perform its required function.
Error: The difference between the actual output and the expected output.
Failure: The inability of a system or component to perform a required function according to its specification.
IEEE Definitions
  • Failure: External behavior is incorrect
  • Fault: Discrepancy in code that causes a failure.
  • Error: Human mistake that caused fault
Note:
  • Error is the developer's terminology.
  • Bug is the tester's terminology.

Sunday, July 29, 2012

Verification vs Validation

 

Verification and Validation: Definition, Differences, Details:

 
The terms 'Verification' and 'Validation' are frequently used in the software testing world, but their meanings are often vague and debatable. You will encounter (or have encountered) all kinds of usages and interpretations of these terms, and it is our humble attempt here to distinguish between them as clearly as possible.
Definition
  • Verification: The process of evaluating work-products (not the actual final product) of a development phase to determine whether they meet the specified requirements for that phase.
  • Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified business requirements.
Objective
  • Verification: To ensure that the product is being built according to the requirements and design specifications. In other words, to ensure that work products meet their specified requirements.
  • Validation: To ensure that the product actually meets the user's needs, and that the specifications were correct in the first place. In other words, to demonstrate that the product fulfills its intended use when placed in its intended environment.
Question
  • Verification: Are we building the product right?
  • Validation: Are we building the right product?
Evaluation Items
  • Verification: Plans, Requirement Specs, Design Specs, Code, Test Cases.
  • Validation: The actual product/software.
Activities
  • Verification: Reviews, Walkthroughs, Inspections.
  • Validation: Testing.
It is entirely possible that a product passes when verified but fails when validated. This can happen when, say, a product is built as per the specifications but the specifications themselves fail to address the user’s needs.
  • Trust but Verify.
  • Verify but also Validate.


Software Testing Myths

 Software Testing Myths and Facts:

Just as every field has its myths, so does the field of Software Testing. Software testing myths have arisen primarily due to the following:
  • Lack of authoritative facts.
  • Evolving nature of the industry.
  • General flaws in human logic.
Some of the myths are explained below, along with their related facts:
  1. MYTH: Quality Control = Testing.
    • FACT: Testing is just one component of software quality control. Quality Control includes other activities such as Reviews.
  2. MYTH: The objective of Testing is to ensure a 100% defect-free product.
    • FACT: The objective of testing is to uncover as many defects as possible. Identifying all defects and getting rid of them is impossible.
  3. MYTH: Testing is easy.
    • FACT: Testing can be difficult and challenging (sometimes, even more so than coding).
  4. MYTH: Anyone can test.
    • FACT: Testing is a rigorous discipline and requires many kinds of skills.
  5. MYTH: There is no creativity in testing.
    • FACT: Creativity can be applied when formulating test approaches, when designing tests, and even when executing tests.
  6. MYTH: Automated testing eliminates the need for manual testing.
    • FACT: 100% test automation cannot be achieved. Manual Testing, to some level, is always necessary.
  7. MYTH: When a defect slips, it is the fault of the Testers.
    • FACT: Quality is the responsibility of all members/stakeholders, including developers, of a project.
  8. MYTH: Software Testing does not offer opportunities for career growth.
    • FACT: Gone are the days when users had to accept whatever product was dished out to them, no matter what the quality. With the abundance of competing software and increasingly demanding users, the need for software testers to ensure high quality will continue to grow.
Management folk tend to have a special affinity for myths, so it will be your responsibility as a software tester to convince them that they are wrong. Tip: Put forward your arguments with relevant examples and numbers to validate your claims.

Tuesday, July 17, 2012

glossary of testing terms





Acceptance Testing

Formal testing conducted to enable a user, customer, or other authorized entity to determine whether to accept a system or component.

Actual Outcome

The behaviour actually produced when the object is tested under specified conditions.

Adding Value

Adding something that the customer wants that was not there before.

Ad hoc Testing

Testing carried out using no recognized test case design technique.

Alpha Testing

Simulated or actual operational testing at an in-house site not otherwise involved with the software developers.

Arc Testing

A test case design technique for a component in which test cases are designed to execute branch outcomes.

Backus-Naur Form

A metalanguage used to formally describe the syntax of a language.

 

Basic Block

A sequence of one or more consecutive, executable statements containing no branches.

Basis Test Set

A set of test cases derived from the code logic, which ensure that 100% branch coverage is achieved.

Bebugging

The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program.

Behaviour

The combination of input values and preconditions and the required response for a function of a system. The full specification of a function would normally comprise one or more behaviours.

Benchmarking

Comparing your product to the best competitors'.

Beta Testing

Operational testing at a site not otherwise involved with the software developers.

Big-bang Testing

Integration testing where no incremental testing takes place prior to all the system's components being combined to form the system.

Black Box Testing

Test case selection that is based on an analysis of the specification of the component without reference to its internal workings. Testing by looking only at the inputs and outputs, not at the insides of a program.
Sometimes also called "Requirements Based Testing". You don't need to be a programmer, you only need to know what the program is supposed to do and be able to tell whether an output is correct or not.

Bottom-up Testing

An approach to testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

Boundary Value

An input value or output value which is on the boundary between equivalence classes, or an incremental distance either side of the boundary.

Boundary Value Analysis

A test case design technique for a component in which test cases are designed which include representatives of boundary values.

Boundary Value Coverage

The percentage of boundary values of the component's equivalence classes, which have been exercised by a test case suite.

Boundary Value Testing

A test case design technique for a component in which test cases are designed which include representatives of boundary values.

Branch

A conditional transfer of control from any statement to any other statement in a component, or an unconditional transfer of control from any statement to any other statement in the component except the next statement, or when a component has more than one entry point, a transfer of control to an entry point of the component.

Branch Condition

A condition within a decision.

Branch Condition Combination Coverage

The percentage of combinations of all branch condition outcomes in every decision that have been exercised by a test case suite.

Branch Condition Combination Testing

A test case design technique in which test cases are designed to execute combinations of branch condition outcomes.

Branch Condition Coverage

The percentage of branch condition outcomes in every decision that have been exercised by a test case suite.

Branch Condition Testing

A test case design technique in which test cases are designed to execute branch condition outcomes.

Branch Coverage

The percentage of branches that have been exercised by a test case suite.

Branch Outcome

The result of a decision (which therefore determines the control flow alternative taken).

Branch Point

A program point at which the control flow has two or more alternative routes.

Branch Testing

A test case design technique for a component in which test cases are designed to execute branch outcomes.

Bring to the Table

Refers to what each individual can contribute to a meeting, for example a design or brainstorming meeting.

 

Bug

A manifestation of an error in software. A fault, if encountered, may cause a failure.

Bug Seeding

The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program.

C-use

A data use not in a condition.

Capture/Playback Tool

A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time.

Capture/Replay Tool

A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time.

CAST

Acronym for computer-aided software testing.

Cause-effect Graph

A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.

Cause-effect Graphing

A test case design technique in which test cases are designed by consideration of cause-effect graphs.

Certification

The process of confirming that a system or component complies with its specified requirements and is acceptable for operational use.

Chow's Coverage Metrics

The percentage of sequences of N-transitions that have been exercised by a test case suite.

Code Coverage

An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
If we have a set of tests that gives Code Coverage, it simply means that, if you run all the tests, every line of the code is executed at least once by some test.

Code-based Testing

Designing tests based on objectives derived from the implementation (e.g., tests that execute specific control flow paths or use specific data items).

Compatibility Testing

Testing whether the system is compatible with other systems with which it should communicate.

Complete Path Testing

A test case design technique in which the test case suite comprises all combinations of input values and preconditions for component variables.

Component

A minimal software item for which a separate specification is available.

Component Testing

The testing of individual software components.

Computation Data Use

A data use not in a condition. Also called C-use.

Concurrent (or Simultaneous) Engineering

Integrating the design, manufacturing, and test processes.

Condition

A Boolean expression containing no Boolean operators. For instance, A < B is a condition but A and B is not.

Condition Coverage

The percentage of branch condition outcomes in every decision that have been exercised by a test case suite.

Condition Outcome

The evaluation of a condition to TRUE or FALSE.

Conformance Criterion

Some method of judging whether or not the component's action on a particular specified input value conforms to the specification.

Conformance Testing

The process of testing that an implementation conforms to the specification on which it is based.

Continuous Improvement

The PDSA process of iteration which results in improving a product.

 

Control Flow

An abstract representation of all possible sequences of events in a program's execution.

Control Flow Graph

The diagrammatic representation of the possible alternative control flow paths through a component.

Control Flow Path

A sequence of executable statements of a component, from an entry point to an exit point.

Conversion Testing

Testing of programs or procedures used to convert data from existing systems for use in replacement systems.


Correctness

The degree to which software conforms to its specification.

Coverage

The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test case suite.
A measure applied to a set of tests. The most important are called Code Coverage and Path Coverage. There are a number of intermediate types of coverage defined, moving up the ladder from Code Coverage (weak) to Path Coverage (strong, but hard to get). These definitions are called things like Decision-Condition Coverage, but they're seldom important in the real world. For the curious, they are covered in detail in Glenford Myers' book, The Art of Software Testing.

Coverage Item

An entity or property used as a basis for testing.

Customer Satisfaction

Meeting or exceeding a customer's expectations for a product or service.

Data Coupling

Data coupling is where one piece of code interacts with another by modifying a data object that the other code reads. Data coupling is normal in computer operations, where data is repeatedly modified until the desired result is obtained. However, unintentional data coupling is bad. It causes many hard-to-find bugs, including the "side effect" bugs introduced by changing existing systems. It's the reason why you need to test all paths to do really tight testing. The best way to combat it is with Regression Testing, to make sure that a change didn't break something else.
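A tiny sketch, with invented names, of the kind of unintentional data coupling described above: one routine quietly mutates shared data that another routine later reads, which is exactly the sort of side effect a regression test set is meant to catch.

# Shared data object that couples the two routines below.
order = {"items": [10.0, 20.0], "total": 30.0}

def apply_discount(order, rate):
    """Meant to update the total, but it also mutates the item prices,
    an unintended side effect that other code does not expect."""
    order["items"] = [price * (1 - rate) for price in order["items"]]  # side effect
    order["total"] = sum(order["items"])

def print_receipt(order):
    """Reads the same shared data, so it is silently affected by the mutation above."""
    lines = ["item: %.2f" % price for price in order["items"]]
    lines.append("total: %.2f" % order["total"])
    return "\n".join(lines)

apply_discount(order, 0.10)
print(print_receipt(order))  # the receipt now shows the altered item prices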

Data Definition

An executable statement where a variable is assigned a value.

Data Definition C-use Coverage

The percentage of data definition C-use pairs in a component that are exercised by a test case suite.

Data Definition C-use Pair

A data definition and computation data use, where the data use uses the value defined in the data definition.

Data Definition P-use Coverage

The percentage of data definition P-use pairs in a component that are exercised by a test case suite.

Data Definition P-use Pair

A data definition and predicate data use, where the data use uses the value defined in the data definition.

Data Definition-use Coverage

The percentage of data definition-use pairs in a component that are exercised by a test case suite.

Data Definition-use Pair

A data definition and data use , where the data use uses the value defined in the data definition.

Data Definition-use Testing

A test case design technique for a component in which test cases are designed to execute data definition-use pairs.

Data Flow Coverage

Test coverage measure based on variable usage within the code. Examples are data definition-use coverage, data definition P-use coverage, data definition C-use coverage, etc.

Data Flow Testing

Testing in which test cases are designed based on variable usage within the code.

Data Use

An executable statement where the value of a variable is accessed.

Debugging

The process of finding and removing the causes of failures in software.

Decision

A program point at which the control flow has two or more alternative routes.

Decision Condition

A condition within a decision.

Decision Coverage

The percentage of decision outcomes that have been exercised by a test case suite.

Decision Outcome

The result of a decision (which therefore determines the control flow alternative taken).

Design

The creation of a specification from concepts.

 

Design-based Testing

Designing tests based on objectives derived from the architectural or detail design of the software (e.g., tests that execute specific invocation paths or probe the worst case behaviour of algorithms).

Desk Checking

The testing of software by the manual simulation of its execution.

Dirty Testing

Testing aimed at showing software does not work.

Documentation Testing

Testing concerned with the accuracy of documentation.

Domain

The set from which values are selected.

Domain Testing

A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.

Dynamic Analysis

The process of evaluating a system or component based upon its behaviour during execution.

Driver

A throwaway little module that calls something we need to test, because the real guy who'll be calling it isn't available.
For example, suppose module A is normally fired up (called) by module X.
X isn't written yet, so we write a driver to play X's part and call A:
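A minimal Python sketch (the module and function names are invented for the example):

# The component under test: a stand-in for the real module A.
def process_order(name, quantity):
    return "ordered %d x %s" % (quantity, name)

# A throwaway driver: module X, which would normally call process_order,
# does not exist yet, so this script plays X's part and exercises A directly.
def driver():
    for name, quantity in [("widget", 3), ("gadget", 0)]:
        print("process_order(%r, %d) -> %s" % (name, quantity, process_order(name, quantity)))

if __name__ == "__main__":
    driver()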


Emulator

A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.

Entry Point

The first executable statement within a component.

Equivalence Class

A portion of the component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.
An Equivalence Class (EC) of input values is a group of values that all cause the same sequence of operations to occur.
In Black Box terms, they are all treated the same way according to the specs. Different input values within an Equivalence Class may give different answers, but the answers are produced by the same procedure.
In Glass Box terms, they all cause execution to go down the same path.

Equivalence Partition

A portion of the component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.

Equivalence Partition Coverage

The percentage of equivalence classes generated for the component, which have been exercised by a test case suite.

Equivalence Partition Testing

A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.

Error

A human action that produces an incorrect result.

Error Guessing

A test case design technique where the experience of the tester is used to postulate what faults might occur, and to design tests specifically to expose them.

Error Seeding

The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program.

Executable Statement

A statement which, when compiled, is translated into object code, which will be executed procedurally when the program is running and may perform an action on program data.

Exercised

A program element is exercised by a test case when the input value causes the execution of that element, such as a statement, branch, or other structural element.

Exhaustive Testing

A test case design technique in which the test case suite comprises all combinations of input values and preconditions for component variables.

Exit Point

The last executable statement within a component.

Facility Testing

Test case selection that is based on an analysis of the specification of the component without reference to its internal workings.

Failure

Deviation of the software from its expected delivery or service.

Fault

A manifestation of an error in software. A fault, if encountered, may cause a failure.

Feasible Path

A path for which there exists a set of input values and execution conditions, which causes it to be executed.

Feature Testing

Test case selection that is based on an analysis of the specification of the component without reference to its internal workings.

Flow Charting

Creating a 'map' of the steps in a process.

 

Functional Chunk

The fundamental unit of testing.
Its precise definition is "The smallest piece of code for which all the inputs and outputs are meaningful at the spec level." This means that we can test it Black Box, and design the tests before the code arrives without regard to how it was coded, and also tell whether the results it gives are correct.

Functional Specification

The document that describes in detail the characteristics of the product with regard to its intended capability.

Functional Test Case Design

Test case selection that is based on an analysis of the specification of the component without reference to its internal workings.

Glass Box Testing

Test case selection that is based on an analysis of the internal structure of the component.
Testing by looking only at the code.
Sometimes also called "Code Based Testing".
Obviously you need to be a programmer and you need to have the source code to do this.

Incremental Integration

A systematic way for putting the pieces of a system together one at a time, testing as each piece is added. We not only test that the new piece works, but we also test that it didn't break something else by running the RTS (Regression Test Set).

Incremental Testing

Integration testing where system components are integrated into the system one at a time until the entire system is integrated.

Independence

Separation of responsibilities, which ensures the accomplishment of objective evaluation.

Infeasible Path

A path, which cannot be exercised by any set of possible input values.

Input

A variable (whether stored within a component or outside it) that is read by the component.

Input Domain

The set of all possible inputs.

Input Value

An instance of an input.

Inspection

A group review quality improvement process for written material. It consists of two aspects: product improvement (of the document itself) and process improvement (of both document production and inspection).

Installability Testing

Testing concerned with the installation procedures for the system.

Instrumentation

The insertion of additional code into the program in order to collect information about program behaviour during program execution.

Instrumenter

A software tool used to carry out instrumentation.

Integration

The process of combining components into larger assemblies.

Integration Testing

Testing performed to expose faults in the interfaces and in the interaction between integrated components.

Interface Testing

Integration testing where the interfaces between system components are tested.

Isolation Testing

Component testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs.

LCSAJ

A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.

LCSAJ Coverage

The percentage of LCSAJs of a component, which are exercised by a test case suite.

LCSAJ Testing

A test case design technique for a component in which test cases are designed to execute LCSAJs.

Logic-coverage Testing

Test case selection that is based on an analysis of the internal structure of the component.

Logic-driven Testing

Test case selection that is based on an analysis of the internal structure of the component.

Maintainability Testing

Testing whether the system meets its specified objectives for maintainability.

Manufacturing

Creating a product from specifications.

Metrics

Ways to measure: e.g., time, cost, customer satisfaction, quality.

 

Modified Condition/Decision Coverage

The percentage of all branch condition outcomes that independently affect a decision outcome that have been exercised by a test case suite.

Modified Condition/Decision Testing

A test case design technique in which test cases are designed to execute branch condition outcomes that independently affect a decision outcome.

Multiple Condition Coverage

The percentage of combinations of all branch condition outcomes in every decision that have been exercised by a test case suite.

Mutation Analysis

A method to determine test case suite thoroughness by measuring the extent to which a test case suite can discriminate the program from slight variants (mutants) of the program. See also Error Seeding.

N-switch Coverage

The percentage of sequences of N-transitions that have been exercised by a test case suite.

N-switch Testing

A form of state transition testing in which test cases are designed to execute all valid sequences of N-transitions.

N-transitions

A sequence of N+1 transitions.

Negative Testing

Testing aimed at showing software does not work.

Non-functional Requirements Testing

Testing of those requirements that do not relate to functionality, such as performance and usability.

Operational Testing

Testing conducted to evaluate a system or component in its operational environment.

Oracle

A mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test.

Outcome

Actual outcome or predicted outcome. This is the outcome of a test.

Output

A variable (whether stored within a component or outside it) that is written to by the component.

Output Domain

The set of all possible outputs.

Output Value

An instance of an output.

P-use

A data use in a predicate.

Partition Testing

A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.

Path

A sequence of executable statements of a component, from an entry point to an exit point.

Path Coverage

The percentage of paths in a component exercised by a test case suite.
A set of tests gives Path Coverage for some code if the set goes down each path at least once.
The difference between this and Code Coverage is that Path Coverage means not just "visiting" a line of code, but also includes how you got there and where you're going next. It therefore uncovers more bugs, especially those caused by Data Coupling.
However, it's impossible to get this level of coverage except perhaps for a tiny critical piece of code.
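A tiny sketch of the difference; the function and its two decisions are invented for illustration.

def classify(a, b):
    result = ""
    if a:               # decision 1
        result += "A"
    if b:               # decision 2
        result += "B"
    return result or "-"

# Two tests give full code (statement) coverage: every line runs at least once.
code_coverage_tests = [(True, True), (False, False)]

# But there are four paths through the two decisions; full path coverage needs
# all four combinations, which is what exposes interaction (data coupling) bugs.
path_coverage_tests = [(True, True), (True, False), (False, True), (False, False)]

for a, b in path_coverage_tests:
    print(a, b, "->", classify(a, b))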

Path Sensitizing

Choosing a set of input values to force the execution of a component to take a given path.

Path Testing

A test case design technique in which test cases are designed to execute paths of a component.

Performance Testing

Testing conducted to evaluate the compliance of a system or component with specified performance requirements.

Portability Testing

Testing aimed at demonstrating the software can be ported to specified hardware or software platforms.

Precondition

Environmental and state conditions which must be fulfilled before the component can be executed with a particular input value.

Predicate

A logical expression, which evaluates to TRUE or FALSE, normally to direct the execution path in code.

Predicate Data Use

A data use in a predicate.

Predicted Outcome

The behaviour predicted by the specification of an object under specified conditions.

Process

What is actually done to create a product.

 

Program Instrumenter

A software tool used to carry out instrumentation.

Progressive Testing

Testing of new features after regression testing of previous features.

Pseudo-random

A series which appears to be random but is in fact generated according to some prearranged sequence.

Quality Tools

Tools used to measure and observe every aspect of the creation of a product.

 

Recovery Testing

Testing aimed at verifying the system's ability to recover from varying degrees of failure.

Regression Testing

Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
Re-running a set of tests that used to work to make sure that changes to the system didn't break anything. It's usually run after each set of maintenance or enhancement changes, but is also very useful for Incremental Integration of a system.

RTS (Regression Test Set)
The set of tests used for Regression Testing. It should be complete enough so that the system is defined to "work correctly" when this set of tests runs without error.

Requirements-based Testing

Designing tests based on objectives derived from requirements for the software component (e.g., tests that exercise specific functions or probe the non-functional constraints such as performance or security).

Result

Actual outcome or predicted outcome. This is the outcome of a test.

Review

A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users or other interested parties for comment or approval.

Security Testing

Testing whether the system meets its specified security objectives.

Serviceability Testing

Testing whether the system meets its specified objectives for maintainability.

Simple Subpath

A subpath of the control flow graph in which no program part is executed more than necessary.

Simulation

The representation of selected behavioural characteristics of one physical or abstract system by another system.

Simulator

A device, computer program or system used during software verification, which behaves or operates like a given system when provided with a set of controlled inputs.

Six-Sigma Quality

Meaning 99.99966% perfect: only 3.4 defects per million.

 

Source Statement

An entity in a programming language, which is typically the smallest indivisible unit of execution.

SPC

Statistical process control; used for measuring the conformance of a product to specifications.

 

Specification

A description of a component's function in terms of its output values for specified input values under specified preconditions.

Specified Input

An input for which the specification predicts an outcome.

State Transition

A transition between two allowable states of a system or component.

State Transition Testing

A test case design technique in which test cases are designed to execute state transitions.

Statement

An entity in a programming language, which is typically the smallest indivisible unit of execution.

Statement Coverage

The percentage of executable statements in a component that have been exercised by a test case suite.

Statement Testing

A test case design technique for a component in which test cases are designed to execute statements.

Static Analysis

Analysis of a program carried out without executing the program.

Static Analyzer

A tool that carries out static analysis.

Static Testing

Testing of an object without execution on a computer.

Statistical Testing

A test case design technique in which a model is used of the statistical distribution of the input to construct representative test cases.
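
A sketch of the idea, assuming (purely for illustration) that field data show about 80% of enquiries concern amounts under 100 and 20% concern larger amounts:

    import random

    rng = random.Random(0)   # fixed seed so the generated cases are repeatable

    def representative_amount():
        # Draw test inputs so their distribution matches the assumed
        # operational profile.
        if rng.random() < 0.8:
            return round(rng.uniform(1, 100), 2)
        return round(rng.uniform(100, 10_000), 2)

    test_inputs = [representative_amount() for _ in range(10)]
    print(test_inputs)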

Storage Testing

Testing whether the system meets its specified storage objectives.

Stress Testing

Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.

Structural Coverage

Coverage measures based on the internal structure of the component.

Structural Test Case Design

Test case selection that is based on an analysis of the internal structure of the component.

Structural Testing

Test case selection that is based on an analysis of the internal structure of the component.

Structured Basis Testing

A test case design technique in which test cases are derived from the code logic to achieve 100% branch coverage.
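
For example, for the small (illustrative) function below, three test cases derived from the code logic are enough to take every branch at least once:

    def grade(score):
        if score >= 50:          # decision 1
            result = "pass"
        else:
            result = "fail"
        if score == 100:         # decision 2
            result = "distinction"
        return result

    # 100 and 60 take decision 1 true, 30 takes it false; 100 takes
    # decision 2 true, 60 and 30 take it false: 100% branch coverage.
    for s in (100, 60, 30):
        print(s, grade(s))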

Structured Walkthrough

A review of requirements, designs or code characterized by the author of the object under review guiding the progression of the review.

Stub

A skeletal or special-purpose implementation of a software module, used to develop or test a component that calls or is otherwise dependent on it. A little throwaway module that can be called to make another module work (and hence be testable).
For example, if we want to test module A but it needs to call module B, which isn't available, we can use a quick little stub for B. It just answers "hello from b" or something similar; if asked to return a number it always returns the same number - like 100.
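
A sketch of that stub in Python (module and function names are made up for the example):

    def b_stub(want_number=False):
        """Throwaway stand-in for the real module B."""
        if want_number:
            return 100               # always the same canned number
        return "hello from b"        # canned answer for everything else

    def module_a(ask_b=b_stub):
        # The component under test; its dependency on B is passed in so the
        # stub can later be replaced by the real module.
        return "A received: " + str(ask_b())

    print(module_a())                # -> "A received: hello from b"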

Subpath

A sequence of executable statements within a component.

Symbolic Evaluation

A static analysis technique that derives a symbolic expression for program paths.

Syntax Testing

A test case design technique for a component or system in which test case design is based upon the syntax of the input.
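
For instance, if an input field is expected in DD/MM/YYYY form (the format and the checker below are illustrative), test cases are derived both to follow and to deliberately violate that syntax:

    import re

    DATE_SYNTAX = re.compile(r"^\d{2}/\d{2}/\d{4}$")

    valid_cases   = ["01/01/2012", "31/12/1999"]                 # follow the syntax
    invalid_cases = ["1/1/2012", "31-12-1999", "31/12", "abc"]   # violate it

    for case in valid_cases + invalid_cases:
        print(case, "accepted" if DATE_SYNTAX.match(case) else "rejected")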

System Testing

The process of testing an integrated system to verify that it meets specified requirements.

Technical Requirements Testing

Testing of those requirements that do not relate to functionality, e.g. performance, usability, etc.

Test

The exercising of a product in order to find defects.

Test Automation

The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.

Test Case

A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
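
One way to record such a test case is as structured data; all of the field names and values below are illustrative:

    test_case = {
        "id": "TC-017",
        "objective": "verify the response to an enquiry for an unknown customer",
        "preconditions": ["customer database contains no record for id 9999"],
        "inputs": {"customer_id": 9999},
        "expected_outcome": "error message 'customer not found' is displayed",
    }
    print(test_case["id"], "-", test_case["objective"])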

Test Case Design Technique

A method used to derive or select test cases.

Test Case Suite

A collection of one or more test cases for the software under test.

Test Comparator

A test tool that compares the actual outputs produced by the software under test with the expected outputs for that test case.

Test Completion Criterion

A criterion for determining when planned testing is complete, defined in terms of a test measurement technique.

Test Coverage

The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test case suite.

Test Driver

A program or test tool used to execute software against a test case suite.

Test Environment

A description of the hardware and software environment in which the tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers.

Test Execution

The processing of a test case suite by the software under test, producing an outcome.

Test Execution Technique

The method used to perform the actual test execution, e.g. manual, capture/playback tool, etc.

Test Generator

A program that generates test cases in accordance to a specified strategy or heuristic.

Test Harness

A testing tool that comprises a test driver and a test comparator.
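
A minimal sketch of the idea, where the driver feeds each case to the software under test and the comparator checks actual against expected output (the `double` function simply stands in for the software under test):

    def double(x):
        return x * 2

    test_cases = [(1, 2), (5, 10), (0, 0)]    # (input, expected output)

    for given, expected in test_cases:
        actual = double(given)                               # driver: executes the test
        verdict = "PASS" if actual == expected else "FAIL"   # comparator: checks the outcome
        print("input=%s expected=%s actual=%s %s" % (given, expected, actual, verdict))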

Test Measurement Technique

A method used to measure test coverage items.

Test Outcome

Actual outcome or predicted outcome; the outcome of a test. (Compare the outcome of a decision, which is the result that determines the control flow alternative taken, and the outcome of a condition, which is its evaluation to TRUE or FALSE.)

Test Plan

A record of the test planning process detailing the degree of tester independence, the test environment, the test case design techniques and test measurement techniques to be used, and the rationale for their choice.

Test Procedure

A document providing detailed instructions for the execution of one or more test cases.

Test Records

For each test, an unambiguous record of the identities and versions of the component under test, the test specification, and the actual outcome.

Test Script

Commonly used to refer to the automated test procedure used with a test harness.

Test Specification

For each test case, the coverage item, the initial state of the software under test, the input, and the predicted outcome.

Test Target

A set of test completion criteria.

Testing

The process of exercising software to verify that it satisfies specified requirements and to detect errors.

Thread Testing

A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

Top-down Testing

An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
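
A sketch of the sequence in Python, with made-up names: the top-level component is tested first against a stub, and the real lower-level component replaces the stub once it is available and tested.

    def pricing_stub(item):
        return 10.0                              # canned price from a stub

    def real_pricing(item):
        return {"book": 12.5, "pen": 1.5}.get(item, 0.0)

    def checkout_total(items, price_of):
        # top-level component under test; the pricing dependency is injected
        return sum(price_of(i) for i in items)

    print(checkout_total(["book", "pen"], pricing_stub))    # step 1: top level with stub
    print(checkout_total(["book", "pen"], real_pricing))    # step 2: after integration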

Total Quality Management (TQM)

Controlling every aspect of every process used to create a product.

Unit Testing

The testing of individual software components.
A Unit, as we use it in these techniques, is a very small piece of code with only a few inputs and outputs. The key thing is that these inputs and outputs may be implementation details (counters, array indices, pointers) and are not necessarily real-world objects that are described in the specs. If all the inputs and outputs are real-world things, then we have a special kind of Unit, the Functional Chunk, which is ideal for testing. Most ordinary Units are tested by the programmer who wrote them, although it is much better practice to have them subsequently tested by a different programmer.
Caution: there is no agreement out there about what a "unit", "module" or "subsystem" etc. really is. I've heard 50,000-line COBOL programs described as "units". So the Unit defined here is, in fact, a working definition for these techniques.
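
A sketch of a unit test for such a small piece of code, using Python's unittest module (the index-wrapping function is an invented example of an implementation-detail unit):

    import unittest

    def next_index(current, length):
        # A tiny unit: wraps an array index, an implementation detail that
        # may never appear in the requirements specification.
        return (current + 1) % length

    class NextIndexTest(unittest.TestCase):
        def test_advances_within_bounds(self):
            self.assertEqual(next_index(2, 5), 3)

        def test_wraps_to_zero_at_the_end(self):
            self.assertEqual(next_index(4, 5), 0)

    if __name__ == "__main__":
        unittest.main()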

Usability Testing

Testing the ease with which users can learn and use a product.

Validation

Determination of the correctness of the products of software development with respect to the user needs and requirements.

Verification

The process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase.

Volume Testing

Testing where the system is subjected to large volumes of data.

Walkthrough

A review of requirements, designs or code characterized by the author of the object under review guiding the progression of the review.

White Box Testing

Test case selection that is based on an analysis of the internal structure of the component.