Saturday, July 20, 2013

How to do GUI Testing


How to Perform GUI Testing?

GUI testing is performed to check the user interface and verify that its functionality works as intended. It involves several tasks, such as testing object events, mouse events, menus, fonts, images, content, and control lists. Done well, GUI testing improves the overall look and feel of the application against the requirements; a website's user interface should be designed to appeal to a global audience. Testers have two options: test manually, or automate with a tool such as QTP. Every stage of this testing requires planning.

The following points explain how to perform GUI testing:

- First of all, the tester should read and understand the design documents for the application under test. All stated requirements need to be checked precisely: font sizes, images, navigation, and the controls listed in the application.
- Set up the environment and test the application in different browsers such as Mozilla Firefox, IE, Opera, and Safari. A design that works in one browser sometimes breaks or behaves differently in another, so make sure you cover all the target browsers.
- Testers can check everything manually, but if the application has a large number of pages, choose automation instead. You can record the application with all its controls and then check at the end which iterations passed or failed; a minimal automation sketch follows this list.
- Prepare a checklist for GUI testing, since many items relate to the user interface. Collect them in one sheet and execute them against the application.
- Prepare all the test cases for the user interface so the GUI testing is performed effectively.
- Divide the application into its modules and test module by module. This makes testing easier and raises the chances of finding defects.
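As a minimal illustration of the automation option, below is a sketch using Selenium WebDriver in Java rather than QTP. The URL, element id, and expected title are placeholder assumptions, and the Selenium library plus a browser driver are assumed to be installed.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class GuiSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login");  // placeholder URL

            // Verify the page title matches the design document.
            if (!"Login".equals(driver.getTitle())) {
                System.out.println("FAIL: unexpected page title: " + driver.getTitle());
            }

            // Verify a control listed in the design document exists, and read its font size.
            WebElement loginButton = driver.findElement(By.id("login"));  // placeholder id
            System.out.println("Login button font-size: " + loginButton.getCssValue("font-size"));
        } finally {
            driver.quit();
        }
    }
}

The same script can be pointed at other browsers (Firefox, Safari, etc.) by swapping the driver class, which covers the cross-browser point above.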
The points above are very helpful when performing GUI testing. The tester should stay focused on how to test the application; planning is essential to achieve the goals of GUI testing. Testing should be performed from the perspective of the users who will actually use the application. Remember: the first impression is the lasting one.

How to do Cookie Testing


What is a Cookie?

A cookie is a small text file saved on the hard disk of the user's system. Browsers read the cookies saved in their designated location. A cookie records information that web pages can later retrieve.

How to do Cookie Testing?

For a tester, cookie testing is essential, since many web applications carry sensitive content and payment transactions. The following steps should be considered while testing:

- A major part of cookie testing is disabling cookies, which every browser allows. Make sure cookies are disabled, open the website, and check whether the pages still function properly. Browse the whole site and watch for crashes. Typically a message such as "Cookies are disabled. Please enable cookies to browse the website." should be displayed.
- Test again after corrupting the cookies. To corrupt them, locate the cookie files, manually edit the relevant cookie with fake or invalid data, and save it. In some cases internal domain information can be accessed or hacked through corrupted cookies, so if you are testing a banking or finance application, put this point first in your checklist.
- Remove all cookies for the website under test and check that all pages still work properly.
- Perform cookie testing in different browsers to check whether the website writes its cookies correctly. You can also manually verify that the expected cookies exist for each page.
- If the application authenticates logins, log in with valid details. If a username parameter appears in the address bar, change it and observe the application's behavior: you should not land in a different user's account, and a proper message should be displayed for the action you performed. A scripted sketch of these checks follows this list.
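For teams that script these checks, here is a minimal sketch using Selenium WebDriver in Java. The site URL and the cookie name SESSIONID are placeholder assumptions; adapt them to the application under test.

import org.openqa.selenium.Cookie;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class CookieChecks {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com");  // placeholder URL

            // Check whether the site writes the cookie it is supposed to write.
            Cookie session = driver.manage().getCookieNamed("SESSIONID");  // placeholder name
            System.out.println(session == null ? "Cookie not written" : "Cookie value: " + session.getValue());

            // Corrupt the cookie with invalid data and reload to observe the behavior.
            driver.manage().deleteCookieNamed("SESSIONID");
            driver.manage().addCookie(new Cookie("SESSIONID", "invalid-value"));
            driver.navigate().refresh();

            // Remove all cookies and verify the pages still work, or fail gracefully.
            driver.manage().deleteAllCookies();
            driver.navigate().refresh();
        } finally {
            driver.quit();
        }
    }
}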
The points above cover the most important cookie testing concepts. Cookie testing is essential and should never slip through a tester's hands when testing any web application; fixing the issues it finds makes the application noticeably more stable.


Things to look after in GUI Testing



Ideally, the goal of GUI testing is to verify that your application conforms to the applicable standards for graphical user interfaces. A checklist helps both the QA and development teams, and it must cover all the GUI components so they can be tested systematically. Below is a checklist that every tester will find useful when performing GUI testing on an application:

Colors used in the web application:

Check the colors of the hyperlinks.
The background color of every page should be tested.
The color of warning messages should be checked.
Content used in the web application:

The font across the whole website should match the requirements.
Check whether the content is properly aligned.
All labels should be properly aligned.
Check that words are in the correct lower and upper case throughout the content.
Check all error messages on the screen for spelling mistakes.
Images used in the web application:

Check that all graphics are properly aligned.
Check for broken images everywhere.
Check the size of the graphics used or uploaded.
Check all banners and their sizes.
Check the command buttons for size, font, and font size.
Navigation in the application:

All tabs should load on time.
Tabs should be displayed in sequence.
The left-to-right order of tabs should match the requirements.
Check that a scrollbar is displayed where required.
Check all the links given in the sitemap and look for broken links.
Usability in the application:

Check the font size; it should be readable.
Check the spelling of all field prompts.
Check the look and feel of the whole website.
Test the website at all supported resolutions, e.g. 640x480.
Hence, if this entire checklist is followed, there is little chance of the tester or developer missing a requirement. It is suggested to draft all the points in an Excel sheet, add pass/fail columns against each item, and send it to the development team after each testing cycle. Beyond manual testing, tools are also available to help, such as a web link validator to find broken links and the online W3C validator to check web standards; a minimal sketch of such a link check follows.
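The plain Java below issues a HEAD request to each link and flags HTTP error codes. The URLs are placeholders, and a dedicated link validator tool does this far more thoroughly; this is only a sketch of the idea.

import java.net.HttpURLConnection;
import java.net.URL;

public class LinkChecker {
    public static void main(String[] args) throws Exception {
        String[] links = { "https://example.com/", "https://example.com/sitemap" };  // placeholder list
        for (String link : links) {
            HttpURLConnection conn = (HttpURLConnection) new URL(link).openConnection();
            conn.setRequestMethod("HEAD");  // HEAD is enough to detect a broken link
            int code = conn.getResponseCode();
            System.out.println(link + " -> " + code + (code >= 400 ? " (BROKEN)" : " (OK)"));
            conn.disconnect();
        }
    }
}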

What do you mean by Agile Testing




Agile testing involves all members of a cross-functional agile team, with special expertise contributed by testers, to ensure delivering the business value desired by the customer at frequent intervals, working at a sustainable pace.

Agile Testing Myth

One thing to bind tightly to your mind is that Agile testing is not an easy task to undertake. The other important thing is to dismiss the claim that Agile testing can be done without testers. Get in touch with your developers to obtain every minute detail of the application workflow, so you can deliver successful black-box as well as white-box testing in a short span of time.

Challenges in Agile Testing

Agile testing has an open-door policy for requirements: they may creep in at any time, during any phase of the project, and each will be greeted graciously and tested as it is intended to function. Accommodating them alongside the functionality already allotted for validation is no easy task for a team working with an Agile methodology.
Because Agile methodologies move fast, navigating through the whole application becomes a challenge for the team. A proper approach is needed: testing the application with a risk-based strategy can do wonders, and executing test scenarios in priority order will enhance the quality of the deliverable.
When the developers' unit testing is incomplete or inadequate, it is the testers who must showcase their abilities by finding the bulk of the hidden bugs the developers missed. That is a genuinely challenging task given the short testing window that Agile methodology offers.
It is not only new functionality that needs to work as designed: Agile testers must also make sure the functionality already in place is unaffected, which is why regression testing plays a vital role in Agile methodology.
Projects following an Agile approach often lack adequate documentation, so testers face the challenging job of validating requirements with the developers and confirming whether the application's behavior is what is expected; failing that, another channel of communication is required, involving the application's customers, to clarify queries and test accordingly.
Given the challenges Agile testing is equipped with, meeting the expected commitments and deadlines is indeed very difficult, though not impossible.

Difference between Top down Testing and Bottom Up Testing



Top-down testing: In this approach, testing is conducted from the main module down to the sub-modules. If a sub-module is not yet developed, a temporary program called a STUB is used to simulate it.


Advantages:

- Advantageous if major flaws occur toward the top of the program.
- Once the I/O functions are added, representation of test cases is easier.
- Early skeletal Program allows demonstrations and boosts morale.

Disadvantages:
- Stub modules must be produced
- Stub Modules are often more complicated than they first appear to be.
- Before the I/O functions are added, representation of test cases in stubs can be difficult.
- Test conditions may be impossible, or very difficult, to create.
- Observation of test output is more difficult.
- Allows one to think that design and testing can be overlapped.
- Induces one to defer completion of the testing of certain modules.

Bottom-up testing: In this approach, testing is conducted from the sub-modules up to the main module. If the main module is not yet developed, a temporary program called a DRIVER is used to simulate it.

Advantages:

- Advantageous if major flaws occur toward the bottom of the program.
- Test conditions are easier to create.
- Observation of test results is easier.


Disadvantages:

- Driver Modules must be produced.
- The program as an entity does not exist until the last module is added.

Stubs and Drivers

It is always a good idea to develop and test software in "pieces". But, it may seem impossible because it is hard to imagine how you can test one "piece" if the other "pieces" that it uses have not yet been developed (and vice versa).

A software application is made up of a number of ‘Units’, where the output of one ‘Unit’ goes as an ‘Input’ to another Unit. For example, a ‘Sales Order Printing’ program takes a ‘Sales Order’ as input, which is itself the output of a ‘Sales Order Creation’ program.

Due to such interfaces, independent testing of a unit becomes impossible. But that is exactly what we want to do: test a unit in isolation! This is where a ‘Stub’ and a ‘Driver’ come in.

A ‘Driver’ is a piece of software that drives (invokes) the Unit being tested. A driver creates necessary ‘Inputs’ required for the Unit and then invokes the Unit.

A driver passes test cases to another piece of code. A test harness, or test driver, is supporting code and data used to provide an environment for testing part of a system in isolation. It can be described as a software module used to invoke a module under test, provide test inputs, control and monitor execution, and report test results; at its simplest, it is a line of code that calls a method and passes that method a value.

For example, if you wanted to move a fighter in the game, the driver code would be:

moveFighter(fighter, locationX, locationY);

This driver code would likely be called from the main method. A white-box test case would execute this driver line of code and check "fighter.getPosition()" to make sure the player is now on the expected cell on the board.

A unit may reference another unit in its logic. A ‘Stub’ takes the place of such a subordinate unit during unit testing.

A ‘Stub’ is a piece of software that works similarly to the unit referenced by the unit being tested, but is much simpler than the actual unit. It serves as a stand-in for the subordinate unit and provides the minimum behavior that unit requires. A stub is a dummy procedure, module, or unit that stands in for an unfinished portion of a system.

Four basic types of Stubs for Top-Down Testing are:

- Display a trace message
- Display parameter value(s)
- Return a value from a table
- Return table value selected by parameter

A stub is a computer program used as a substitute for the body of a software module that is or will be defined elsewhere; in other words, a dummy component or object that simulates the behavior of a real component until that component has been developed.

For example, if the moveFighter method has not been written yet, a stub such as the one below might be used temporarily, which moves any player to position 1.

public void moveFighter(Fighter player, int locationX, int locationY) {
    player.setPosition(1);  // stub: ignores the requested location, always moves the player to position 1
}

Ultimately, the dummy method would be completed with the proper program logic. However, developing the stub allows the programmer to call a method in the code being developed, even if the method does not yet have the desired behavior.

The programmer needs to create such ‘Drivers’ and ‘Stubs’ to carry out unit testing.

Both the Driver and the Stub are kept at a minimum level of complexity, so that they do not induce any errors while testing the Unit in question.

Stubs and drivers are often viewed as throwaway code. However, they do not have to be thrown away: Stubs can be "filled in" to form the actual method. Drivers can become automated test cases.

Example - For unit testing of a ‘Sales Order Printing’ program, a ‘Driver’ program will contain code that creates sales order records using hardcoded data and then calls the ‘Sales Order Printing’ program. Suppose this printing program uses another unit that calculates sales discounts through some complex calculation; the call to that unit will be replaced by a ‘Stub’ that simply returns fixed discount data. A sketch of this example follows.
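The sketch below makes this concrete in Java. All class and method names here are hypothetical illustrations, not part of any real system.

// Hypothetical Driver and Stub for unit testing the 'Sales Order Printing' unit.
interface DiscountCalculator {
    double discountFor(String orderId);
}

// Stub: stands in for the complex discount-calculation unit and returns a fixed discount.
class DiscountCalculatorStub implements DiscountCalculator {
    public double discountFor(String orderId) { return 5.0; }
}

// Unit under test: prints a sales order total after applying the discount.
class SalesOrderPrinter {
    private final DiscountCalculator calculator;
    SalesOrderPrinter(DiscountCalculator calculator) { this.calculator = calculator; }
    void print(String orderId, double amount) {
        System.out.println("Order " + orderId + ": " + (amount - calculator.discountFor(orderId)));
    }
}

// Driver: creates a hardcoded sales order and invokes the unit under test.
public class SalesOrderPrinterDriver {
    public static void main(String[] args) {
        new SalesOrderPrinter(new DiscountCalculatorStub()).print("SO-001", 100.0);
    }
}

Because both the driver and the stub are kept trivial, any failure observed here points at the printing unit itself.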

How to prepare an effective bug report



After you complete your software testing, it is good practice to prepare an effective bug report; how quickly a bug gets fixed depends on how effectively you report it. Below are some tips for writing a good software bug report:
- If you are testing manually and reporting bugs without the help of any tool, assign a unique number to each bug report; this helps identify the bug record.
- Clearly mention the steps to reproduce the bug. Do not assume or skip any step.
- Be specific and to the point.
Apart from these tips, some good practices:
- Report the problem immediately.
- Reproduce the bug at least one more time before you report it.
- Test for the same bug on other, similar modules of the application.
- Read the bug report before you submit or send it.
- Never criticize any developer or attack any individual.
If you are using a test management tool or a bug reporting tool such as Bugzilla, Test Director, or BugHost, or any other online bug tracking tool, the tool will generate the bug report automatically. If you are not using a tool, you may refer to the following template for your software bug report:
Name of Reporter:
Email Id of Reporter:
Version or Build: <version or build of the product>
Module or Component: <name of the tested module or component>
Platform / Operating System:
Type of error: <coding error / design error / suggestion / UI / documentation / text error / hardware error>
Priority:
Severity:
Status:
Assigned to:
Summary:
Description: <steps to reproduce, expected result, and actual result>

Tuesday, July 16, 2013

How to do Integration Testing


Integration testing is a four-step procedure. Below are the steps for creating integration test cases:

1. Identify Unit Interfaces: The developer of each program unit identifies and documents the unit's interfaces for the following unit operations:
- External inquiry (responding to queries from terminals for information)
- External input (managing transaction data entered for processing)
- External filing (obtaining, updating, or creating transactions on computer files)
- Internal filing (passing or receiving information from other logical processing units)
- External display (sending messages to terminals)
- External output (providing the results of processing to some output device or unit)

2. Reconcile Interfaces for Completeness: The information needed for the integration test template is collected for all program units in the software being tested. Whenever one unit interfaces with another, those interfaces are reconciled. For example, if program unit A transmits data to program unit B, program unit B should indicate that it has received that input from program unit A. Interfaces not reconciled are examined before integration tests are executed.

3. Create Integration Test Conditions: One or more test conditions are prepared for integrating each program unit. After the condition is created, the number of the test condition is documented in the test template.

4. Evaluate the Completeness of Integration Test Conditions: The following questions help guide evaluation of the completeness of the integration test conditions recorded on the integration testing template, and can also help determine whether the conditions created for the integration process are complete. (A minimal sketch of the interface check in question 2 appears after this list.)
1. Is an integration test developed for each of the following external inquiries:
- Record test?
- File test?
- Search test?
- Match/merge test?
- Attributes test?
- Stress test?
- Control test?
2. Are all interfaces between modules validated so that the output of one is recorded as input to another?
3. If file test transactions are developed, do the modules interface with all the indicated files?
4. Is the processing of each unit validated before integration testing?
5. Do all unit developers agree that the integration test conditions are adequate to test each unit's interfaces?
6. Are all software units included in integration testing?
7. Are all files used by the software being tested included in integration testing?
8. Are all business transactions associated with the software being tested included in integration testing?
9. Are all terminal functions incorporated in the software being tested included in integration testing?
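The sketch below wires the output of one simplified unit into another and asserts on the interface; the class and unit names are illustrative only.

// Minimal sketch: the output of one unit ('create') is reconciled as the input of another ('print').
public class OrderIntegrationTest {
    static String createOrder() { return "SO-001"; }                            // simplified unit A
    static String printOrder(String orderId) { return "Printed " + orderId; }   // simplified unit B

    public static void main(String[] args) {
        String orderId = createOrder();        // output of unit A...
        String result = printOrder(orderId);   // ...becomes input of unit B
        if (!result.equals("Printed SO-001")) {
            throw new AssertionError("Interface between units not reconciled: " + result);
        }
        System.out.println("Integration test passed: " + result);
    }
}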

How to do Boundary Value Analysis



Boundary value analysis is a software test design technique for determining test cases that cover off-by-one errors.

Boundary value analysis is a methodology for designing test cases that concentrates software testing effort on cases near the limits of valid ranges. It refines equivalence partitioning and generates test cases that expose errors better than equivalence partitioning alone.

The purpose of boundary value analysis is to concentrate the testing effort on error-prone areas by accurately pinpointing the boundaries of conditions (e.g., a programmer may code >, when the requirement states > or =).

To set up boundary value analysis test cases, you first have to determine which boundaries exist at the interface of a software component. This is done by applying the equivalence partitioning technique; boundary value analysis and equivalence partitioning are inevitably linked together. For the example of the month in a date, you would have the following partitions:

 ... -2 -1  0 1 .............. 12 13  14  15 ..... 
      --------------|-------------------|---------------------
 invalid partition 1     valid partition    invalid partition 2

Applying boundary value analysis, you now select a test case on each side of the boundary between two partitions. In the example above, this gives 0 and 1 for the lower boundary and 12 and 13 for the upper boundary. Each of these pairs consists of a "clean" and a "dirty" test case. A "clean" test case should give a valid operation result from your program. A "dirty" test case should trigger the correct, specified input-error handling, such as limiting the value, substituting a default, or, for a program with a user interface, a warning and a request to enter correct data. In general, boundary value analysis yields up to six test cases per range: n, n-1, and n+1 for the upper limit and n, n-1, and n+1 for the lower limit. A small runnable sketch of the month example follows.
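The sketch below turns the month example into runnable checks; isValidMonth is a hypothetical stand-in for the real unit under test.

public class MonthBoundaryTest {
    static boolean isValidMonth(int m) { return m >= 1 && m <= 12; }  // stand-in implementation

    public static void main(String[] args) {
        int[] months =    { 0, 1, 12, 13 };              // boundary values on each side
        boolean[] valid = { false, true, true, false };  // expected: dirty, clean, clean, dirty
        for (int i = 0; i < months.length; i++) {
            boolean actual = isValidMonth(months[i]);
            System.out.println("month=" + months[i] + " expected=" + valid[i]
                    + " actual=" + actual + (actual == valid[i] ? " PASS" : " FAIL"));
        }
    }
}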

There are two steps to perform boundary value analysis:
1. Identify the equivalence classes.

2. Design test cases. 

Step 1.

Follow the same rules you used in equivalence partitioning, but consider the output specifications as well. For example, if the output specification for the inventory system states that an inventory report should show a total quantity of no more than 999,999 for all products, then you would add the following classes to the ones you found previously:

- The valid class (0 <= total quantity on hand <= 999,999)
- The invalid class (total quantity on hand < 0)
- The invalid class (total quantity on hand > 999,999)

Step 2.

In this step, you derive test cases from the equivalence classes. The process is similar to that of equivalence partitioning, but the rules for designing test cases differ. With equivalence partitioning, you may select any test case within a range and any one on either side of it; with boundary analysis, you focus your attention on cases close to the edges of the range.

Rules for Test Cases

1. If the condition is a range of values, create valid test cases for each end of the range and invalid test cases just beyond each end. For example, if the valid range of quantity on hand is -9,999 through 9,999, write test cases that include:
1. the valid test case: quantity on hand is -9,999
2. the valid test case: quantity on hand is 9,999
3. the invalid test case: quantity on hand is -10,000
4. the invalid test case: quantity on hand is 10,000
You may combine valid classes wherever possible, just as you did with equivalence partitioning; once again, you may not combine invalid classes. Don't forget to consider output conditions as well. In our inventory example, the output conditions generate the following test cases:
1. the valid test case: total quantity on hand is 0
2. the valid test case: total quantity on hand is 999,999
3. the invalid test case: total quantity on hand is -1
4. the invalid test case: total quantity on hand is 1,000,000
2. A similar rule applies where the condition states that the number of values must lie within a certain range: select two valid test cases, one for each boundary of the range, and two invalid test cases, one just below and one just above the acceptable range.

3. Design tests that highlight the first and last records in an input or output file. 

4. Look for any other extreme input or output conditions, and generate a test for each of them.



Monday, July 15, 2013

Integration Testing


Integration testing is designed to test the structure and architecture of the software and to determine whether all software components interface properly. Integration testing does not verify that the system is functionally correct, only that it performs as designed.

It is the process of identifying errors introduced by combining individually unit-tested modules. Integration testing should not begin until all units are known to perform according to their unit specifications. It can start by testing several logical units together, or incorporate all units in a single integration test.
Below are the four steps of creating integration test cases:
Step 1 - Identify Unit Interfaces: The developer of each program unit identifies and documents the unit’s interfaces for the following unit operations:
- Responding to queries from terminals for information
- Managing transaction data entered for processing
- Obtaining, updating, or creating transactions on computer files
- Passing or receiving information from other logical processing units
- Sending messages to terminals
- Providing the results of processing to some output device or unit
Step 2 - Reconcile Interfaces for Completeness: The information needed for the integration test template is collected for all program units in the software being tested. Whenever one unit interfaces with another, those interfaces are reconciled. For example, if program unit A transmits data to program unit B, program unit B should indicate that it has received that input from program unit A. Interfaces not reconciled are examined before integration tests are executed.
Step 3 - Create Integration Test Conditions: One or more test conditions are prepared for integrating each program unit. After the condition is created, the number of the test condition is documented in the test template.
Step 4 - Evaluate the Completeness of Integration Test Conditions: The following list of questions will help guide evaluation of the completeness of integration test conditions recorded on the integration testing template. This list can also help determine whether test conditions created for the integration process are complete.
Q1. Is an integration test developed for each of the following external inquiries:
- Record test?
- File test?
- Search test?
- Match/merge test?
- Attributes test?
- Stress test?
- Control test?
Q2. Are all interfaces between modules validated so that the output of one is recorded as input to another?
Q3. If file test transactions are developed, do the modules interface with all those indicated files?
Q4. Is the processing of each unit validated before integration testing?
Q5. Do all unit developers agree that integration test conditions are adequate to test each unit’s interfaces?
Q6. Are all software units included in integration testing?
Q7. Are all files used by the software being tested included in integration testing?
Q8. Are all business transactions associated with the software being tested included in integration testing?
Q9. Are all terminal functions incorporated in the software being tested included in integration testing?

Equivalence Partitioning


Equivalence partitioning is a test case selection technique for black-box testing. In this method, testers identify various classes of input conditions, called equivalence classes. These classes are identified such that each member of a class causes the same kind of processing and output to occur.

Basically, a class is a set of input conditions that the system treats in the same way. This test case selection technique assumes that if the system handles one case in a class erroneously, it will handle all cases in that class erroneously.
The technique drastically cuts down the number of test cases required to test a system reasonably: one can find the most errors with the smallest number of test cases.
To use equivalence partitioning, you will need to:
1. Determine the conditions to be tested
2. Define and design the tests
Determining conditions to be tested:
- All valid input data for a given condition are likely to go through the same process.
- Invalid data can go through various processes and need to be evaluated more carefully. For example, treat a blank entry differently from an incorrect entry, and treat a value differently if it is less than or greater than a range of values.
- If there are multiple error conditions within a function, one error may override another, which means the subordinate error does not get tested unless the other value is valid.
Defining and designing test cases:
- First include valid tests, and include as many valid tests as possible in one test case.
- For invalid input, include only one test per test case in order to isolate the error.
Example: In a company, for the first three digits of all employee IDs the minimum number is 333 and the maximum is 444. For the fourth and fifth digits, the minimum is 11 and the maximum is 99. So, for the first three digits the test conditions can be:
a. >= 333 and <= 444 (valid input, within the range)
b. < 333 (invalid input, below the range)
c. > 444 (invalid input, above the range)
d. blank (invalid input)
And for the fourth and fifth digits the test conditions can be:
e. >= 11 and <= 99 (valid input, within the range)
f. < 11 (invalid input, below the range)
g. > 99 (invalid input, above the range)
h. blank (invalid input)
Now, using equivalence partitioning, only one value representing each of the eight equivalence classes needs to be tested.
Now, after identifying the tests, you will need to create test cases to test each equivalence class. Create one test case for the valid input conditions and identify separate test cases for each invalid input.
As a black box tester, you might not know the manner in which the programmer has coded the error handling. So, you will need to create separate tests for each invalid input, to avoid masking the result in the event one error takes priority over another.
Thus, based on the test conditions, there can be seven test cases:
1. Test case for a and e (both valid)
2. Test case for b and e (only the first is invalid)
3. Test case for c and e (only the first is invalid)
4. Test case for d and e (only the first is invalid)
5. Test case for a and f (only the second is invalid)
6. Test case for a and g (only the second is invalid)
7. Test case for a and h (only the second is invalid)
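A small sketch of these checks in Java follows; validPrefix and validSuffix are hypothetical stand-ins for the real validation under test.

public class EmployeeIdPartitions {
    static boolean validPrefix(int p) { return p >= 333 && p <= 444; }  // first three digits
    static boolean validSuffix(int s) { return s >= 11 && s <= 99; }    // fourth and fifth digits

    public static void main(String[] args) {
        // One representative value per equivalence class; each invalid value is paired
        // with otherwise-valid input so one error cannot mask another.
        System.out.println("a,e valid + valid: " + (validPrefix(400) && validSuffix(50)));   // expect true
        System.out.println("b,e low   + valid: " + (validPrefix(100) && validSuffix(50)));   // expect false
        System.out.println("c,e high  + valid: " + (validPrefix(500) && validSuffix(50)));   // expect false
        System.out.println("a,f valid + low:   " + (validPrefix(400) && validSuffix(5)));    // expect false
        System.out.println("a,g valid + high:  " + (validPrefix(400) && validSuffix(100)));  // expect false
    }
}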

Sunday, July 14, 2013

How to decide when software is ready to release



Deciding when software is ready to release is difficult. You face pressure from all sides to release perfect software with added features. Some common facts:
- The engineers said the code was complete months ago, but they are still making code changes.
- Sales people promised the software to major accounts months ago and are already making commitments for the next release.
- The product manager wants a few more features added, and you want to release zero-defect software.
Having no specific release criteria, many organizations wait for a manager to say "Release it", without knowing when or why that decision was made. In other cases, there is a general group consensus to just push the software out the door, because the tired, stressed team wants to escape the seemingly never-ending cycles.
You can never take all of the stress out of releasing good software, but by planning ahead, setting up a good defect tracking repository, predetermining acceptable bug counts, testing the software properly, and triaging defects effectively, you can always know the current state of the software and, most importantly, know when it is ready to ship.
Defect Tracking Repository:
The most common as well as important tool to determine the state of the software is the defect tracking repository. The repository must be able to provide information required for the triage process. The repository must include the following info:
- The expected results of the tester's test
- The steps to recreate the defect
- The version in which the defect was found
- The module in which the defect was found
- The severity of the defect
- A short descriptive name for the defect
- A description of the defect
- The actual results of the tester's test
- Resolution notes
The project team should be able to query the database and gather information such as defect counts at various severity levels, and module levels, descriptions of the defects and steps to reproduce them. The repository becomes the archive for the project and it becomes important that correct and complete information be entered in order to use this information for later projects as a planning tool.
Predetermined Acceptable Bug Counts: The goal of releasing software with no defects cannot be achieved given limited time and resources, so exit criteria must be set up in the preliminary planning of the project. The exit criteria should then be translated into the test plan outline, which should include the bug count levels acceptable for shipping the software.
The exact acceptable bug count level is not a magic number that ensures a successful product, but rather a target the group can use to see that they are moving toward a common goal. An acceptable bug count statement may look like this:
- Showstoppers: there may not be any
- High: there may not be any
- Medium: there may not be any with a high probability of being seen by the customer
- Low: there may be a minimal number in specified core areas; other areas may have any amount
So that the entire group knows the state of the software at all times, it is preferable to run a weekly build grading process: each week the build is treated as if it were the software to be shipped and given a grade. The weekly grade keeps the team current on the software's state, and the team can watch the software progressively improve.
This quantitative method takes a lot of the guesswork out of the equation.
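As a sketch of how such a grade could be computed mechanically from the repository's defect counts, consider the following; the thresholds are illustrative, not prescribed.

public class BuildGrade {
    static String grade(int showstoppers, int high, int mediumVisible, int lowInCore) {
        if (showstoppers > 0 || high > 0) return "FAIL: showstopper or high defects open";
        if (mediumVisible > 0) return "FAIL: customer-visible medium defects open";
        if (lowInCore > 5) return "WARN: too many low defects in core areas";  // illustrative limit
        return "PASS: build meets the release criteria";
    }

    public static void main(String[] args) {
        System.out.println(grade(0, 0, 0, 2));  // example weekly counts
    }
}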
Proper Testing: The key to shipping quality software is finding and fixing the defects. The tester's responsibility is to find defects and report them properly. For the testers to find the defects, the lead tester must set out in the test plan outline the specific stages of testing, ensuring the process is organized and covers all aspects of the software. The test plan may contain the following seven phases:
- Test Case and Test Suite Planning
- Unit Testing
- Regression Testing
- System Testing
- Regression Testing
- Performance Testing
- Release Candidate Testing
The goal is to turn up as many defects as early as possible, both to make the fixes easier on the engineers and to provide ample time for regression testing, since every time the code is altered there is a greater risk of creating more defects.
As important as finding the defect, the tester must report it correctly in the repository. This becomes extremely important in large projects where defect counts may rise into the thousands.
Effective Triage:
The Test Lead and Engineer must sit down regularly to evaluate the reported defects and drive the process forward in the most efficient manner for the engineers as well as the testers. Much of the decision making here is based on queries from the defect repository, which shows the importance of the accuracy and completeness of the repository.
A "Hot List", a list of the most important bugs, is a great tool to create. It can be kept in a plain Excel sheet identifying the most important defects, their modules, and brief descriptions. The list is useful in triage for identifying the defect action items for the upcoming week.
Defects that block further testing in a certain area must be given precedence, and the engineers can assist with risk analysis of future modules to help the testers develop test cases. When deciding what to fix next, it is generally advantageous to group and fix defects relating to the same module at the same time, even if important defects exist in other modules; grouping defects helps the engineers finish a module and move on.
Shipping quality software is not an easy task, but by being organized from the start of the project, the ship date and defect counts don't have to be a "gut feeling". The project must have a defect tracking repository that is maintained and updated continually with correct and complete information. The repository is the main tool for an effective triage process, used in conjunction with the test plan outline that must be created at the start of the project, describing in detail the exit criteria for shipping the software.
Please note the entire process rests on proper testing and the use of proper test planning which is what uncovers the defects. If you do not uncover the defects the users of the software will.
Thus, by following the process described above, it is possible to know the state of the software at any time, so that the entire team knows the goal, sees the progress, and knows when the software is ready to release.

Test Readiness



Before starting the actual testing, it is important to check whether the system, project, and environment are ready for testing. This is called a Test Readiness Review, and it is best done with a checklist.
Below is a sample Test Readiness Review Checklist:
1. Are all the tests conducted according to the test plan / test cases?
2. Are all problems / defects translated into defect reports?
3. Are all the defect reports satisfactorily resolved?
4. Is the log of the tests conducted available?
5. Is unit testing complete in all respects?
6. Is integration testing complete?
7. Is all the relevant documentation baselined?
8. Are all work products baselined?
9. Is the test plan baselined?
10. Does the test plan contain the strategy / procedure to conduct the system test?
11. Are baselined test designs and test cases ready?
12. Is the unit/integration test software ready?
13. Is the user manual ready?
14. Is the installation procedure documented?
15. Are all the product requirements implemented?
16. Is the list of known problems available? Is there any "workaround" procedure for the known bugs?
17. Are the test environment needs met for hardware, code, procedures, scripts, test tools, etc.?
18. Is there a list of exceptions in the test software and test procedures, with workarounds if any?
19. Is the test reporting tool available?
20. Are the designers educated on the test reporting tool?
21. Is a standard methodology / tool used, and is it appropriate to the type of project?
22. Are the criteria for regression testing defined? Has the regression testing been done accordingly?
23. Is the source code received from the client for regression testing complete in all respects?
24. Is the source code frozen for testing?

Testing Team Organizing


The people component includes human resource allocations and the required skill sets. The test team should comprise the highest-caliber personnel possible. They are usually extremely busy because their talents put them in great demand, so it becomes vital to build the best case possible for using these individuals for test purposes. A test team leader and test team need to have the right skills and experience, and be motivated to work on the project. Ideally, they should be professional quality assurance specialists, but they can also represent the executive sponsor, users, technical operations, database administration, the computer center, independent parties, etc. The latter is particularly useful during final system and acceptance testing. In any event, they should not represent the development team, who may not be as unbiased as an outside party. This is not to say that developers shouldn't test; they should unit and function test their code extensively before handing it over to the test team.
There are two areas of responsibility in testing: 
1. Testing the application, which is the responsibility of the test team
2. The overall testing process, which is handled by the test manager
The test manager directs one or more testers, is the interface between quality assurance and the development organization, and manages the overall testing effort. Responsibilities include:
• Setting up the test objectives
• Defining test resources
• Creating test procedures
• Developing and maintaining the test plan
• Designing test cases
• Designing and executing automated testing tool scripts
• Test case development
• Providing test status
• Writing reports
• Defining the roles of the team members
• Managing the test resources
• Defining standards and procedures
• Ensuring quality of the test process
• Training the team members
• Maintaining test statistics and metrics
The test team must be a set of team players and have the following responsibilities:
• Execute test cases according to the plan
• Evaluate the test results
• Report errors
• Design and execute automated testing tool scripts
• Recommend application improvements
• Record defects
The main function of a team member is to test the application and report defects to the development team by documenting them in a defect tracking system. Once the development team corrects the defects, the test team re executes the tests which discovered the original defects. 
It should be pointed out that the roles of the test manager and team members are not mutually exclusive. Some of the team leader's responsibilities are shared with the team members, and vice versa.
The basis for allocating dedicated testing resources is the scope of the functionality and the development time frame; e.g., a medium-sized development project will require more testing resources than a small one. If project A, of medium complexity, requires a testing team of 5, project B with twice the scope would require 10 testers (given the same resources).
Another rule of thumb is that testing costs approach 25% of the total budget. Since the total project cost is known, the testing effort can be calculated and translated into tester headcount.
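A small sketch of both rules of thumb, with purely illustrative figures:

public class TestResourceEstimate {
    public static void main(String[] args) {
        // Rule 1: scale testers with scope relative to a known project.
        int knownTesters = 5;      // project A, medium complexity
        double scopeRatio = 2.0;   // project B has twice the scope
        System.out.println("Testers for project B: " + Math.ceil(knownTesters * scopeRatio));

        // Rule 2: testing effort ~25% of the total budget, translated to headcount.
        double totalBudget = 1_000_000;  // illustrative total project cost
        double costPerTester = 50_000;   // illustrative fully loaded cost per tester
        System.out.println("Testers from budget rule: " + Math.ceil(0.25 * totalBudget / costPerTester));
    }
}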
The best estimate is a combination of the project scope, test team skill levels, and project history. A good measure of required testing resources for a particular project is the histories of multiple projects, i.e., testing resource levels and performance compared to similar projects.

Software Testing Principles



Below are some basic Software Testing Principles:
- A necessary part of a test case is a definition of the expected output or result. 
- A programmer should avoid attempting to test his or her own program. 
- A programming organization should not test its own programs. 
- Thoroughly inspect the results of each test. 
- Test cases must be written for input conditions that are invalid and unexpected, as well as for those that are valid and expected.
- Examining a program to see if it does not do what it is supposed to do is only half the battle; the other half is seeing whether the program does what it is not supposed to do.
- Avoid throwaway test cases unless the program is truly a throwaway program.
- Do not plan a testing effort under the tacit assumption that no errors will be found.
- The probability of the existence of more errors in a section of a program is proportional to the number of errors already found in that section.
- Software Testing is an extremely creative and intellectually challenging task.

Saturday, July 13, 2013

Test Planning



The quality of the Software Testing effort depends on the quality of the Software Test Planning. Test planning is a very critical and important part of the Software Testing process.

Below are some questions and suggestions for Software Test Planning:

- Have you planned for an overall testing schedule and the personnel required, and associated training requirements?

- Have the test team members been given assignments?

- Have you established test plans and test procedures for

1. Module testing
2. Integration testing
3. System testing
4. Acceptance testing

- Have you designed at least one black-box test case for each system function?

- Have you designed test cases for verifying quality objectives/factors (e.g. reliability, maintainability, etc.)?

- Have you designed test cases for verifying resource objectives?

- Have you defined test cases for performance tests, boundary tests, and usability tests?

- Have you designed test cases for stress tests (intentional attempts to break system)?

- Have you designed test cases with special input values (e.g. empty files)?

- Have you designed test cases with default input values?

- Have you described how traceability of testing to requirements is to be demonstrated (e.g. references to the specified functions and requirements)?

- Do all test cases agree with the specification of the function or requirement to be tested?

- Have you sufficiently considered error cases? Have you designed test cases for invalid and unexpected input conditions as well as valid conditions?

- Have you defined test cases for white-box-testing (structural tests)?

- Have you stated the level of coverage to be achieved by structural tests?

- Have you unambiguously provided test input data and expected test results or expected messages for each test case?

- Have you documented the purpose of and the capability demonstrated by each test case?

- Is it possible to meet and to measure all test objectives defined (e.g. test coverage)?

- Have you defined the test environment and tools needed for executing the software test?

- Have you described the hardware configuration and resources needed to implement the designed test cases?

- Have you described the software configuration needed to implement the designed test cases?

- Have you described the way in which tests are to be recorded?

- Have you defined criteria for evaluating the test results?

- Have you determined the criteria on which the completion of the test will be judged?

- Have you considered requirements for regression testing?

Test Specification




Test Specification – It is a detailed summary of what scenarios will be tested, how they will be tested, how often they will be tested, and so on and so forth, for a given feature. Trying to include all Editor Features or all Window Management Features into one Test Specification would make it too large to effectively read.

A Test Plan, by contrast, is a collection of all the test specifications for a given area, containing a high-level overview of what is tested for that feature area.

Contents of a Test Specification:

Revision History - This section contains information such as who created the test specification, when it was created, and when it was last updated.

Feature Description – A brief description of what area is being tested.

What is tested? – An overview of what scenarios are tested.

What is not tested? - Are there any areas not being tested? There can be several reasons, such as coverage by different people or test limitations. If so, include this information as well.

Nightly Test Cases – A list of the test cases and high-level description of what is tested whenever a new build becomes available.

Breakout of Major Test Areas - It is the most interesting part of the test specification where testers arrange test cases according to what they are testing.

Specific Functionality Tests – Tests to verify the feature is working according to the design specification. This area also includes verifying error conditions.

Security tests – Any tests that are related to security.

Accessibility Tests – Any tests that are related to accessibility.

Performance Tests - This section includes verifying any performance requirements for your feature.

Localization / Globalization - Tests to ensure you're meeting your product's local and international requirements.

Please note that your Test Specification document should make it easy to prioritize test cases, for example into nightly test cases, weekly test cases, and a full test pass:

  • Nightly - Must run whenever a new build is available.
  • Weekly - Other major functionality tests run once every three or four builds.
  • Lower priority - Run once every major coding milestone

Software Testing Estimation



The software testing estimation process is one of the most difficult and critical activities. When you say that a project will be completed in a particular time at a particular cost, it must happen. If it does not, the consequences may range from peers' comments and senior management's warnings to being fired, depending on the reasons and the seriousness of the failure.

Here are a few rules for effective software testing estimation:

- Estimation must be based on previous projects: all estimates should draw on the experience of past projects.

- Estimation must be recorded: All decisions should be recorded. It is very important because if requirements change for any reason, the records would help the testing team to estimate again.

- Estimation shall always be based on the software requirements: all estimation should be based on what will be tested. The software requirements shall be read and understood by the testing team as well as the development team; without the testing team's participation, no serious estimation can be made.

- Estimation must be verified: create two spreadsheets for recording the estimates and compare them at the end. If an estimate deviates from the recorded ones, re-estimate.

- Estimation must be supported by tools: a spreadsheet containing metrics can calculate the cost and duration of each testing phase automatically. A document containing sections such as a cost table, risks, and free notes should also be created; showing this document to the customer helps the customer decide which kind of testing is needed.

- Estimation shall be based on expert judgment: experienced resources can readily estimate how long the testing will take.

Classify the requirements into the following categories:

Critical: The development team has little knowledge of how to implement it.

High: The development team has good knowledge of how to implement it, but it is not an easy task.

Normal: The development team has good knowledge of how to implement it.

Test Strategy



In continuation of my previous post on Test Strategy, I'm describing it here in more detail.
Test strategy can be defined as a high-level management method that establishes adequate confidence in the software product being tested, while ensuring that the cost, effort, and timelines all stay within acceptable limits.
Test strategy can exist at different levels:
- Enterprise-wide test strategy, to set up a test group for the entire organization
- Application or product test strategy, to set up the test requirements of a software product for its entire life
- Project-level test strategy, to set up test plans for a single project life cycle
A good test strategy should:
- Define the objectives, timelines, and approach for the testing effort
- List the various testing activities and define roles and responsibilities
- Identify and coordinate the test environment and data requirements before the testing phase starts
- Communicate the test plans to stakeholders and obtain buy-in from business clients
- Proactively minimize fires during the testing phase
Decisions to be made in the test strategy:
- When should the testing be stopped?
- What should be tested?
- What can remain untested?
Further, the following three types of risks can be considered while making a test strategy:
1. Development-related risks include:
- Poorly controlled project timelines
- Complexity of the code
- Less skilled programmers
- Defects in existing code
- Problems in team coordination
- Lack of reviews and configuration control
- Lack of specifications
2. Testing-related risks include:
- Lack of domain knowledge
- Lack of testing and platform skills
- Lack of test bed and test data
- Lack of sufficient time
3. Production-related risks include:
- Dynamic frequency of usage
- Complexity of the user interface
- High business impact of the function
Test strategies generally acknowledge that:
- The earlier a bug is detected, the cheaper it is to fix
- Splitting the testing into smaller parts and then aggregating ensures a quicker debug-and-fix cycle
- Bugs found in critical functions mean more time to fix
Six steps for developing a test strategy:
1. Determine the objective and scope of the testing
2. Identify the types of tests required
3. Based on the external and internal risks, add, modify, or delete processes
4. Plan the environment, test bed, test data, and other infrastructure
5. Plan the strategy for managing changes, defects, and timeline variations
6. Decide both the in-process and post-process metrics to match the objectives
While creating a test strategy for maintenance projects, the following aspects need to be considered:
- How much longer will the software be supported, and is it worthwhile strategizing to improve its testing?
- Is it worthwhile to incrementally automate the testing of the existing code and functionality?
- How much of the old functionality should we test to give us confidence that the new code has not corrupted the old code?
- Should the same set of people support the testing of multiple software maintenance projects?
- What analysis of the current defect database will help us improve the development itself?
However, for the test strategy of a product development, the following aspects need to be considered:
- How much longer will the product last? This includes multiple versions, since test cases and artifacts can continue to be used across versions.
- Would automation be worth considering?
- How much testing should we do per release (minor, patch, major, etc.)?
- Do we have a risk-based test plan that will allow releases to be made earlier than planned?
- Is the development cycle iterative? Should the test cycle follow that?
1. Objective and scope of testingWhat is the business objective of testing?What are the quality goals to be met by the testing effort?To what extent will each application be tested?What external systems will not be tested?What systems and components need to be tested?2. Types of testingDifferent phases of the testing that are required.Different types of testingTest Coverage3. Testing approachDefinition of testing process life cycleCreation of testing related templates, checklists and guidelinesMethodology for test development and executionSpecification of test environment setupPlanning for test execution cycles4. Test Environment specificationHardware and software requirementsTest data creation, database requirements and setupConfiguration management, maintenance of test bed and build management5. Test AutomationCriteria and feasibility of test automationTest tool identificationTest automation strategy (effort, timelines etc.)6. Roles and Responsibilities / Escalation mechanismTesting team organization and reporting structureDifferent roles as a part of testing activities and their corresponding responsibilitiesWho to escalate to and when?7. Defect ManagementCategorization of defects based on criticality and priorityDefinition of a workflow or the disposition of defectsTechniques and tools for tracking of defects.8. Communication and status reportingStatus meetings for communication of testing statusFormat and content of different status reportsPeriodicity of each status reportDistribution list for each report9. Risks and mitigation plansIdentification of all testing related risks, their impact and exposurePlan for mitigation and managing these risks10. Configuration managementList of testing artefacts under version controlTools and techniques for configuration management11. Change managementPlan for managing requirement changesModels for assessing impact of changes on testingProcess for keeping test artifacts in sync with development artifacts12. Testing metricsWhat metrics are to be collected? Do they match the strategic objectives?What will be the techniques for collection of metrics?What tools will be employed to gather and analyze metrics?What process improvements are planned based on these metrics?


Tuesday, July 9, 2013

How to improve Software testing process


Improving the testing process is not the responsibility of the test team alone. It is a joint effort of the development team, the testing team, and management to understand the health of the existing testing process and identify the measures necessary to improve it.
How to Improve Software Testing Process
Points for Leads, Test Managers, Project Managers and Delivery Heads:
  1. Identify all the platforms on which the application will run.
  2. Improve the requirements management process: business and functional requirements should be well documented. Involve the testing team in the requirement gathering phase.
  3. Create a document listing all possible scenarios before writing test cases, and include it in test planning.
  4. Measure the testing effort on a periodic basis: New Bug Rate, Effort Variance, Schedule Variance, Test Case Effectiveness, Residual Defect Density, etc. (see the sketch after this list).
  5. Keep developers away from the test environments.
  6. Plan post-mortem meetings after every release; both the testing and development teams should participate.
  7. Keep track of the bug-fixing time taken by the development team, and keep the relevant stakeholders informed of the time remaining for regression testing. Tracking this is critical, as ignoring it may result in low-quality releases.
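Below is a sketch of two of the point-4 metrics using their common definitions (variance % = (actual - planned) / planned x 100); the figures are illustrative.

public class TestMetrics {
    static double variancePercent(double planned, double actual) {
        return (actual - planned) / planned * 100.0;
    }

    public static void main(String[] args) {
        System.out.println("Effort variance %:   " + variancePercent(100, 120));  // person-hours
        System.out.println("Schedule variance %: " + variancePercent(10, 12));    // days
    }
}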
Points for Software Testers:
  1. Try to understand the logic behind the screen and try to break that logic. Learn the internal workings of the code from the developers during lunch or tea breaks.
  2. Analyze test results thoroughly. Try to identify the root cause from a functional perspective.
  3. Break the application into smaller functional modules.
  4. First write test cases for valid conditions, then cover the invalid conditions.
  5. While writing test cases, refer to the design documents as well.
  6. As soon as you finish writing the test cases, share them with the development team; it can save time.
  7. During the test case writing phase, group test cases using impact analysis. This enables effective regression testing in less time.
  8. If you are a new tester on an old development team, look at the old bug reports for the modules or projects the developers worked on previously. Developers tend to repeat similar mistakes.
  9. Test the application for implicit as well as explicit requirements.
  10. Never communicate bugs only verbally. For any critical or showstopper bug, have an immediate discussion, then document it via mail and share it with the relevant stakeholders.