What is Defect/Bug?
Introduction to Defect Management
While executing test cases, you may find that the actual results vary from the expected results. This variance is a defect, also called an incident, bug, problem, or issue.
While reporting the bug to the developer, your bug report should contain the following information:
· Defect_ID - Unique identification number for the defect.
· Defect Description - Detailed description of the defect, including information about the module in which the defect was found.
· Version - Version of the application in which the defect was found.
· Steps - Detailed steps, along with screenshots, with which the developer can reproduce the defect.
· Date Raised - Date when the defect was raised.
· References - References to documents like requirements, design, architecture, or even screenshots of the error, to help understand the defect.
· Detected By - Name/ID of the tester who raised the defect.
· Status - Status of the defect; more on this later.
· Fixed By - Name/ID of the developer who fixed it.
· Date Closed - Date when the defect was closed.
· Severity - Describes the impact of the defect on the application.
· Priority - Related to the urgency of fixing the defect. Severity and priority can each be High/Medium/Low, based on the impact of the defect and the urgency with which it should be fixed, respectively.
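The fields above can be sketched as a simple record. Here is a minimal sketch in Python; the field names, example values, and the `BugReport` class itself are invented for illustration, not a real tool's schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BugReport:
    """A minimal sketch of the bug report fields listed above."""
    defect_id: str                  # unique identification number
    description: str                # detailed description, including the module
    version: str                    # application version the defect was found in
    steps: list                     # detailed steps to reproduce the defect
    date_raised: str
    detected_by: str                # tester who raised the defect
    severity: str                   # impact on the application: High/Medium/Low
    priority: str                   # fixing urgency: Critical/High/Medium/Low
    status: str = "New"             # lifecycle status; more on this later
    references: list = field(default_factory=list)
    fixed_by: Optional[str] = None  # filled in once a developer fixes it
    date_closed: Optional[str] = None

# Hypothetical defect from the ABC Bank website scenario
bug = BugReport(
    defect_id="DEF-001",
    description="Login fails with valid credentials on the Login module",
    version="1.2.0",
    steps=["Open login page", "Enter valid credentials", "Click Login"],
    date_raised="2015-06-01",
    detected_by="tester_01",
    severity="High",
    priority="Critical",
)
print(bug.status)  # → New
```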
Consider the following scenario as a Test Manager:
Your team found bugs while testing the ABC Banking project.
After a week, the developer responds -
The next week, the tester responds -
As in the above case, if defect communication is done verbally, things soon become very complicated. To control and effectively manage bugs, you need a defect lifecycle.
Defect Management Process
This topic will guide you on how to apply the defect management process to the ABC Bank website project. You can follow the steps below to manage defects.
Discovery
In the discovery phase, the project teams have to discover as many defects as possible before the end customer discovers them. A defect is said to be discovered, and its status changes to Accepted, when it is acknowledged and accepted by the developers.
In the above scenario, the testers discovered 84 defects in the ABC Bank website.
Let's look at the following scenario: your testing team discovered some issues in the ABC Bank website. They considered them defects and reported them to the development team, but there is a conflict -
In such a case, as a Test Manager, what will you do?
A) Agree with the test team that it's a defect
B) Take the role of judge to decide whether the problem is a defect or not
C) Agree with the development team that it's not a defect
In such a case, a resolution process should be applied to solve the conflict: you take the role of judge and decide whether the website problem is a defect or not.
Categorization
Defect categorization helps the software developers prioritize their tasks, so that they fix the most crucial defects first.
Defects are usually categorized by the Test Manager –
Let's do a small exercise as follows.
Drag & Drop the Defect Priority Below
1) The website performance is too slow
2) The login function of the website does not work properly
3) The GUI of the website does not display correctly on mobile devices
4) The website could not remember the user login session
5) Some links do not work
Here are the recommended answers

| No. | Description | Priority | Explanation |
|-----|-------------|----------|-------------|
| 1 | The website performance is too slow | High | The performance bug can cause huge inconvenience to users. |
| 2 | The login function of the website does not work properly | Critical | Login is one of the main functions of the banking website; if this feature does not work, it is a serious bug. |
| 3 | The GUI of the website does not display correctly on mobile devices | Medium | The defect affects users who view the website on a smartphone. |
| 4 | The website could not remember the user login session | High | This is a serious issue, since the user will be able to log in but not be able to perform any further transactions. |
| 5 | Some links do not work | Low | This is an easy fix for the development team, and users can still access the site without these links. |
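Categorization like this can drive the fixing order directly. Here is a minimal Python sketch using the priority scale from the exercise; the defect descriptions are shortened paraphrases and the rank mapping is an assumption for illustration:

```python
# Assumed rank: lower number = more urgent to fix.
PRIORITY_RANK = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

# Paraphrased defects from the exercise above, as (description, priority).
defects = [
    ("Website performance is too slow", "High"),
    ("Login function does not work properly", "Critical"),
    ("GUI does not display correctly on mobile devices", "Medium"),
    ("User login session is not remembered", "High"),
    ("Some links do not work", "Low"),
]

# Sort so the most urgent defects come first for the developers.
fix_order = sorted(defects, key=lambda d: PRIORITY_RANK[d[1]])
for description, priority in fix_order:
    print(f"{priority:8} {description}")
```

The Critical login defect surfaces first and the Low-priority broken links last, which matches the fixing order the recommended answers imply.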
Resolution
Once the defects are accepted and categorized, you can follow these steps to fix them.
· Assignment: The defect is assigned to a developer or other technician to fix, and its status is changed to Responding.
· Schedule fixing: The developer side takes charge in this phase. They create a schedule to fix the defects, depending on defect priority.
· Fix the defect: While the development team is fixing the defects, the Test Manager tracks the progress of the fixes against the above schedule.
· Report the resolution: Get a report of the resolution from the developers when the defects are fixed.
Verification
After the development team has fixed and reported the defects, the testing team verifies that the defects are actually resolved.
For example, in the above scenario, when the development team reported that they had already fixed 61 defects, your team would test again to verify whether these defects were actually fixed.
Closure
Once a defect has been resolved and verified, its status is changed to Closed. If not, you have to send a notice to the development team to check the defect again.
Reporting
The management board has the right to know the defect status. They must understand the defect management process to support you in this project. Therefore, you must report the current defect situation to them and get their feedback.
Important Defect Metrics
Back to the above scenario: the developer and test teams have reviewed the defects reported. Here is the result of that discussion -
How do you measure and evaluate the quality of the test execution?
This is a question every Test Manager wants answered. There are two parameters you can consider, as follows -
In the above scenario, you can calculate the defect rejection ratio (DRR) as 20/84 = 0.238 (23.8%).
As another example, suppose the ABC Bank website has 64 defects in total, but your testing team detects only 44 defects, i.e., they miss 20 defects. Therefore, you can calculate the defect leakage ratio (DLR) as 20/64 = 0.312 (31.2%).
In conclusion, the quality of test execution is evaluated via the following two parameters:
Defect rejection ratio = 23.8%
Defect leakage ratio = 31.2%
The smaller the values of DRR and DLR, the better the quality of test execution. What ratio range is acceptable? This range could be defined and accepted based on the project target, or you may refer to the metrics of similar projects.
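The two calculations above can be sketched in a few lines of Python; the function names are invented for illustration, and the numbers are the ones from the ABC Bank scenario:

```python
def defect_rejection_ratio(rejected: int, reported: int) -> float:
    """DRR = defects rejected by developers / total defects reported."""
    return rejected / reported

def defect_leakage_ratio(missed: int, total: int) -> float:
    """DLR = defects missed by the test team / total defects in the product."""
    return missed / total

# Numbers from the ABC Bank scenario above
drr = defect_rejection_ratio(20, 84)  # 20 of 84 reported defects rejected
dlr = defect_leakage_ratio(20, 64)    # 20 of 64 total defects missed

print(f"DRR = {drr:.1%}")  # → DRR = 23.8%
print(f"DLR = {dlr:.1%}")  # → DLR = 31.2%
```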
In this project, the recommended acceptable ratio is 5~10%. Since both ratios are well above that range, the quality of test execution is low. You should find countermeasures to reduce these ratios, such as:
· Improve the testing skills of team members.
· Spend more time on test execution, especially on reviewing the test execution results.
Defect/Bug Life Cycle - A Tester's Guide
From discovery to resolution, a defect moves through a definite lifecycle called the defect lifecycle.
Below are the various states that a defect goes through in its lifecycle. The number of states varies from project to project; the lifecycle below covers all possible states.
· New: When a new defect is logged and posted for the first time, it is assigned the status "New."
· Assigned: Once the bug is posted by the tester, the tester's lead approves the bug and assigns it to the developer team.
· Open: The developer starts analyzing and works on the defect fix.
· Fixed: When the developer makes the necessary code change and verifies the change, he or she can mark the bug status as "Fixed."
· Pending Retest: Once the defect is fixed, the developer hands the code back to the tester for retesting. Since the testing remains pending on the tester's end, the status assigned is "Pending Retest."
· Retest: The tester retests the code at this stage to check whether the developer has fixed the defect, and changes the status to "Retest."
· Verified: The tester retests the bug after it has been fixed by the developer. If no bug is detected in the software, the bug is fixed and the status assigned is "Verified."
· Reopen: If the bug persists even after the developer has fixed it, the tester changes the status to "Reopened." The bug then goes through the life cycle once again.
· Closed: If the bug no longer exists, the tester assigns the status "Closed."
· Duplicate: If the defect is reported twice, or two defects correspond to the same underlying bug, the status is changed to "Duplicate."
· Rejected: If the developer feels the defect is not a genuine defect, he or she changes the status to "Rejected."
· Deferred: If the present bug is not of prime priority and is expected to be fixed in a later release, the status "Deferred" is assigned.
· Not a Bug: If the issue does not affect the functionality of the application, the status assigned is "Not a Bug."
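The states above can be sketched as a simple state-transition table. This is an illustrative sketch only: the allowed transitions are an assumption, since each project defines its own flow, and `advance` is an invented helper:

```python
# Assumed legal transitions between the lifecycle states listed above.
TRANSITIONS = {
    "New":            ["Assigned", "Rejected", "Duplicate", "Deferred", "Not a Bug"],
    "Assigned":       ["Open"],
    "Open":           ["Fixed"],
    "Fixed":          ["Pending Retest"],
    "Pending Retest": ["Retest"],
    "Retest":         ["Verified", "Reopened"],
    "Verified":       ["Closed"],
    "Reopened":       ["Assigned"],  # the bug goes through the cycle again
}

def advance(status: str, new_status: str) -> str:
    """Move a defect to a new status, rejecting illegal transitions."""
    if new_status not in TRANSITIONS.get(status, []):
        raise ValueError(f"Illegal transition: {status} -> {new_status}")
    return new_status

# Walk one defect down the happy path from New to Closed.
status = "New"
for step in ["Assigned", "Open", "Fixed", "Pending Retest",
             "Retest", "Verified", "Closed"]:
    status = advance(status, step)
print(status)  # → Closed
```

Encoding the flow as a table makes illegal moves (e.g. jumping straight from "New" to "Closed") fail loudly instead of silently corrupting the tracker.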
This training video describes the various stages in a bug (aka defect) life cycle and its importance with the help of an example.
Suppose we have a flight reservation application. In order to log into the webpage, you have to enter the correct password, "Mercury".
Any wrong password entered on the login page will be addressed as a defect.
While testing the application, the tester finds that an error pops up when a wrong password is entered on the login page, and logs this error or defect as NEW. This defect is then assigned to the development project manager to analyze whether the defect is valid or not. The project manager finds that the defect is not a valid defect.
1. Tester finds the defect
2. Status assigned to defect- New
3. Defect is forwarded to the Project Manager for analysis
4. Project Manager decides whether defect is valid
5. Here the defect is not valid- status given "Rejected."
So, the project manager assigns the status Rejected. If the defect is not rejected, the next step is to check whether it is in scope. Suppose we have another function - email functionality - for the same application, and you find a problem with it. But if it is not part of the current release, such defects are assigned a Postponed or Deferred status.

If it is in scope, the defect is assigned to the developer, who starts fixing the code. During this stage, the defect is assigned the status In Progress. Once the code is fixed, the defect is assigned the status Fixed.
Next, the tester will retest the code. If the test case passes, the defect is closed. If the test case fails again, the defect is reopened and assigned to the developer.
Consider a situation where, during the first release of Flight Reservation, a defect was found in Fax Order that was fixed and assigned the status Closed. During the second upgrade release, the same defect resurfaced. In such cases, a closed defect is reopened.
That's all about the Bug Life Cycle.
Best Testing Tools for Diverse Testing Activities
Human beings err a lot. It is something we do best!
If we humans do the same repetitive task over and over again, we soon become bored and start making mistakes. This is where tools become helpful. Besides, tools improve reliability, reduce turnaround time, and increase ROI.
There are various types of tools that assist in diverse testing activities, ranging from requirements capturing to test management.
But a plain list of tools and their characteristics would be boring, so we have designed an interactive test to help you learn the key features of the various testing tools.
Afterwards, go through this ready reckoner of different tools and their key features.
Test Management Tools
Key features & functionalities:
· Management of tests
· Scheduling of tests
· Management of testing activities
· Interfaces to other testing tools
· Traceability
Example: Quality Center

Test Execution Tools
Key features & functionalities:
· Storing an expected result in the form of a screen or GUI object and comparing it with the run-time screen or object
· Executing tests from stored scripts
· Logging test results
· Sending test summaries to test management tools
· Access to data files for use as test data
Example: QTP

Performance Measurement Tools
Key features & functionalities:
· Ability to simulate high user load on the application under test
· Ability to create diverse load conditions
· Support for the majority of protocols
· Powerful analytical tools to interpret the generated performance logs
Example: LoadRunner

Requirements Management Tools
Key features & functionalities:
· Storing requirements
· Identifying undefined, missing, or to-be-defined requirements
· Traceability of requirements
· Interfacing with test management tools
· Requirements coverage
Example: Case
Configuration Management Tools
Key features & functionalities:
· Information about versions and builds of software and testware
· Build and release management
· Access control (check-in and check-out)
Example: SourceAnywhere

Review Tools
Key features & functionalities:
· Sorting and storing review comments
· Communicating comments to relevant people
· Keeping track of review comments, including defects
· Traceability between review comments and review documents
· Monitoring review status (Pass, Pass with corrections, Requires more changes)
Example: InView

Static Analysis Tools
Key features & functionalities:
· Calculate cyclomatic complexity
· Enforce coding standards
· Analyze structure and dependencies
· Help in understanding code
· Identify defects in code
Example: PMD

Modeling Tools
Key features & functionalities:
· Identify inconsistencies or defects in models
· Help in prioritization of tests in accordance with the model under review
· Predict system response under various levels of load
· Using UML, help in understanding system functions and tests
Example: Altova
Test Data Preparation Tools
Key features & functionalities:
· Extract selected data records from files or databases
· Data anonymization
· Create new records populated with random data
· Create a large number of similar records from a template
Example: Clone & Test

Test Harness / Unit Test Framework Tools
Key features & functionalities:
· Supplying inputs to, or receiving outputs from, the software under test
· Recording pass/fail status
· Storing tests
· Support for debugging
· Code coverage measurement
Example: JUnit

Coverage Measurement Tools
Key features & functionalities:
· Identifying coverage items
· Reporting coverage items which are not yet covered
· Identifying test inputs to exercise
· Generating stubs and drivers
Example: CodeCover

Security Tools
Key features & functionalities:
· Identify viruses
· Identify denial-of-service attacks
· Simulate various types of external attacks
· Identify weaknesses in passwords and password files
· Probe for open ports or externally visible points of attack
Example: Fortify
Another tool category is the Comparator Tool, which is usually used to compare pre-code-change results with post-code-change results to detect any regression defects.
Example: ExamDiff
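The core idea of a comparator can be sketched with Python's standard difflib module; the "before" and "after" result lines here are invented for illustration:

```python
import difflib

# Hypothetical test-run outputs before and after a code change
before = ["balance = 100", "rate = 0.05", "fee = 2"]
after = ["balance = 100", "rate = 0.07", "fee = 2"]

# Keep only the changed lines ("-" removed, "+" added),
# skipping the "---"/"+++" file headers of the unified diff.
diff = [
    line
    for line in difflib.unified_diff(before, after, lineterm="")
    if line.startswith(("-", "+")) and not line.startswith(("---", "+++"))
]
print(diff)  # → ['-rate = 0.05', '+rate = 0.07']
```

Any non-empty diff between pre- and post-change results flags a potential regression for the tester to investigate.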