Complete Web Application Testing Checklist
When testing web applications, consider the checklist below. It applies to almost all types of web applications, depending on the business requirements.
The web application checklist consists of: -
· Usability Testing
· Functional Testing
· Compatibility Testing
· Database Testing
· Security Testing
· Performance Testing
Now let's look at each checklist item in detail:
Usability Testing
What is Usability Testing?
· Usability testing is essentially a user-friendliness check.
· In Usability testing, the application flow is tested so that a new user can understand the application easily.
· Basically, system navigation is checked in Usability testing.
What is the purpose or Goal of Usability testing?
A usability test establishes the ease of use and effectiveness of a product using standard usability test practices.
Usability Test Scenarios:
· Web page content should be correct, without any spelling or grammatical errors.
· All fonts should be the same, as per the requirements.
· All text should be properly aligned.
· All error messages should be correct, without spelling or grammatical errors, and each error message should match the corresponding field label.
· Tooltip text should be provided for every field.
· All fields should be properly aligned.
· Enough space should be provided between field labels, columns, rows, and error messages.
· All buttons should be of a standard format and size.
· A Home link should be present on every page.
· Disabled fields should be grayed out.
· Check for broken links and images.
· A confirmation message should be displayed for any update or delete operation.
· Check the site at different resolutions (e.g., 640x480, 800x600).
· Check that the end user can use the system without frustration.
· Check that tab navigation works properly.
· Scroll bars should appear only when required.
· If an error message appears on submit, the information already entered by the user should be preserved.
· A title should be displayed on each web page.
· All fields (textbox, dropdown, radio button, etc.) and buttons should be accessible by keyboard shortcuts, and the user should be able to perform all operations using the keyboard.
· Check that dropdown data is not truncated due to the field size, and check whether the data is hardcoded or managed via an administrator.
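Some of these checks can be automated. Below is a minimal sketch, using Python's standard `html.parser`, that flags input fields missing tooltip text; the sample HTML and field names are illustrative assumptions, not taken from any real application:

```python
from html.parser import HTMLParser

class TooltipChecker(HTMLParser):
    """Collects input fields that lack a 'title' (tooltip) attribute."""
    def __init__(self):
        super().__init__()
        self.missing_tooltips = []

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            a = dict(attrs)
            if not a.get("title"):
                # Record the field name (or a placeholder) for the report
                self.missing_tooltips.append(a.get("name", "<unnamed>"))

# Hypothetical page fragment for demonstration
page = """
<form>
  <input name="email" title="Enter your email address">
  <input name="phone">
</form>
"""

checker = TooltipChecker()
checker.feed(page)
print(checker.missing_tooltips)  # fields without tooltip text
```

The same parser pattern extends naturally to other checks in the list, such as verifying that every disabled field carries a `disabled` attribute.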
Functional Testing:
What is Functional Testing?
· Testing the features and operational behavior of a product to ensure they correspond to its specifications.
· Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.
What is the purpose or Goal of Functional testing?
· The goal of Functional testing is to verify whether your product meets the intended functional specifications mentioned in your development documentation.
Functional Test Scenarios:
· Test that all mandatory fields are validated.
· Test that an asterisk is displayed for all mandatory fields.
· Test that the system does not display error messages for optional fields.
· Test that leap years are validated correctly and do not cause errors or miscalculations.
· Test that numeric fields do not accept alphabetic characters and that a proper error message is displayed.
· Test for negative numbers, if allowed, in numeric fields.
· Test that division by zero is handled properly in calculations.
· Test the maximum length of every field to ensure data is not truncated.
· Test that a pop-up message ("This field is limited to 500 characters") is displayed if the data reaches the maximum size of the field.
· Test that a confirmation message is displayed for update and delete operations.
· Test that amount values are displayed in currency format.
· Test all input fields for special characters.
· Test the timeout functionality.
· Test the sorting functionality.
· Test the functionality of the available buttons.
· Test that the Privacy Policy and FAQ are clearly defined and available to users.
· Test that if any functionality fails, the user is redirected to a custom error page.
· Test that all uploaded documents open properly.
· Test that the user can download the uploaded files.
· Test the email functionality of the system.
· Test that JavaScript works properly in different browsers (IE, Firefox, Chrome, Safari, and Opera).
· Test what happens if a user deletes cookies while on the site.
· Test what happens if a user deletes cookies after visiting the site.
· Test that all data inside a combo/list box is arranged in chronological order.
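Several of these scenarios, such as the leap-year and numeric-field checks, reduce to small validation routines that are easy to script. Here is a minimal sketch using only Python's standard library; the field limits are assumptions for illustration:

```python
import calendar
import re

def validate_date(year, month, day):
    """Reject invalid dates such as Feb 29 in a non-leap year."""
    days_in_month = calendar.monthrange(year, month)[1]
    return 1 <= day <= days_in_month

def validate_numeric_field(value, max_len=10):
    """Numeric fields must contain digits only and respect the max length."""
    return bool(re.fullmatch(r"\d+", value)) and len(value) <= max_len

# Leap-year scenarios from the checklist above
assert validate_date(2020, 2, 29)        # 2020 is a leap year
assert not validate_date(2019, 2, 29)    # 2019 is not
assert not validate_date(1900, 2, 29)    # century rule: 1900 is not a leap year

# Numeric-field scenarios: alphabets must be rejected
assert validate_numeric_field("12345")
assert not validate_numeric_field("12a45")
print("all functional checks passed")
```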
Compatibility Testing:
What is Compatibility testing?
· Compatibility testing is used to determine whether your software is compatible with other elements of the system with which it should operate, e.g., browsers, operating systems, or hardware.
What is the purpose or Goal of Compatibility testing?
· The purpose of Compatibility testing is to evaluate how well software performs in a particular browser, operating system, or hardware/software environment.
Compatibility Test Scenarios:
· Test the website in different browsers (IE, Firefox, Chrome, Safari, and Opera) and ensure it displays properly.
· Test that the HTML version being used is compatible with the appropriate browser versions.
· Test that images display correctly in different browsers.
· Test that fonts are usable in different browsers.
· Test that JavaScript code works in different browsers.
· Test animated GIFs across different browsers.
Tool for Compatibility Testing:
Spoon.net: Spoon.net provides access to thousands of applications (browsers) without any installation. This tool helps you test your application in different browsers on a single machine.
Database Testing:
What is Database Testing?
· In Database testing, back-end records inserted through the web or desktop application are tested. The data displayed in the web application should match the data stored in the database.
To perform the Database testing, the tester should be aware of the below mentioned points:
· The tester should understand the functional requirements, business logic, application flow, and database design thoroughly.
· The tester should identify the tables, triggers, stored procedures, views, and cursors used by the application.
· The tester should understand the logic of the triggers, stored procedures, views, and cursors created.
· The tester should identify the tables that are affected when insert, update, and delete (DML) operations are performed through the web or desktop application.
With the above points in mind, the tester can easily write test scenarios for Database testing.
Test Scenarios for Database Testing:
· Verify the database name: The database name should match with the specifications.
· Verify the Tables, columns, column types and defaults: All things should match with the specifications.
· Verify whether the column allows a null or not.
· Verify the Primary and foreign key of each table.
· Verify the Stored Procedure:
· Test whether the stored procedure is installed.
· Verify the stored procedure name.
· Verify the parameter names, types, and number of parameters.
· Test whether each parameter is required.
· Test the stored procedure by omitting some parameters.
· Test that when the output is zero, zero records are affected.
· Test the stored procedure by writing simple SQL queries.
· Test whether the stored procedure returns values.
· Test the stored procedure with sample input data.
· Verify the behavior of each flag in the table.
· Verify that data is properly saved into the database after each page submission.
· Verify the data when DML (insert, update, and delete) operations are performed.
· Check the length of every field: the field length in the back end and front end must be the same.
· Verify the database names of QA, UAT and production. The names should be unique.
· Verify the encrypted data in the database.
· Verify the database size. Also test the response time of each query executed.
· Verify the data displayed on the front end and make sure it is the same in the back end.
· Verify the data validity by inserting the invalid data in the database.
· Verify the Triggers.
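Many of these checks can be scripted directly against the database. Here is a minimal sketch using Python's built-in sqlite3 module; the `users` table and its expected schema are invented for illustration, and a real project would verify against its own specification:

```python
import sqlite3

# In-memory database standing in for the application's back end
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY,
        email TEXT NOT NULL,
        age   INTEGER
    )
""")

# Verify column names, types, and nullability against the spec
expected = {"id": ("INTEGER", 0), "email": ("TEXT", 1), "age": ("INTEGER", 0)}
for cid, name, col_type, notnull, default, pk in conn.execute(
        "PRAGMA table_info(users)"):
    assert expected[name] == (col_type, notnull), f"schema mismatch on {name}"

# Verify that data saved via the application matches what is stored
conn.execute("INSERT INTO users (email, age) VALUES (?, ?)", ("a@b.com", 30))
row = conn.execute("SELECT email, age FROM users").fetchone()
assert row == ("a@b.com", 30)
print("schema and data checks passed")
```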
Security Testing:
What is Security Testing?
Security Testing involves the test to identify any flaws and gaps from a security point of view.
Test Scenarios for Security Testing:
1. Verify that web pages which contain important data like passwords, credit card numbers, secret answers for security questions, etc. are submitted via HTTPS (SSL).
2. Verify that important information like passwords and credit card numbers is displayed in encrypted format.
3. Verify that password rules are implemented on all authentication pages (Registration, Forgot Password, Change Password).
4. Verify that if the password is changed, the user cannot log in with the old password.
5. Verify that error messages do not reveal any sensitive information.
6. Verify that if the user has logged out of the system, or the user's session has expired, the user cannot navigate the site.
7. Verify access to secured and non-secured web pages directly, without login.
8. Verify that the "View Source" option is disabled and not visible to the user.
9. Verify that the user account gets locked out if the user enters the wrong password several times.
10. Verify that cookies do not store passwords.
11. Verify that if any functionality is not working, the system does not display any application, server, or database information; instead, it should display a custom error page.
12. Verify the application against SQL injection attacks.
13. Verify the user roles and their rights. For example, a requestor should not be able to access the admin page.
14. Verify that important operations are written to log files, and that the information is traceable.
15. Verify that session values are displayed in encrypted format in the address bar.
16. Verify that cookie information is stored in encrypted format.
17. Verify the application against brute-force attacks.
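Scenario 12 (SQL injection) is worth a concrete illustration. The sketch below, using Python's sqlite3, shows how a classic injection payload bypasses a login check built by string concatenation, while a parameterized query treats the same payload as literal data. The table and credentials are made up for demonstration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# Classic injection payload a tester would try in the password field
payload = "' OR '1'='1"

# Vulnerable pattern: string concatenation lets the payload alter the query
vulnerable = conn.execute(
    "SELECT * FROM users WHERE username = '%s' AND password = '%s'"
    % ("alice", payload)
).fetchall()

# Safe pattern: a parameterized query treats the payload as a literal string
safe = conn.execute(
    "SELECT * FROM users WHERE username = ? AND password = ?",
    ("alice", payload),
).fetchall()

print(len(vulnerable), len(safe))  # injection succeeds only in the first case
```

A security test passes here only when the application rejects the payload, i.e., behaves like the parameterized version.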
Performance Testing:
What is Performance Testing?
Performance testing is conducted to evaluate the compliance of a system or component with specified performance requirements.
General Test scenarios:
· To determine the performance, stability and scalability of an application under different load conditions.
· To determine if the current architecture can support the application at peak user levels.
· To determine which configuration sizing provides the best performance level.
· To identify application and infrastructure bottlenecks.
· To determine whether the new version of the software adversely impacts response time.
· To evaluate product and/or hardware to determine if it can handle projected load volumes.
How to do Performance testing? Manually or with automation?
Practically, it is not possible to do performance testing manually, because of drawbacks such as:
· A large number of resources would be required.
· Simultaneous actions are not possible.
· Proper system monitoring is not available.
· It is not easy to perform repetitive tasks.
Hence, to overcome these problems, we should use a performance testing tool. Below is a list of some popular testing tools:
· Apache JMeter
· LoadRunner
· Borland Silk Performer
· Rational Performance Tester
· WAPT
· NeoLoad
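To see why tooling matters, here is a minimal Python sketch of what such tools do at their core: fire many virtual users concurrently and aggregate the response times. The `simulated_request` function is a stand-in assumption; in a real test it would issue an HTTP request against the system under test:

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def simulated_request(i):
    """Stand-in for an HTTP call; replace with a real request in practice."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated server processing time
    return time.perf_counter() - start

# Fire 50 "virtual users" through a thread pool - impractical to do by hand
with ThreadPoolExecutor(max_workers=10) as pool:
    response_times = list(pool.map(simulated_request, range(50)))

# Aggregate the results the way a load-testing tool's report would
print(f"avg: {statistics.mean(response_times):.4f}s, "
      f"p95: {sorted(response_times)[int(len(response_times) * 0.95)]:.4f}s")
```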
What is Quality?
Quality is extremely hard to define; it is often simply stated as "fit for use or purpose." It is all about meeting the needs and expectations of customers with respect to the functionality, design, reliability, durability, and price of the product.
What is assurance?
Assurance is a positive declaration about a product or service which gives confidence. It is certainty that a product or service will work well. It gives a guarantee that the product will work without any problems, as per expectations or requirements.
What is Quality Assurance?
Quality Assurance, popularly known as QA, is any activity that ensures an organization is providing the best possible product or service to customers. QA focuses on improving the processes used to deliver quality products to the customer. The organization must ensure that its processes are efficient and effective, as per the quality standards defined for software products.
Quality assurance follows a defined cycle called the PDCA cycle, or Deming cycle. The phases of this cycle are:
· Plan
· Do
· Check
· Act
These steps are repeated to ensure that the processes followed in the organization are evaluated and improved on a periodic basis. Let's look at the steps in detail:
· Plan - The organization should plan and establish process-related objectives and determine the processes required to deliver a high-quality end product.
· Do - Develop and test the processes, and also "do" changes to the processes.
· Check - Monitor the processes, modify them, and check whether they meet the predetermined objectives.
· Act - Implement the actions necessary to achieve improvements in the processes.
An organization must use quality assurance to ensure that the product is designed and implemented with correct procedures. This helps reduce problems and errors in the final product.
What is Quality Control?
Quality Control, popularly abbreviated as QC, is a process used to ensure quality in a product or service. It does not deal with the processes used to create a product; rather, it examines the quality of the "end products" and the final outcome.
The main aim of Quality Control is to check whether the products meet the specifications and requirements of the customer. If an issue or problem is identified, it needs to be fixed before delivery to the customer.
QC also evaluates people's quality-related skill sets and imparts training and certification. This evaluation is required for service-based organizations, and helps provide "perfect" service to customers.
Difference between Quality Control and Quality Assurance?
Sometimes, QC is confused with the QA. Quality control is to examine the product or service and check for the result. Quality assurance is to examine the processes and make changes to the processes which led to the end-product.
Examples of QC and QA activities are as follows:
Quality Control Activities | Quality Assurance Activities
Walkthrough                | Quality Audit
Testing                    | Defining Process
Inspection                 | Tool Identification and Selection
Checkpoint review          | Training of Quality Standards and Processes
The above activities are concerned with the QA and QC of any product, not essentially software. With respect to software:
· QA becomes SQA ( Software Quality Assurance)
· QC becomes Software Testing.
The following table explains the differences between SQA and Software Testing:
SQA | Software Testing
Software Quality Assurance is about engineering processes that ensure quality | Software Testing is to test a product for problems before the product goes live
Involves activities related to the implementation of processes, procedures, and standards. Example - Audits, Training | Involves activities with respect to verification of the product. Example - Reviews, Testing
Process focused | Product focused
Preventive technique | Corrective technique
Proactive measure | Reactive measure
The scope of SQA applies to all products created by the organization | The scope of Software Testing applies to a particular product being tested
Quality Assurance Certifications:
There are several certifications available in the industry to ensure that organizations follow standard quality processes. Customers often make this a qualifying criterion when selecting a software vendor.
ISO 9000
This standard was first established in 1987, and it relates to Quality Management Systems. It helps an organization ensure quality to its customers and other stakeholders. An organization that wishes to be certified to ISO 9000 is audited based on its functions, products, services, and processes. The main objective is to review and verify whether the organization is following the process as expected, and to check whether existing processes need improvement.
This certification helps -
· Increase the profit of the organization
· Improve domestic and international trade
· Reduce waste and increase the productivity of employees
· Provide excellent customer satisfaction
CMMI level
The Capability Maturity Model Integration (CMMI) is a process improvement approach developed especially for software process improvement. It is based on the process maturity framework and is used as a general aid to business processes in the software industry. This model is highly regarded and widely used in software development organizations.
CMMI has 5 levels. An organization is certified at CMMI Levels 1 through 5 based on the maturity of its quality assurance mechanisms.
· Level 1 - Initial: At this stage, the quality environment is unstable. Simply put, no processes have been followed or documented.
· Level 2 - Repeatable: Some processes are followed which are repeatable. This level ensures processes are followed at the project level.
· Level 3 - Defined: A set of processes is defined and documented at the organizational level. Those defined processes are subject to some degree of improvement.
· Level 4 - Managed: This level uses process metrics and effectively controls the processes that are followed.
· Level 5 - Optimizing: This level focuses on continuous improvement of the processes through learning and innovation.
Test Maturity Model (TMM):
This model assesses the maturity of processes in a testing environment. This model also has 5 levels, defined below:
· Level 1 - Initial: No quality standards are followed for testing processes; only ad-hoc methods are used at this level.
· Level 2 - Definition: A defined process is in place; preparation of test strategies, plans, and test cases is done.
· Level 3 - Integration: Testing is carried out throughout the software development life cycle (SDLC), i.e., integrated with development activities, e.g., the V-Model.
· Level 4 - Management and Measurement: Review of requirements and designs takes place during this level, and criteria are set for each level of testing.
· Level 5 - Optimization: Many preventive techniques are used for testing processes, and tool support (automation) is used to improve testing standards and processes.
Conclusion:
Quality Assurance checks whether the product developed is fit for use. For that, an organization should have processes and standards to follow, which need to be improved on a periodic basis. QA concentrates mainly on the quality of the product or service that we provide to customers during or after the implementation of software.
Tips and Tricks to Design your Test Data
Everybody knows that testing is a process that produces and consumes large amounts of data. Data used in testing describes the initial conditions for a test and represents the medium through which the tester influences the software. It is a crucial part of most functional testing. But what actually is test data? Why is it used? You might wonder, "Designing test cases is challenging enough, so why bother about something as trivial as test data?" The purpose of this tutorial is to introduce you to test data and its importance, and to give practical tips and tricks for generating test data quickly. So, let's begin!
What is Test Data? Why is it Important?
Test data is actually the input given to a software program. It represents data that affects or is affected by the execution of the specific module. Some data may be used for positive testing, typically to verify that a given set of input to a given function produces an expected result. Other data may be used for negative testing to test the ability of the program to handle unusual, extreme, exceptional, or unexpected input. Poorly designed testing data may not test all possible test scenarios which will hamper the quality of the software.
What is Test Data Generation? Why test data should be created before test execution?
Depending on your testing environment, you may need to CREATE test data (most of the time) or at least identify suitable test data for your test cases (if the test data is already created).
Typically test data is created in-sync with the test case it is intended to be used for.
Test Data can be Generated -
· Manually
· Mass copy of data from production to testing environment
· Mass copy of test data from legacy client systems
· Automated Test Data Generation Tools
Typically, test data should be generated before you begin test execution, since in many testing environments creating test data takes many pre-steps or test environment configurations, which is very time-consuming. If test data generation is done while you are in the test execution phase, you may exceed your testing deadline.
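For the manual and tool-based options above, even a short script beats typing records by hand. Below is a minimal sketch using only Python's standard library; the record shape (username, email, age) is an assumption chosen for illustration:

```python
import random
import string

random.seed(42)  # fixed seed so test runs are reproducible

def generate_user_record():
    """Produce one synthetic user record for test execution."""
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    return {
        "username": name,
        "email": f"{name}@example.com",
        "age": random.randint(18, 90),
    }

# Prepare a batch of records before test execution begins
test_data = [generate_user_record() for _ in range(100)]
print(test_data[0])
```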
Below are described several testing types together with some suggestions regarding their testing data needs.
Test Data for White Box Testing
In white box testing, test data is derived from direct examination of the code to be tested. Test data may be selected by taking into account the following things:
· It is desirable to cover as many branches as possible; testing data can be generated such that all branches in the program source code are tested at least once
· Path testing: all paths in the program source code are tested at least once - test data can be designed to cover as many cases as possible
· Negative API testing:
o Testing data may contain invalid parameter types used to call different methods
o Testing data may consist of invalid combinations of arguments which are used to call the program's methods
Test Data for Performance Testing
Performance testing is the type of testing performed to determine how fast a system responds under a particular workload. The goal of this type of testing is not to find bugs, but to eliminate bottlenecks. An important aspect of performance testing is that the set of test data used must be very close to the "real" or "live" data used in production. The following question arises: "OK, it's good to test with real data, but how do I obtain this data?" The answer is pretty straightforward: from the people who know it best, the customers. They may be able to provide some data they already have or, if they don't have an existing data set, they may help you by giving feedback on what real-world data might look like. If you are on a maintenance testing project, you can copy data from the production environment into the test bed. It is good practice to anonymize (scramble) sensitive customer data such as Social Security numbers, credit card numbers, and bank details while the copy is made.
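The anonymization step mentioned above can be sketched as follows; the field names (`ssn`, `card`) and masking rules are illustrative assumptions, not a complete anonymization scheme:

```python
import hashlib

def anonymize(record):
    """Scramble sensitive fields before copying production data to a test bed."""
    masked = dict(record)
    # Hash the SSN: joins on the column still work, but the value is unreadable
    masked["ssn"] = hashlib.sha256(record["ssn"].encode()).hexdigest()[:12]
    # Keep only the last four digits of the card number
    masked["card"] = "**** **** **** " + record["card"][-4:]
    return masked

# Hypothetical production row
prod_row = {"name": "Jane Doe", "ssn": "123-45-6789", "card": "4111111111111111"}
print(anonymize(prod_row))
```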
Test Data for Security Testing
Security testing is the process that determines if an information system protects data from malicious intent. The set of data that need to be designed in order to fully test a software security must cover the following topics:
· Confidentiality: All information provided by clients is held in the strictest confidence and is not shared with any outside parties. As a short example, if an application uses SSL, you can design a set of test data that verifies the encryption is done correctly.
· Integrity: Determine that the information provided by the system is correct. To design suitable test data, you can start by taking an in-depth look at the design, code, databases, and file structures.
· Authentication: Represents the process of establishing the identity of a user. Testing data can be designed as different combinations of usernames and passwords, and its purpose is to check that only authorized people are able to access the software system.
· Authorization: Tells what the rights of a specific user are. Testing data may contain different combinations of users, roles, and operations, in order to check that only users with sufficient privileges are able to perform a particular operation.
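For authentication, the combination-style test data described above can be generated systematically. Here is a minimal sketch using `itertools.product`; the credential values, including the injection-style username, are invented for illustration:

```python
from itertools import product

# Hypothetical valid credentials for the system under test
valid_users = {"admin": "Adm1n!pass"}

usernames = ["admin", "ADMIN", "admin ", "", "' OR '1'='1"]
passwords = ["Adm1n!pass", "wrongpass", "", "Adm1n!pass "]

# Every username/password combination becomes one authentication test case,
# paired with its expected outcome (access granted only for the exact pair)
test_cases = [
    (u, p, valid_users.get(u) == p)
    for u, p in product(usernames, passwords)
]

granted = [case for case in test_cases if case[2]]
print(f"{len(test_cases)} combinations, {len(granted)} should be granted access")
```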
Test Data for Black Box Testing
In Black Box Testing, the code is not visible to the tester. Your functional test cases can use test data meeting the following criteria:
· No data: Check the system response when no data is submitted
· Valid data: Check the system response when valid test data is submitted
· Invalid data: Check the system response when invalid test data is submitted
· Illegal data format: Check the system response when test data is in an invalid format
· Boundary Condition Data set: Test data meeting bounding value conditions
· Equivalence Partition Data Set: Test data qualifying your equivalence partitions.
· Decision Table Data Set: Test data qualifying your decision table testing strategy
· State Transition Test Data Set: Test data meeting your state transition testing strategy
· Use Case Test Data: Test Data in-sync with your use cases.
Note: Depending on the software application to be tested, you may use some or all of the above test data creation techniques.
Automated Test Data Generation
In order to generate various sets of data, you can use a gamut of automated test data generation tools. Below are some examples of such tools:
Test Data Generator by GSApps can be used for creating intelligent data in almost any database or text file. It enables users to:
· Complete application testing by inflating a database with meaningful data
· Create industry-specific data that can be used for a demonstration
· Protect data privacy by creating a clone of the existing data and masking confidential values
· Accelerate the development cycle by simplifying testing and prototyping
Test Data Generator by DTM is a fully customizable utility that generates data and tables (views, procedures, etc.) for database testing (performance testing, QA testing, load testing, or usability testing) purposes.
Datatect by Banner Software generates a variety of realistic test data in ASCII flat files, or directly generates test data for RDBMSs including Oracle, Sybase, SQL Server, and Informix.
In conclusion, well-designed test data allows you to identify and correct serious flaws in functionality. The choice of test data must be re-evaluated in every phase of a multi-phase product development cycle. So, always keep an eye on it.