Sunday, January 27, 2013

QC With Load Runner

In general, LoadRunner saves Vuser scripts in a folder. HP QC, however, allows us to save Vuser scripts in the QC database.

Navigation:

QC Explorer---> Browse QC bin URL---> Log in to the corresponding project---> Click close in the welcome screen---> Test plan component---> Select one test in performance testing---> Click Test Script---> Click launch to open Vugen for recording the Vuser script---> Save the script---> Close Vugen.





QC With QTP

In general, QTP saves each test script as a folder. When we integrate QTP with QC, QTP scripts are saved in the QC database.

Navigation:

QC Explorer---> Browse QC bin URL---> Log in to the corresponding project---> Click close in the welcome screen---> Test plan component---> Select a manual test in the functional testing folder---> Click Test Script---> Click launch---> Use recording, descriptive programming, the step generator, or the object repository to generate scripts in QTP---> Click Save---> Close QTP.





HP-QC (Test Engineer Responsibilities)

Test engineers connect to the QC software to share testing documents in a database on the server computer. The database is created by the network admin, and permissions are provided by the test lead. After that, the process below continues:


a) Tester Login:

To log in to the QC software, a test engineer can follow the navigation below:

QC Explorer---> QC bin---> Login.

b) Knowing Responsible Requirements:

Navigation:

QC Explorer---> QC bin---> Enter credentials---> Authenticate login---> Close welcome screen---> Click requirement component---> Click requirement folder---> Identify the list of allocated modules for that tester---> Logout.

c) Knowing responsible testing topics:

Navigation:

QC Explorer---> QC bin---> Enter credentials---> Authenticate login---> Close welcome screen---> Test plan component---> Subject folder---> Identify the list of responsible topics---> Logout.

d) Writing Test Scenarios:

After getting the work allocation information from the test lead, the corresponding tester can start writing test scenarios for the responsible modules and responsible testing topics.

Navigation:

QC Explorer---> QC bin---> GO---> Enter credentials---> Login---> Close welcome screen---> Test plan component---> Select responsible testing topic or subject---> Tests menu---> New test---> Select test type as manual---> Type test scenario in one sentence---> OK---> Requirement coverage---> Select Requirement option---> Select related module name to corresponding test scenario(Follow above navigation to write scenarios for all responsible modules and testing topics)---> Logout.

 e) Implementing Scenarios As Test cases Documents:

After completing the test scenario writing, the test engineer can start test case selection for those scenarios, following black box testing techniques.

Navigation:

QC Explorer---> QC bin---> GO---> Enter credentials---> Login---> Close welcome screen---> Test plan component---> Select responsible testing topic or subject---> Test scenario---> Details---> Enter the required details for the test scenario (Priority, Test setup, Suite id etc., including design steps)---> Click new step icon---> Enter the step description with the expected result---> Create more steps by clicking the new step icon and write cases for the last step---> Click OK---> Click logout.

Follow above navigation to implement cases for all scenarios.

Note: 
1) While writing scenarios and cases for responsible modules, test engineers read the SRS.
2) Test engineers follow black box testing techniques when writing functional test cases for the functional test scenarios of responsible modules.
3) There are no specific techniques for writing test cases for non-functional testing topics like usability testing, performance testing, etc.

f) Test cases Execution On SUT:

After receiving software builds from developers, test engineers can start test cases execution.

Navigation:

QC Explorer---> QC bin---> GO---> Enter credentials---> Login---> Close welcome screen---> Test lab component---> Select root folder---> Test sets menu---> New folder---> Enter the testing topic as the folder name---> Select a cycle for the topic (Smoke, real, etc.)---> Test sets menu---> New test set---> Enter a name for the set---> Click OK---> Execution Grid---> Select related tests into the set---> Run---> Put SUT in a ready state---> Begin Run---> Operate the SUT and compare the test case expected value with the SUT actual value.
In the same way, continue with the next case until the last case. Otherwise, end the run and go to defect reporting.

g) Defect Reporting When a Test Fails:

Navigation:

When a test fails, click the defects component---> New defect---> Fill in the defect report---> Click submit---> Logout.

Note - 1: In the above defect report, the test engineer assigns the defect report to the test lead. The test lead then assigns it to the PM with comments. After review, the PM assigns the defect to the team lead. The team discusses with the developers to perform changes in the software coding. After completion of bug fixing, the team lead changes the defect status to 'Fixed' and assigns that report back to the test lead or PM.

Note - 2: PM, test lead, team lead, testers, developers, and viewers, along with stakeholders, can maintain a permanent login in the QC software and refresh it after a certain period of time.






HP- QC (Network Admin Responsibilities) Providing User-id For New Employee


The Quality Center architecture is used by different categories of people. In general, the network admin is the main person who installs the QC software on the server computer and the "QC Explorer" browser on every client machine in the company network. After completing the installation, the network admin can use any client machine to access the QC software on the server computer through the "QC Explorer" browser.

Providing User-id For New Employee:

To provide a user-id for a new employee, the network admin can follow the navigation below:

QC Explorer---> Browse SA bin---> Click Go---> Log in as network admin---> Site users---> Click new user icon---> Allow the employee to type the details---> Click OK---> Click password---> Allow the employee to enter the password two times.





HP- QC (Network Admin Responsibilities) Creating a New Database


Creating a New Database:

After providing user-ids for new employees, and when a new project is starting, the network admin can follow the criteria below to create a new empty database on the server computer with the help of the QC tool.
In general, the QC tool allows the network admin to create a new database on the server using Oracle or SQL Server. Later, the testing people use that database to store testing documents securely.

To create database, N/w admin can follow below navigation:

QC Explorer--> Browse SA bin--> Click Go--> Log in as network admin--> Site projects--> Select the current project domain--> Create project--> Select the empty project option--> Click next--> Enter the current project name as the database name--> Next--> Select the test lead's username for the current project / product.

Note - 1: The network admin can take the help of the HR department while creating the user-id for a new employee, and will take the help of the project manager (PM) while creating a new empty database on the server for a new project's testing.

Note - 2: Logout is compulsory after completion of transactions by the network admin.






Responsibilities Of a Manual Tester


  1. Understanding all requirements of a project or product.
  2. Understanding testing requirements related to the current project / product.
  3. Assisting the test lead during test planning.
  4. Writing test scenarios for responsible modules and responsible testing topics by using "black box techniques".
  5. Following IEEE 829 standards while documenting test cases with the required test data and test environment.
  6. Involvement in test case review meetings along with the test lead.
  7. Involvement in smoke testing on the initial software build.
  8. Executing test cases using test data on the SUT in the test environment to detect defects.
  9. Involvement in defect reporting and tracking.
  10. Conducting retesting, sanity testing, and regression testing on every modified SUT to close bugs.
  11. Strong SQL skills to connect to the SUT database while testing.
  12. Involvement in final regression testing on the final SUT during test closure.
  13. Joining the developers in acceptance testing to collect feedback on the software from real customers / model customers.
  14. Assisting the test lead in RTM preparation before signoff from the project / product.

Testing stages or Phases (Release Testing)


6) Release Testing 
  • After completion of acceptance, the corresponding project management selects a few developers and a few testers, along with a few hardware engineers, to go to the real customer site or license-purchased customer site for software release. The remaining developers and testers in the team go to the bench (wait for the next project).
  • The developers and testers chosen for the release of the software are called the release team, onsite team, or delivery team.

This team performs the tasks mentioned below at the corresponding customer site:

I. Complete installation of the software.
II. Verify the overall functionality of the software.
III. Support for input devices (keyboard, mouse, ...).
IV. Support for output devices (monitor, printer, ...).
V. Support from the operating system and other system software.
VI. Co-existence with other software.
VII. Support for secondary storage devices (external hard disk, pen drive, CD drive).

The tasks mentioned above are referred to as green box techniques. 
After completing these observations at the customer site, the release team conducts training sessions for the end users or customer-site people. At the end, the release team comes back from the customer site and goes to the bench.




Testing Stages or Phases (Maintenance Testing)


7) Maintenance Testing

During utilization of the software, customer-site people send change requests to the corresponding company or organization. A request for a fix or enhancement is called a Software Change Request (SCR).
To handle those change requests, project management establishes a team called CCB ( Change Control Board ) 
CCB members have both development and testing skills.
Note: Depending on the software failures at the customer site, the project manager can define the efficiency of the testing team.
Testing team efficiency, or bug removal efficiency = A / (A + B), where
A -> number of bugs found in the software during testing.
B -> number of failures of the software during maintenance.
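As a quick sanity check on this formula, here is a small Python sketch (the bug counts are hypothetical):

```python
def bug_removal_efficiency(bugs_in_testing, failures_in_maintenance):
    """Efficiency = A / (A + B), where A = bugs found during testing
    and B = failures reported during maintenance."""
    return bugs_in_testing / (bugs_in_testing + failures_in_maintenance)

# Hypothetical project: 95 bugs caught in testing, 5 failures after release
efficiency = bug_removal_efficiency(95, 5)  # -> 0.95, i.e. 95%
```

Note that without the parentheses around (A + B), the expression A/A+B would evaluate to 1 + B, which is why the formula must be read as A divided by the whole sum.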
This testing involves Grey Box Techniques ( White Box + Black Box ) 

Case Study:

                                               Fig: Case Study of all Testing Stages or Phases

Note: From the above case study, test engineers conduct software testing and are involved in acceptance testing. Some selected testers also go onsite for release testing.




Software Test Closure

After completion of all test cycles, the corresponding test lead conducts a review meeting with selected testers. In the review meeting, the testing team discusses the factors below.

1) Coverage Analysis

a) Module wise coverage.
b) Testing topic wise coverage.

2) Stability of the software:

Stability of the software indicates how bug discovery tapers off: in the first 20% of testing we find about 80% of the bugs, and in the remaining 80% of testing we find only the remaining 20% of the bugs.

3) Calculate Bug density:

Modules in SUT          % of bugs
---------------------------------
      A                     20
      B                     20
      C                     40   ---> Final regression testing
      D                     20
---------------------------------
      Total                100

Note: The testing team re-executes test cases related to high bug density modules on the final SUT to detect "golden bugs". This testing is called "final regression" testing, post-mortem testing, pre-acceptance testing, or confidence testing. If a golden bug is found, the testing team requests the developers to fix it as early as possible, or informs the customer-site people about a later patch release.
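The bug density calculation above can be sketched in Python; the per-module counts and the 30% threshold for "high density" are hypothetical:

```python
# Bugs found per module during testing (hypothetical counts from the table)
bug_counts = {"A": 20, "B": 20, "C": 40, "D": 20}

total = sum(bug_counts.values())
density = {m: 100 * n / total for m, n in bug_counts.items()}  # % of bugs

# Modules above a chosen threshold (say 30%) get final regression testing
high_density = [m for m, pct in density.items() if pct > 30]
```

With these numbers, module C (40% of all bugs) is the one selected for final regression testing.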

4) Analysis of deferred bugs:

This analysis indicates whether the deferred bugs can safely be postponed or not.

The above four points are called the "exit criteria".


  • Acceptance testing: After completion of software test closure, the testing team joins the development team to collect feedback from customer-site people in an alpha and beta testing manner. If any changes are needed in the final software, the development team performs the changes and the testing team reviews those changes for completeness and correctness.
  • Sign off: After completion of the acceptance level, the test lead can roll off testers and the team lead can roll off developers from the current project / product. The PM can roll off the test lead and team lead from the current project or product after receiving the final summary report. 
  • In software testing, the final summary report is called the Requirement Traceability Matrix (RTM).
Requirement Traceability Matrix (RTM):



Example:
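The original example figure is not available here. As a minimal sketch, an RTM can be modeled as a mapping from requirements to their test cases and defects; all requirement, test case, and defect names below are hypothetical:

```python
# Minimal RTM: requirement -> test cases and defects that trace to it
rtm = {
    "REQ-01 Login":          {"tests": ["TC-01", "TC-02"], "defects": ["D-07"]},
    "REQ-02 Money Transfer": {"tests": ["TC-03"],          "defects": []},
    "REQ-03 Logout":         {"tests": [],                 "defects": []},
}

# Coverage check: every requirement should trace to at least one test case
uncovered = [req for req, row in rtm.items() if not row["tests"]]
```

In this sketch, REQ-03 has no test cases, so the RTM immediately exposes a coverage gap before signoff.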











Testing Stages or Phases (Acceptance Testing)


5) Acceptance testing 

After completion of software testing, the corresponding project manager concentrates on acceptance and acceptance testing. In this stage, developers and testers are also involved, either directly or indirectly, to collect feedback from real customers (for a project) or model customers (for a product). 
For this reason, project acceptance is called Alpha Testing and product acceptance is called Beta Testing, as explained below:



From this acceptance testing, project management gets an idea of the changes needed in the developed software w.r.t. the changes requested by the customers. 
If changes are needed, the developers perform the corresponding changes in the software and the testers approve those changes for completeness and correctness.





Testing Stages or Phases (Integration Testing)


3) Integration Testing 

After completion of the related program writing and unit testing, the corresponding programmers can integrate those programs and conduct integration testing. Integration means connecting two programs. While integrating programs, programmers follow one of the approaches mentioned below:

a) Top - Down Approach:
In this approach, programmers integrate the main module with some sub-modules, because the remaining sub-modules are under construction. In place of an under-construction sub-module, programmers can use a STUB.


STUB: A STUB is a temporary program that sends run control back to the main module, in place of an under-construction sub-module.

b) Bottom - Up Approach: 
In this approach, programmers integrate sub-modules without the involvement of the under-construction main module. In place of the under-construction main module, programmers can use a DRIVER.


DRIVER: A DRIVER is a temporary program that is used in place of the under-construction main module.

c) Sandwich Approach: 
The combination of the top-down & bottom-up approaches is called the Sandwich Approach or Hybrid Approach.


Note: The three approaches mentioned above, i.e., top-down, bottom-up, and sandwich, are also called incremental integration approaches.

d) System Approach or Big Bang Approach: 
In this approach, integration starts only after 100% of the coding is complete. (There should be no STUB or DRIVER in this approach.)
  • A driver program is also called a calling program.
  • A stub program is also called a called program.
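A minimal Python sketch of a stub and a driver; the banking module names and the 5% interest rule are hypothetical:

```python
# Top-down: the main module is ready, a sub-module is under construction,
# so a STUB stands in for the missing sub-module.
def interest_stub(amount):
    # Temporary stand-in: returns a canned value and hands control back
    return 0.0

def account_summary(balance, interest_fn=interest_stub):
    # Main module under test; it calls the (stubbed) sub-module
    return balance + interest_fn(balance)

# The main module can be tested even without the real sub-module
summary = account_summary(1000.0)  # -> 1000.0 (stub adds nothing)

# Bottom-up: the sub-module is ready, the main module is under construction,
# so a DRIVER calls the sub-module directly.
def real_interest(amount):
    return amount * 0.05  # completed sub-module (assumed 5% rule)

def driver():
    # Temporary calling program that exercises the sub-module
    return real_interest(1000.0)

result = driver()  # -> 50.0
```

The stub is the called program (the main module invokes it); the driver is the calling program (it invokes the sub-module), matching the two bullets above.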





Testing Stages or Phases (Software Testing)

4) Software Testing

In the Fish model, V model, and Agile model, a separate testing team is available to validate the software w.r.t. customer requirements and expectations.

1. Functional Testing 

In general, a separate testing team's job starts with functional testing to validate the customer requirements in the Software Under Testing (SUT); it is a mandatory testing topic in software testing. Functional testing is classified into the software tests below:

a) GUI Testing (Graphical User Interface testing) / Control flow testing / Behavioral testing 

During this testing, the tester operates each object on every screen to validate that the object is operable; that is, testing whether the software screens behave / respond correctly while navigating (operating) them. 
       In simple terms, GUI testing checks whether the software is operable or not.

b) Input Domain Testing 

During this testing, the testing team validates the correctness of the size and type of every input object on every screen of the Software Under Testing (SUT).
        In simple terms, input domain testing checks whether the software screens are taking correct inputs or not.
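As an illustration, here is a hedged Python sketch of input domain testing using boundary value analysis on a hypothetical "age" field (the 18..60 range and the field itself are assumptions):

```python
def validate_age(value) -> bool:
    """Accept only integers in the range 18..60 (hypothetical field rule)."""
    return isinstance(value, int) and not isinstance(value, bool) and 18 <= value <= 60

# Size check via boundary value analysis: test at and around each limit
cases = {17: False, 18: True, 19: True, 59: True, 60: True, 61: False}
results = {v: validate_age(v) for v in cases}

# Type check: the field must reject non-integer input
rejects_string = validate_age("25") is False
```

Testing exactly at and just beyond the boundaries (17/18 and 60/61) is where size defects most often hide, which is why black box techniques emphasize these values.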

c) Error handling testing

During this test, the testing team operates every screen of the SUT, giving invalid data to objects to get error messages or prompt messages.
        In simple terms, error handling testing checks whether the software screens return error messages or not when we operate those screens in a wrong manner.

d) Manipulations testing 

During this test, the testing team operates each screen of the SUT, giving valid data to objects to get exact outputs w.r.t. the given inputs.
         In simple terms, manipulations testing checks whether the software screens are providing correct outputs or not with respect to the given inputs.

e) Database testing ( back end ) 

Testing whether the software screens are storing and manipulating data in the database corresponding to the operations performed in the front end; that is, perform operations on the front end and observe the back end.

         In simple terms: is new data correctly inserted, and is existing data correctly modified w.r.t. the new data in the database?
  • Data Validation: New data is correctly stored or not. 
  • Data Integrity: Due to the new data storage, existing related data got modified or not.
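A small sketch of data validation and data integrity checks, using an in-memory SQLite database in place of a real back end (the table and values are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")

# Front-end operation: open an account with an initial deposit
cur.execute("INSERT INTO accounts (id, balance) VALUES (1, 500.0)")
conn.commit()

# Data validation: was the new row stored correctly in the back end?
stored = cur.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]

# Data integrity: after a deposit, the related existing data must update
cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 1")
conn.commit()
updated = cur.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
```

In a real test the insert and update would be driven through the application's screens, and only the observation would happen directly in the database.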

Note: Among the above functional testing topics, the first four sub-tests are considered front-end testing or screens testing. The database testing topic is called back-end testing.

f) Data volume testing 

This testing is also called "memory testing" or "data capacity testing". During this test, the testing people insert model data or real data from the front-end screens into the back-end database of the SUT until the corresponding database is full.


g) Intersystem Testing or SOA Testing

Sometimes our SUT connects to other software via a network to share an external database. Testing this type of service sharing is called intersystem testing / SOA testing / web services testing / interoperability testing / end-to-end testing.


  • Due to globalization, companies are using interconnected software to share resources. 
  • Software that provides services to other software is called service-oriented software. 
  • Software that uses the services of other software is called a service utilizer. 
Difference between Intersystem Testing and Integration Testing 
  • Testing module-to-module connections between two different software systems is intersystem testing. 
  • Testing module-to-module connections within the same software is integration testing. 
2. Non - Functional Testing 

After completion of functional testing, the corresponding testing team concentrates on non-functional testing to validate the customers' expectations; that is why this type of testing is also called "expectations testing". 
         During this testing, the team concentrates on characteristics of the software like usability, performance, compatibility, installation, etc. For this reason, non-functional testing is also called "characteristics testing". 
        Non-functional testing cannot be performed without complete development of the software; we need a whole system to test. That is why non-functional testing is also called "system testing". 
In non-functional testing, we have the sub-tests mentioned below.

a) Usability Testing 

During this test, the testing team concentrates on the following expectations from the corresponding SUT (Software Under Testing): 
* Ease of use. 
* Look and feel. 
* Short navigations. 
* Understandable HELP (user manuals - HELP documents of that software).

b) Compatibility Testing 

This testing is also called portability testing. 
During this test, the testing team operates the SUT on various customer-expected platforms. 
Here, a platform means the computer operating system, browsers, and other system software.

c) Hardware Configuration Testing 

This testing is also called hardware compatibility testing. 
During this test, the testing team confirms whether our SUT supports hardware devices belonging to different technologies or not. 
Examples: 
Different types of networks 
Different types of printers 
Different types of scanners
Different types of fax machines, ...

Example - 1: HW configuration testing (diagram)

Example - 2: HW configuration testing (diagram)
d) Performance Testing

To calculate or find the speed of a software system, the testing team applies load on the SUT.


The above performance testing is classified into the sub-tests below:

Load testing: Execution of the SUT under the customer-expected configuration and customer-expected load (number of concurrent users), to estimate the speed of processing, is called "load testing". We can use the LoadRunner tool to apply this load.

Stress testing: Execution of the SUT under the customer-expected configuration and more than the customer-expected load, to estimate the peak load (the maximum load the project can handle), is called "stress testing".

Spike testing: Execution of the SUT under the customer-expected configuration with sudden huge loads is called "spike testing".

Endurance testing: Execution of the SUT under the customer-expected configuration and customer-expected load, repeatedly over a long time without any memory leakage, is called endurance testing, soak testing, or longevity testing.
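A toy sketch of applying concurrent-user load in Python; the transaction function is a stand-in (it just sleeps), not a call to a real SUT, and the 50-user load is an assumed customer expectation:

```python
import threading
import time

def transaction():
    # Stand-in for one user operation on the SUT (hypothetical);
    # a real load test would drive the actual application
    time.sleep(0.01)

def run_load(concurrent_users):
    # Launch the expected number of concurrent virtual users at once
    threads = [threading.Thread(target=transaction)
               for _ in range(concurrent_users)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start  # total processing time under load

elapsed = run_load(50)  # assumed customer-expected load: 50 concurrent users
```

Stress testing would raise the user count beyond the expected load until the response time (or error rate) breaks down; endurance testing would repeat run_load for hours while watching memory usage.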

e) Security Testing: This testing is also called penetration testing. During this testing, the testing team conducts the three sub-tests below on the SUT:
i) Authorization / Authentication testing: Checks whether the software allows valid users and prevents invalid users or not. 

ii) Access control testing: Checks whether a valid user has the permissions to access certain functionality or not.

iii) Encryption / Decryption testing: Checks whether our client process and server process are using the expected encryption and decryption process or not. To conduct this test, one needs hacking knowledge, and organizations may recruit people with hacking knowledge under a working bond (trusted people).
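To illustrate the round-trip check behind encryption/decryption testing, here is a toy XOR cipher in Python. This is deliberately insecure and purely illustrative; a real test would target the SUT's actual cipher, and the message and key are hypothetical:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher for illustration only -- NOT secure
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"transfer 500 to account 42"  # hypothetical client payload
key = b"secret"                          # hypothetical shared key

ciphertext = xor_cipher(message, key)

# The payload on the wire must not match the plaintext...
plaintext_leaked = (ciphertext == message)

# ...and the server side must recover the original message exactly
recovered = xor_cipher(ciphertext, key)
```

The two observations (the wire payload differs from the plaintext, and the receiver recovers it exactly) are the essence of the test, regardless of which real cipher the SUT uses.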


h) Multi-language Testing: During this test, the testing team validates SUT functionality by entering inputs in various languages, when the software screens are developed in Java, .Net, or other Unicode technologies. 
There are two ways in multi-language testing:
1) Localization 
2) Globalization

i) Installation Testing: During this test, the testing team checks whether the software is easy to install on a customer-expected configured system or not.


  • "setup" program execution to start installation. 
  • Easy screens during installation. 
  • Occupied disk space after installation. 
  • Easy to uninstall.


Note - 1: Non-functional tests, except usability testing, are expensive to conduct.

Note - 2: To conduct security testing, testers need hacking knowledge; so this test is often not conducted by the regular testing team.

Note - 3: To conduct multi-language testing, testers need knowledge of multiple languages, or of at least one language conversion tool; so this test is often not conducted by the regular testing team.

Note - 4: (Parallel testing) When our software is a product, the testing team compares our software with previous versions of the same software, and with competitive products in the market, to find weaknesses and strengths. This testing is called competitive testing or comparison testing, and is also known as "parallel testing".

Note - 5: (Compliance testing) During software testing, management tests the team's work to confirm whether the testing team finished testing on time, and whether the developed software meets the testing standards and quality. This managerial testing is called "compliance testing". 
For this reason, management people work as QA people and testers work as QC people.






Creating Database

Creating Database Using Navigation

Open the Management Studio tool and connect to the required server. The navigation below explains the creation of a database:

Right click on the Databases node---> Select New Database---> Provide a name for the database---> Click the OK button---> Click refresh.

Creating Database Using Command

To create a database using command we can follow the below syntax:
Syntax: Create database <Database name>
Example: Create database Employee

Note: SQL language is not case sensitive

System stored procedure: The system stored procedure sp_helpdb can be used to retrieve the structural details of a specified database. If we do not provide any database name, this command returns the list of all databases on the current server.

Note: A maximum of 32,767 databases can be created per instance of SQL Server.
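To illustrate that SQL keywords are not case sensitive, here is a runnable sketch using Python's built-in SQLite in place of SQL Server (the Employee table is hypothetical; note that SQLite cannot demonstrate CREATE DATABASE itself, since each file is one database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The same statement works in any keyword casing -- SQL is case insensitive
conn.execute("CREATE TABLE Employee (id INTEGER)")
conn.execute("insert into EMPLOYEE (ID) values (1)")
conn.execute("InSeRt InTo Employee (id) VaLuEs (2)")

count = conn.execute("SELECT COUNT(*) FROM employee").fetchone()[0]  # -> 2
```

Keyword case insensitivity holds in SQL Server as well; whether *data* comparisons are case sensitive depends on the server's collation settings, which is a separate concern.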




HP-QC (Test lead Responsibilities) Defining Testing Levels


Defining Testing Levels:

In defining testing levels, test lead can follow the below navigation:

QC Explorer---> Browse QC bin URL---> Click GO---> Enter user id and password by test lead---> Click Authenticate to get domain name and the corresponding project name---> Click login---> Click close in the welcome screen---> Management components---> Releases folder--->Releases menu---> New Releases Folder --->Enter project name as folder name---> Releases menu -->New Release---> Enter build version as release name---> Click OK---> Releases menu---> New cycle---> Enter testing level as cycle name(Follow above navigation to create cycles for all build versions)---> Do logout.

Example:Project name
                      |
               Version 1.0
                      |
              1) Smoke test
              2) Real test
                      |
               Version 2.0
                      |
              1) Smoke test
              2) Re test
              3) Sanity test
              4) Regression test
              5) Further real testing
                     .
                     .
                     .
                Version N.0
                      |
              1) Smoke test
              2) Re test
              3) Sanity test
              4) Regression test
              5) Final regression testing (Post mortem)




HP-QC (Test lead Responsibilities) Allocating Work As Testing Topics Wise:

After completing the module-wise work allocation to the testers, the corresponding test lead can allocate work to the testers testing-topic wise.

Allocating Work As Testing Topics Wise:

To allocate testing topics, the test lead can follow the navigation below:

QC Explorer---> Browse QC bin URL---> Click GO---> Enter user id and password by test lead---> Click Authenticate to get domain name and the corresponding project name---> Click login---> Click close in the welcome screen---> Test plan component---> Subject folder---> Enter testing topic as folder name---> OK (Follow above navigation to store all responsible testing topics names as folders)---> Logout.

Example: Subject
Functional testing                 ----- Pradeep
Usability testing                   ----- Srikanth
Compatibility testing            ----- Ramu
H/W configuration testing    ----- Vamshi
Performance testing            ----- Geethika
Installation testing               ---- Pradeep





HP-QC (Test lead Responsibilities) Allocating Work To Testers

After selecting employees, assigning the different roles, and choosing permissions for each role, the corresponding test lead can assign work to the selected testers as TD admin.

Allocating Work To Testers (Module wise)

To allocate modules, the test lead can follow the navigation below:

QC Explorer---> Browse QC bin URL---> Click GO---> Enter user id and password by test lead---> Click Authenticate to get domain name and the corresponding project name---> Click login---> Click close in the welcome screen---> Requirement components---> Requirement  folder--->Requirement menu---> New requirement --->Select requirement type as testing ---> Enter module name as requirement name---> Click OK---> Select responsible user name as Author---> Click Submit---> Close.

Follow above navigation to allocate all modules.

Example: Requirement Components
Login module      ----- Pradeep
Cheque Deposit ----- Srikanth
Money Transfer  ----- Ramu
Mini statement    ----- Vamshi
Bills pay               ----- Geethika
Logout                 ----- Pradeep

In this way, all the modules in the project are assigned to the team members of the current project by the test lead.








HP-QC (Test lead Responsibilities) Permissions To Roles

After completion of employee selection for current project with different roles, corresponding test lead can give the permissions to those roles as TD admin.

Permissions To Roles:

To give permissions to the different roles, the test lead can follow the navigation below:

QC Explorer---> Browse QC bin URL---> Click GO---> Enter user id and password by test lead---> Click Authenticate to get domain name and the corresponding project name---> Click login---> Click close in the welcome screen---> Tools menu---> Customize---> Module access---> Select and deselect different permissions to different roles---> Click return---> Do logout.






HP-QC (Test lead Responsibilities) Allocating Employees To Current Project

In general, the test lead works as the administrator for the testing-related database on the server. So, the test lead can do more tasks in the QC software.

Allocating Employees To Current Project

For allocating employees to the current project, Test lead will follow the below navigation:

QC Explorer---> Browser QC bin URL---> Go---> Enter user name and password---> Click authenticate by test lead to get domain name and corresponding project name---> Click login---> Click close in welcome screen---> Tools menu---> Customize---> Project users---> Add users button---> Click OK to get list---> Select user name and provide role(PM,TD admin,QA tester,Developer, viewers)---> Follow above navigation to add different roles---> Click return---> Do logout.

Note: In general, the test lead is the only person working as TD admin to handle administrator tasks on the database in the server computer.





Software Test Execution

a) Formal Meeting:

The software test execution process starts with a formal meeting between the corresponding developers and testers, along with the test lead, team lead, business analyst, system analyst, and technical analyst. In this meeting, people discuss software build release, defect reporting, and build version control. In general, developers place the software build in a folder structure called the "soft base". In that folder, developers locate the SUT.




In general, testers use MS Outlook or Lotus Notes to forward defect reports through the local mailing system, as follows:




From the above diagram, the test engineer forwards any document to the test lead, then the test lead forwards it to the PM. The PM then forwards those documents to the team lead / project lead and from there to the programmers. So here the PM acts as the link between the testing team and the development team.

In general, developers place modified software builds in the soft base on the server with version numbers. After fixing the reported defects, developers release a new build under a new version number so that testers can distinguish the old and new versions of the build. This use of version numbers is called "software build version control".
Releasing the software to customers version by version is called "software release version control".
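The build-numbering idea can be sketched in code. The "B&lt;major&gt;.&lt;minor&gt;" tags below are a hypothetical example, not a QC convention:

```python
def parse_build_version(tag):
    """Split a build tag like 'B2.1' into a comparable tuple of integers.
    The 'B<major>.<minor>' scheme is an assumed example scheme."""
    return tuple(int(part) for part in tag.lstrip("B").split("."))

def is_newer_build(candidate, current):
    """True if 'candidate' is a later build than 'current'."""
    return parse_build_version(candidate) > parse_build_version(current)

# Testers can use the comparison to pick the latest build from the soft base.
builds = ["B1.0", "B1.1", "B2.0"]
latest = max(builds, key=parse_build_version)  # -> "B2.0"
```

Comparing tuples of integers (rather than raw strings) avoids mistakes like "B10.0" sorting before "B2.0".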

b) Defining Test Execution Levels:

On the first build version, a sanity/smoke test is conducted first; if it passes, real testing is conducted on the SUT. If defects are found, they are reported to the development team for fixing. After fixing (modifying), a new build version is released. On the second build version, re-testing is done on the previously failed cases or scenarios, and then regression testing is done to check whether the modifications to the previous build affected any other modules. If defects are found again, they are reported to the development team, and the same process continues until the last build version.

c) Checking Test Environment:

In general, the hardware team establishes the test environment for the testers with the required hardware and software. The testing team approves the established environment before going into the test execution process.

d) Smoke Testing:

After downloading or launching the SUT from the server, the testing team operates that SUT build on one system to check whether the build is working or not.
This smoke testing is mandatory after receiving every SUT build from the developers. To do it, the testing team uses a fixed set of test cases related to the main modules of the SUT.

If smoke testing fails, the testing team rejects the SUT build and waits for a stable, testable build. For this reason, smoke testing is also called "testability testing", "tester acceptance testing", or "build verification testing".

e) Real Testing:

When the software build is testable, the testing team members divide the work among themselves and then download / launch the SUT onto their own systems to conduct real testing.
Every test engineer opens the corresponding test cases file and the SUT.
The tester operates the SUT and compares each test case's expected value with the SUT's actual value. If both are the same, the tester goes to the next case. If all cases pass, the next scenario's cases are executed. If all scenarios pass, the next module's scenario cases are executed.
If all modules pass, the tester goes to the next testing topic (functional and non-functional).
If all testing topics pass, the project goes to software test closure.
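The case-by-case comparison above can be sketched as a small loop. The SUT here is a hypothetical callable that is supposed to double its input:

```python
def execute_case(case, actual_value):
    """Compare a test case's expected value with the SUT's actual value."""
    return "Passed" if case["expected"] == actual_value else "Failed"

def run_suite(cases, sut):
    """Run cases in order; stop at the first failure so a defect report
    can be prepared, mirroring the execution flow described above.
    'sut' is any callable mapping a case's input to an actual value."""
    results = []
    for case in cases:
        status = execute_case(case, sut(case["input"]))
        results.append((case["id"], status))
        if status == "Failed":
            break  # stop execution and start the defect report
    return results

# Hypothetical SUT: should double its input, but mishandles 3.
sut = lambda x: x * 2 if x != 3 else 5
cases = [
    {"id": "TC_01", "input": 1, "expected": 2},
    {"id": "TC_02", "input": 3, "expected": 6},
    {"id": "TC_03", "input": 4, "expected": 8},  # never reached after the failure
]
results = run_suite(cases, sut)
```

Whether execution really halts at the first failure (as sketched here) or continues with the remaining cases is a process choice; this sketch follows the text above, where the tester stops to write the defect report.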

Note - 1: While conducting real testing, test engineers arrange the test cases in order. An ordered group of related test cases is called a "Test Suite" (module level), "Test Batch", "Test Build", "Test Chain", or "Test Set".

Note - 2: While conducting real testing, test engineers prepare a daily report called a "Test Log". This document is prepared in the IEEE 829 format.

In the test log:
Passed means the expected value equals the actual value.
Failed means the expected value does not equal the actual value.
Blocked means the case's execution was postponed to a future build version release.
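A minimal sketch of one test log row with the three statuses above. The field layout is an assumption for illustration; IEEE 829 prescribes more fields than shown here:

```python
from datetime import date

STATUSES = {"Passed", "Failed", "Blocked"}

def log_entry(case_id, expected, actual, blocked=False):
    """Build one test log row: Blocked if execution was postponed,
    otherwise Passed/Failed by comparing expected and actual values."""
    if blocked:
        status = "Blocked"
    else:
        status = "Passed" if expected == actual else "Failed"
    return {"date": date.today().isoformat(), "case": case_id, "status": status}

# Hypothetical day's log: one pass, one fail, one case postponed.
test_log = [
    log_entry("TC_01", "Welcome page", "Welcome page"),
    log_entry("TC_02", "Error shown", "No error"),
    log_entry("TC_03", None, None, blocked=True),  # deferred to a future build
]
```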

f) Defect Report:

When a test case's expected value is not equal to the SUT's actual value, the test engineer stops test execution and starts preparing a defect report. A defect is also called an issue / incident. The defect report is also prepared in the IEEE 829 format, as follows:

1) Defect ID: 

Unique name or number for a defect.

2) Defect description: 

About the defect (what the inputs are, what the outputs are, what the expected result is, what the actual result is, etc.).

3) Build version ID: 

Version number of SUT in which defect was detected.

4) Feature or Module: 

Name of module in which defect was detected.

5) Test cases Doc ID: 

ID of the test cases document during whose execution the defect was detected.

Note: Points 3, 4, and 5 indicate the origin of the defect.

6) Severity:

The seriousness of the defect w.r.t. the tester.

Example:
High / Critical / Showstopper ---> Testing cannot continue until this type of defect is fixed.
Medium / Major ---> Testing can continue, but fixing this type of defect is mandatory.
Low / Minor ---> Testing can continue, and fixing this type of defect is not mandatory.

7) Priority: 

The importance of fixing the defect w.r.t. the customer (High, Medium, Low).

8) Reproducible:

A failed test case is executed more than once to check whether the defect is reproducible.
Case - 1: If the defect is reproducible, attach the corresponding failed test case document.

Case - 2: If the defect is not reproducible, attach the corresponding failed test case document along with a screenshot.

9) Test Environment: 

The hardware and software used while detecting this defect in the SUT.

10) Status:

Indicates the status of the defect: New or Re-open.
New means the defect is being reported for the first time.
Re-open means the defect is being re-reported.

11) Detected by:

Name of the tester who detected this defect.

12) Assigned to:

The Defect Tracking Team (DTT) (test lead + PM + team lead).

13) Suggested fix: Suggestions to developers to fix this defect.

Note: After preparing a defect report like the above, the tester sends it to the DTT via email using MS Outlook / Lotus Notes / the company website.
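The thirteen fields above can be collected into a simple record. This is a sketch mirroring the list above, not HP QC's actual defect schema, and all sample values are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DefectReport:
    # Field names mirror the IEEE 829-style report described above.
    defect_id: str
    description: str
    build_version_id: str      # points 3-5 give the origin of the defect
    feature_or_module: str
    test_case_doc_id: str
    severity: str              # High / Medium / Low, w.r.t. the tester
    priority: str              # High / Medium / Low, w.r.t. the customer
    reproducible: bool
    test_environment: str
    status: str = "New"        # New on first report, Re-open on re-report
    detected_by: str = ""
    assigned_to: str = "DTT"   # Defect Tracking Team
    suggested_fix: str = ""
    attachments: List[str] = field(default_factory=list)

    def attach_evidence(self):
        """Case 1: reproducible -> failed test case doc only.
        Case 2: not reproducible -> the doc plus a screenshot."""
        self.attachments = [self.test_case_doc_id]
        if not self.reproducible:
            self.attachments.append("screenshot.png")

# Hypothetical defect found during login testing.
report = DefectReport(
    defect_id="DEF_101",
    description="Login accepts a blank password",
    build_version_id="B1.0",
    feature_or_module="Login",
    test_case_doc_id="TC_DOC_07",
    severity="High",
    priority="High",
    reproducible=False,
    test_environment="Windows 7, IE 8",
    detected_by="tester1",
)
report.attach_evidence()
```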

g) Defect Tracking:

After receiving a defect report from a tester, the DTT reviews that defect report and decides whether the defect is acceptable or rejectable.
Depending on what the defect is traced back to, one of the following processes is followed:

h) Test case Related Defect Report:

If a tester-reported defect is confirmed as 'test case' related, then the below process is followed.

i) Test data Related Defect Report:

If a tester-reported defect is confirmed as 'test data' related, then the below process is followed.


j) Test Environment Related Defect Report:

If a tester-reported defect is confirmed as 'test environment' related, then the below process is followed.

k) Coding Related Defect Report or Bug Fixing:

If a tester-reported defect is confirmed as 'coding' related, then the below process is followed.

After receiving the Build Release Note (BRN) / Software Release Note (SRN) from the developers, the test lead forwards it to the testers. The testers whose modules are related to the modifications start the process below; the remaining, unrelated testers continue further testing on the old build version.

Step - 1: Conduct smoke testing on the modified SUT by executing the fixed set of test cases selected previously.

Step - 2: If smoke testing passes, the modified build is working correctly, and the related testers re-execute the previously failed test cases; this is called re-testing.

Step - 3: If re-testing passes, the related testers re-execute the most closely related previously passed test cases; this is called sanity testing.

Step - 4: If sanity testing passes, the related testers execute all previously passed related cases; this is called regression testing.

Step - 5: If regression testing passes without any side effects, all related and unrelated testers continue further testing of their responsible modules and testing topics on the modified SUT.
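Steps 1-5 can be sketched as a gate at each level: a level runs only if the previous one passed, and any failure goes back to the developers. The run_level callable is a stand-in for actually executing that level's test cases:

```python
# Levels from Steps 1-4, in the order they must pass.
LEVELS = ["smoke", "re-testing", "sanity", "regression"]

def verify_modified_build(run_level):
    """Run the verification levels in order on a modified build.
    Returns the levels executed and whether all of them passed."""
    executed = []
    for level in LEVELS:
        executed.append(level)
        if not run_level(level):
            return executed, False  # report back to the developers
    return executed, True  # Step 5: everyone continues on the modified build

# Hypothetical build: everything up to sanity passes, but regression fails,
# i.e. the fix introduced a side effect in a related module.
executed, all_passed = verify_modified_build(lambda level: level != "regression")
```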

BUG Life Cycle:



The initial status of every bug is New, and the final status is Closed or Deferred.
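The life cycle can be sketched as a state machine. Only the initial status (New) and the final statuses (Closed, Deferred) come from the text above; the intermediate statuses and transitions are a commonly used assumption:

```python
# Allowed transitions in a simplified bug life cycle (intermediate
# states are an assumed, commonly used set, not taken from the post).
TRANSITIONS = {
    "New":      {"Open", "Deferred"},
    "Open":     {"Fixed", "Deferred"},
    "Fixed":    {"Re-test"},
    "Re-test":  {"Closed", "Re-open"},
    "Re-open":  {"Fixed"},
    "Closed":   set(),   # final
    "Deferred": set(),   # final
}

def advance(status, next_status):
    """Move a bug to next_status, rejecting illegal jumps."""
    if next_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition {status} -> {next_status}")
    return next_status

# Walk one bug from first report through to closure.
status = "New"
for step in ["Open", "Fixed", "Re-test", "Closed"]:
    status = advance(status, step)
```

Encoding the transitions as a table makes illegal moves (e.g. reopening a Deferred bug without review) fail loudly instead of silently corrupting the defect's history.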

