What is accessibility testing?

Accessibility testing is the technique of making sure that your product is usable by people with disabilities and complies with accessibility standards. There can be many reasons, as stated above, why your product needs to be accessibility compliant.
Typical accessibility problems can be classified into the following four groups, each with different access difficulties and issues:
Visual impairments
Such as blindness, low or restricted vision, or color blindness. Users with visual impairments rely on assistive technology software that reads content aloud. Users with weak vision can also enlarge text through browser settings or the operating system's magnifier.
Motor skills
Such as the inability to use a keyboard or mouse, or to make fine movements.
Hearing impairments
Such as reduced or total loss of hearing.
Cognitive abilities
Such as reading difficulties, dyslexia or memory loss.
The development team can make sure that the product is partially accessibility compliant through code inspection and unit testing. The test team then needs to certify that the product is accessibility compliant during the functional testing phase. In most cases, an accessibility checklist is used to certify compliance. This checklist can contain information on what should be tested, how it should be tested, and the status of the product for different access-related problems. A template of this checklist is available here.
For accessibility testing to succeed, the test team should plan a separate cycle for accessibility testing. Management should make sure that the test team has information on what to test and that all the tools needed to test accessibility are available to them.
Typical test cases for accessibility might look similar to the following examples:
  • Make sure that all functions are available via keyboard only (without using the mouse).
  • Make sure that information remains visible when the display setting is changed to a High Contrast mode.
  • Make sure that screen reading tools can read all the available text and that every picture/image has corresponding alternate text associated with it.
  • Make sure that product-defined keyboard actions do not conflict with accessibility keyboard shortcuts.
  • …and many more.
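The alternate-text check above is one of the few that is easy to automate. Here is a minimal sketch, using only Python's standard-library HTML parser, that flags `<img>` tags whose `alt` attribute is missing or empty (the sample page markup is invented for illustration):

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects the src of <img> tags with a missing or empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):  # alt absent, or present but empty
                self.missing_alt.append(attr_map.get("src", "<no src>"))

page = '<img src="logo.png" alt="Company logo"><img src="chart.png">'
checker = AltTextChecker()
checker.feed(page)
print(checker.missing_alt)  # images failing the alt-text check
```

A real accessibility audit needs far more than this one rule, but such a script can cheaply catch regressions on every build before manual screen-reader testing begins.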
There are many tools in the market to assist you in your accessibility testing. No single tool can certify that your product is accessibility compliant; you will always need more than one tool to check the accessibility compliance of your product. Broadly, tools related to accessibility can be divided into two categories.
Inspectors or web checkers
This category of tool lets a developer or tester know exactly what information is being provided to an assistive technology. For example, tools like Inspect Object can be used to see exactly what information is exposed to the assistive technology.
Assistive Technologies (AT)
This category covers the tools that a person with a disability will actually use. To make sure that the product is accessibility compliant, tools like screen readers and screen magnifiers are used. Testing with an assistive technology has to be performed manually to understand how the AT will interact with the product and its documentation. More information on these tools is present in the tools section of this website for you to explore.
Some tips that can be used for accessibility testing:
  • When using a screen reader, be sure to include tests for everything the user would be doing, such as install and uninstall of the product.
  • If a function cannot be performed using an assistive technology, it may still be considered accessible if it provides a command line interface to perform that function.
Most of the time on the Windows platform, accessibility is built into your product using Microsoft Active Accessibility (MSAA). You can get more information about MSAA on this page.
What is MSAA?
MSAA is the abbreviation of Microsoft Active Accessibility. MSAA is a set of dynamic link libraries (DLLs) that provide a COM interface and APIs. It is incorporated into the Microsoft Windows operating system and provides methods for exposing information about UI elements.
MSAA is used by assistive technologies such as screen readers. These tools get information from MSAA and present it to the user. MSAA exposes information in the form of objects: every UI element is treated as a UI object, and information such as Name, Value, Role, State, and Keyboard Shortcut is given to the assistive technology tools. MSAA also supports events to capture state changes in the UI objects, for example when object focus changes.
Today most screen readers expect that applications have implemented MSAA. Implementing MSAA is probably one of the easiest ways to ensure that assistive technologies will work with your product.
The MSAA Software Development Kit contains the following tools:
Inspect Objects
Displays object information for user interface objects.
Event Watcher
Displays events fired by an application when navigating the user interface.
Accessible Explorer
Displays object properties and the relationship hierarchy.
The development team can use these tools to find accessibility-related defects during the development phase. The test team can validate that the product is accessibility compliant using these tools together with assistive technologies.

What is usability testing?


Usability testing is a technique used to evaluate a product by testing it on users. This can be seen as an irreplaceable usability practice, since it gives direct input on how real users use the system. This is in contrast with usability inspection methods where experts use different methods to evaluate a user interface without involving users.
Usability testing focuses on measuring a human-made product's capacity to meet its intended purpose. Examples of products that commonly benefit from usability testing are foods, consumer products, web sites or web applications, computer interfaces, documents, and devices. Usability testing measures the usability, or ease of use, of a specific object or set of objects, whereas general human-computer interaction studies attempt to formulate universal principles.

What is Localization testing?

Localization (L10N) is the process of customizing a software application that was originally designed for a domestic market so that it can be released in foreign markets. This process involves translating all native language strings to the target language and customizing the GUI so that it is appropriate for the target market. Depending on the size and complexity of the software, localization can range from a simple process involving a small team of translators, linguists, desktop publishers and engineers to a complex process requiring a Localization Project Manager directing a team of a hundred specialists. Localization is usually done using some combination of in-house resources, independent contractors and full-scope services of a localization company.
 ______________________________________________________________________________

Localization testing is part of software testing focused on internationalization and localization aspects of software.

Localization testing is the process of verifying the adaptation of a globalized application to a particular culture/locale. Localizing the application requires a basic understanding of the character sets commonly employed in modern software development and an appreciation of the risks associated with them.

Localization testing checks how well the build has been translated into a particular target language. This test is based on the results of globalization validation, where the functional support for that particular locale has already been verified. If the product is not globalized enough to support a given language, you probably will not try to localize it into that language in the first place.

You still have to check that the application you are delivering to a specific market actually works, and the following section shows some of the common areas on which to focus when executing localization testing.

Localization testing covers the translation of the application user interface and the adaptation of graphics for a particular culture/locale. The localization process can also include translating any help content associated with the application into the target language.

Localization of business solutions requires that you implement the correct business processes and practices for a culture/locale. Differences in how cultures/locales conduct business are to a great extent determined by governmental and regulatory requirements; hence, localization of business logic can be a big task.

User interfaces and content files are the items most often altered during localization. Below is a sample localization testing checklist:

- Spelling rules
- Sorting rules
- Upper- and lower-case conversions
- Printers
- Paper sizes
- Operating system
- Keyboards
- Text filters
- Hotkeys
- Mouse
- Date formats
- Measurements and rulers
- Available memory
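Date formats from the checklist above are a good example of what a localization test has to verify. The sketch below hard-codes a hypothetical locale-to-pattern table purely for illustration; a real project would take these patterns from the platform's locale data or a library such as Babel rather than maintaining them by hand:

```python
from datetime import date

# Hypothetical locale-to-pattern table, for illustration only.
DATE_FORMATS = {
    "en-US": "%m/%d/%Y",   # 03/31/2024
    "de-DE": "%d.%m.%Y",   # 31.03.2024
    "ja-JP": "%Y/%m/%d",   # 2024/03/31
}

def format_for_locale(d: date, locale_tag: str) -> str:
    """Render a date in the convention expected by the given locale."""
    return d.strftime(DATE_FORMATS[locale_tag])

release = date(2024, 3, 31)
for tag in DATE_FORMATS:
    print(tag, format_for_locale(release, tag))
```

A localization test case would compare the application's rendered date against the expected pattern for each shipped locale; note that the same day reads very differently in each market.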

Note: It is also a good idea to test that everything you are going to distribute in a local market complies with the local laws and regulations.


What is manual testing?

Manual testing is the oldest and most rigorous type of software testing. It requires a tester to perform test operations on the software by hand, without the help of test automation. Manual testing is a laborious activity that requires the tester to possess a certain set of qualities: to be patient, observant, speculative, creative, innovative, open-minded, resourceful, unopinionated, and skillful.

Manual testing is the process of manually testing software for defects. It requires a tester to play the role of an end user and exercise most of the application's features to ensure correct behavior. To ensure completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases.

The "V" Model is one of the SDLC methodologies.
In this methodology, development and testing take place at the same time with the same kind of information in their hands.
The typical "V" shows the development phases on the left-hand side and the testing phases on the right-hand side.
The development team follows the "do" procedure to achieve the goals of the company, and the testing team follows the "check" procedure to verify the results.




Differences between the V-Model and the Waterfall Model
  1. In the Waterfall Model, the tester's role begins only in the test phase, but in the V-Model the role begins in the requirements phase itself.
  2. The Waterfall Model is a fixed process: you cannot make changes to the requirements or any other phase. In the V-Model, you can make changes to the requirements.
  3. The V-Model is a simultaneous process, which is not the case with the Waterfall Model.
  4. The Waterfall Model is used only when the requirements are fixed, but the V-Model can be used for any type of requirement, including uncertain requirements.

What is Test execution?

Test execution/efficiency metrics give detailed information about test case execution:
  • Total number of test cases
  • Total number of test cases executed
  • Total number of test cases passed
  • Total number of test cases failed
  • Total number of test cases not tested
  • Total number of test cases not applicable (these depend on test cases that were either not tested or failed execution)
  • Total number of blocked test cases
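The counts above can be derived mechanically from per-test-case results. This is a minimal sketch; the status labels and the pass-rate formula (passed over executed) are illustrative conventions, not a standard:

```python
def execution_metrics(results):
    """Summarize execution counts from a mapping of test-case id -> status."""
    counts = {s: 0 for s in ("passed", "failed", "not tested", "blocked")}
    for status in results.values():
        counts[status] = counts.get(status, 0) + 1
    executed = counts["passed"] + counts["failed"]
    pass_rate = round(100 * counts["passed"] / executed, 1) if executed else 0.0
    return {"total": len(results), "executed": executed,
            "pass rate": pass_rate, **counts}

results = {"TC1": "passed", "TC2": "failed", "TC3": "passed",
           "TC4": "not tested", "TC5": "blocked"}
print(execution_metrics(results))
```

Reporting the blocked and not-tested counts alongside the pass rate keeps the metric honest: a high pass rate over a small executed subset tells a different story than the same rate over the full suite.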

Defect reports are among the most important deliverables to come out of test. They are as important as the test plan and will have more impact on the quality of the product than most other deliverables from test. It is worth the effort to learn how to write effective defect reports. Effective defect reports will:
  • Reduce the number of defects returned from development
  • Improve the speed of getting defect fixes
  • Improve the credibility of test
  • Enhance teamwork between test and development

Why do some testers get a much better response from development than others? Part of the answer lies in the defect report. Following a few simple rules can smooth the way for a much more productive environment. The objective is not to write the perfect defect report, but to write an effective defect report that conveys the proper message, gets the job done, and simplifies the process for everyone.


"Priority" is associated with scheduling, and "severity" is associated with standards. "Priority" means something is afforded or deserves prior attention; a precedence established by order of importance (or urgency). "Severity" is the state or quality of being severe; severe implies adherence to rigorous standards or high principles and often suggests harshness, e.g. a severe code of behavior. The words priority and severity do come up in bug tracking. A variety of commercial problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, give the team complete information so developers can understand the bug, get an idea of its 'severity', reproduce it, and fix it. The fixes are based on project 'priorities' and the 'severity' of bugs. The 'severity' of a problem is defined in accordance with the customer's risk assessment and recorded in their selected tracking tool. Buggy software can 'severely' affect schedules, which in turn can lead to a reassessment and renegotiation of 'priorities'.
_____________________________________________________________________


Severity: This is assigned by the tester. The severity of a defect is set based on the issue's seriousness. It can be stated as follows:
Show stopper: 4, Major defect: 3, Minor defect: 2, Cosmetic: 1
The values for these four categories can be defined by the organization based on its own views.
Show stopper: This has the highest severity, as you cannot proceed any further with testing the application.
Major: A major defect in the functionality.
Minor: An error in the functionality of one object under one function.
Cosmetic: Any error in the look and feel of the system, or the improper location of an object (something based on the design of the web page).
Priority: This is set by the team lead or the project lead. The priority is set based on the severity and the time constraints of the module.
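How these two fields interact is easiest to see in a triage queue. This sketch uses the numeric severity scale described above; the priority labels and their numeric mapping are illustrative, since that scheme varies by organization:

```python
# Numeric severity scale as described above; the priority scale is illustrative.
SEVERITY = {"show stopper": 4, "major": 3, "minor": 2, "cosmetic": 1}
PRIORITY = {"high": 3, "medium": 2, "low": 1}

defects = [
    {"id": "D-101", "severity": "cosmetic",     "priority": "high"},  # bad logo
    {"id": "D-102", "severity": "show stopper", "priority": "low"},   # 5-key crash
    {"id": "D-103", "severity": "major",        "priority": "high"},
]

# Fix queue: highest business priority first, severity breaks ties.
queue = sorted(defects,
               key=lambda d: (-PRIORITY[d["priority"]], -SEVERITY[d["severity"]]))
print([d["id"] for d in queue])  # ['D-103', 'D-101', 'D-102']
```

Note that the cosmetic but high-priority logo defect outranks the severe but low-priority crash, which is exactly the severity-versus-priority distinction the text describes.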

Defect Report Template
A good defect report might have the following sections.
Headline
A one-line description of the defect. Remember, a good headline will always be clear, related to the defect, and give some hint of how critical the defect could be.
Product
In most cases, a defect tracking system is used for more than one product, so specifying the appropriate product and version is very important.
Component
Products are normally very complex and can be divided into components. A defect report containing proper information about the component helps managers assign it to the appropriate person.
Defect Type
The defect type could be functionality, specification, regression, UI, etc. This classification can be used to analyze how defects are distributed in the system.
Priority
Priority is the impact of the defect on the business; this field gives an indication of that impact. In some organizations, testers do not specify priority; it is defined by a manager or triage team members.
Severity
Severity is the impact of the defect on the product. For example, if you hit five keys together and your product crashes, it is a very severe defect, but its priority is probably low because people do not normally hit five keys together. Now consider that the company logo is not rendered properly on the splash screen. From a severity point of view this is not severe, as it is not crashing the application or blocking the user from using it. However, it is high priority because it affects the organization's image.
Environment
Proper information about your test execution environment should be present. For example, information about the platform, databases, and runtimes should all be included in your defect report. This information helps the development team reproduce the defect.
Steps
All the steps should be specified clearly. You should not assume that the programmer will understand them; the programmer might be looking at your defect report when you are not around to explain. Your steps should be clear enough for a novice user to follow and reproduce or verify the defect.
Attachments
Whatever additional information is needed for the defect should also be attached. This could be logs generated by the system or application, screenshots, or anything else that might help developers reproduce the defect.
Comments
If you have any additional comments on the defect, you should state them clearly. For example, if you observe or suspect that the defect is related to other defects filed in the same component with little variation, you should specify that.
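The template above can be encoded as a simple structure so that incomplete reports are caught before submission. This is a minimal sketch: the field names are illustrative, not those of any particular tracking tool.

```python
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    """Sketch of the report sections above; field names are illustrative."""
    headline: str
    product: str
    component: str
    defect_type: str
    priority: str
    severity: str
    environment: str
    steps: list
    attachments: list = field(default_factory=list)
    comments: str = ""

    def missing_sections(self):
        """Names of required sections that were left empty."""
        required = ("headline", "product", "component", "defect_type",
                    "priority", "severity", "environment", "steps")
        return [name for name in required if not getattr(self, name)]

report = DefectReport(
    headline="Company logo distorted on splash screen",
    product="AcmeSuite 2.1", component="Installer UI", defect_type="UI",
    priority="high", severity="cosmetic",
    environment="Windows 10, 1920x1080 display",
    steps=["Launch the installer", "Observe the splash screen"],
)
print(report.missing_sections())  # [] -> ready to file
```

Attachments and comments are deliberately optional here, matching the template: they help reproduction but their absence should not block filing.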

Let's look first at a web testing checklist.
1) Functionality Testing
2) Usability testing
3) Interface testing
4) Compatibility testing
5) Performance testing
6) Security testing

Explain Defect Life cycle.

What is Bug/Defect?
The simple Wikipedia definition of a bug is: “A computer bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working correctly or produces an incorrect result. Bugs arise from mistakes and errors, made by people, in either a program’s source code or its design.”
Other definitions can be:
An unwanted and unintended property of a program or piece of hardware, especially one that causes it to malfunction.
or
A fault in a program, which causes the program to perform in an unintended or unanticipated manner.
Lastly, the general definition of a bug is: “failure to conform to specifications”.
If you want to detect and resolve defects at an early development stage, defect tracking should start simultaneously with the software development phases.
We will discuss writing an effective bug report in another article. Let’s concentrate here on the bug/defect life cycle.
Life cycle of Bug:
1) Log new defect
When a tester logs a new bug, the mandatory fields are:
Build version, Submitted on, Product, Module, Severity, Synopsis, and Description (steps to reproduce).
To the above list you can add some optional fields if you are using a manual bug submission template.
These optional fields are: Customer name, Browser, Operating system, File attachments or screenshots.
The following fields remain either specified or blank:
If you have the authority to set the bug Status, Priority, and ‘Assigned to’ fields, then you can specify them. Otherwise, the test manager will set the status and bug priority and assign the bug to the respective module owner.
Look at the following Bug life cycle:

Ref: Bugzilla bug life cycle
The figure is quite complicated, but when you consider the significant steps in the bug life cycle you quickly get an idea of a bug’s life.
Once the bug is logged successfully, it is reviewed by the development or test manager. The test manager can set the bug status to Open, assign the bug to a developer, or defer the bug until the next release.
When the bug is assigned to a developer, he or she can start working on it. The developer can set the bug status to Won’t fix, Could not reproduce, Need more information, or Fixed.
If the bug status set by the developer is either ‘Need more info’ or Fixed, then QA responds with the corresponding action. If the bug is fixed, QA verifies it and can set the bug status to Verified closed or Reopen.
Bug status description:
These are the various stages of the bug life cycle. The status captions may vary depending on the bug tracking system you are using.
1) New: When QA files a new bug.
2) Deferred: If the bug is not related to the current build, cannot be fixed in this release, or is not important enough to fix immediately, the project manager can set the bug status to Deferred.
3) Assigned: The ‘Assigned to’ field is set by the project lead or manager, who assigns the bug to a developer.
4) Resolved/Fixed: When the developer makes the necessary code changes and verifies them, he or she can mark the bug status as ‘Fixed’, and the bug is passed to the testing team.
5) Could not reproduce: If the developer is not able to reproduce the bug using the steps given in the bug report, the developer can mark the bug as ‘CNR’. QA then needs to check whether the bug still reproduces and can reassign it to the developer with detailed reproduction steps.
6) Need more information: If the developer is not clear about the reproduction steps provided by QA, he or she can mark the bug as ‘Need more information’. In this case QA needs to add detailed reproduction steps and assign the bug back to the developer for a fix.
7) Reopen: If QA is not satisfied with the fix, and the bug is still reproducible even after the fix, QA can mark it as ‘Reopen’ so that the developer can take appropriate action.
8) Closed: If the bug is verified by the QA team, the fix is OK, and the problem is solved, QA can mark the bug as ‘Closed’.
9) Rejected/Invalid: Sometimes the developer or team lead can mark a bug as Rejected or Invalid if the system is working according to specifications and the bug is just due to some misinterpretation.
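The statuses above form a small state machine, which can be sketched as a transition table. This is a simplified model for illustration; real trackers such as Bugzilla define more states and more nuanced rules:

```python
# Simplified transition table based on the statuses described above.
TRANSITIONS = {
    "new":                 {"assigned", "deferred", "rejected"},
    "assigned":            {"fixed", "could not reproduce", "need more info"},
    "need more info":      {"assigned"},
    "could not reproduce": {"assigned"},
    "fixed":               {"closed", "reopen"},
    "reopen":              {"assigned"},
    "deferred":            {"assigned"},
    "rejected":            set(),
    "closed":              set(),
}

def move(status, new_status):
    """Apply one life-cycle transition, rejecting illegal ones."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition {status!r} -> {new_status!r}")
    return new_status

status = "new"
for step in ("assigned", "fixed", "reopen", "assigned", "fixed", "closed"):
    status = move(status, step)
print(status)  # closed
```

Encoding the legal transitions this way is what lets a tracking tool prevent, say, a bug jumping straight from New to Closed without anyone verifying a fix.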

Load testing - Load testing is a test whose objective is to determine the maximum sustainable load the system can handle. Load is varied from a minimum (zero) to the maximum level the system can sustain without running out of resources or having transactions suffer excessive (application-specific) delay.
 
 


Stress testing - Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disk, MIPS, interrupts) needed to process that load. The idea is to stress a system to the breaking point in order to find bugs that will make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing data). The load (incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion.

Performance testing - Validates that both the online response time and batch run times meet the defined performance requirements.
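The skeleton of a load test — concurrent users, repeated transactions, latency statistics — can be sketched with the standard library alone. The `transaction` function here is a stand-in that just sleeps; a real load test would call the system under test (for example an HTTP endpoint) at that point:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction(_):
    """Stand-in for one request; a real test would hit the system here."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated service time
    return time.perf_counter() - start

def run_load(users, requests_per_user):
    """Drive the workload with `users` concurrent workers and
    report simple latency statistics."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = sorted(pool.map(transaction, range(users * requests_per_user)))
    return {
        "requests": len(latencies),
        "avg_s": sum(latencies) / len(latencies),
        "p95_s": latencies[int(0.95 * len(latencies)) - 1],
    }

stats = run_load(users=10, requests_per_user=5)
print(stats)
```

Ramping `users` upward run after run, until latency or errors become unacceptable, is what turns this harness into the load test described above; dedicated tools (JMeter, Locust, etc.) do the same thing at much larger scale.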

Smoke testing: Software testing done to determine whether the build can be accepted for thorough software testing. Basically, it is done to check the stability of the build received for software testing.

Sanity testing: After receiving a build with minor changes in the code or functionality, a subset of regression test cases is executed to check whether the changes rectified the software bugs or issues and introduced no new ones. Sometimes, when multiple cycles of regression testing are executed, sanity testing of the software can be done in later cycles, after thorough regression test cycles. If we are moving a build from the staging/testing server to the production server, sanity testing can be done to check whether the build is sane enough to move further to the production server.

Difference between smoke and sanity software testing:

  • Smoke testing is a wide approach in which all areas of the software application are tested without going too deep. Sanity testing, however, is a narrow regression test with a focus on one or a small set of areas of functionality of the software application.
  • The test cases for smoke testing can be either manual or automated. A sanity test, however, is generally performed without test scripts or test cases.
  • Smoke testing is done to check whether the main functions of the software application are working. During smoke testing, we do not go into finer details. Sanity testing, however, is a cursory type of software testing, done whenever a quick round of testing can show that the application is functioning according to business/functional requirements.
  • Smoke testing is done to check whether the build can be accepted for thorough software testing. Sanity testing is done to check whether the requirements are met.
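In practice, both runs are usually just different slices of the same test suite, selected by tag. The inventory below is invented for illustration; in a pytest suite these tags would typically be markers selected on the command line with `-m smoke`:

```python
# Illustrative test inventory; names and tags are hypothetical.
TEST_CASES = [
    {"name": "app_launches",       "tags": {"smoke"}},
    {"name": "login_basic",        "tags": {"smoke", "sanity"}},
    {"name": "login_after_bugfix", "tags": {"sanity"}},
    {"name": "full_report_suite",  "tags": {"regression"}},
]

def select(tag):
    """Return the slice of the suite carrying the given tag:
    smoke = wide build-acceptance slice, sanity = narrow change-focused slice."""
    return [case["name"] for case in TEST_CASES if tag in case["tags"]]

print(select("smoke"))   # ['app_launches', 'login_basic']
print(select("sanity"))  # ['login_basic', 'login_after_bugfix']
```

A case like basic login naturally carries both tags: it belongs in the shallow-and-wide smoke pass and, when a login fix arrives, in the narrow-and-deep sanity pass too.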

_________________________________________________________________________________

Smoke Test:

When a build is received, a smoke test is run to ascertain whether the build is stable and can be considered for further testing.



Smoke testing can be done for testing the stability of any interim build.



Smoke testing can be executed for platform qualification tests.



Sanity testing:

Once a new build is obtained with minor revisions, instead of doing a thorough regression, a sanity test is performed to ascertain that the build has indeed rectified the issues and that no further issues have been introduced by the fixes. It is generally a subset of regression testing, and a group of test cases related to the changes made to the application is executed.



Generally, when multiple cycles of testing are executed, sanity testing may be done during the later cycles, after thorough regression cycles.






Smoke vs. Sanity

1. Smoke: Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach whereby all areas of the application are tested without going too deep.
   Sanity: A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.
2. Smoke: A smoke test is scripted, using either a written set of tests or an automated test.
   Sanity: A sanity test is usually unscripted.
3. Smoke: A smoke test is designed to touch every part of the application in a cursory way. It is shallow and wide.
   Sanity: A sanity test is used to determine that a small section of the application is still working after a minor change.
4. Smoke: Smoke testing is conducted to check whether the most crucial functions of a program work, without bothering with finer details (such as build verification).
   Sanity: Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to show that the application is functioning according to specifications. This level of testing is a subset of regression testing.
5. Smoke: Smoke testing is a normal health check-up on a build of an application before taking it into in-depth testing.
   Sanity: Sanity testing is done to verify whether the requirements are met, checking all features breadth-first.


Retest - Retesting means testing only a certain part of an application again, without considering how the change will affect other parts or the whole application.

Regression testing - Testing the application after a change in a module or part of the application, to verify that the code change does not adversely affect the rest of the application.




Retest
You have tested an application or a product by executing a test case. The end result deviates from the expected result and is reported as a defect. The developer fixes the defect. The tester executes the same test case that originally identified the defect, to ensure that the defect is rectified.

Regression test
The developer fixes the defect. The tester retests the application to ensure that the defect is rectified. He also identifies a set of test cases whose test scenarios surround the defect, to ensure that the functionality of the application remains stable even after the defect is addressed.

    What is retesting?

    Retesting Testing : re-execution of testcases on same application build with different inputs or test data.

    Retesting - Testing functionality after the bug is fixed with different set of inputs

    another definition

    Retesting:testing the same application to make sure that it does not create any defects and if it does creat defects we r going to fix the bugs and then comes regression testing.in other words retesting is performed to ensure that if at all the bugs were found we r going to perform regression testing.this means that retesting and regression testing are performed in a cyclic process or chain process.

    another definition

    Re-Testing--
    New version of the application under test. Once the new version is ready for release, tests are rerun to ensure the previously found faults are actually been fixed. This is re-testing.

    It is rare that a test is used only one time. The reuse of old test is useful- it saves time, shows up changes and it is important for consistent testing.

    It is important to write tests so they can be reused. Test should not have to be re-written each time they are run. Re-writing is not only a waste of time, it provides more opportunity for introducing errors into test scripts.

    When carrying out re-testing it is useful to also consider additional testing for similar and related faults. 
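One way to make a retest reusable, and to cover similar and related faults at the same time, is to drive the same test logic from a table of inputs. A minimal sketch, assuming the fixed fault was in leap-year handling (the function and the cases are hypothetical):

```python
def is_leap_year(year):
    # Fixed code: years divisible by 100 are leap years only if also
    # divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

cases = [
    (2000, True),   # the input that originally failed (divisible by 400)
    (1900, False),  # a similar, related fault: divisible by 100, not 400
    (2004, True),   # ordinary leap year
    (2023, False),  # ordinary non-leap year
]

# The same reusable test logic runs over every input.
for year, expected in cases:
    assert is_leap_year(year) == expected, f"failed for {year}"
```

Adding a related case is then a one-line change to the table rather than a rewritten test.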

    Verification ensures the product is designed to deliver all functionality to the customer. It typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications; this can be done with checklists, issue lists, walkthroughs, and inspection meetings. Verification can be learned with little or no outside help.

    Validation ensures that the functionality, as defined in the requirements, is the intended behavior of the product. Validation typically involves actual testing and takes place after verification is completed.

    Difference between Verification and Validation:
    Verification takes place before validation, not vice versa. Verification evaluates documents, plans, code, requirements, and specifications. Validation, on the other hand, evaluates the product itself. The inputs of verification are checklists, issue lists, walkthroughs, inspection meetings, and reviews. The input of validation is the actual testing of an actual product. The output of verification is a nearly perfect set of documents, plans, specifications, and requirements. The output of validation is a nearly perfect, actual product.


    Verification ensures that the application complies with 
    standards and processes. This answers the question "Did we 
    build the system the right way?"
    
    E.g.: design reviews, code walkthroughs, and inspections.
    
    Validation ensures that the application actually meets the 
    requirements. This answers the question "Did we build the 
    right system?"
    
    E.g.: unit testing, integration testing, system testing, and 
    UAT.
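The split can be illustrated with a toy example (all names hypothetical): verification statically inspects the work products before any code runs, while validation executes the actual product against the stated requirements.

```python
requirements = {
    "REQ-1": "login rejects empty passwords",
    "REQ-2": "login accepts a valid user and password",
}

def verify_requirements(reqs):
    # Verification ("did we build it the right way?"): a static review
    # that every requirement has a well-formed ID and non-empty text.
    return all(rid.startswith("REQ-") and text for rid, text in reqs.items())

def login(user, password):
    # The product under test.
    return bool(user) and bool(password)

def validate_product():
    # Validation ("did we build the right system?"): execute the real
    # product against the requirements above.
    return login("alice", "") is False and login("alice", "secret") is True

assert verify_requirements(requirements)   # review passes before testing
assert validate_product()                  # the built product behaves as required
```

Note that the verification step never runs `login` at all; only validation does.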
     

    What is verification?

    Verification is a preventive mechanism to detect possible failures before testing begins. It involves reviews, meetings, and the evaluation of documents, plans, code, inspections, specifications, etc.

    1. Quality Assurance: A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.
    2. Quality Control: A set of activities designed to evaluate a developed work product.
    3. Testing: The process of executing a system with the intent of finding defects. (Note that the "process of executing a system" includes test planning prior to the execution of the test cases.)
    QA activities ensure that the process is defined and appropriate. Methodology and standards development are examples of QA activities. A QA review would focus on the process elements of a project - e.g., are requirements being defined at the proper level of detail?

    QC activities focus on finding defects in specific deliverables - e.g., are the defined requirements the right requirements?

    Testing is one example of a QC activity, but there are others, such as inspections.

    The difference is that QA is process oriented and QC is product oriented.

    Testing therefore is product oriented and thus is in the QC domain. Testing for quality isn't assuring quality, it's controlling it.

    Quality Assurance makes sure you are doing the right things, the right way.
    Quality Control makes sure the results of what you've done are what you expected.

    What is Quality Assurance?

    Quality assurance is the process of verifying or determining whether products or services meet or exceed customer expectations. Quality assurance is a process-driven approach with specific steps to help define and attain goals. This process considers design, development, production, and service.
    The most popular tool used in quality assurance is the Shewhart Cycle (also known as the Deming cycle, or PDCA), developed by Walter Shewhart and popularized by Dr. W. Edwards Deming. This cycle for quality assurance consists of four steps: Plan, Do, Check, and Act, commonly abbreviated as PDCA.
    The four quality assurance steps within the PDCA model stand for:
    • Plan: Establish objectives and processes required to deliver the desired results.
    • Do: Implement the process developed.
    • Check: Monitor and evaluate the implemented process by testing the results against the predetermined objectives.
    • Act: Apply actions necessary for improvement if the results require changes.
    PDCA is an effective method for monitoring quality assurance because it analyzes the existing conditions and methods used to provide the product or service to customers. The goal is to ensure that excellence is inherent in every component of the process. Quality assurance also helps determine whether the steps used to provide the product or service are appropriate for the time and conditions. In addition, if the PDCA cycle is repeated throughout the lifetime of the product or service, it helps improve internal company efficiency.
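The repeated PDCA cycle can be sketched as a loop. This is a toy model with invented numbers (a quality score out of 100, and the assumption that each "Act" step halves the defect rate); it only illustrates how iterating the cycle drives the result toward the planned objective.

```python
def pdca(defect_rate, target=99, rounds=10):
    history = []
    for _ in range(rounds):
        score = 100 - defect_rate      # Do: run the process and measure the result
        history.append(score)
        if score >= target:            # Check: compare the result to the Plan
            break
        defect_rate //= 2              # Act: assumed improvement halves defects
    return history

print(pdca(8))   # quality improves each cycle: [92, 96, 98, 99]
```

Each pass through the loop is one full Plan-Do-Check-Act iteration; the loop exits once the objective set in the Plan step is met.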

    Simple Wikipedia definition of Bug is: “A computer bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working correctly or produces an incorrect result. Bugs arise from mistakes and errors, made by people, in either a program’s source code or its design.”
    Other definitions can be:
    An unwanted and unintended property of a program or piece of hardware, especially one that causes it to malfunction.
    or
    A fault in a program, which causes the program to perform in an unintended or unanticipated manner.

    Lastly, the general definition of a bug is: “failure to conform to specifications”.

    To detect and resolve defects early in development, defect tracking should start alongside the software development phases.

    We will discuss writing effective bug reports in another article. Let’s concentrate here on the bug/defect life cycle.


     What are the reasons for defects?
    1. Human factor: It is because human beings develop software. It is often said that “to err is human, to forgive divine”. Human beings are not perfect. They are prone to make mistakes. As human beings develop software, it would be foolish to expect the software to be perfect and without any defects in it! Hence there are errors in software. Ironically, we are yet to discover any other non-human agent who could develop software any better than human beings. So we continue to rely on the human intelligence to develop software thus allowing chances of errors in it.

    2. Communication failure: Another common reason for software defects can be miscommunication, lack of communication or erroneous communication during software development! The communication failure can happen at different levels (requirement gathering stage, requirement interpretation/documentation stage, requirement-to-implementation translation stage etc.). Imagine a case where the requirements are vague or incomplete. This could lead to a situation where the programmers would have to deal with problems that are not clearly understood, thus leading to errors. Another scenario of problem with communication may arise when a programmer tries to modify code developed by another programmer.

    3. Unrealistic development timeframe: Let’s face it. More often than not, software is developed under crazy release schedules, with limited or insufficient resources and with unrealistic project deadlines. So it is probable that compromises are made in requirements or design to meet delivery schedules. Sometimes the programmers are not given enough time to design, develop or test their code before handing it over to the testing team. Late design changes can require last-minute code changes, which are likely to introduce errors.

    4. Poor design logic: In this era of complex software systems development, sometimes the software is so complicated that it requires some level of R&D and brainstorming to reach a reliable solution. Lack of patience and an urge to complete it as quickly as possible may lead to errors. Misapplication of technology (components, products, techniques), desire/temptation to use the easiest way to implement solution, lack of proper understanding of the technical feasibility before designing the architecture all can invite errors. Unfortunately, it is not that the people are not smart; it is just that they often don't-have-time/are-not-allowed to think!

    5. Poor coding practices: Sometimes errors are slipped into the code due to simply bad coding. Bad coding practices such as inefficient or missing error/exception handling, lack of proper validations (datatypes, field ranges, boundary conditions, memory overflows etc.) may lead to introduction of errors in the code. In addition to this some programmers might be working with poor tools, faulty compilers, debuggers, profilers, validators etc. making it almost inevitable to invite errors and making it too difficult to debug them!

    6. Lack of version control: If, as a tester, you keep encountering regression bugs that show up at regular intervals, then it is about time to check the version control system (if there is one at all). Concurrent version systems help keep track of all work and all changes in a code base. The complete lack of a version control system to safeguard a frequently changing code base is a sure-fire way to get lots of regression errors. Even if a version control system (e.g. Visual SourceSafe) is in place, errors might still slip into the final builds if the programmers fail to make sure that the most recent version of each module is linked when a new version is built for testing.

    7. Buggy third-party tools: Quite often during software development we require many third-party tools, which in turn are software and may contain some bugs in them. These tools could be tools that aid in the programming (e.g. class libraries, shared DLLs, compilers, HTML editors, debuggers etc.) or some third-party shrink-wrapped plug-ins/add-ons used to save time (like a shopping cart plug-in, a map navigation API, a third party client for 24X7 tech support etc.). A bug in such tools may in turn cause bugs in the software that is being developed.

    8. Lack of skilled testing: No tester would want to accept it, but let’s face it: poor testing does take place across organizations. There can be shortcomings in the testing process that is followed. A lack of seriousness about testing, a scarcity of skilled testers, and testing activity conducted without much importance given to it continue to be major threats to the craft of software testing. Give your team some time to introspect, and I won’t be too surprised if you find it in your own testing team! While you might argue that poor testing does not *introduce errors* into software, actually it does! Poor testing leaves the software in a buggy state. Moreover, in this era of agile software development, poor unit tests (e.g. in TDD) may result in poor coding and hence escalate the risk of errors.

    9. Last-minute changes: Changes made to requirements, infrastructure, tools, or platform can be dangerous, especially if they are being made at the 11th hour of a project release. Actions like database migration or making your software compatible across a variety of OSes/browsers can be complex, and if done in a hurry due to a last-minute change in the requirements, may cause errors in the application. These kinds of late changes may result in last-minute code changes, which are likely to introduce errors.

    Considering that this post has been talking about possible causes of errors, defects and bugs in software, did you notice any error in this post? Did you notice that I have listed only 9 reasons as against the promised 10 in the blog post title? Well, it is a deliberate error. Why don't you come up with the 10th (maybe 11th, 12th... as well) reason why there are defects in software? Feel free to let me (and other readers) hear your reason(s) by commenting below.
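The "poor coding practices" item above can be made concrete. A minimal sketch with a hypothetical function: the buggy version trusts its input and has no boundary validation; the fixed version validates the boundary condition and fails with a clear error instead.

```python
def nth_item_buggy(items, n):
    # Missing validation: indexing past the end raises an unhandled
    # IndexError deep inside the code, with no useful message.
    return items[n]

def nth_item(items, n):
    # Fixed version: validate the boundary condition up front.
    if not 0 <= n < len(items):
        raise ValueError(f"index {n} out of range for {len(items)} items")
    return items[n]

assert nth_item(["a", "b", "c"], 1) == "b"
```

The same pattern applies to the other practices listed: datatype checks, field ranges, and explicit error handling all turn silent faults into visible, testable failures.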


    What is the role of software tester?

    1. Identify the different end users of the system and their interactions with it.
    2. Capture the important scenarios for each end user. It’s good to note that we need to capture the story around the scenario, not just the steps.
    3. Talk to the different stakeholders, including the customers (in case you have access), about how the feature might be used, and capture the scenarios.
    4. Plan towards uncovering the critical issues of the system as early as possible.

    According to the British standard BS 7925-1, a bug is a generic term for a fault, failure, error, or human action that produces an incorrect result.
    Robert Vanderwall offers these formal definitions from IEEE 610.1; the sub-points are his own.

    Mistake (an error): A human action that produces an incorrect result.
    - a mistake made in translation or interpretation
    - many taxonomies exist to describe errors

    Fault: An incorrect step, process, or data definition.
    - the manifestation of the error in the implementation
    - this is really nebulous; it is hard to pin down the 'location'

    Failure: An incorrect result.

    Bug: An informal word describing any of the above. (Not IEEE)
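The chain from error to fault to failure can be shown in a few lines (the function is hypothetical): a human error, a typo, introduced a fault in the code, and executing that fault produces a failure, the observable incorrect result.

```python
def sum_first_n(values, n):
    total = 0
    for i in range(n - 1):   # fault: should be range(n); the error was a typo
        total += values[i]
    return total

# Failure: the observable incorrect result when the fault is executed.
assert sum_first_n([1, 2, 3], 3) == 3   # a correct version would return 6
```

Note that the fault sits silently in the code; only execution turns it into a failure, which is why testing (execution) is what reveals it.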





    To put the above in simple terms: a defect is something that normally works but has something out of spec. A bug, on the other hand, is something that was considered but not implemented, for whatever reason.

    I have seen these arbitrary definitions:
    Error: a programming mistake leads to an error.
    Bug: a deviation from the expected result.
    Defect: a problem in the algorithm leads to failure.
    Failure: the result of any of the above.

    Compare those to these arbitrary definitions:
    Error: when we get the wrong output, e.g. a syntax error or logical error.
    Fault: when everything is correct but we are not able to get a result.
    Failure: when we are not able to insert any input.

    See also http://en.wikipedia.org/wiki/Software_bug





    Also

    Error : A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. See: anomaly, bug, defect, exception, and fault

    Failure: The inability of a system or component to perform its required functions within specified performance requirements. See: bug, crash, exception, fault.

    Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner. See: anomaly, defect, error, exception, fault.

    Fault: An incorrect step, process, or data definition in a computer program which causes the program to perform in an unintended or unanticipated manner. See: bug, defect, error, exception.

    Defect: A mismatch between the product and its requirements.

    Skills required by Testers

    Software testing is a job for skilled people. To test software, a person must have at least:

    1. knowledge of testing tools,
    2. logical and analytical ability, and 
    3. a quality-focused approach to the testing process, 
    along with:

    1. Curiosity
    2. Detail oriented and thorough  
    3. Trouble-shooter
    4. Perseverance
    5. Creativity
    6. "Flexible" Perfectionists
    7. Good judgment
    8. Persuasive
    9. Tact and Diplomacy
    Source: techmanageronline

      Debugging is the consequence of successful testing.    


      Debugging is locating the root cause of an error with the intention of fixing it; this is typically done by the developer.


      Testing is the broader term which includes Static and Dynamic Testing.
      also
      Testing is nothing but finding an error or problem, and it is done by testers, whereas debugging is finding the root cause of that error or problem, and that is taken care of by developers.

      also take into consideration

      An error is found by the developer.
      A defect is found by the tester.
      A bug is a defect found by the tester and accepted by the developer.

      This can be a big debate. Developers testing their own code: what will be the testing output? All happy endings! Yes, the person who develops the code generally sees only the happy paths of the product and does not want to go into much detail.
      The main concern with developer testing is misunderstanding of requirements. If the requirements are misunderstood by the developer, then no matter how deeply the developer tests the application, he will never find the error. The bug introduced at the very first stage will remain until the end, as the developer will see it as functionality.
      Optimistic developers: "Yes, I wrote the code and I am confident it's working properly. No need to test this path, no need to test that path, as I know it's working properly." And right there, developers skip the bugs.
      Developer vs. tester: the developer always wants to see his code working properly, so he will test it to check that it works correctly. But why does the tester test the application? To make it fail in any way possible; the tester will surely test how the application does not work correctly. This is the main difference between developer testing and tester testing.
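The contrast can be sketched with a hypothetical function: the developer's check exercises only the happy path, while the tester also probes boundary values and tries to make the code fail.

```python
def divide(a, b):
    if b == 0:
        raise ZeroDivisionError("b must be non-zero")
    return a / b

# Developer's happy-path check: "I know it works."
assert divide(10, 2) == 5.0

# Tester's checks: boundary values and the failure path.
assert divide(-10, 2) == -5.0
try:
    divide(1, 0)
    made_it_fail = False
except ZeroDivisionError:
    made_it_fail = True
assert made_it_fail   # the tester succeeded in making it fail
```

The names and cases here are invented; the point is simply that the second half of the checks exists only because someone set out to break the code.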


      Also refer to this article:

      Article About Tester Vs Developer


      Testing is required to check that the application satisfies 
      the requirements.
       
      OR
       
      Testing is required to build a quality product.
       
      OR
       
      Testing is required to deliver a quality product. 
      OR
      Testing gives the software development company confidence 
      that the software will work satisfactorily in the client's 
      environment.
      
      Testing improves software quality.
      
      Testing also reduces maintenance cost.
      
      Even the client of the software gains confidence in the 
      software. 
       
      OR 
       
      Without having to spend more time and cost, we produce a quality product. 
       
      It is human psychology that a person can recognize others' 
      faults more easily than his own.
      OR
      
      
      Testing is very important because:
      
        1. To prevent users from finding bugs
        
        2. To keep reliability in your product 
      
        3. To keep quality in your product 
      
        4. To stay in business
       
      OR 
       
      Poor testing can cost anything from money to lives. 
      
      A microprocessor can execute on the order of a million 
      instructions per second. Imagine the application going wrong 
      in the very first instruction it executes.
      
      The Ariane 5 rocket was destroyed due to a type-conversion 
      error, when a 64-bit floating-point value was converted to a 
      16-bit integer. The London Ambulance Service dispatch 
      software failed, contributing to the deaths of several 
      patients who needed the emergency service.
       
       
       
       
       
       