Monday, December 15, 2008

Post-mortem

What is the first thing that comes to your mind after reading the title?
A dead body being cut open with a scalpel, and a doctor trying to get into the blood and veins of the body. In the field of software testing too, a post-mortem is nothing less than what it means literally. For those who are not aware of what a post-mortem is in terms of software testing, here is the definition: in simple words, it is a detailed understanding of what went well with the project and what could be done to make it better.

Post-mortem meetings are usually conducted after a project is completed to identify the lessons learnt. Normally all parties (users, testers, developers) have strong opinions about what happened during the project and what the end product turned out to be. In post-mortem meetings, these opinions are supposed to be honest so that they can help run future projects for the benefit of all the stakeholders. But if you simply get all the stakeholders in one place, what do you think the outcome will be? A long, boring, rarely conclusive meeting.
So what can be done to make post-mortem meetings effective? Here are some ways:

1) It is best to rely on the QE team to provide inputs on the positives and negatives of the product, because they are experts at gathering such information (they provide status reports for every build). These inputs can then be reviewed and discussed with all the stakeholders.

2) The agenda of the meeting should be circulated well in advance so that people have already put some thought and effort into analysing what went well and what did not.

3) Never turn a blind eye to problems. A person who says the project went perfectly okay needs either counselling or a very strict performance appraisal. :)

4) The moderator should take care that healthy arguments don't turn into finger pointing and blaming. This meeting is strictly not for playing blame games.

5) Participants should be made fully aware that even if they point out problems in themselves or their team, it is in no way going to affect their own, or for that matter anybody's, performance appraisal (that is the major worry that prevents people from being honest). They are there to make improvements.

6) Proper metrics should be used to analyse the results, for example the rate of bugs found in each phase and the types of bugs (a small sketch of such a breakdown follows this list).

7) The points raised should be jotted down and the issues prioritised.

8) Actually implementing the changes will make the team excited and more motivated to keep improving.

9) Communicate from time to time about what has happened to the recommendations. Keep the discussions alive, and communicate the changes that were made along with the impact they have had.
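
To make point 6 concrete, here is a minimal sketch in Python of the kind of breakdown I have in mind; the phase names and bug records are made up purely for illustration, not taken from any real project.

    from collections import Counter

    # Hypothetical bug log: (phase in which the bug was found, type of bug)
    bugs = [
        ("unit", "logic"), ("integration", "interface"),
        ("system", "logic"), ("system", "usability"),
        ("acceptance", "logic"),
    ]

    found_per_phase = Counter(phase for phase, _ in bugs)
    bugs_per_type = Counter(kind for _, kind in bugs)

    total = len(bugs)
    for phase, count in found_per_phase.items():
        # Share of all bugs that were found in this phase
        print(f"{phase}: {count} bugs ({count / total:.0%})")
    print("By type:", dict(bugs_per_type))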

These meetings are core to the success of any project. I have seen performance improve manifold as a result of them, and much higher quality products being rolled out. How seriously these meetings are taken depends entirely on the stakeholders involved and their level of commitment to making a difference.

Monday, December 8, 2008

uTest

A terrific idea: bring top QA professionals from all around the world onto a common platform and give software companies the much-needed support. uTest is a site that provides software testing services to companies that need them. It benefits not only the testers, who get to market their hard-earned skills, but also the companies, who pay only per bug.

It is, in a way, a crowdsourced testing platform, which I believe is a superlative effort. Software testing professionals have always needed a site like this that can help them rake in some moolah with their skills.
It's very easy to sign up: you just create a profile and you are ready to go.
The releases that match your profile will be visible to you, and you can start working almost immediately.

The best thing I found about this site is that you do not have to bid or wait to interact with the people you are providing the service to. You just need to find a genuine bug and you get paid for it.

This kind of endeavour is sure to move the internet market a step ahead.
Check out uTest.

Sunday, September 21, 2008

Twist

A collaborative functional testing platform for software teams: Twist.
The creators of Selenium have introduced a really path-breaking platform, based on the idea that for lasting business value, testers and business professionals should build test suites together. Business people can express their software requirements in the form of executable tests, and that too in the language of their own domain.
For testers, it allows them to preserve business intent, control complexity and increase productivity. I feel Twist is all set to make waves in the software industry, and the reason is its wider approach: it is usable by business professionals, developers and testers alike, making all of them responsible for product quality.
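
To give a flavour of the executable-specification idea, here is my own tiny sketch in Python; it is not Twist's actual format, and the step names, cart and prices are invented for the example.

    # Each business-readable step is a plain function, so the specification
    # below doubles as an automated test that can be executed on every build.
    def customer_adds_to_cart(cart, item):
        cart.append(item)

    def total_payable(cart, price_list):
        return sum(price_list[item] for item in cart)

    def test_customer_buying_two_books_pays_twice_the_book_price():
        price_list = {"book": 250}  # invented price, in INR
        cart = []
        customer_adds_to_cart(cart, "book")
        customer_adds_to_cart(cart, "book")
        assert total_payable(cart, price_list) == 500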

What do you guys say?

You can check it out at Twist,
and you can also have a look at the webinar by the legendary Ward Cunningham and testing expert Jennitta Andrea, which has the first ever live demo of Twist.

Wednesday, August 20, 2008

Software Testing Certifications -II

In continuation of my previous post, I will list the various certifications based on tool knowledge:

  • Mercury Interactive
It offers two levels of certification: CPS (Certified Product Specialist) and CPI (Certified Product Instructor). These are designed for anyone who uses TestSuite, LoadRunner Web, LoadRunner DB, or Topaz. You can find the complete information on Mercury.

  • IBM Rational
IBM has two categories of certifications:

* Certification for all candidates
There are two certifications: IBM Rational Manual Tester (RMT) and IBM Rational Performance Tester (RPT).
IBM Certified Solution Designer - RMT - This intermediate-level solution designer is an individual with extensive product knowledge who understands how to set up, configure and create a manual testing framework with Rational Manual Tester.

IBM Certified Solution Designer - RPT - This intermediate-level solution designer is an individual with extensive product knowledge who understands how to use the IBM Rational Performance Tester tool to validate the performance, scalability and reliability of Web-based, SAP, Siebel, or Citrix-hosted systems.

* Certification for IBM Business Partners, Educational Partners and IBM Employees
The information regarding this is available at IBM internal certifications,
and the information on certifications for all candidates is available at IBM certifications.

  • Borland Segue
Borland offers three certifications:
Borland Certified SilkPerformer Engineer (SCSPE)
Borland Certified SilkTest Engineer (SCSTE)
Borland Certified SilkVision Project Manager (SCSVPM).
For details you can go to SCSTE, SCSPE and SCSVPM.

Then there are certifications addressing different aspects of the application:
Functional Test Management Expert - The focus of this certification is on proficiency with SilkTest, SilkCentral Test Manager and SilkCentral Issue Manager.

Performance Test Management Expert - The focus is on SilkPerformer, SilkCentral Test Manager and SilkCentral Issue Manager.

Application Performance Management Expert - The focus is on SilkPerformer, SilkCentral Performance Manager and SilkCentral Issue Manager.
The details can be found at FTME, PTME and APME.

I hope this post and the previous one are of help to all those who need this information. I invite you all to share more information on these certifications and the many others in the market.




Monday, July 28, 2008

Software Testing Certifications -I

Software testing is growing into a renowned profession, and with the passage of time a lot of certifications have come up to add value to it. Certification specializes you in your area of expertise, makes you more aware of the nuances of testing, and allows you to stand out from your peers. To date there are not many certified software test engineers, so the demand is even higher.
This post is not meant to preach that only certified testers should exist. Certification cannot bring intelligence, and if a person is not certified that does not mean he or she is not a good tester.

I am writing this post to provide information about the various available software testing certifications in one place, the kind of information that I never received myself.

Listing down a few in no particular order:
  • CSQA (Certified Software Quality Analyst)
The certifying body is QAI, and this designation indicates a professional level of competence in the principles and practices of quality assurance in the IT profession. This certification requires prior experience.

  • CSTE (Certified Software Test Engineer)
The certifying body is again QAI, and this designation indicates a professional level of competence in the principles and practices of quality control in the IT profession. This certification also requires prior experience.
Advanced-level certifications such as CMST and CMSQ are also available from QAI. For detailed information, check out software certifications.

  • CSTP (Certified Software Test Professional)
The certifying body is IIST. This is an education-based certification, built on a Body of Knowledge that covers areas essential for every test professional to perform effectively on testing projects. Prior experience of at least one year is required. For detailed information, check out Indian institute.

  • ISTQB certifications
ISTQB is the body that provides certifications in various countries through its approved national boards, for example the 'ISTQB Certified Tester' certification in India. For detailed information, you can check out ISTQB.

  • SECP(Software Engineering Process Certified Professional)
The certifying body is Learning Tree. This certification is for those who manage the development process. For detailed information, you can check out Learning Tree.

  • CSQE (Certified Software Quality Engineer)
The certifying body is ASQ. This certification indicates that the tester understands software quality development and implementation, software inspection, testing, verification and validation, and can implement software development and maintenance processes and methods. For detailed information, check out ASQ.

All the links I have mentioned are to official websites, which will give you the most accurate information.
This is the list of certifications based on a body of knowledge. I am in no way claiming that the list is complete; there will still be a lot more certifications out there, and I would like you all to come forward and share that information.

Look out for my next post, which will have the list of certifications based on tool knowledge.

Wednesday, July 16, 2008

Testing for beginners

New to testing? Don't know how to start your career path towards becoming a successful software tester?
Here are some of the tips I learned along the way; I hope they help you achieve what you want.

First, you need the following know-how:
  • An idea of the terminology used in the field of testing: methodologies, test strategies, types of testing and so on.
  • Read articles to get into the frame of mind of a tester. Read good blogs and books on testing. Begin with the quote "You are here to break the software". :-)
  • Technical skills: if possible, go for the courses that are offered, but even if you don't, it's OK. Get yourself exposed to automation tools and keep in mind that they are not just "record and playback"; there is so much more to them.
A good tester has superb soft skills, which I would call "testing skills":
  • Be a brilliant observer. Look for what usually doesn't catch the eye.
  • Be curious.
  • It's very important to be intuitive. As you go along in this field, you will feel your intuition growing stronger and stronger. Slowly, you start identifying, almost instinctively, the areas where bugs can be found.
  • Have perseverance, because the bugs you find are not always reproducible. You have to try again and again with all the possible combinations.
  • Good communication skills. You will find them so handy in tricky situations when dealing with developers and PMs (project managers).
  • A habit of digging deep will pay off nowhere better than in the field of software testing.
  • Out-of-the-box thinking. Believe me, this will make you stand out from the crowd.
  • Knowledge of how the user will be using the product. You have to put yourself in the customer's shoes.
  • Teamwork. You cannot achieve anything on your own; it has to be a team effort, and a good team member is one who inspires others and takes them towards a common goal.
Domain knowledge also proves helpful because it gives you an idea of the flow of control and data, which is bound to uncover a whole series of bugs. And if you feel that having a list of test cases in front of you will see you through, you are badly mistaken; always remember that it is the tester who finds the bugs and not the test case (courtesy: It's a tester). So go for the kill believing in yourself, not in any external help.

Software testing is a very challenging field that helps you grow not only professionally but also personally. It's creative, interesting, intuitive and a great career option.

Thursday, July 10, 2008

Software Testing: An underrated skill

Many a time, people enter the field of testing with the perception that it does not require any special skill. But, to the disappointment of many, it is a very creative field and requires a lot of brains. As they say, "It's easy to commit a mistake but difficult to find a genuine one". :)

I believe it is a very underrated skill, and this aura is created by none other than the developers. No offence intended, but in my career I have met many developers who think of testers as nothing but an additional load on the organization. What they forget is that had the testers not been there, the software, owing to the developers' negligence, would never have gone live, or the customers would simply have run away. And I don't blame only the developers for that.
Some organizations also treat testers as one level below the developers, and I want to ask a question here: why?

In my own organization, I felt this rift. We were all working diligently for the release of one of our products; the best of efforts were going in from testers, developers and PMs, and everyone was trying hard to achieve the milestone. Suddenly we heard that management was giving away incentive awards to boost morale. Everyone was elated, and then came the catch: the incentives for the R&D team and the QA team differed by INR 5000, with R&D getting more.
When we asked the reason, the answer was, "This is how management has decided, and it's just a little bit of difference." "It's not about the money," we argued, and they said nothing but "concentrate on your work, you are getting the incentives." Hard to believe, but that is the truth of the industry.

Agreed, manual testing does not demand much technology, but what about the soft skills a tester needs? And when it comes to automation and white box testing, testers are technologically no less than the developers; rather, I would say they are more technically sound, because they have to find the problems in the code.
Before this perception can be erased from the minds of the people working in organizations, the organizations themselves have to believe in the equality of the two streams. They have to believe that QA and R&D complement each other. And this cannot just be in words; actions always speak louder than words.
And remember, testing is not a no-brainer's job, so gear up before getting into this field.

Friday, July 4, 2008

Automation Testing: Is this the answer?

All those who are familiar with the field of testing must know what automation testing is. Let's dig into it, find out what it is all about, and ask: is automation testing the answer to every testing problem?

Automation testing is testing that is done without human intervention. It is testing assisted by software tools, and once the tests are in place it does not require operator input, analysis or evaluation.
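
As a small, hedged illustration, here is a sketch in Python using the common convention of test functions named test_*; the discount rule and figures are invented for the example, not taken from any real project.

    # Invented business rule: orders of INR 1000 or more get a 10% discount.
    def apply_discount(amount):
        return amount * 0.9 if amount >= 1000 else amount

    # Once written, checks like these run unattended on every build;
    # no operator input or manual evaluation is needed while they run.
    def test_no_discount_below_threshold():
        assert apply_discount(999) == 999

    def test_discount_applied_at_threshold():
        assert apply_discount(1000) == 900
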
No doubt it brings many advantages:
  • It accelerates test execution.
  • It makes repetitive testing easy.
  • It opens up the possibility of much more exhaustive testing.
  • It gives better test path coverage.
With all these benefits in mind, I can't stop thinking: is automation the answer to every testing problem? Is it applicable everywhere? The answer, according to me, is no. And why is that?

Simply because automation testing comes with its own challenges and is constrained by a number of questions:
  • Does the time frame permit creating an automation framework? Automation makes the testing itself faster, but building the framework takes time.
  • Do we have enough skilled resources to do the job? If they are not there, nothing can happen, and obviously they cannot be hired in the blink of an eye.
  • Will automating really make the process faster? If a project is not big enough, is it a waste to spend time and energy on automation?
  • Up to what level does the automation need to be done?
I have seen it happen a number of times that the decision to use automation, and the extent of it, becomes debatable. In certain scenarios we implemented automation only partially, because completely automating the task was not worth the effort.

So whichever kind of testing is chosen, manual, automated or a mix of the two, I believe one should always weigh all the pros and cons of the process.

Tuesday, July 1, 2008

QA Glossary continued

In continuation of my last post, I have come up with some more terms and their meanings:
  • Alpha testing: Testing performed at the developer's site by a group that is independent of the design team; the issues found are fixed before the product goes into production.
  • Beta testing: Testing performed at the customer's site by one or more end users, to judge how good the software is and whether they should go into production with it. The software has to meet the acceptance criteria set by the customer.
  • Software testing methodology: A three-step process: a) creating a test strategy, b) creating a test plan, c) executing the tests.
  • Test strategy: The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.
  • Clear box testing: Same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.
  • Boundary value analysis: Boundary value analysis is a technique for test data selection. A test engineer chooses values that lie along the data extremes. Boundary values include the maximum, the minimum, values just inside and just outside the boundaries, typical values, and error values. The expectation is that if a system works correctly for these extreme or special values, it will work correctly for all values in between. An effective way to test code is to exercise it at its natural boundaries (see the small sketch after this list).
  • Ad hoc testing: Informal testing performed without formal documentation; it is the least formal testing approach.
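
To make boundary value analysis concrete, here is a minimal sketch in Python; the validate_age function and its 18-to-60 rule are invented purely for illustration.

    # Hypothetical rule: valid ages are 18 to 60, inclusive.
    def validate_age(age):
        return 18 <= age <= 60

    # Boundary value analysis picks values at and around the extremes.
    def test_boundary_values():
        assert validate_age(18) is True    # minimum
        assert validate_age(60) is True    # maximum
        assert validate_age(17) is False   # just outside the lower boundary
        assert validate_age(61) is False   # just outside the upper boundary
        assert validate_age(19) is True    # just inside the lower boundary
        assert validate_age(59) is True    # just inside the upper boundary
        assert validate_age(35) is True    # a typical value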

Wednesday, June 25, 2008

QA Glossary

While working in QA, I came across a lot of terminology that, in the long run, tends to slip out of mind. We use these terms so often, and so conveniently, that their meanings get twisted along the way. So I decided to take another look at them, as a reminder of their actual contexts, and to explain them in my own words.
  • Verification: As I always remember it, "building the product right", i.e. the right process is followed to create the product. It typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications.
  • Validation: "Building the right product", i.e. in conformance with the user requirements. It typically involves actual testing and takes place after verification activities are completed.
  • Quality: Software that conforms to the customer's requirements and is reasonably bug free. There are a number of factors that affect quality: reliability, efficiency, security, portability, correctness, integrity, usability and, most importantly, maintainability.
  • Test plan: A document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation.
  • Test case: A test case is a document that describes an input, action, or event and its expected result, in order to determine if a feature of an application is working correctly.
  • Black box testing: Testing that focuses on the external behaviour of the product without considering, or having knowledge of, the internal design or code.
  • White box testing: Testing based on the internal design or code of the product, meant to uncover design and coding errors. Tests are based on coverage of code statements, branches, paths and conditions.
  • Unit testing: Unit testing is the first level of dynamic testing and is first the responsibility of developers and then that of the test engineers. Unit testing is considered complete when the expected test results are met or the differences are explainable/acceptable (a small example follows at the end of this list).
  • Usability testing: Testing for 'user-friendliness'. Clearly this is subjective and depends on the targeted end user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used. Programmers and developers are usually not appropriate as usability testers.
  • Integration testing: Testing performed to ensure that distinct parts of the product work correctly when put together. The flow of control and data is checked for correctness.
  • System testing: Testing of the system as a whole, including any third-party software and hardware.
  • End-to-end testing: Similar to system testing; at the 'macro' end of the test scale, it means testing a complete application in a situation that mimics real-world use, such as interacting with a database, using network communication, or interacting with other hardware, applications, or systems.
  • Regression testing: Testing to ensure that the software remains intact after changes. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not "undone" any previously working functionality. Expected results from the baseline are compared with the results of the software under test. All discrepancies are highlighted and accounted for before testing proceeds to the next level.
  • Sanity testing: Sanity testing is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.
  • Performance testing: Performance testing verifies loads, volumes and response times, as defined by requirements.
  • Load testing: Load testing is testing an application under heavy loads, such as the testing of a web site under a range of loads to determine at what point the system response time will degrade or fail.
  • Installation testing: Installation testing is testing full, partial, upgrade, or install/uninstall processes. The installation test for a release is conducted with the objective of demonstrating production readiness. This test includes the inventory of configuration items, performed by the application's System Administration, the evaluation of data readiness, and dynamic tests focused on basic system functionality. When necessary, a sanity test is performed, following installation testing.
  • Security testing: Testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.
  • Error testing: Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems. Also known as recovery testing.
  • Compatibility testing: Compatibility testing is testing how well software performs in a particular hardware, software, operating system, or network environment.
  • Comparison testing: Testing that compares software weaknesses and strengths to those of competitors' products.
  • Acceptance testing: Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system functionality and usability prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager, however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.
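
As mentioned in the unit testing entry above, here is a minimal sketch of a unit test in Python's unittest style; the paise_to_rupees function is invented purely for illustration.

    import unittest

    # Invented function under test: convert an amount in paise to rupees.
    def paise_to_rupees(paise):
        return paise / 100

    class PaiseToRupeesTest(unittest.TestCase):
        # A unit test exercises one small piece of code in isolation
        # and compares the actual result with the expected one.
        def test_whole_rupees(self):
            self.assertEqual(paise_to_rupees(500), 5)

        def test_fraction_of_a_rupee(self):
            self.assertEqual(paise_to_rupees(50), 0.5)

    if __name__ == "__main__":
        unittest.main()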

Monday, June 23, 2008

Introduction

Hi

This is Jaanvi Singh. I am a software test engineer, now a stay-at-home mom, who is just in love with this profession. In spite of staying at home, I want to keep sharpening my testing skills and not drift away from this beautiful art and craft of testing. I hope this blog will let me stay in touch with the world's best testers and share my expertise, views, tips and tricks across the table.
Cheers!