An Analogy to Explain the Limitations of Test Cases

I love analogies. They help me explain things in a way that (hopefully) others can understand and relate to. When I was thinking about how to explain the limitations of test cases (because knowing test cases aren’t all they’re cracked up to be, and explaining that to someone, are two different things), the first thing that came to mind was job interviews.

We’ve all been on job interviews - it’s a familiar concept we can all relate to. So here we go: first, let’s agree that both testing and job interviews are information-seeking activities.

In testing, we are trying to find out information about the Software Under Test. In job interviews, the company is trying to gather information about the candidate (actually it goes both ways - the candidate is also trying to gather information about the company).

Second, let’s agree that in both cases you want to make an informed decision. In testing, you want to know if the software is ready to go live or to proceed to another testing phase (there are other missions related to testing, but let’s stick to this one for the sake of the analogy). In job interviews, the company wants to know whether they want to hire you (and the candidate wants to know, “Do I actually want to work here?”).

Using test cases is like coming to the job interview with all of your questions pre-planned (on both sides, candidate and company).

This means when you come to the job interview, both sides have a set of questions that they plan to ask and are only seeking the answers to THOSE questions.

No follow-up or investigation based on what the other side said.

Scenario:

Interviewer: Do you have any experience working in an Agile environment? (planned question)

Candidate: Yes, I do. In my previous project, we were working in scrum teams but we didn’t have scrum masters.

This answer should strike the interviewer as strange and warrant a follow-up. Technically it may “pass” the interviewer’s definition of acceptable, but not having a scrum master is exactly the kind of detail that calls for investigation and further questioning, to see whether they were actually working in Scrum teams.

Even worse, using metrics to dictate success could be misleading.

If 89 test cases passed out of 90, all that tells me is that 89/90 test cases passed.

I don’t know if that’s great, amazing, or concerning… to me, it’s just a number. But to many people who look at test case metrics, that’s not the case (see what I did there :D). With this type of metric alone, we don’t know the quality of the test cases, the coverage, how much overlap there is between them, or whether the one failing test case is a blocker. A high (or low) number of test cases is also no indication of how well tested the feature is. Do 90 test cases mean the SUT is better tested than one with only 30 test cases? Maybe having 200 test cases would’ve been preferable?
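To make that concrete, here is a minimal sketch in Python of why a raw pass rate tells you so little on its own. The test names, severities, and the single blocking failure are all invented for illustration - none of this comes from a real project:

```python
# Hypothetical test run: 89 of 90 test cases pass.
# Names and severities below are made up for illustration.
results = [{"name": f"test_{i}", "passed": True, "severity": "minor"}
           for i in range(89)]
results.append({"name": "test_checkout_payment",   # hypothetical test
                "passed": False,
                "severity": "blocker"})             # the single failure

pass_rate = sum(r["passed"] for r in results) / len(results)
print(f"Pass rate: {pass_rate:.1%}")  # 98.9% - looks great on a dashboard

blockers = [r["name"] for r in results
            if not r["passed"] and r["severity"] == "blocker"]
print(f"Blocking failures: {blockers}")  # yet the release is still blocked
```

A 98.9% pass rate and “we cannot ship” can describe the same test run; the number by itself doesn’t tell you which situation you’re in.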

Back to our analogy: let’s say the interviewer has 15 pre-planned questions for the candidate, but some of those questions are a lot more “shallow” than others. We shouldn’t put equal weight on each question - “How many years of experience do you have?” tells you far less than “Walk me through how you would test this feature.” A sketch of that weighting idea follows below.
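Here is one (entirely made-up) way to score a question set so that a probing question counts more than a shallow one; the same idea applies to weighting test cases by depth or risk. The questions, weights, and scores are assumptions for illustration, not a recommended scoring scheme:

```python
# Hypothetical interview questions with invented weights:
# shallow questions get low weight, probing ones get more.
questions = [
    ("How many years of testing experience do you have?", 1),
    ("Do you know Jira?", 1),
    ("Walk me through how you'd test a login form.", 5),
    ("Tell me about a bug a test case wouldn't have caught.", 5),
]

answered_well = {0, 1, 2}  # suppose the candidate nails the first three

raw_score = len(answered_well) / len(questions)
weighted = sum(w for i, (_, w) in enumerate(questions) if i in answered_well)
total = sum(w for _, w in questions)

print(f"Raw: {raw_score:.0%} of questions answered well")  # 75% - sounds solid
print(f"Weighted: {weighted}/{total}")                     # 7/12 - less rosy
```

By the raw count the candidate looks strong; by weight, the picture is murkier - exactly the gap between “89 of 90 passed” and “is this actually safe to ship?”.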

Summary:

If you ever find yourself trying to explain the limitations of test cases to someone, try using an analogy. Use specific examples of job interviews and the questions both sides asked. Did both sides only ask the questions they had planned beforehand? Was there a “right” number of questions that had to be answered correctly? Did both sides know what the “right” expected answer was for every question?