In past projects, written requirements have often been used as oracles to determine whether the behaviour testers see in the SUT (Software Under Test) is correct.
According to Cem Kaner, an oracle or test oracle is a mechanism for determining whether a test passed or failed.
The thing is, not every project has clear written requirements – some projects lack written requirements altogether, while in others the requirements are not clearly written and are very open to interpretation.
This is where heuristics can be very useful. I have previously written about my most used test heuristics (along with some examples), but in this blog post I want to elaborate on why heuristics can be very useful for testers, with or without written requirements.
What is a heuristic?
There is (feasibly) no such thing as 100% clear written requirements with no gaps
In almost 10 years, I have never been on a project whose written requirements were 100% clear with no gaps. I have ALWAYS come across gaps or room for interpretation where there were valid tests to be run, but the written requirements could NOT be used as an oracle.
Another thing to note about 100% clear written requirements is that "clear" is very subjective – the business analyst in charge of writing the requirements may think they have covered everything with absolutely no gaps, but chances are another person on the team will find the gaps the business analyst failed to see.
Lastly, the time involved in writing these hypothetical 100% clear written requirements would be far too much for any team to realistically consider – so “good enough” will probably be the goal, along with “we’ll figure it out” and “let’s have these discussions when we get there”.
Heuristics can help you articulate clearly why the behaviour you are seeing is correct or incorrect
Have you ever read a bug report that said something along the lines of “this didn’t/doesn’t work” but then there was no explanation as to what the tester was expecting or why?
Unfortunately I have.
Heuristics can help you put into words what you, as a tester, feel is wrong. I’ve found the consistency heuristics to be extremely helpful here.
Here is a cheat sheet by Elisabeth Hendrickson that I have also found to be very useful.
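To make the idea concrete, here is a minimal sketch of one consistency heuristic – “consistent with history” – expressed as an automated check. Everything in it is hypothetical: the `format_price` function stands in for whatever SUT behaviour you are testing, and the recorded outputs stand in for behaviour observed in a previous release.

```python
# Hypothetical SUT behaviour: format a price for display.
def format_price(amount):
    return f"${amount:,.2f}"

# Outputs captured from the previous release act as the oracle:
# if formatting changes unexpectedly, the heuristic flags it.
previous_release_outputs = {
    0: "$0.00",
    1234.5: "$1,234.50",
    99.999: "$100.00",
}

def check_consistent_with_history():
    """Return a list of (input, expected, actual) mismatches."""
    failures = []
    for amount, expected in previous_release_outputs.items():
        actual = format_price(amount)
        if actual != expected:
            failures.append((amount, expected, actual))
    return failures

print(check_consistent_with_history())  # [] means behaviour is unchanged
```

The point is not the code itself but the argument it lets you make: “this behaviour is a bug because it is inconsistent with how the product behaved before” – no written requirement needed.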
If you’re going to advocate for a bug, you’re going to need to be able to “argue your case”. It’s pretty easy to do that when you have a written requirement/user story to point to – but what if the behaviour you’re seeing isn’t covered by a written requirement?
This doesn’t mean that the behaviour you’re seeing is correct – it just means you need to work a little harder and find an alternate way of proving that the bug you found is indeed a bug.