I wanted to create a very simple, common sense overview of Software Testing, and that is what I have tried to do in this blog post.
Why do we Test Software?
The "Why" seems simple.
- People develop software. People make mistakes.
- When we develop software we make mistakes.
- Those mistakes might be released to live.
- The mistakes might manifest as Problems: defects, exploitable security issues, poor UX, performance issues.
If we do not want those problems in live, we want to try to detect them early so that we can take action. That is why we test.
Search the web for "Software Problem" and you'll see thousands of reasons why we test.
The general reason is that we test to learn about the reality of software, rather than our beliefs about the software.
What is Software Testing?
But what is Software Testing?
If we start with the notion that we want to find problems in software, then to do that I have to:
- have the ability to spot a problem
- know how to use the software
- know various ways of finding problems
How can I spot a problem?
If I look at some Software, e.g. Google Search for news, I have to have some understanding of what it is supposed to do, e.g. I search for a term, I see news reports for that term, and they display on the screen so that I can read them.
I have some model of "Google Search for News". I then use Google Search for News in various ways and compare what I'm observing to that model.
- can I search? yes I appear to
- are these the results I'm supposed to see? I don't know
- can I see these results?
- currently I'm using a wide window; what if it is smaller? What should happen then?
When I resize the browser window I see that the rendered text does not resize, so buttons, fields and links are cut off. Should it do that?
I've observed something that I think might be a problem, based on my model of what I think Google News should do. That might not be the same model that Google share, so they might not view it as a problem.
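As a small aside, that kind of model-versus-observation comparison can be written down very simply. The sketch below is mine, not part of the original example: the expectations and observations are made-up values I might have noted while exploring, just to show the comparison in a concrete form.

```python
# A minimal, self-contained sketch of "comparing observations to a model".
# The values are hypothetical notes from an exploratory session, not data
# gathered from the real site.
model = {
    "I can search for a term": True,
    "news results display on screen": True,
    "the layout adapts to a narrow window": True,   # my assumption
}

observations = {
    "I can search for a term": True,
    "news results display on screen": True,
    "the layout adapts to a narrow window": False,  # buttons and links were cut off
}

for expectation, expected in model.items():
    observed = observations[expectation]
    status = "matches my model" if observed == expected else "possible problem"
    print(f"{expectation}: expected {expected}, observed {observed} -> {status}")
```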
When we implement Software Testing in an organisation we have a communication process and a set of expectations that we incorporate into our Software Testing, so that we can report the results of comparing the Software to our Model to other people.
We also saw that we are learning as we test. I learned that I don't know if these are the correct results, or all the results, etc. I learned that I don't know if the display should resize. I'm also learning about the functionality as I test, and that expands my model.
For example: my initial model didn't include paging, I can see paging at the bottom of the page. Now I'm expanding my model. I assume that if I click on "1" nothing will happen because I'm on the first page. I assume that if I click on "2" then I will be taken to page "2" and see different results, same with "3" etc. I'm also assuming that if I click back to "1" I will see the same results.
I can then use the software and compare what I see with my assumptions. I'll learn if my assumptions are valid, in which case they will form part of my model, or if the assumptions are incorrect, in which case I'll have to decide: is the "System" the truth, or is my model "The Truth"? Either way, I'm learning and expanding my model.
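Those assumptions about paging could even be turned into explicit checks. Here is a rough sketch in Python using Playwright; it assumes a `page` object already open on a results screen, and the selectors are hypothetical placeholders rather than Google's real markup, so treat it as an illustration of the idea rather than a working test.

```python
# A rough sketch, assuming Playwright (pip install playwright) and a `page`
# already showing the first page of results. Selectors are placeholders.
def check_paging_assumptions(page):
    first_page_results = page.inner_text("#results")    # placeholder selector

    page.click("a.page-link >> text=2")                  # assumption: takes me to page 2
    second_page_results = page.inner_text("#results")
    assert second_page_results != first_page_results, "page 2 shows the same results?"

    page.click("a.page-link >> text=1")                  # assumption: takes me back to page 1
    assert page.inner_text("#results") == first_page_results, \
        "page 1 no longer shows the original results?"
```

If an assertion fails, either my model was wrong or I may have found a problem to investigate and report.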
Problems are not the only things we look for. We try to learn as much as we can. But problems are the most obvious type of information that Software Testing is expected to communicate.
And that's basically Software Testing.
What types of Models are used?
We have multiple models that we can use:
- Requirements - what it is supposed to do
- Risks - what we fear it might do
- Issues - what we know it does and don't want it to
- Performance - does it process quickly enough
- Physical models - what versions of what stuff
- Usage Scenario models - how we expect it to be used
- etc. there are a lot more models than this
And we build new models as we go:
- Investigations - what can it do that we didn't expect
- Exploits - what can it do, that I can use to do something bad
- etc. we build models to help us think about the Software in different ways
What Types of Testing are there?
Some descriptions of testing want all the models to be defined and unchanging in advance of any testing being performed. You might hear statements like "The requirements need to be signed off". Some descriptions of testing go even further and say that testing can't start until all the approaches we are going to use to compare the models with the software are written down, and all the results we expect to see are agreed in advance.
This is clearly a lot of work and will slow down the learning process. And there is a high risk that our models, our descriptions of the approaches, and our expected results are wrong. This would mean that when we do start testing we have a lot of rework to do, as well as the testing.
Some organisations do test like this and this might be referred to as:
- Structured Testing,
- Formal Testing,
- Traditional Testing or
- Waterfall Development.
Other organisations implement testing in a more Agile or Exploratory way. Less time and detail is added to the models up front, the models are not complete when we start testing, and we expect to learn as much as we can as we test, expanding and refining our models as we go.
I'd probably call this:
- Exploratory Testing or
- Agile Testing if it was an Agile project.
How do we know what to test?
Some of the ways of identifying "What to test?" from our models have been codified as Test Techniques:
- Boundary Value Analysis - takes a model of data and identifies specific data values to use
- Equivalence Partitioning - takes a model of data and identifies what data we might sample
- Path Coverage - takes a graph model of flows through the system and identifies scenarios or paths to cover
- etc.
As you learn more about testing you'll encounter more techniques. These generally provide guidance on how to turn a model into something that you can use to drive your testing.
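To make that concrete, here is a small Python sketch of Boundary Value Analysis and Equivalence Partitioning applied to a made-up rule: "an age field accepts whole numbers from 18 to 65". The rule and the values are my own illustrative assumptions; Path Coverage would similarly turn a flow diagram into a list of paths to exercise.

```python
# A minimal sketch, assuming the made-up rule "ages 18 to 65 are valid".
VALID_MIN, VALID_MAX = 18, 65

# Equivalence Partitioning: sample one value from each partition of the data.
partitions = {
    "below the valid range": 10,
    "within the valid range": 40,
    "above the valid range": 80,
}

# Boundary Value Analysis: specific values at and around each boundary.
boundary_values = [VALID_MIN - 1, VALID_MIN, VALID_MIN + 1,
                   VALID_MAX - 1, VALID_MAX, VALID_MAX + 1]

def is_valid_age(age):
    """My model of the rule; in real testing I would exercise the system instead."""
    return VALID_MIN <= age <= VALID_MAX

for name, value in partitions.items():
    print(f"partition '{name}': {value} -> valid={is_valid_age(value)}")

for value in boundary_values:
    print(f"boundary value {value} -> valid={is_valid_age(value)}")
```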
And we also identify other ways of comparing our models to the system, e.g. by identifying risks, comparing with other systems, thinking of "what ifs", looking more deeply into the technologies of the System, etc.
High Level View of Software Testing
That's a very high level description of Software Testing.
- there's a risk that we make mistakes and they might cause issues if they are triggered when the software is live
- when we test, we build models of the software and compare them to the software. This helps us change our models and find new things to test, or spot mismatches between the model and the software, which might mean our model needs to change, or the System might have a problem, or we may have found some other information we need to communicate.
Free Video Inside
What is Software Testing and Why do we Test Software?
Software Testing explained in simple common sense language. Understand Software Testing in under 9 minutes.
You'll learn:
- What is Software Testing?
- Why do we Test Software?
- How do we test software?
- What is Structured, Traditional, Formal or Waterfall Testing?
- What is Agile Testing?
- What is Exploratory Testing?
And you'll see some simple examples of Software Testing and Problem identification in the video.