Archive for March, 2013

Rapid Software Testing with James Bach – Final Day

21 March, 2013

My employer, Allies Computing Ltd, has hosted a condensed version of this post: http://www.alliescomputing.com/blog/testing-psychology-rapid-software-testing-training-james-bach/

Well, that’s Rapid Software Testing concluded, and I feel enthused about the future. In my previous post I explained some of the things I learned on the first two days of the course. Today we looked at so-called Exploratory Testing. In reality, all testing by humans is exploratory to a degree and scripted to a degree, and this can be shown on a continuum: even when we are running through a set of steps, we interpret those steps in a certain way, and two people may well interpret them differently. (The term ‘Exploratory Testing’ was coined to distinguish the approach in which test design and test execution happen simultaneously, so there is no need to write scripts, from ‘Scripted Testing’, in which test design is a separate process from test execution and often happens at a different time, making scripts a necessity.)

I thoroughly enjoyed the various testing exercises we undertook during the day, but there was one in particular where I wished my normal laptop had not been broken, as Visual Studio would have been VERY useful: I couldn’t get Excel to generate the data I wanted quickly enough. Never mind – the important thing was the thought process that got me to that point.
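
There are many ways to generate that sort of data; as a minimal sketch (not what I actually used on the day, and with the field bounds invented purely for illustration), a few lines of Python could have produced the boundary values and random fill I was fighting Excel for:

    import csv
    import random

    # Illustrative only: boundary values plus random samples for a
    # hypothetical integer input field, written out as a CSV file.
    LOW, HIGH = 0, 10000

    values = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]  # boundaries
    values += [random.randint(LOW, HIGH) for _ in range(50)]    # random fill

    with open("test_data.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["value"])
        writer.writerows([v] for v in values)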

In testing, as in life, there are many different ways of doing things. As testers, it is important that what we do stands up to scrutiny. We should equip ourselves to talk and write about our testing in a clear, coherent manner; that will help us gain the confidence and respect of our peers in the industry. Sometimes we take actions or make decisions which take us away from where we want to be; the important thing is that we learn from doing so and use that experience to get ourselves back on track.

I did not expect this course to draw so much on the cognitive sciences – particularly psychology – but one of the things that has occupied me for several months now is understanding what makes me do the things that I do and, conversely, why I don’t do certain other things. What makes me see some things as blindingly obvious and miss other things completely? By practising what I have learned on this course, and by absorbing the phenomenal amount of information I have received over the past three days, I hope to become a better tester and a better person.

Rapid Software Testing with James Bach – First Two Days

20 March, 2013

My employer, Allies Computing Ltd, has hosted a condensed version of this post: http://www.alliescomputing.com/blog/testing-psychology-rapid-software-testing-training-james-bach/

Rapid Software Testing is a course I have wanted to do for a few years now, having heard really good things about how it has improved various people’s testing and the way they think. We are two days in, and I am really excited about the things I have learned.

Thinking like a tester

This is a particularly challenging area for me. I tend to build ideas and then narrow them down by critically analysing them; however, the process can be quite haphazard, and that could lead to an accusation that I’m not testing properly. I am going to have a go at categorising what I do in a similar way to how James does it with the Heuristic Test Strategy Model, where he has headings such as:

  • Structure
  • Function
  • Data
  • Interface
  • Platform
  • Operations
  • Time

I will likely have different headings (some may be the same), but organising my thoughts and ideas in this way will give me confidence that I’m considering the right things, and will project a much more professional image.
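
As a very rough sketch of the sort of structure I have in mind (in Python, with headings borrowed from the model above and the ideas themselves made up purely for illustration):

    # My test ideas filed under headings borrowed from the Heuristic
    # Test Strategy Model; the entries are illustrative, not a real plan.
    test_ideas = {
        "Structure":  ["inspect what gets installed", "review module boundaries"],
        "Function":   ["exercise every menu item", "probe error handling"],
        "Data":       ["boundary values", "malformed input", "empty fields"],
        "Interface":  ["keyboard-only navigation", "API versus UI behaviour"],
        "Platform":   ["supported browsers", "operating system versions"],
        "Operations": ["first-run setup", "a typical day's workflow"],
        "Time":       ["timeouts", "date rollovers", "concurrent updates"],
    }

    for heading, ideas in test_ideas.items():
        print(f"{heading}: {', '.join(ideas)}")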

The other way I think this will help me is that it will channel my thoughts and make me less inclined to get stressed about what I am doing and whether I am doing it right. Critical analysis is good – it is what makes a good tester – but sometimes striking a middle ground is a good thing. We spoke a lot about System 1 and System 2 thinking: System 1 is off-the-cuff and emotive, which is great for generating ideas quickly, while System 2 is the more measured, time-consuming, analytical mode.

I find this very useful because I often don’t recognise when I need to switch mode – and maybe even when the way I am thinking is counter-productive to the situation I am in.

Testability

Another big thing for me was the importance of testability – of seeking out the things that will help me test well. This was brought home to me during one particular exercise involving a sphere: James played a customer in a top-secret problem domain who was unable to tell us things because we were not allowed to know them! For me this was all part of honing our problem-solving abilities.

A Bit of Magic

We watched magic tricks that showed the importance of keeping a broad mind and gave our imaginations a stretch. They showed me how much can lurk in my blind spots, of which I have many. It is really important to recognise our limitations and the problems those limitations bring to us as testers.

Oracles

It is really important to recognise who, what and where our oracles are, as they are what help us answer the question of whether or not we have a problem. I realised that I have oracles hidden away in all sorts of surprising places!

I am really looking forward to tomorrow when we will be looking at Exploratory Testing and I am excited to play the famous dice game!

SIGiST Conference 13/03/2013

14 March, 2013

A good session at the Special Interest Group in Software Testing (SIGiST) conference yesterday, run as usual by the BCS (British Computer Society), with a good representation of testers from across the different project lifecycles.

Matt Robson from Mastek opened with a keynote entitled “Be Agile or Do Agile”, giving some salutary warnings on the dangers of testing becoming an ‘ology’. It is very easy to become set in our ways and dogmatic about our approaches to testing, and that is harmful. Being ‘agile’ does not just mean adopting the Agile Manifesto (http://agilemanifesto.org/) and following an agile approach to project management; it means thinking and acting in a way that embraces change and adapts to the situations we are in.

Very often we forget the ‘people’ side of software development. The example was given of a company where senior management turned round one day and said ‘we’re going to go agile, and this is how you’re going to work in future’ without getting the staff on board. The consequences for staff morale were horrendous, and software quality dipped as a result.

One of the ways we can keep people on board is to think in terms of business goals and outcomes, because those have meaning for people. For instance, instead of saying ‘the registration widget looks broken; I advise against going live’, approach it as ‘we have found instances where sales staff might not be able to register new customers on our system; I advise against going live’.

What was particularly good was that the talk was delivered with no PowerPoint slides, which concentrated the mind far more. I think this is an area testers really have to get good at, but it is also one that can easily go horribly wrong.

Next we had a talk from George Wallace on the challenges of taking an R&D product straight into production. The project was for a very large and complex system, developed in a very traditional manner with testing entering the fray late in the product’s lifecycle. Suffice it to say that testing was supposed to take 3 months, and they are already 9.5 months in and still going!

Sakis Ladopoulos from Intrasoft International talked to us about what he termed project-agnostic test metrics for an independent test team. Essentially this was an attempt to measure the performance of testers working on projects, but to do so independently of how the whole project is performing. The approach was to normalise the scores for whatever was being counted across all the projects the team was involved with. Very often the ‘best tester’ was the one who found the most bugs relative to the time they had taken.
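
The talk did not dwell on the arithmetic, but the normalisation idea can be sketched simply: score each tester’s bugs-per-hour rate relative to the other testers on the same project, so that rates from easy and hard projects become comparable. A minimal sketch in Python, with all names and figures invented:

    from statistics import mean, stdev

    # Hypothetical records: (tester, project, bugs found, hours spent).
    records = [
        ("alice", "proj_a", 40, 80), ("bob",  "proj_a", 25, 80),
        ("carol", "proj_b", 12, 60), ("dave", "proj_b", 30, 60),
    ]

    # Group bugs-per-hour rates by project.
    rates = {}
    for tester, project, bugs, hours in records:
        rates.setdefault(project, []).append((tester, bugs / hours))

    # Normalise within each project (z-score) so testers can be
    # compared across projects of very different difficulty.
    for project, pairs in rates.items():
        values = [rate for _, rate in pairs]
        mu, sigma = mean(values), stdev(values)
        for tester, rate in pairs:
            print(f"{tester} ({project}): z = {(rate - mu) / sigma:+.2f}")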

I was quite uncomfortable with the idea of detaching testing from the rest of the project team, but then I am just not used to working like that.

After lunch, Balaji Iyer from Mindtree gave a talk on website testing and the challenges faced by modern websites. In particular there was discussion of scripting challenges and how performance can be affected by technologies such as Ajax (used extensively by Google), Flash (YouTube, for instance) and JavaScript, which is often used to make sites look ‘pretty’.

Mindtree have a module currently in development that works with JMeter (a popular open-source load and performance testing tool maintained under the Apache banner) to help testers parameterise their requests and correlate them.
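
I have not seen the module itself, but the two underlying ideas are easy to sketch: parameterisation feeds each virtual user its own test data instead of replaying identical recorded values, and correlation captures a dynamic server value (a session ID, say) from one response and reuses it in the next request. A minimal illustration in Python – every name and URL below is made up, and the login call is a stand-in rather than real HTTP:

    import uuid

    USERS = [("user1", "secret1"), ("user2", "secret2")]  # invented test data

    def fake_login(username, password):
        """Stand-in for a real HTTP login; returns a server session ID."""
        return str(uuid.uuid4())

    for username, password in USERS:                 # parameterisation
        session_id = fake_login(username, password)  # dynamic server value
        next_request = {                             # correlation: reuse it
            "url": "https://example.test/account",
            "headers": {"Cookie": f"SESSIONID={session_id}"},
        }
        print(username, "->", next_request["headers"]["Cookie"])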

Chris Comey and Davidson Devadoss from TSG (Testing Solutions Group) then gave a fascinating talk on test strategies: how we can write a strategy that looks great on paper but does not work at all in practice, because we have written it looking inwards at testing and have not thought about dependencies on other parts of the business, or about how to deal with the issues that arise as a result.

It was a great talk because both test strategies used as examples were good strategies; they just weren’t the right ones for the job in hand. There is little point in doing a Post Project Review either if, as in the case of one of the projects, you are just going to type it up, stick it on a shelf somewhere and not learn the lessons. All the failings in one of the projects examined had been raised in a previous ‘lessons learned’ document. Perhaps it would have been better to call it a ‘Lessons NOT Learned’ document!

The closing keynote, from Martin Mudge of Bugfinders.com, was great, covering crowd-sourced testing services as a way to get testing done more quickly and perhaps with greater coverage. In an exercise with three audience members, each described a different path they would take through a functionality diagram.

Testers register from all over the world and are selected for projects based on their skills, experience and ability. All defects raised are re-tested to verify that they are repeatable as recorded and are genuine issues. Testers receive training materials if there are problems with their testing.

This seems a particularly good way for small teams to catch user interface issues quickly, but it would certainly be dangerous to rely on crowd-sourcing for deeper-level testing (and I can just see some companies getting the impression that that is the way to go)!

Overall a good conference and plenty to take away and think about.