The Testing Planet – Annual Subscriptions Available Now

21 December, 2010

A great and fun way to advance your education in the software testing craft is to read well-written articles by well-respected testers.

The Software Testing Club has introduced an annual subscription priced at £15 for UK residents; £21 for residents in the rest of Europe; and £25 for the rest of the world.

For more information go to or e-mail

Digital copies continue to be freely available for download, but there is something nice about a printed copy!


User Experience Testing: Communicating Through the User Interface

20 December, 2010

One of my many interests is how the individual parts of systems – whether they be software-driven or not – communicate with each other.  A lot of time and energy is spent on making sure that the software components work together in different situations but how much time do we devote to making sure that the systems all work together cohesively to form an entire process?  How much time is spent making sure that the process itself can work correctly?

One of the things that I think we need to plan more time for is testing the way humans interact with systems.  I know that this is not easy because time is short and there is a lot of pressure to make sure the computer software side of the system is working correctly – the rest can be handled with training so the argument goes – but I think as testers we should keep bringing the human side of systems to the table in meetings and discussions about the projects we are involved in.

The human side of systems is something that ‘just happens’ when everything is going well, but when it all goes wrong the results can be spectacular.  My favourite example of this is the London Heathrow Terminal 5 opening debacle.  A lack of familiarity among staff and passengers with car park locations led to a baggage build-up because people were not in place at the right times to move bags around the baggage system.  This in turn caused a heavy load on the baggage belts, leading to a failure of the automated baggage delivery system, and so on…  Testers, as the eyes and ears of a project, should be vigilant for situations that no-one else has thought of and raise them.  Of course it is possible that the testers on this project had asked these questions and nothing was done about mitigating the risks, but everybody did seem to be taken by surprise at the turn of events on T5’s opening day…

Let’s move on now to another aspect of human-computer interaction: messages and warnings.  I am sure we have all been bemused by the sight of an error message that just says: “An error occurred.” However, put yourself in a user’s shoes for a moment and think about how you would react to seeing the following (I have pulled this from my Application Event Log but the text is pretty much as I  remember it appearing on screen in the form of an error message):

Faulting application name: OUTLOOK.EXE, version: 12.0.6539.5000, time stamp: 0x4c12486d

Faulting module name: olmapi32.dll, version: 12.0.6538.5000, time stamp: 0x4bfc6ad9

Exception code: 0xc0000005

Fault offset: 0x00051c7c

Faulting process id: 0x1a18

Faulting application start time: 0x01cb9c9e50018c07

Faulting application path: C:\Program Files\Microsoft Office\Office12\OUTLOOK.EXE

Faulting module path: c:\progra~1\micros~2\office12\olmapi32.dll

Report Id: 0c733b39-0896-11e0-b5bf-00197ed8b39d

The practice of delivering such ‘techie’ messages to end users is commonplace but in my opinion it is a bad approach.  Receiving messages like this is completely bewildering for novice computer users, who are likely to panic and do something that really messes things up.  In my case I knew that it was an add-in I had installed which was incompatible with Outlook 2007, so I did not panic – I understood what I had to do and I got on with it.  It did make me think, though, of my less experienced friends and colleagues and how disconcerting such a message would be for them.

Let me give you another example from my Application Event Log:

The application (Microsoft SQL Server 2005, from vendor Microsoft) has the following problem: After SQL Server Setup completes, you must apply SQL Server 2005 Service Pack 3 (SP3) or a later service pack before you run SQL Server 2005 on this version of Windows.

I would like to encourage you all to think carefully about the wording of error messages and warnings that are displayed to users.  The above is not a ‘problem’ at all; I need to do something else before I can run SQL Server 2005 and there is no cause for alarm.  It might be argued that someone seeking to use SQL Server 2005 is bound to be a competent computer user and therefore does not need much help, but I beg to differ.  I might have been given a task to do for which I am completely out of my depth and I do not need to be panicked further.

There is a fine balance to be reached between being able to give enough information so support professionals and developers can debug and understand how to fix a problem (which Microsoft may have done with their message from Outlook above – assuming they are all well versed in hexadecimal) and being informative to users.
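
One common way to strike that balance – sketched below in Python, with wording and function names of my own invention rather than taken from any particular product – is to write the full technical detail to a log under a report ID that support staff can search for, while showing the user a short, calm, actionable message:

```python
import logging
import uuid

logging.basicConfig(level=logging.ERROR)  # stands in for the application log

def handle_failure(exc: Exception) -> str:
    """Log the technical detail; return a message fit for end users."""
    report_id = uuid.uuid4()  # lets support staff find the log entry later
    # Everything a developer needs (exception type, message, traceback)
    # goes to the log, keyed by the report ID...
    logging.error("Report %s: %s", report_id, exc, exc_info=exc)
    # ...while the user gets something calm and actionable.
    return (
        "Sorry - something went wrong and the operation could not be "
        "completed. No data has been lost. If the problem continues, "
        f"please contact support and quote report ID {report_id}."
    )

try:
    raise OSError("olmapi32.dll: access violation at offset 0x00051c7c")
except OSError as exc:
    print(handle_failure(exc))
```

With this shape, the hexadecimal detail from the Outlook example above would live in the log rather than on screen; the user-facing text names no modules, offsets or process IDs.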

If we get the user experience right we stand a much better chance of designing and implementing a system which really does work efficiently because people will not be wasting countless hours trying to understand cryptic messages coming back from the system; they will be less frustrated; and everyone will have a better perception of the system and the organisation that is using it.

I was in a supermarket a few weeks ago and I heard the remark from a fellow customer that “there are always problems at the tills here – nobody seems able to work them”.  Standing in the queue I could see where that perception would come from: two till operators and a supervisor were needed to make sense of a message that had come up on the screen.  Testers should be making more noise about human-computer interaction and user experience problems that they can foresee for the future good of our craft.

This is an area that I am striving to get better at and I hope there are other testers out there who give serious consideration to the user experience they are giving in their systems.

SIGiST 8 December 2010

9 December, 2010

The final SIGiST (Special Interest Group in Software Testing) conference took place yesterday in London and, as usual, was well worth attending.  The theme for the day was “Keynotes – Six of the best” and the programme consisted of talks only on this occasion: six keynotes and one short talk after lunch.  Unlike other SIGiST conferences I have been to there were no workshops – which I always enjoy – but I still found the day inspiring.

Four talks stood out for me as being excellent:

Les Hatton from Kingston University gave a brilliant talk in which he cited the systems controlling space shuttles as an example of excellently engineered systems and then went on to talk about systems which “should never have been allowed to see the light of day”.

One of the ‘highlights’ (that should probably read ‘lowlights’) included the story of his passage through Heathrow Airport earlier this year.  He had printed his boarding card at self-check-in but the systems at security could not read the card; SAS (the airline he travelled with) could not issue a new boarding card because he already had one unless he gave them the assurance he was who he said he was (!); he was then unable to get through security because he had two boarding cards…  As if it could not get any worse the public information displays in the departure lounge had crashed.  As a keen traveller I could really identify with Les’ frustrations here!

The final part of Les’ talk encouraged us to focus on systems thinking and take some of our cues from the laws of physics: once you find a bug in a particular area of the system you are likely to find more bugs in that same area.  ‘Don’t give up’ was the message.

Gojko Adzic gave his excellent talk on Specification by Example.  Once again he made very good use of Prezi and encouraged us to use clear and concise language that our colleagues and customers will understand.   Too much time is wasted by misused terminology.  In a later talk mention was made of test cases and test conditions – actually they could both have been referring to the same thing – so why distinguish between them?

As usual Gojko had lots of examples to illustrate the success of the technique.  I find the concept of ‘living documentation’ particularly valuable and I liked the example of customer service staff referring to the tests that had been run to help answer customer queries.  It makes the tests very powerful because each test is directly addressing a particular problem being faced.

In the afternoon Fran O’Hara from Sogeti Ireland gave a talk on Scrum.  Included in the delegate pack for the conference was a Scrum cheat sheet illustrating the different components of a Scrum project and showing how they fit together.  I took it into work today and our Project Manager has found it very helpful. I thoroughly enjoyed Fran’s talk and particularly liked the idea of having two definitions of ‘done’: one definition describes what it means to be ‘potentially shippable’ and the other defines what ‘done’ means in terms of the current sprint.  There are many projects where it is not feasible to produce a potentially shippable product after one or two three-week cycles and this helps to deal with that.

Susan Windsor from Gerrard Consulting finished the day talking about how we develop ourselves and what it means to be a really good tester.  Susan challenged each of us to become testing gurus, super-testers in our organisations.  This will pay dividends because of the tremendous knowledge that we can bring to the table of how our projects are really going whether we are working in a traditional or more agile context.

Susan discussed the certification issue briefly and reminded us that, although we can get a sense of achievement out of having a certificate, one of the biggest problems with certification is that it is used as a screening mechanism when hiring staff by people who really know very little (if anything) about testing.  Personally, I would add that the syllabus is too restricted in its scope and is based on very traditional testing processes which have been shown to be less efficient than the agile methods being adopted more and more.

Other talks included a career progression report from Erkki Poyhonen where he experienced a paradigm shift without a clutch (cue a Dilbert cartoon), a report of an entity-relationship modelling exercise for testing effectiveness from John Kent and a short talk from Geoff Thompson on the things that have influenced him in testing.

As always the day ended in the Volunteer, where I enjoyed chatting to fellow testers about our exciting craft.  The best thing for me about SIGiST is the networking and getting to know other testers.  As a result of attending these and other conferences I have built up a network of people with whom I communicate regularly, and it has really expanded my knowledge of my chosen craft.  I would encourage everyone to get involved with testing conferences in their various locations because together we can learn a tremendous amount.

Transpection Explored

6 December, 2010



I had a great learning experience last night.  Those of you who were at the European Weekend Testers session or who have read my write-up of the session (here) will know that I attempted a technique called Transpection which I had read about previously at

I was not very happy with my attempt: it just did not feel right.  One of the great things about Weekend Testing sessions is that you can try new things in an environment where it does not really matter and everything becomes part of the learning process.

I decided to solicit the help and advice of James Bach to see where I went wrong and to understand what I should have been doing, so I contacted him on Skype.  He readily agreed to help me and to demonstrate Transpection for me.  I am publishing the full transcript of that Skype session (see link above) at James’ suggestion because we think it will be of help to other testers.

I have only lightly edited the transcript, putting some of the statements into paragraphs to aid readability, but the content is all there for you to see.  You will see my own learning process unfold and hopefully those who want to know more about the technique will come to understand this really useful aid better.  You will even see where I mistakenly thought James was trying to bring proceedings to a close!

I would like to thank James for his time yesterday evening and for supporting me in this quest.

Feel free to make comments or ask me any questions…

European Weekend Testing – 4 December 2010

4 December, 2010

Due to various commitments over recent months I have not been as regular an attendee at the Weekend Testing sessions as I would have liked.  However, the session this afternoon was a great one to come back for.

Our mission was to devise a ‘cheat sheet’ to be used by Helpdesk staff to help them improve the quality of their defect reports.  A lot of questions were asked to clarify what the problems were at the moment, what sort of environments the Helpdesk staff had available to them, whether there were any language issues to be considered, etc.

Ajay Balamurugadas suggested working on this in teams and I had the pleasure of working with him during the course of the afternoon.  We quickly got down to drafting our cheat sheet, starting off by each typing our ideas for what should go into the sheet using a brilliant tool which I had not seen before, which allowed us both to work on the same document and see what each of us was doing in real time.

Ajay asked me to note down the sort of information that I would ask for if he called me for technical support.  My answers spurred us both on and we became much more productive in thinking up the things that would need to go onto the cheat sheet.  Similarly I asked Ajay about how he deals with severity.  In some ways our conversation reminded me of James Bach and Michael Bolton’s work on Transpection, which is a technique I am trying to master.  I really need more practice…

We categorised and smartened up the cheat sheet and Ajay prepared it all as a PDF that we could share with the other weekend testers.

Following this we had the de-briefing session which, as always, was as informative as – if not more so than – the actual mission itself.  I find I learn so much from hearing about how others have tackled problems and finding out how they have put their knowledge of the testing craft to good use.  We had all taken slightly different approaches depending on how we each viewed the audience and what they were trying to do.  Ajay and I had focussed on the Helpdesk staff writing bug reports but others had concentrated on helping the Helpdesk staff get the right information out of the customer in the first place.

The whole session was really enjoyable.  “Thank you” to Anna Baik for facilitating the session and to all the contributors for their help during the afternoon.  I look forward to joining future sessions as time and circumstances permit.

London Tester Gathering – 2 November 2010

12 November, 2010

I really enjoyed the London Tester Gathering on 2 November.  It was good to finally meet Darren McMillan, after several online conversations, and Sharath Byregowda.

Michael Bolton gave us a short talk entitled “Burning Issues in Software Testing”, which was appropriate with Bonfire Night being just around the corner.  As always this was an inspirational talk full of the Michael Bolton sense of humour which I – and most of the audience – appreciated.

There have been many good blog posts about the night, including Darren McMillan’s write-up, so I will leave my own summary at that.  Can I just say, though, a big “thank you” to Tony Bruce for once again organising a great evening.  I am just sorry that I could not stay longer, but I was staying overnight in an unfamiliar part of town.

Until next time…

UK Test Management Forum – 27 October 2010

12 November, 2010

This is my write-up of the UK Test Management Forum meeting on 27 October 2010.  Sorry it’s been so long in coming but things have been pretty hectic of late.

As usual there were three tracks running in parallel with two talks apiece.

The first talk I went to was led by Gojko Adzic entitled “Continuous Validation, Living Documentation and other tales from the dark side”.  Gojko discussed the fact that we often use different names for the same thing or use the same word but mean something different each time.  He highlighted various examples of this and proposed some solutions which make the terms more meaningful for people.  Graham Thomas pointed out that although this process has happened before – most notably about 25 years ago in the structured software development world – we still need to keep reviewing our terminology.

Gojko is writing a book on this subject and has a website to run alongside the book.  See for more details.

We had great discussions within the session.  I think we could all see that there is a need to address the confusion that we create by our use of terminology in the industry.  As Gojko pointed out, legacy technical names do confuse people and create barriers which can hold people back from embracing change and adopting new processes.

The second talk I went to was entitled “The testing challenges associated with distributed processing” and was by Mike Bartley from TVS.  Mike was talking about the challenges we face with the rise of multi-core processors.  Whilst we can write parallel-savvy code, if the hardware and software platform on which the code is running is not using a distributed architecture there will be few – if any – benefits from the parallel code.

Mike talked about two common paradigms for distributed computing: message passing (which can lead to race conditions) and shared memory.
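
As a toy illustration of the two paradigms – this is my own Python sketch, with invented names, rather than anything Mike presented – the shared-memory version needs a lock around its read-modify-write step, while the message-passing version avoids locking because only the receiver ever touches the running total:

```python
import threading
import queue

# 1. Shared memory: without the lock, the read-modify-write on
#    `total` can interleave between threads - a classic race.
total = 0
lock = threading.Lock()

def add_shared(n: int) -> None:
    global total
    for _ in range(n):
        with lock:  # serialise the critical section
            total += 1

# 2. Message passing: each worker sends its result over a queue,
#    and only the receiving side sums them, so no lock is needed.
def add_via_messages(n_workers: int, n: int) -> int:
    q: "queue.Queue[int]" = queue.Queue()
    workers = [threading.Thread(target=q.put, args=(n,))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return sum(q.get() for _ in range(n_workers))

threads = [threading.Thread(target=add_shared, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)                        # 40000 with the lock in place
print(add_via_messages(4, 10_000))  # 40000
```

Remove the `with lock:` line and the shared-memory version may lose updates under load – exactly the kind of intermittent failure that makes this class of bug so hard to test for.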

Mike recommended reading J.B. Pedersen’s Classification of Parallel Programming Errors book (seems to be out of print and unavailable on Amazon).

He recommended that we adopt diverse static analysis techniques and think about design patterns and policies in our tests.  From a tools perspective we should consider which tools we can use at an architectural level to gain the most benefit.

As always I thoroughly enjoyed the afternoon and felt I benefitted from the talks.  Things that I will take away from the talk include thinking more about the language I use to describe the testing that I am carrying out and thinking more about static analysis as a technique for checking out our distributed code.

Many thanks to Paul Gerrard and Susan Windsor from Gerrard Consulting for hosting the event.  After the main forum talks we had a discussion about the future of the Test Management Forum; more information about the things we talked about and the decisions that have subsequently been made can be found at

The Prezi and PowerPoint slides from the two talks I attended are also available from

Comments welcome!

“Define structure”: my thoughts on James Bach’s challenge

10 October, 2010

Yesterday evening (in the UK anyway) James Bach set a challenge for Rob Lambert:

jamesmarcusbach:  @Rob_Lambert Quick tester challenge for Rob Lambert: Define “structure.” You have 10 minutes. #softwaretesting

It got me thinking too and my brain went into overdrive so I thought I would set out my own thoughts on the matter in a blog post.

In his response Rob highlighted several different types of structure and brought out that structures are, in essence, a ‘system’.  I like to think of structures as providing a framework, or a set of boundaries, within which people or things should operate.

For example:

  • The laws of the land – a set of constraints governing how people in the country are to behave;
  • Buildings – the walls of the building define the space available to its occupants;
  • Skeletons – defines the shape and provides the basis for growth of the person or animal it belongs to;
  • DNA – defines the characteristics of the living organism containing that DNA sequence;
  • Roads – show us where we should and should not drive.

Some of these structures are more flexible, and easier to change, than others.  For example there are some fish and animals which, because of their DNA, can change their shape or their colouring to exactly match their surroundings and thus evade predators.  This change can happen in an instant.  Buildings can be extended but someone has to do something to make that happen and it requires hard work.  The impact of structural change can be quite dramatic.  If a road is re-routed the impact on the natural environment can be huge and can get people quite upset.

Some structures are clearly vital for us: what we know of as the laws of physics attempt to document the way the universe works.  It would be horrendous if what we know as gravity stopped behaving in a constant fashion; if the earth’s rotation round the sun stopped we would have massive problems.

In computing we rely on certain structures.  For example networking: could you imagine trying to test a network application if there was no defined way of communicating between two computers and every manufacturer did something completely different?

There seemed to be a lot of emphasis at Agile Testing Days this year on education.  Our education provides us with a structure and I believe we are each responsible for ensuring we keep ourselves up-to-date, that we self-educate and strive to be as flexible and adaptable to our environments as possible.  We will not do this by sticking rigidly to the requirements of a particular certification body; we have to go out there and do what is right for us and for our clients.  We need to be like those creatures that can change in an instant to blend into their surroundings.

Your thoughts and comments are, as always, welcome…

SIGiST 16 September 2010

17 September, 2010

I attended the SIGiST conference yesterday, entitled “A Testing Toolbox”, and found it to be, as usual, an excellent event with lots of thought-provoking talks.

The Irrational Tester

The opening keynote from James Lyndsay focused on the biases that are built into all of us and that we need to guard against to be effective testers.

James used the headings:

Confirmation bias – where we find what we expect to find so don’t look for situations where we might find unexpected behaviour;

The “Endowment effect” – where people will often demand much more to give up something they have acquired than they got it for;

A “failure to commit” – if work is broken up into small chunks with deadlines set for each we are more likely to make progress on our projects than if there was a single deadline set for the end of the project;

Illusion of control – where we fool ourselves into thinking that we have found the only cause of a problem and don’t think about whether there may be anything else that might cause the same defect; and

Broken windows – where acceptance of minor bugs might lead to an acceptance of other much more serious bugs in the system.

Talks where human psychology is discussed – especially how it affects groups of people and how they interact and behave – are really interesting to me and I thoroughly enjoyed James’ talk.

Application Security Awareness

The second talk, entitled “Application Security Awareness”, was by Martin Knobloch from OWASP.  OWASP is a great starting point for getting information on security testing; it contains extensive documentation, code projects and conference details, and is made up of over 100 Chapters worldwide (and still growing).

The main thrust of the talk was encouraging people to identify and thoroughly understand the weakest link in their systems.  Very often this is not a technical weakness: it can easily be a process or ‘people’ weakness that leads to systems being exploited.

We need to beware of the dangers of creating the illusion of security but not actually doing anything to really make our applications secure.

All applications have the same issues – the techniques discussed on the OWASP website apply as readily to ‘normal’ applications as to websites and web applications.

Delight Your Organisation, Increase Your Job Satisfaction and Maximise Your Potential

The next talk was entitled “Delight your organisation, increase your job satisfaction and maximise your potential” and was given by John Isgrove.  This talk focussed on what characterises an Agile project and what does not and then discussed a methodology called DSDM Atern.  I had previously heard of DSDM but I had not encountered the ‘Atern’ variation on the theme.

DSDM Atern provides a framework for the management and delivery of an entire project with guidance for managers.  Scrum provides a one-size-fits-all process but contains little guidance for managers – yes, there are the Scrum Masters, but are they always the decision makers?

There seem to be a lot of benefits for organisations adopting the approach and I intend to study it a bit more and find out what other people within my organisation know about it and whether any of its principles can be adopted by us.

What I found interesting was the way the Features, Quality, Time, Cost triangle is turned on its head.  In a traditional environment Features and, to a certain extent, Quality are fixed and the Time and Cost elements are flexible.  With DSDM Atern, Time, Cost and Quality are all fixed and the Features to be implemented are flexible.

I found myself agreeing with James Windle’s comment at the end: it was one of the best talks I had heard on agile methodologies and the difficulties that must be overcome.  I am just sorry that there is neither the time nor the space to put a lot of detail about the talk in this blog post.

The excellent SIGIST lunch followed this talk and, as usual, it was great to network with other testers and see the tools and services exhibition.

Lessons From Data Warehouse Testing

The short talk after lunch was interesting: Peter Morgan shared his experiences of testing data warehouse applications.

The New Role of The Tester: Becoming Agile

Stuart Taylor was next up with an inspirational experience report of how his organisation made the move to an agile process.

Wholesale changes were made to the working environment (even moving from curved desks to straight desks arranged so that paired working was easier and people could talk across the table); testers were involved throughout the design, development and delivery processes; and as much as possible was automated in Java using test-driven development techniques, which allowed the dedicated testers to get on with manual Exploratory Testing (the stuff we all love to do).

As a result of moving to an agile process they have seen improvements in the quality of their software and in their responsiveness to changes in business need, and there is much more negotiation over schedules.

How to Suspend Testing and Still Succeed – A True Story

Graham Thomas gave an account of his experiences when testing had to be suspended.  Testing was suspended on this project because there was no way anything was going to be delivered the way things were working at the time.  The biggest problems were with the automation infrastructure and with a systems integration risk which had been accepted at an earlier stage in the development process.

Initially progress was made: Systems Integration testing was successfully completed (or at least as it was scoped in the 50-page test strategy) but testing was then held up by slippage in code delivery from development, issues with the test automation infrastructure and a qualified exit on non-functional proving of the infrastructure (which also took 250% more time than it should have done).

Eight weeks into a 12-week schedule it was estimated that, at the current rate of progress, UAT was going to take over a year to complete, and many of the issues being found were to do with the automation infrastructure and product configuration – i.e. the systems integration risk had matured.

They held a series of workshops with all stakeholders to get an idea of what was wrong and plan a resolution that would allow a resumption of testing.  Graham pointed out that it is very difficult to set effective Resumption Requirements without knowing the criteria by which testing was suspended.  It was also difficult to set Suspension Criteria without knowing what was going wrong.  This is at variance with IEEE 829 but, when you think about it, it is rather obvious!

So the remainder of the project was re-planned – bearing in mind that the go-live date was non-negotiable due to regulatory constraints – and amazingly the Resumption Requirements were met on time at the end of 4 weeks.

A daily war room meeting was set up at 13:00, at which attendance was mandatory; only the decision-makers and those directly grinding out the work to achieve the project’s aims were permitted at these meetings.

Graham made it all sound very easy but it was clear that it was a very painful process which caused a lot of heartache and irretrievable breakdowns in the professional relationships between people.

Graham’s talk was fascinating and gave a real insight into Suspension Criteria and Resumption Requirements and the effects that suspension can have on a project.

UAT: A Game for Three Players

The final keynote talk of the afternoon was “Acceptance Testing: A Game for Three Players” by James Windle.  This was another excellent talk in which James gave us a run-down on how he approaches UAT going right back to the definition of the Acceptance Test Criteria.  Whilst there was nothing really ‘new’ about James’ talk it served as a very helpful reminder of this critical part of testing.

The day ended, as usual, at the Volunteer on Baker Street enjoying further networking with testers.

A big thank you to the SIGIST committee for organising the event once again and congratulations to Graham Thomas and Mohinder Khosla on their respective appointments as Programme Secretary and Secretary of SIGIST.

European Weekend Testing – 31 July 2010

31 July, 2010

The 29th European Weekend Testers session was very enjoyable.  We were testing an infuriating application on the website which is supposed to allow you to generate ideas and show connections between them.  To me it was very similar to mind mapping but with the added twist of incorporating social networking as well.

We had a great discussion afterwards which went slightly off-track, talking about testing conferences and the Software Testing Club (linked on the right).  What I think is great about Weekend Testing is that it brings together testers – and today a wannabe-tester – with a common purpose to test an application, and we can all learn from each other and share our knowledge and experience.

Today’s chat transcript has been posted up at  It is well worth a read through.

I highly recommend that testers get involved with Weekend Testing if they have the time because it is a very rewarding couple of hours.  To get involved all you need to do is ping EuropeTesters on Skype at about 15:30 UTC on a Saturday.  If you are new it is a good idea to let the facilitators know you intend to join either by e-mail or tweet @europetesters.