Archive for July, 2010

European Weekend Testing – 31 July 2010

31 July, 2010

The 29th European Weekend Testers session was very enjoyable.  We were testing an infuriating web application which is supposed to allow you to generate ideas and show connections between them.  To me it was very similar to mind mapping, but with the added twist of incorporating social networking as well.

We had a great discussion afterwards which went slightly off-track, talking about testing conferences and the Software Testing Club (linked on the right).  What I think is great about Weekend Testing is that it joins together testers – and today a wannabe-tester – with a common purpose to test an application, and we can all learn from each other and share our knowledge and experience.

Today’s chat transcript has been posted and is well worth a read through.

I highly recommend that testers get involved with Weekend Testing if they have the time because it is a very rewarding couple of hours.  To get involved all you need to do is ping EuropeTesters on Skype at about 15:30 UTC on a Saturday.  If you are new it is a good idea to let the facilitators know you intend to join either by e-mail or tweet @europetesters.


UK Test Managers’ Forum – 28 July 2010

28 July, 2010

The 26th Test Managers’ Forum was held this afternoon at Balls Brothers Minster Court in London EC3 and as before was a really good afternoon.  It was good to catch up with testers who I have met on previous occasions and at other gatherings and exchange knowledge.

As usual there were six sessions on the agenda.  The first session I attended, run by Jonathan Pearson from Original Software, was entitled “10 Black Holes to Avoid for a Successful Product Delivery” and was illustrated with examples from Star Wars.

The black holes we are to avoid are as follows:

  • Walking before you can crawl – before contemplating releasing a product we need to understand when we are finished, but we also need to avoid getting into a never-ending journey.  Jonathan asserted that there is a need for an Application Lifecycle Management (ALM) Strategy including a robust Test Strategy.  A centralised collaboration platform can give information about the progress of projects, which helps inform the decision-making process.  Early involvement of QA in requirements and business rule reviews was encouraged, as was automation where possible – particularly of regression tests.
  • Quality Assurance as a silo – this was an interesting one for me.  At what level in our organisations does testing have an influence?  I am very fortunate in that I have support at board level for testing and quality assurance but there is also a case from a reporting perspective that it can be better to have a reporting line into the business side of the organisation to aid decision making.
  • Lack of organisation – avoiding this requires tidiness (there is a need for centralised information); knowledge needs to be shared; and we should aim to reuse wherever possible, including test documents, data and environments.
  • Lack of control – the main point emphasised here was that avoiding a lack of control depends on taking care to address the previous three points.  Without them there is a danger of losing control.
  • Lack of visibility and out of date information – this section focussed on Application Quality Management (AQM) techniques, and Jonathan asserted that there are a number of metrics that are essential to understanding how well things are going in a project.  Metrics and ‘bean-counting’ are something that I am not really sold on as far as value is concerned, because so much time can be spent gathering metrics that the task of testing is overlooked.  I also worry that the metrics give something that can be grabbed hold of without an understanding of the context in which those figures were gathered, and thus lead – inadvertently sometimes – to poor decisions being reached.  Tools like Concerto and Sonar were suggested as ways of gathering data from projects.
  • Unnecessary rework – examples of wastage in this area were suggested, including project outlines and test data.  We should seek to minimise the time we spend rebuilding test environments and test data.  It was suggested that we consider configuration management for test environments, and that an aim of regression testing could be to reach 100% automation.  I think we need to be very careful with the latter because we can easily get carried away with automation even when it is inappropriate in the context in which we work.
  • Hindering collaboration with overly technical tools – this was illustrated with the Keep It Simple, Stupid (KISS) mnemonic.  It was recommended that we should aim for:
    • Central organisation
    • No coding
    • Flexibility
    • Scalability

We should avoid:

  • Technical expertise barriers
  • High maintenance processes
  • Use of disparate tools because these could increase complexity.
  • Imposition of methodology – for example using a tool or technique that ties you into a V-model development method or mandates that you only follow Agile methods.
  • Lack of cross-project visibility – the main point of this was visibility at an organisational level.
  • Wasting knowledge and time – the encouragement was to share knowledge as much as possible.

During the talk there was good discussion amongst the group.  As always with sessions such as this, it is great to get the reassurance that the vast majority of testers are working in the same way as you are and facing the same problems.  Sometimes, though, issues are flagged up which are truly mind-blowing.  One such instance arose during this talk and centred on the ability to roll back a test environment or roll back test data to a consistent state.  I have used VMware products for some time now and don’t really know how I could survive without the snapshotting facility.  It therefore amazed me that such a high proportion of testers do not seem to use such techniques.  I hope that they have some other way of achieving the same effect!

The second talk I went to was by James Wilson from Secerno, entitled “Testing in an Agile with Scrum environment”, which discussed difficulties associated with testing in such an environment.  It was a very lively session, as many of the points would be equally valid with any development cycle or project management technique.

One particular area of concern was a chart plotting quality, time and cost: it was asserted that because time and cost are fixed in a sprint on an agile project, the only thing that can move is quality, so the quality of the final product is likely to suffer.  James viewed the scope of changes made and the scope of testing as part of quality in this argument.  It was pointed out that in an agile environment, ‘quality’ is everyone’s ‘problem’ as such.

There was also quite a bit of discussion on what constitutes ‘release quality’ and how that meaning can change during the lifetime of a project.  It was great to listen to the ideas and suggestions being put forward by other testing practitioners in this regard.

For example, there were three areas of concern for James: soak testing, stress testing and regression testing.  There was a lot of discussion about soak testing and stress testing, and where in the cycle long-running tests like these should sit.  One approach suggested was performing such tests outside the sprint cycle altogether: accept that a soak test is going to take three or four weeks – for example – to give meaningful results, so perhaps run it as a separate project in parallel to the main one developing the application.  It was also suggested that sometimes it is just as valid to run these sorts of tests after the software has gone live – but be careful to make sure the risks of doing this have been accepted.

Unfortunately James did not have a chance to get onto regression testing but it was a great talk nonetheless and I gained a lot from listening to the discussions around the points he raised.

After the talks finished as usual we went upstairs for the traditional networking session.  I always find this very valuable and enjoyed meeting up with people again.

A big thank you to Paul Gerrard for organising the afternoon.  Rob Lambert has also blogged about the event (with photos – eeek!).

European Weekend Testing – 24 July 2010

25 July, 2010

I participated in my second European Weekend Testing session yesterday (Saturday 24 July 2010, EWT 28), where we were looking at teamwork – in particular working in a paired testing scenario – using the Fantastic Contraption game as the System Under Test.

For me the session highlighted how useful it is to be able to discuss strategies, techniques and approaches with others and is something I will take into work with me tomorrow morning.  During the discussion afterwards – the most valuable part for me at least – it became apparent that even though many of us do not pair formally with other testers and/or developers, we do on an informal basis and still derive much benefit from the exercise.

Although I did not get very far with the actual game, I still found the experience of working with other testers enjoyable.  I would encourage others – especially if you are the sole ‘tester’ in your organisation – to consider getting involved with Weekend Testing.  If nothing else it helps us learn from and encourage other craftspeople whilst sharing knowledge and experience for the good of the community.

Many thanks to Markus and Anna for facilitating the session and to the other testers who joined us from around the world for a great learning experience.  Commitments permitting, I hope to join you next week.

Testing Lessons from England’s Courthouses

18 July, 2010

I had an interesting day on Friday (16 July).  I decided to stay on in London for an extra day after the London Testers Gathering the previous evening and was glad I did.  During the course of the day I learned much that can be applied to the software testing craft.

I have always been interested in the law but it is many years since I sat in a Courthouse listening to cases.  I took the opportunity on Friday morning to visit the Old Bailey (the Central Criminal Court in London, famous because of the number of notable cases brought before it).

In the first case I sat in on, the Counsel for the Prosecution was summing up for the jury.  Note-taking would prove vital for recalling the facts when the jury comes to deliberate its verdict and it was encouraging to see the number of people jotting notes.  I think if I were sat on that jury I, too, would have needed to take copious notes to aid concentration to counter the dry monotones being used by the barrister!

The second case was also very instructive for me as a tester.  An expert witness was being cross-examined by the Counsel for the Defence but the answers being given were unclear.  I was fascinated listening to the way the barrister dealt with this.  He kept rephrasing the same question but probing slightly different angles.  I was reminded of the persistence with which we must explore the questions we try to answer by testing.  Do we just accept an answer which does not quite fit or do we explore other avenues of enquiry to understand what we have observed?

After the expert witness was allowed to stand down, a further witness was called.  The questioning style was altered to suit the witness and less technical language was used in the phrasing of the questions.  When we are testing, do we go for a ‘one size fits all’ approach and test everything in the same way, or are we careful to tailor our approach to the situation?  I trust that for all of us it is the latter.

I then spent an enjoyable afternoon at the Royal Courts of Justice where the higher Courts of the legal system in England and Wales sit.  I went into three cases there and it was interesting being reminded again of how persistent we must be to get to the bottom of some of the questions we need to answer by testing.  In one of the cases brought before the Court of Appeal an adjournment was being sought because new ways of interpreting and dealing with a piece of evidence had come to light.  Are we careful to re-consider our approaches to testing in the light of new ideas and evidence?

Whilst it is very good to read testing literature extensively it is also good to explore other disciplines and see what we can learn from them to help us in our day-to-day testing.  For me my day in London’s Courthouses was very educational and I will look forward to visiting again some time.

Testers’ Communication Revisited – Part 2: Context

9 July, 2010

In this post I continue my series, following on from a recent Dilbert storyline which got me thinking again about communication in the workplace.

In the previous post I talked about the Purpose and Audience for our communications; in this instalment I discuss the third aspect, Context, and how it relates to our job as testers.

The context in which we communicate – and that in which our message is received – is vital to reduce the risk of misunderstanding.

I believe we need to consider the following about the context of our communications:

  • Environment
  • Availability of resources
  • Timing


Environment

We are living in a world where we are increasingly expected to be available constantly.  This can lead to many problems.

The advent of the mobile phone and BlackBerry devices means that we are increasingly likely to be communicating when we are on the move – when we cannot fully concentrate on the conversation.  Our judgement may be clouded subconsciously by the fact that we have only a few minutes to make a connection (for example), or we might suffer the effects of a poor-quality connection to the mobile phone network and lose the odd word here and there.

We need to be aware of our hearers being in situations where their concentration may not be fully on the conversation with us.  It might be a good idea to arrange to call back at a more convenient time if the matter is of critical importance.  We need to be aware ourselves when we are in situations where our judgement may be impaired.  It’s great that we can be available for work when we are on the train but is the train really the best place to work?

For some people, it is easier to concentrate when there is a bit of music on in the background but for others this would be a distraction.  We need to work out the most effective environment for ourselves and those with whom we are communicating.

Availability of Resources

We often see conversations around the coffee machine in Dilbert strips.  I love the fact that Wally is asked about revised budget estimates when he is at the coffee machine: it is a place where he is unlikely to be able to produce the figures, he is likely to have forgotten all about them by the time he gets back to his desk, and the enquirer is left none the wiser.

It astonishes me how distracting the simple act of rifling through a stack of papers or hunting for a pen can be.  It is really easy to lose one’s train of thought simply because everything needed for the conversation to go well is not to hand.

Similarly a great deal of time can be wasted in meetings by not having all the information needed to make a decision.  How well do we, as testers, prepare for meetings?  Do we make sure we have read and understood any documents that are up for discussion?  We will gain respect and a reputation for thoroughness if we arrive at a meeting able to contribute intelligently to the conversation.


Timing

Sometimes it really is neither the time nor the place to have a particular conversation.  If either or both parties are stressed or having a bad day, things can be said – or written – which do not convey the right message.  Fortunately with e-mail and other written communication we can re-read before sending the message, but are we sensitive enough to know when we may be writing something we will later regret?  Do we recognise how our listeners/readers might receive and interpret our message?  The luxury of stopping and thinking is not necessarily there for us once we start speaking our mind, though!

Another issue is recognising when it is appropriate to raise an issue.  It might be that a decision has been made that unless there is a defect which stops the product from working at all, we will be shipping the next day.  Once this is known it is questionable whether a big issue should be made of what will be perceived as less significant problems.  It might be better to raise them as issues in the bug tracker with enough notes to enable suitable prioritisation to take place later.

In future posts I will discuss other aspects of communication but I hope this has whetted your appetite for looking into this fascinating area a bit more.