UK Test Managers’ Forum – 28 July 2010

28 July, 2010

The 26th Test Managers’ Forum was held this afternoon at Balls Brothers, Minster Court, in London EC3 and, as before, it was a really good afternoon.  It was good to catch up with testers whom I have met on previous occasions and at other gatherings, and to exchange knowledge.

As usual there were six sessions on the agenda.  The first session I attended was run by Jonathan Pearson from Original Software; it was entitled “10 Black Holes to avoid for a successful product delivery” and was illustrated with examples from Star Wars.

The black holes we are to avoid are as follows:

  • Walking before you can crawl – before contemplating releasing a product we need to understand when we are finished, but we also need to avoid getting into a never-ending journey.  Jonathan asserted that there is a need for an Application Lifecycle Management (ALM) Strategy, including a robust Test Strategy.  A centralised collaboration platform can give information about the progress of projects, which helps inform the decision-making process.  Early involvement of QA in requirements and business rule reviews was encouraged, as was automation where possible – particularly of regression tests.
  • Quality Assurance as a silo – this was an interesting one for me.  At what level in our organisations does testing have an influence?  I am very fortunate in that I have support at board level for testing and quality assurance but there is also a case from a reporting perspective that it can be better to have a reporting line into the business side of the organisation to aid decision making.
  • Lack of organisation – avoiding this requires tidiness (there is a need for centralised information); knowledge needs to be shared; and we should aim to reuse wherever possible, including test documents, data and environments.
  • Lack of control – the main point emphasised here was that avoiding this black hole depends on taking care to address the previous three.  Without them there is a danger of a lack of control.
  • Lack of visibility and out-of-date information – this section focussed on Application Quality Management (AQM) techniques, and Jonathan asserted that there are a number of metrics that are essential to understanding how well things are going in a project.  Metrics and ‘beancounting’ are something that I am not really sold on as far as value is concerned, because I feel that so much time can be spent gathering metrics that the task of testing is overlooked.  I also worry that metrics give something that can be grabbed hold of without an understanding of the context in which those figures were gathered, and thus lead – inadvertently sometimes – to poor decisions being reached.  Tools like Concerto and Sonar were suggested as ways of gathering data from projects.
  • Unnecessary rework – examples of wastage in this area were suggested, including project outlines and test data.  We should seek to minimise the time we spend rebuilding test environments and test data.  It was suggested that we consider configuration management for test environments, and that an aim of regression testing could be to go to 100% automation.  I think we need to be very careful with the latter because we can easily get carried away with automation, even when it is inappropriate in the context in which we work.
  • Hindering collaboration with overly technical tools – this was illustrated with the Keep It Simple, Stupid (KISS) mnemonic.  It was recommended that we should aim for:
    • Central organisation
    • No coding
    • Flexibility
    • Scalability

We should avoid:

  • Technical expertise barriers
  • High maintenance processes
  • Use of disparate tools – these could increase complexity.
  • Imposition of methodology – for example using a tool or technique that ties you into a V-model development method or mandates that you only follow Agile methods.
  • Lack of cross-project visibility – the main point of this was visibility at an organisational level
  • Wasting knowledge and time – the encouragement was to share knowledge as much as possible.

During the talk there was good discussion amongst the group.  As always with sessions such as this, it is great to get the reassurance that the vast majority of testers are working in the same way as you are and facing the same problems.  Sometimes, though, issues are flagged up which are truly mind-blowing.  One such instance arose during this talk and centred on the ability to roll back a test environment or roll back test data to a consistent state.  I have used VMware products for some time now and don’t really know how I could survive without the snapshotting facility.  It therefore amazed me that such a high proportion of testers do not seem to use such techniques.  I hope that they have some other way of achieving the same effect!

The second talk I went to was by James Wilson from Secerno, entitled “Testing in an Agile with Scrum environment”, which discussed the difficulties associated with testing in such an environment.  It was a very lively session, as many of the points would be equally valid with any development cycle or project management technique.

One particular area of concern was a chart with quality, time and cost: it was asserted that because time and cost are fixed in a sprint in an agile project, the only thing that can move is quality, and therefore the quality of the final product is likely to suffer.  James viewed the scope of changes made and the scope of testing as part of quality in this argument.  It was pointed out that in an agile environment, ‘quality’ is everyone’s ‘problem’ as such.

There was also quite a bit of discussion on what constitutes ‘release quality’ and how that meaning can change during the lifetime of a project.  It was great to listen to the ideas and suggestions being put forward by other testing practitioners in this regard.

For example, there were three areas of concern for James: soak testing, stress testing and regression testing.  There was a lot of discussion about soak testing and stress testing, and where in the cycle long-running tests like these should sit.  One approach suggested was performing such tests outside of a sprint cycle altogether: accept that a soak test is going to take three or four weeks – for example – to give meaningful results, so perhaps run it as a separate project in parallel to the main one for developing the application.  It was also suggested that sometimes it is just as valid to run these sorts of tests after the software has gone live – but be careful to make sure the risks of doing this have been accepted.

Unfortunately James did not have a chance to get onto regression testing but it was a great talk nonetheless and I gained a lot from listening to the discussions around the points he raised.

After the talks finished as usual we went upstairs for the traditional networking session.  I always find this very valuable and enjoyed meeting up with people again.

A big thank you to Paul Gerrard for organising the afternoon.  Rob Lambert has also blogged about the event (with photos – eeek!).


European Weekend Testing 24 July 2010

25 July, 2010

I participated in my second European Weekend Testing session yesterday (Saturday 24 July 2010, EWT 28) where we were looking at teamwork – in particular working in a paired testing scenario – using the Fantastic Contraption game as the System Under Test.

For me the session highlighted how useful it is to be able to discuss strategies, techniques and approaches with others, and that is something I will take into work with me tomorrow morning.  During the discussion afterwards – the most valuable part for me, at least – it became apparent that even though many of us do not pair formally with other testers and/or developers, we do so on an informal basis and still derive much benefit from the exercise.

Although I did not get very far with the actual game, I still found the experience of working with other testers enjoyable.  I would encourage others – especially if you are the sole ‘tester’ in your organisation – to consider getting involved with Weekend Testing.  If nothing else it helps us learn from and encourage other craftspeople whilst sharing knowledge and experience for the good of the community.

Many thanks to Markus and Anna for facilitating the session and to the other testers who joined us from around the world for a great learning experience.  Commitments permitting, I hope to join you next week.

Testing Lessons from England’s Courthouses

18 July, 2010

I had an interesting day on Friday (16 July).  I decided to stay on in London for an extra day after the London Testers Gathering the previous evening and was glad I did.  During the course of the day I learned much that can be applied to the software testing craft.

I have always been interested in the law but it is many years since I sat in a Courthouse listening to cases.  I took the opportunity on Friday morning to visit the Old Bailey (the Central Criminal Court in London, famous because of the number of notable cases brought before it).

In the first case I sat in on, the Counsel for the Prosecution was summing up for the jury.  Note-taking would prove vital for recalling the facts when the jury came to deliberate its verdict, and it was encouraging to see the number of people jotting notes.  I think if I had been sat on that jury I, too, would have needed to take copious notes to aid concentration and to counter the dry monotone being used by the barrister!

The second case was also very instructive for me as a tester.  An expert witness was being cross-examined by the Counsel for the Defence but the answers being given were unclear.  I was fascinated listening to the way the barrister dealt with this.  He kept rephrasing the same question but probing slightly different angles.  I was reminded of the persistence with which we must explore the questions we try to answer by testing.  Do we just accept an answer which does not quite fit or do we explore other avenues of enquiry to understand what we have observed?

After the expert witness was allowed to stand down from the dock, a further witness was called.  The questioning style was altered to suit the witness, and less technical language was used in the phrasing of the questions.  When we are testing, do we go for a ‘one size fits all’ approach and test everything in the same way, or are we careful to tailor our approach to the situation?  I trust that for all of us it is the latter.

I then spent an enjoyable afternoon at the Royal Courts of Justice where the higher Courts of the legal system in England and Wales sit.  I went into three cases there and it was interesting being reminded again of how persistent we must be to get to the bottom of some of the questions we need to answer by testing.  In one of the cases brought before the Court of Appeal an adjournment was being sought because new ways of interpreting and dealing with a piece of evidence had come to light.  Are we careful to re-consider our approaches to testing in the light of new ideas and evidence?

Whilst it is very good to read testing literature extensively, it is also good to explore other disciplines and see what we can learn from them to help us in our day-to-day testing.  For me, my day in London’s Courthouses was very educational and I look forward to visiting again some time.

Testers’ Communication Revisited – Part 2: Context

9 July, 2010

In this post I continue my series following on from a recent Dilbert storyline which got me thinking again about communication in the workplace.

In the previous post I talked about the Purpose and Audience for our communications; in this instalment I discuss the third aspect, Context, and how it relates to our job as testers.

The context in which we communicate – and that in which our message is received – is vital to reduce the risk of misunderstanding.

I believe we need to consider the following about the context of our communications:

  • Environment
  • Availability of resources
  • Timing


Environment

We are living in a world where we are increasingly expected to be available constantly.  This can lead to many problems.

The advent of the mobile phone and BlackBerry devices means that we are increasingly likely to be communicating when we are on the move – when we cannot fully concentrate on the conversation.  Our judgement may be clouded subconsciously by the fact that we have only a few minutes to make a connection (for example), or we might suffer the effects of a poor-quality connection to the mobile phone network and lose the odd word here and there.

We need to be aware of our hearers being in situations where their concentration may not be fully on the conversation with us.  It might be a good idea to arrange to call back at a more convenient time if the matter is of critical importance.  We need to be aware ourselves when we are in situations where our judgement may be impaired.  It’s great that we can be available for work when we are on the train but is the train really the best place to work?

For some people, it is easier to concentrate when there is a bit of music on in the background but for others this would be a distraction.  We need to work out the most effective environment for ourselves and those with whom we are communicating.

Availability of Resources

We often see conversations around the coffee machine in Dilbert strips.  I love the fact that Wally is asked about revised budget estimates when he is at the coffee machine: it is a place where he is unlikely to be able to produce the figures, he is likely to forget all about them by the time he gets back to his desk, and the enquirer is left none the wiser.

It astonishes me how distracting the simple act of rifling through a stack of papers or hunting for a pen can be.  It is really easy to lose one’s train of thought simply because everything needed for the conversation to go well is not to hand.

Similarly a great deal of time can be wasted in meetings by not having all the information needed to make a decision.  How well do we, as testers, prepare for meetings?  Do we make sure we have read and understood any documents that are up for discussion?  We will gain respect and a reputation for thoroughness if we arrive at a meeting able to contribute intelligently to the conversation.


Timing

Sometimes it really is neither the time nor the place to have a particular conversation.  If either or both parties are stressed or having a bad day, things can be said – or written – which do not convey the right message.  Fortunately with e-mail and written communication we can re-read before sending the message, but are we sensitive enough to know that we may be writing something we later regret?  Do we recognise how our listeners/readers might receive and interpret our message?  The luxury of stopping and thinking is not necessarily there for us once we start speaking our mind though!

Another issue is recognising when it is appropriate to raise an issue.  It might be that a decision has been made that unless there is a defect which stops the product from working at all, we will be shipping the next day.  Once this is known it is questionable whether a big issue should be made of what will be perceived as less significant problems.  It might be better to raise them as issues in the bug tracker with enough notes to enable suitable prioritisation to take place later.

In future posts I will discuss other aspects of communication but I hope this has whetted your appetite for looking into this fascinating area a bit more.

Testers’ Communication Revisited

26 June, 2010

For various reasons I have abandoned a series of posts I had been planning about my daily work, hence all has been quiet here of late.  However, a storyline this week on Dilbert has got me thinking again about communication in the workplace.

Clearly Wally does understand what is being asked of him but he pretends to have misinterpreted what was said.  It got me thinking again about testers reporting defects to programmers and/or the business – are we being as clear as we need to be in the ways we express ourselves?  Conversely do we, testers, listen and properly understand the feedback we are being given?

Very often testers form the middle ground between development and the business, and it is crucial that we are able to deliver our message appropriately.  Rob Lambert blogged last year about three of the crucial things to be aware of when we are communicating – Purpose, Audience and Context – and their relationship to testing.  I believe it is worth revisiting this topic because changing communications media – and attitudes towards them – open up new possibilities and challenges that we need to be aware of.  In this post I take a brief look at ‘Purpose’ before focussing on ‘Audience’.  ‘Context’, I believe, is so vital that it deserves a post of its own.


Purpose

As testers we often have to inform different stakeholders about issues so they can make decisions based on that information.  We also need to learn about the System under Test (interestingly, we can also summarise some of the things we are trying to learn as the system’s Purpose, its Audience and the Context in which it will be used).


Audience

Our audience can be diverse: ranging from programmers and systems analysts through to the end users of the system.  Each group in the audience has different needs.  If we use the same language in talking to the end users as we do to the programmers, we may find that misunderstandings arise, and we may also alienate a valuable source of information for our own education.

We must also be aware when we are being spoken to by highly technical people that they will use terminology that an end user might not.  Thus when the same word is used by two different groups of people it might just mean something different.  A typical example of this is when a new or emerging piece of technology is being used.  There is tremendous scope for confusion as people become familiar with the new technology.  Do we really understand the message we are being given?

When talking to people face-to-face we need to be on the watch for body language and other visual and audible clues as to how our message is being received.  If our audience looks bored – they probably are!  Albert Mehrabian did a study on this and found that only 7% of the impact of our face-to-face communication comes from the words we use.  With e-mail – and this blog – the words used are the only means we have to express our meaning.

Our audience can also give us valuable clues, through their body language, as to whether or not they are the right audience to receive our message.  We can avoid wasting a lot of time if we get the appropriate audience right at the beginning.

Next time, I will post some thoughts on Context…  In the meantime, your thoughts and comments would be most welcome.

European Weekend Testing 1 May 2010

1 May, 2010

In another first, I have participated in my first European Weekend Testing session.  Unfortunately I had to leave part way through the discussion after testing finished, but I saved off the chat transcript to read through later.

We were testing a Flash application which, to all appearances, was a simple tool for creating a barcode using personal data.  Our mission was to find the highest-value barcode possible.  You get to the value associated with the barcode by clicking on ‘Scan’ once the final barcode is displayed.

Once we got started I made the classic schoolboy mistake of not searching for documentation about how the application works, and only noticed late on in the session that the FAQ went into quite a bit of detail.  As a result I could not work out how the pricing was calculated.  Lesson number 1: you need to establish what constitutes your test basis – and that should include documentation – before you can test efficiently.  This is something that we all know deep down, but we need reminding of it sometimes.

The hour’s testing time seemed to go by really quickly, but it was amazing how much ground could be covered during that time as a group.  The importance of being structured in the approach to testing – even of a ‘simple’ application – came through for me very early on in the testing run, when I was establishing the structure of a barcode.  I drew up a table on a piece of paper and filled it in with the parameter values I was using as I went along.  As I found patterns emerging, I highlighted them with a highlighter pen.  Lesson number 2: record what you do – the form doesn’t matter – it saves a lot of time and racking of brains later.
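The same note-taking can be done just as easily in a few lines of code as on paper.  The sketch below is one lightweight way of logging each parameter set and observation to a CSV file as you go; the file name, column names and example values are purely illustrative, not anything from the session itself.

```python
import csv
from datetime import datetime, timezone

LOG_FILE = "test_log.csv"  # illustrative name – any per-session file will do
FIELDS = ["timestamp", "parameters", "observed", "notes"]

def record(parameters, observed, notes=""):
    """Append one test observation to the session log."""
    with open(LOG_FILE, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # brand-new file: write the header row first
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "parameters": parameters,
            "observed": observed,
            "notes": notes,
        })

# Hypothetical session notes – the values are made up for illustration:
record("length=12, all digits", "barcode accepted", "price unchanged")
record("length=13, all digits", "barcode rejected", "possible boundary")
```

The pay-off comes afterwards: the log can be sorted and filtered, which makes the patterns I was hunting for with a highlighter pen easier to spot.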

Unfortunately I do not have any tools at home to examine the source code behind the Flash movie so I could not do any static analysis to determine how the barcode and pricing had been implemented.  Lesson number 3:  if you have got the tools, use them to make your life easier.

I enjoyed the experience of investigating and exploring an application unlike any that I normally come across in my day-to-day work.  Reading through the chat transcript, I saw that the conversation had moved on to the delights of TMAP, with none other than Michael Bolton airing his views!  It reminded me of my first SIGiST, where Michael warned us all of the danger of introducing too much ‘process’ to our testing.

I look forward to the next time I get a chance to attend a Weekend Testing Event.  With thanks to Anna Baik for encouraging me to join a session, Thomas Ponnet and all the other contributors for the welcome and opportunity to learn from others this afternoon.

UK Test Managers’ Forum – 28 April 2010

29 April, 2010

My first visit to the Test Managers’ Forum on 28 April was a positive experience for me.

There were six sessions on the programme with three running concurrently either side of a break.  The project work I am involved with at the moment involves a requirement to measure the performance of our systems under load so I went to the two load testing talks.  The first of these was entitled “Effective Load Testers. Who are they? How are they created?” and was presented by Gordon McKeown from Facilita.

We were a small group and the session quickly became a discussion with lots of viewpoints put forward.  The main points from my perspective were:

  • Load testing is a highly skilled, but wide ranging, aspect of testing.  The wider environment in which the system operates must be understood and its impact needs to be considered.  Examples of the things to consider included:
    • the effect of changes to network infrastructure;
    • the physical processing power of the computers running tests (CPU, memory, hard disk speed, etc.); and
    • the operating system and software configuration on the system under test.
  • There are essentially four parts to load testing and the skills needed are rarely found in one person:
    • planning the test strategy;
    • writing and executing the test scripts;
    • analysing the data coming out of the test; and
    • closure activities such as reporting the results.
  • At the analysis stage it is very easy to become bogged down by too many numbers being thrown at you.  A wealth of experience in interpreting the numbers is crucial.
  • Good load testers often start from a development background or some other specialism and then move into testing once they have acquired a great deal of experience.  Others’ experience suggested that it is impractical for a new starter straight from university to move directly into load testing, because they do not have the wide experience necessary to understand what they are seeing.
  • People moving into load testing need to have the right mindset to be successful.  They need to have an enquiring mind and know how to ask the right questions to get a feel for the risks they are trying to address.
  • A good way of getting into load testing is to get experience with the performance monitoring tools built into Windows – for example Perfmon – and gain an understanding of what the different counters mean and what effect different applications and/or processes have on those counters.  They should then seek experience in a company with a good training philosophy to build up the requisite knowledge of their chosen subject area – be it hardware diagnostics, network infrastructure or plain old development.  From there they will build up the experience needed to get into effective load testing.

After the break we met up, again as a small group, for discussions under the heading of “Performance management: from black art to process”.  This discussion was facilitated by Peter Holditch from Dynatrace Software.  Dynatrace produce an application of the same name which allows load testers to carry out end-to-end analysis of the path transactions take through their systems.

One of the main benefits of software like this is that it enables testers to see where the bottlenecks are in fine granularity – for example they can see whether there is a hold up in one particular server – and can drill down to see the individual services affected.

As before, there was a lot of debate about the process supported by Dynatrace’s software.  It was emphasised that care would need to be taken to avoid information overload.  This is, after all, a tool to help the load tester do his/her job.  The skill is in knowing which counters to start with and how to interpret them.  A starter for ten was suggested:

  • Memory usage
  • CPU utilisation
  • Network traffic
  • Disk queue

From here, testers could identify further avenues of exploration to home in on discrepancies.
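Mechanically, that “starter for ten” amounts to sampling a handful of counters at intervals and then summarising each series so that spikes stand out.  The sketch below shows that shape in Python; the counter reader here is a stand-in with made-up numbers (a real one would poll Perfmon, psutil or whatever the load testing tool exposes), so only the structure, not the figures, is meaningful.

```python
import statistics
from typing import Callable, Dict, List

def sample_counters(read_counters: Callable[[], Dict[str, float]],
                    samples: int) -> Dict[str, List[float]]:
    """Collect `samples` readings of each counter into a per-counter series."""
    series: Dict[str, List[float]] = {}
    for _ in range(samples):  # in real use, sleep between readings
        for name, value in read_counters().items():
            series.setdefault(name, []).append(value)
    return series

def summarise(series: Dict[str, List[float]]) -> Dict[str, Dict[str, float]]:
    """Reduce each series to mean and max – a first cut for spotting outliers."""
    return {name: {"mean": statistics.mean(vals), "max": max(vals)}
            for name, vals in series.items()}

# Stand-in reader returning illustrative values for three of the counters:
fake_readings = iter([
    {"cpu_pct": 40.0, "mem_pct": 61.0, "disk_queue": 0.0},
    {"cpu_pct": 95.0, "mem_pct": 62.0, "disk_queue": 4.0},
    {"cpu_pct": 42.0, "mem_pct": 61.5, "disk_queue": 0.0},
])
summary = summarise(sample_counters(lambda: next(fake_readings), samples=3))
print(summary["cpu_pct"])  # a max well above the mean flags a spike to drill into
```

The point of the summary step is exactly the information-overload concern raised in the session: start from a few reduced figures, then go back to the raw series only for the counters that look suspicious.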

I could see a great deal of benefit from having such software as part of a wider strategy to understand the performance quality attribute of our software.  It would be of limited use for me personally because I simply do not have the detailed knowledge of all the innards of our network infrastructure and computers to make good use of it.  It is something that I am slowly but surely learning, though, and I hope to become proficient enough to make use of such a tool.

After the talks we adjourned to the bar area upstairs where a tab had been set up and the networking continued.

I thoroughly enjoyed the afternoon and feel that I benefitted greatly from going.  I hope to make this a regular fixture in my calendar along with the SIGiST.  The TMF is a different type of event altogether to the SIGiST, aimed primarily at test managers and experienced practitioners.  The format is much more geared towards networking and discussions, whereas the SIGiST tends to be that bit more formal – at least that is the impression I got from this time round!

For any test managers who have not been, I highly recommend attendance.  More information about the forum is available online, and I understand the slides from all the talks will be made available later on.

Thank you very much to all those who facilitated and especially to Paul Gerrard from Gerrard Consulting who organised the whole thing.

Usability and Process Control

23 April, 2010

A blog post by James Christie has prompted me to post a few thoughts about recent experiences I have had where illogical (at least to the user) processes have apparently gone unchallenged.

The Utility Company

I received a letter one morning from my electricity and gas supplier to express their sorrow that I was leaving them – but I was not intending to leave them at all!  I contacted the company and explained the situation.  The operator looked up my account details and said that there was no record of me leaving them.

A couple of months later I attempted to log on to my online account management system but I was unable to do so because I no longer had an account with the company.  I contacted the helpdesk again and was told that another utility company was supplying my electricity (not my gas – that was still with them).

What followed was a lengthy process to get my account reinstated.  I became very interested in establishing what went wrong when the same thing happened a second time.  This is what I found out:

  • A representative of another company had mistyped the meter number of a property.
  • At no point was a check carried out to make sure the installation and billing addresses matched.
  • My energy company had no means of verifying that the details matched because insufficient information was sent to enable them to do so.
  • Energy companies are reliant on people noticing that they have been erroneously transferred and informing them.
  • Once my account had been erroneously transferred to another supplier, it was impossible for my original dual fuel account number and information to be reinstated, because only the one fuel type had been switched.  This is despite a regulatory requirement for them to do so.

I would be very interested to hear which testing techniques were employed by the suppliers and which test types they used.  I suspect there was a lot of automated and manually scripted testing, with very little in the way of exploratory testing carried out.

To me this emphasises the importance of elements of process auditing to testing.  As an ISO 9001 auditor I have to examine the business processes my employers have in place.  Testers need to think about the wider processes of which the software they are testing is just a part.

The Mobile Phone Company

I have experienced the opposite effect to James with my present mobile phone operator.  Far from being able to have an account with multiple telephone numbers and/or services, I have to set up a separate on-line account for each number/service.

Worse still, the website for the company concerned is only fully functional in Internet Explorer.  There is a help forum which does not work at all in Firefox, one of the most commonly used web browsers.

Registering numbers/services on your account involves:

  • Entering your details on the website
  • Submitting the form
  • Waiting for a text message to come through to the telephone
  • Entering the code into the website – whilst hoping the session and/or code has not timed out in the meantime.

Once this has been done, the number/service can be administered online.

The other day my father purchased a new telephone from this company and tried to go through the registration process.  He was unfamiliar with the handset and is not adept at text messaging.  This meant he found the whole process time-consuming and frustrating and in the end called on me for assistance.

I can understand that the company wants to make sure the telephone really does belong to the account holder but there must be better ways of achieving the same effect.  Also, does it really matter that much to them?

I am sure this whole process works as designed – it sounds like a wonderfully technical solution to the desire to have users register their telephones – but, again, a bit of process auditing logic could have been applied to say “this is giving a bad user experience”.