The 26th Test Managers’ Forum was held this afternoon at Balls Brothers Minster Court in London EC3 and, as before, it was a really good afternoon. It was good to catch up with testers I have met on previous occasions and at other gatherings, and to exchange knowledge.
As usual there were six sessions on the agenda. The first session I attended was run by Jonathan Pearson from Original Software, was entitled “10 Black Holes to Avoid for a Successful Product Delivery”, and was illustrated with examples from Star Wars.
The black holes we are to avoid are as follows:
- Walking before you can crawl – before contemplating releasing a product we need to understand when we are finished, but we also need to avoid getting into a never-ending journey. Jonathan asserted that there is a need for an Application Lifecycle Management (ALM) strategy, including a robust test strategy. A centralised collaboration platform can give information about the progress of projects, which helps inform the decision-making process. Early involvement of QA in requirements and business rule reviews was encouraged, as was automation where possible – particularly of regression tests.
- Quality Assurance as a silo – this was an interesting one for me. At what level in our organisations does testing have an influence? I am very fortunate in that I have support at board level for testing and quality assurance, but there is also a case that, from a reporting perspective, it can be better for testing to report into the business side of the organisation to aid decision making.
- Lack of organisation – avoiding this requires tidiness (there is a need for centralised information); knowledge needs to be shared; and we should aim to reuse wherever possible, including test documents, data and environments.
- Lack of control – the main point emphasised here was that avoiding this black hole depends on taking care to address the previous three points. Without them there is a real danger of losing control.
- Lack of visibility and out-of-date information – this section focussed on Application Quality Management (AQM) techniques, and Jonathan asserted that there are a number of metrics that are essential to understanding how well things are going in a project. Metrics and ‘beancounting’ are something I am not really sold on as far as value is concerned, because I feel that so much time can be spent gathering metrics that the task of testing is overlooked. I also worry that metrics give people something to grab hold of without an understanding of the context in which those figures were gathered, and thus lead – inadvertently sometimes – to poor decisions being reached. Tools like Concerto and Sonar were suggested as ways of gathering data from projects.
- Unnecessary rework – examples of wastage in this area were suggested, including project outlines and test data. We should seek to minimise the time we spend rebuilding test environments and test data. It was suggested that we consider configuration management for test environments, and that an aim of regression testing could be to go to 100% automation. I think we need to be very careful with the latter, because we can easily get carried away with automation even when it is inappropriate in the context in which we work.
- Hindering collaboration with overly technical tools – this was illustrated with the Keep It Simple, Stupid (KISS) principle. It was recommended that we should aim for:
- Central organisation
- No coding
We should avoid:
- Technical expertise barriers
- High maintenance processes
- Use of disparate tools, because these can increase complexity.
- Imposition of methodology – for example using a tool or technique that ties you into a V-model development method or mandates that you only follow Agile methods.
- Lack of cross-project visibility – the main point of this was visibility at an organisational level.
- Wasting knowledge and time – the encouragement was to share knowledge as much as possible.
During the talk there was good discussion amongst the group. As always with sessions such as this, it is great to get the reassurance that the vast majority of testers are working in the same way as you are and facing the same problems. Sometimes, though, issues are flagged up which are truly mind-blowing. One such instance arose during this talk and centred on the ability to roll back a test environment, or roll back test data, to a consistent state. I have used VMware products for some time now and don’t really know how I could survive without the snapshotting facility. It therefore amazed me that such a high proportion of testers do not seem to use such techniques. I hope that they have some other way of achieving the same effect!
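For anyone who has not tried it, the workflow can be sketched with VMware’s `vmrun` command-line tool; the `.vmx` path and snapshot name below are placeholders, and `-T ws` assumes Workstation (Fusion and Server use different `-T` values):

```shell
# Snapshot a clean, known-good test environment before testing begins
vmrun -T ws snapshot /vms/test-env/test-env.vmx "clean-baseline"

# ... run destructive tests that dirty the environment and its data ...

# Roll the VM, including its test data, back to the consistent state
vmrun -T ws revertToSnapshot /vms/test-env/test-env.vmx "clean-baseline"

# Confirm which snapshots are available
vmrun -T ws listSnapshots /vms/test-env/test-env.vmx
```

The same revert can of course be triggered from a test harness between runs, which is what makes rebuilding environments and test data largely unnecessary.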
The second talk I went to was by James Wilson from Secerno, entitled “Testing in an Agile with Scrum environment”, which discussed the difficulties associated with testing in that environment. It was a very lively session, as many of the points would be equally valid with any development cycle or project management technique.
One particular area of concern was a chart showing quality, time and cost: it was asserted that, because time and cost are fixed in a sprint on an agile project, the only thing that can move is quality, and therefore the quality of the final product is likely to suffer. James viewed the scope of changes made and the scope of testing as part of quality in this argument. It was pointed out that in an agile environment, ‘quality’ is everyone’s ‘problem’ as such.
There was also quite a bit of discussion on what constitutes ‘release quality’ and how that meaning can change during the lifetime of a project. It was great to listen to the ideas and suggestions being put forward by other testing practitioners in this regard.
For example, there were three areas of concern for James: soak testing, stress testing and regression testing. There was a lot of discussion about soak testing and stress testing, and where in the cycle long-running tests like these should sit. One approach suggested was performing such tests outside of a sprint cycle altogether: accept that a soak test is going to take three or four weeks – for example – to give meaningful results, so perhaps run it as a separate project in parallel to the main one for developing the application. It was also suggested that sometimes it is just as valid to run these sorts of tests after the software has gone live – but be careful to make sure the risks of doing this have been accepted.
Unfortunately James did not have a chance to get onto regression testing, but it was a great talk nonetheless and I gained a lot from listening to the discussions around the points he raised.
After the talks finished we went upstairs, as usual, for the traditional networking session. I always find this very valuable and enjoyed meeting up with people again.
A big thank you to Paul Gerrard for organising the afternoon. Rob Lambert has also blogged (with photos – eeek!): http://thesocialtester.posterous.com/july-uktmf