
SIGiST Conference 13/03/2013

14 March, 2013

A good session at the Special Interest Group in Software Testing (SIGiST) conference yesterday, run as usual by the BCS (British Computer Society), with a strong representation of testers from across the different project lifecycles.

Matt Robson from Mastek gave an opening keynote under the title “Be Agile or Do Agile”, with some salutary warnings on the dangers of testing becoming an ‘ology’. It is very easy to become set in our ways and dogmatic about our approaches to testing, and that dogmatism is harmful. To be ‘agile’ does not just mean adopting the Agile Manifesto (http://agilemanifesto.org/) and following an agile approach to project management; it means thinking and acting in a way that embraces change and adapts to the situations we find ourselves in.

Very often we forget the ‘people’ side of software development. The example was given of a company where senior management turned round one day and said ‘we’re going to go agile and this is how you’re going to work in future’ but didn’t get the staff on board with them. The consequences for staff morale were horrendous and, as a result, software quality dipped.

One of the ways we can keep people on board is to think in terms of business goals and outcomes, because those have meaning for people. For instance, instead of saying ‘the registration widget looks broken; I advise against going live’, approach it more as ‘we have found instances where sales staff might not be able to register new customers on our system; I advise against going live’.

What was particularly good was that the talk was delivered with no PowerPoint slides, which concentrated the mind far more. I think this is an area testers really have to get good at, but it is also one that can easily go horribly wrong.

Next we had a talk from George Wallace on the systems challenges of going from an R&D product straight into production. The project was for a very large and complex system, developed in a very traditional manner with testing entering the fray late in the product’s lifecycle. Suffice it to say that testing was supposed to take 3 months and they are already 9.5 months in and still going!

Sakis Ladopoulos from Intrasoft International talked to us about what he termed project-agnostic test metrics for an independent test team. Essentially this was an attempt to measure the performance of testers working on projects, but to do so independently of how the whole project is performing. The approach was to normalise scores for whatever was being counted across all the projects the team was involved with. The ‘best tester’ was, very often, the one who had found the highest number of bugs as a proportion of the time they had taken.
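To make the normalisation idea concrete, here is a minimal sketch of my own (the tester names, figures and the choice of a z-score are all my assumptions, not details from the talk): each tester’s bugs-per-hour score is normalised within their own project, so testers can be compared even though one project may simply be buggier, or better run, than another.

```python
from statistics import mean, stdev

# Hypothetical records: (tester, project, bugs_found, hours_testing).
records = [
    ("alice", "proj_a", 42, 80),
    ("bob",   "proj_a", 30, 60),
    ("carol", "proj_b", 12, 40),
    ("dave",  "proj_b", 20, 50),
]

# Group raw bugs-per-hour scores by project so each project is
# normalised separately, stripping out project-level effects.
by_project = {}
for tester, project, bugs, hours in records:
    by_project.setdefault(project, []).append((tester, bugs / hours))

# Convert each tester's score to a z-score within their project,
# making scores comparable across projects of different quality.
normalised = {}
for project, scores in by_project.items():
    values = [s for _, s in scores]
    mu, sigma = mean(values), stdev(values)
    for tester, s in scores:
        normalised[tester] = (s - mu) / sigma if sigma else 0.0

for tester, score in sorted(normalised.items(), key=lambda kv: -kv[1]):
    print(f"{tester}: {score:+.2f}")
```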

I was quite uncomfortable with the idea of detaching testing from the rest of the project team, but then I am just not used to working like that.

After lunch, Balaji Iyer from Mindtree gave a talk on website testing and the challenges faced by modern websites. In particular there was discussion of scripting challenges and how performance can be impacted by technologies such as Ajax (used extensively by Google), Flash (for instance YouTube) and JavaScript, which is often used to make sites look ‘pretty’.

Mindtree have a module in development that works with JMeter (a popular open-source load and performance testing tool maintained under the Apache banner) to help testers parameterize their requests and correlate them.
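To make ‘correlation’ concrete: the usual problem is that the server hands back a dynamic value (a session ID, a CSRF token) that must be captured from one response and fed into the next request rather than hard-coded. In JMeter itself this is typically done with an extractor feeding a ${variable} reference; the sketch below shows the same idea in plain Python (the URL and field names are invented for illustration).

```python
import re
import urllib.parse
import urllib.request

BASE = "https://shop.example.com"  # hypothetical site under test

# Step 1: fetch the login page and capture the dynamic CSRF token
# embedded in the HTML -- the same job a JMeter extractor does.
with urllib.request.urlopen(f"{BASE}/login") as resp:
    html = resp.read().decode()
token = re.search(r'name="csrf_token" value="([^"]+)"', html).group(1)

# Step 2: parameterize the next request with the captured token.
# A recorded script replaying a stale, hard-coded token would fail
# here; correlating the value is what makes the script repeatable.
form = urllib.parse.urlencode({
    "username": "tester",
    "password": "secret",
    "csrf_token": token,
}).encode()
with urllib.request.urlopen(f"{BASE}/login", data=form) as resp:
    print(resp.status)
```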

Chris Comey and Davidson Devadoss from TSG (Testing Solutions Group) then gave a fascinating talk on test strategies: how we can write a strategy that looks great on paper but does not work at all in practice, because it was written looking inward at testing without thinking about dependencies on other parts of the business, or about how to deal with the issues those dependencies raise.

It was a great talk because both test strategies used as examples were good strategies; they just weren’t the right ones for the job in hand. There is also little point in doing a Post Project Review if, as in the case of one of the projects, you are just going to type it up, stick it on a shelf somewhere and not learn the lessons. All the failings in one of the projects examined had already been raised in a previous ‘lessons learned’ document. Perhaps it would have been better called a ‘Lessons NOT Learned’ document!

The closing keynote from Martin Mudge of Bugfinders.com was great, covering crowd-sourcing services as a way to get testing done more quickly and perhaps with greater coverage. In a spot of audience participation, three volunteers each traced a different path through a functionality diagram.

Testers register from all over the world and are selected for projects based on their skills, experience and ability. Defects that are raised are all re-tested to verify that they are repeatable as recorded and are genuine issues. Testers receive training materials if there are problems with their testing.

This seems a particularly good option for small teams wanting to catch user interface issues quickly, but it would certainly be dangerous to rely on crowd-sourcing for deeper-level testing (and I can just see some companies getting the impression that that is the way to go)!

Overall, a good conference with plenty to take away and think about.