Archive for the ‘Conferences’ Category

Dutch Exploratory Workshop in Testing (DEWT) 3

22 April, 2013

This is an expansion of my post published on the Allies Computing Ltd blog.


The third Dutch Exploratory Workshop in Testing took place over the weekend 20 – 21 April 2013 after an informal gathering the previous evening for drinks and testing chat.

The theme for the weekend was Systems Thinking and I was glad I had taken the time to read Gerald Weinberg’s book “An Introduction to General Systems Thinking” to prepare myself. I also had the opportunity to discuss General Systems Thinking with James Bach on Friday evening before the conference and to reflect overnight on our conversation. This proved very useful mental preparation for the day ahead so thank you James!

The Saturday started with a check-in where we introduced ourselves and explained any impediments to our ability to contribute, along with any concerns we had about the weekend. James Bach also gave us a primer explaining what General Systems Thinking is.

Having established ground rules for the weekend, appointed Joris Meerts as programme chair/content owner, and agreed facilitation points of order, Rik Marselis kicked the main part of the conference off with a discussion of things he had learned about General Systems Thinking and some examples of situations he had witnessed in different organisations, which led to quite a lot of discussion.

After lunch we were shown an excellent model by Ruud Cox of the stakeholders in a couple of projects he has been involved with, their interactions and their areas of interest. Ruud explained how the model helps him establish test ideas and shows him areas of the system where problems might be less tolerable (in someone’s opinion).

We also had an experience report from Derk-Jan de Grood on a force field model he used to help him visualise stakeholders in a project he was involved with and remind himself of whom it is important for him to maintain contact with.

James Bach followed this up with a further experience report showing us how he and Anne-Marie Charrett have applied Systems Thinking to coaching testers. It was fascinating to see how quickly a model of something ‘simple’ could expose so many different surfaces of the system and interfaces that could easily pass you by. One that struck me particularly was a feedback loop that applied to both the coach and student labelled ‘Self Evaluation’. It is something that could easily be overlooked but it is happening subconsciously all the time and is critical, in my view, to how well such a coaching training system evolves.

After voting on topics to discuss in presentations on Sunday we broke off for the day finishing with dinner, some testing challenges and more drinks.

Sunday started off with an experience report from Michael Phillips on some of the dogmas he has seen arising in companies he has recent experience of. He gave two example attitudes:

  • Testers are seen as not being able to keep up with the pace of development; and
  • Testers are seen as a danger to a deployment because they might disrupt a continuous integration cycle.

James Bach suggested that the first could be countered strongly by turning the argument round and saying that the developers were not keeping up with the risks being introduced. The other important thing testers can do, in this and many other situations, is work hard on their credibility and build their reputation.

Joris Meerts gave an excellent presentation on what it means to be a good tester and ways we can know we are a good tester. Much of this focussed again on reputation, skill and influencing and impressing the right people.

This tied in very nicely to James Bach’s talk after lunch on how he built credibility by gaining repute as someone technical but also by upholding professionalism.

Next we had a report from Huib Schoots on the recruitment of testers and the things he looks for when he is hiring. For example what are they doing in the wider testing community? Are they tweeting, blogging, writing articles relating to testing? It was suggested that interesting insights might be gained by asking candidates what is on their minds at the moment.

All in all, these are the lessons I learned from the weekend:

  • The ability to do Systems Thinking is one of the most important skills for a tester to master;
  • Do not just strive to be good at something – go for excellence;
  • Think about the people involved in designing, building and using the systems we are working on;
  • Discussing testing with passionate people and getting to know them over a weekend is very valuable and rewarding for me personally; and
  • I need to spend more time reading and thinking about General Systems Thinking.

In conclusion I would like to thank the Dutch and Belgian testers – particularly the founding DEWTs – for inviting me to Holland to join their discussions. It was a privilege to get to know you all and gain some credibility amongst such a group. I hope you will consider inviting me again in the future!


Observations from EuroSTAR 2011: Looking to the Future

27 November, 2011

I intend to write a few blog posts over the coming weeks following on from my experiences at EuroSTAR 2011 in Manchester. I want to start with a post addressing the general theme from the keynotes and my own thoughts on the matters raised.

The Speakers’ Views on The Future of Software Testing (with a few comments from me)

A recurring topic of conversation was the ‘death of software testing’. I do not think that software testing is dead at all – if anything it is growing in importance. Speedy information dissemination will become more important as project teams become better at agile practices.

This is where skilled Exploratory Testing comes into play. Note that word – skilled – testing is a highly skilled craft and not everyone has the mindset to apply those skills.

The first keynote on Tuesday, from Richard Sykes, told us that ‘quality assurance’ is all about giving management confidence in the product or service being produced. I dislike the term ‘quality assurance’ because I do not believe we ‘assure’ anything – that is the programmers’ job. To me testing is all about finding information and passing that on to the relevant decision makers for them to draw their own conclusions.

Gojko Adzic, in his keynote on Tuesday afternoon, made a very important point: he said that we run the risk of losing a very good tester and gaining a very poor coder if we insist on testers coding.

In his keynote on Wednesday morning, James Whittaker, from Google, disagreed and told us that at Google ‘Tester’ has disappeared from people’s job titles. People who were ‘testers’ are now ‘developers’ and are expected to code regularly. I feel this is a dangerous path to go down: developers and testers think about things differently. In my experience developers find different problems to testers and both are needed.

Wednesday afternoon’s keynote told the story of Deutsche Bank’s use of communities. Daryl Elfield explained how groups were formed in various parts of the world for the various divisions within the company. It had nothing to do with testing and was all about people making changes in a centralised way: people could not go off and build their own communities – they were joined to a community by management.

Ben Waters from Microsoft talked to us on Thursday morning about how Microsoft creates customer value through testing, and it started off as a very inspirational talk. Unfortunately it degenerated into treating testing as a phase.

Isabel Evans talked to us about the work she has been doing at Dolphin Computer Access where she has sought to improve quality processes throughout the organisation to enable better testing. I think we need to be careful not to make testing a ‘process’. Testing is a set of skills; it should happen naturally and not be something that is seen as a nuisance that has to be got through at some stage of the project.

My View of the Future of Software Testing

I see a bright future for software testing which is centred on people, skills, adaptability and passion.

Just as there are many aspects to a project – e.g. the product owner sees some, the developers see others, the infrastructure architects see some and the business users see yet more – so testing within a project has many aspects. We should be using techniques and tools appropriate to the individual project we are working on – and that will change from company to company. We have to adapt to the changing needs of our businesses.

I see different expertise being needed to test comprehensively which is why everybody needs to be involved. Testing is a hard job, though, and requires a lot of skill which needs to be honed and practiced. We need to use those skills to shed light on areas of the project nobody else has seen the significance of. We need to use the rest of the team’s knowledge to help our investigations.

We need to keep enhancing our skills and take responsibility for our own education. Having a network of people we can learn new skills from helps in this. Outside work I have been privileged to work alongside Rosie Sherry, Rob Lambert and Phil Kirkham at the Software Testing Club and the community that has been built up there is incredible.

We need to be passionate about our craft. We should seek out the skills that we need to best serve the projects we are working on. We need to have an interest in making the projects we work on great; do not ignore something you have noticed thinking it is someone else’s problem – bring it to their attention.

I am going to discuss some of the other things I learned at the conference in future posts. Specifically I want to write about automation and performance testing. I hope this generates some comment and discussion from the community!

Test Management Forum – 27 July 2011

27 July, 2011

The thirtieth Test Management Forum took place today (27 July 2011) at Balls Brothers in London, EC3 and as usual there was a varied programme.  The first talk I attended was by Andy Redwood who was telling us about the psychology of testing and what makes a ‘tester’ tick.  Amongst the things Andy was telling us about was the importance of what he called “Social Construction” – how society works in different cultures and the significance of this in today’s global business culture.

Andy also told us how he goes about forming role descriptions and an experiment where he asked people to describe themselves.  He found that the statements so generated fitted into five categories:  Social Role; Personality; Interests and Tastes; Attitudes; and Current State.

An issue I had with Andy’s talk, though, was that he seemed to be adopting a narrow view of testing and appeared to ignore context a lot in what he was saying.  One example of this was that he was describing testing as just being about ‘breaking things’.  There was an assertion that all tests should be designed so that they fail.  The idea behind this, I think, was to mitigate the ‘absence of defects’ fallacy where people claim that the software must be right because there are no defects.  However, what about the different objectives of testing?  What if the objective of testing is to show that a feature works in a particular way?  What if you want to prove that a particular area of functionality is present?  I am not saying that you would not do some detailed tests to make sure that the software is not behaving in certain ways but I think you would major on simply proving the existence or behaviour of the function or feature to start with.

Andy also reminded us, though, that we find it difficult to find more than 30% of our own mistakes when we are proof-reading or checking our own work.  That is why it is very important to have peer reviews and solicit the opinions of others.

After the break I attended a talk by Steve Allott from Electromind entitled “Agile in the large”.  Steve discussed a rebadged incarnation of DSDM (the Dynamic Systems Development Method) called DSDM Atern, which has four underpinning principles:  Process; People; Products; and Practices.

The general consensus seemed to be that, while it is not impossible, it is difficult to port pure agile practices over to large projects.  Typically this is because large projects tend to face challenges such as geographical and cultural separation and much higher numbers of people involved.  It would be difficult, practically, to run Scrum with a team of one hundred people, for example.

I intend to do some more reading up on DSDM Atern because I know very little about it.

I thoroughly enjoyed the afternoon and my special thanks to Paul Gerrard and Susan Windsor from Gerrard Consulting for organising the event for us.

SIGiST 21 June 2011

21 June, 2011

SIGiST took place on 21 June 2011 in, as usual, the Royal College of Obstetricians and Gynaecologists in London.

The theme of the day was “What is testing?” and it started with an opening keynote from Chris Ambler from Testing Stuff. In his talk he discussed whether we were guardians of quality, innovators, reporters, interpreters or problem solvers.  His conclusion was that we are all five.  In response to a challenge from Fiona Charles (@FionaCCharles) he conceded that testing and quality assurance were two different roles. After this, yours truly got to plug the Testing Planet and local area meetups to the delegates.

Like Fiona I found myself in agreement with all his other assertions but I really do feel that the role of testing is to act as a beacon highlighting issues to those who actually are in a position to make a release/don’t release decision. I do not believe testers should have responsibility for quality assurance except as part of a wider team. I am part of a Program Team myself so I have a responsibility for helping make a release/don’t release decision but my opinion is just one of several people’s opinions on the matter.

After the mid-morning break we had a short talk from ILEx on their latest on-line training courses and courseware. This was followed by a talk from Neil Thompson (@neilttweet) entitled “The Science of Software Testing”. There were a lot of slides to go with this talk but what I particularly valued were Neil’s slides and explanations on quality being value to some person(s) (quoting Gerry Weinberg) and his descriptions of Exploratory Testing being more than unstructured ad-hoc testing. I found it refreshing to be at a SIGiST conference and having such views expressed so clearly.

I think there was a lot of food for thought from this presentation and I will be looking over my notes again because I suspect I may have missed some of the points that were raised. I find the science behind how we test and the psychology of testing very interesting because it is not true that ‘anyone’ can test so how can we make ourselves better testers? I believe by understanding what makes testers tick we are likely to be able to use these insights in our own education and continuing professional development.

After Neil’s talk, Nathalie Van Delft (@FunTESTic) and Dorothy Graham (@DorothyGraham) co-hosted a Testing Ethics debate during which we debated five statements ‘House of Commons’ style:

  • You can break the law in order to meet your test goals
  • You must always tell the truth
  • You must always be able to use privacy-sensitive data to test
  • A tester may be responsible for acceptance
  • As a tester you should set aside your own standards and values to test thoroughly

It took a question or two for everyone to get into the swing of this but the debate was great fun. It was great to hear the differing views from people of differing testing backgrounds and in the end it all added up to an extremely lively debate.

After breaking for the excellent SIGiST lunch, Andy Glover, @CartoonTester, challenged us all to use word replacement to better understand a quotation from James Bach. [Added after initial publication – thanks for sending it to me, Andy: “Testing is the infinite process of comparing the invisible to the ambiguous so as to avoid the unthinkable happening to the anonymous”].

Following on from this we had a short series of lightning talks from Dot Graham (@DorothyGraham) on ‘What is Coverage?’, Neil Thompson (@neilttweet) on ‘What is Risk?’, Nathalie Van Delft on ‘What Else is Testing?’ and Stuart Reid on ‘What is a testing professional?’.  The lightning talk format was great, with each talk lasting about 10 minutes. I felt it really suited the after-lunch spot because with the rapid change-over of speakers you were more inclined to stay awake. If I were to choose two talks that I particularly enjoyed I would have to go for Dot’s and Neil’s because they are both areas that are close to my heart. Dot’s talk, in particular, struck a nerve because ‘coverage’ is such a misused and misunderstood term, frequently bandied about by managers.

Before the afternoon break Stevan Zivanovic (@StevanZivanovic) gave an interesting talk on leadership in an agile context. In particular he focused on how we, as individuals, can and should take responsibility for leadership whether we have ‘manager’ in our job titles or not. He also emphasised that just being obeyed does not constitute being a ‘leader’. Obedience has no place in an agile team was one of his points.

After the break we had our closing keynote from Dot Graham (@DorothyGraham). The subject of this talk was “Things managers think they know about test automation – but don’t”. Many of the pitfalls she identified resonated with me because they are things that I am trying to avoid myself in trying to work with our developers to introduce more test automation into the business.

All in all I thought it was a great conference and I think Bernard Melson did a great job bringing the programme together. Given Bernard’s background there was a fair bit of talk of training and tester education in a formal environment which was understandable. Critically, though, it did not overpower the conference which I had feared it might, initially.

The next SIGiST conference will be in September 2011.

SIGiST 8 December 2010

9 December, 2010

The final SIGiST (Special Interest Group in Software Testing) conference of the year took place yesterday in London and, as usual, was well worth attending.  The theme for the day was “Keynotes – Six of the best” and it consisted of talks only on this occasion: six keynotes and one short talk after lunch.  Unlike other SIGiST conferences I have been to there were no workshops, which I always enjoy, but I still found the day inspiring.

Four talks stood out for me as being excellent:

Les Hatton from Kingston University gave a brilliant talk in which he cited the systems controlling space shuttles as an example of excellently engineered systems and then went on to talk about systems which “should never have been allowed to see the light of day”.

One of the ‘highlights’ (that should probably read ‘lowlights’) included the story of his passage through Heathrow Airport earlier this year.  He had printed his boarding card at self-check-in but the systems at security could not read the card; SAS (the airline he travelled with) could not issue a new boarding card because he already had one unless he gave them the assurance he was who he said he was (!); he was then unable to get through security because he had two boarding cards…  As if it could not get any worse the public information displays in the departure lounge had crashed.  As a keen traveller I could really identify with Les’ frustrations here!

The final part of Les’ talk encouraged us to focus on systems thinking and take some of our cues from the laws of physics.  Once you find a bug in a particular area of the system you are likely to find more bugs in that same area.  Don’t give up was the message.

Gojko Adzic gave his excellent talk on Specification by Example.  Once again he made very good use of Prezi and encouraged us to use clear and concise language that our colleagues and customers will understand.   Too much time is wasted by misused terminology.  In a later talk mention was made of test cases and test conditions – actually they could both have been referring to the same thing – so why distinguish between them?

As usual Gojko had lots of examples to illustrate the success of the technique.  I find the concept of ‘living documentation’ particularly valuable and I liked the example of customer service staff referring to the tests that had been run to help answer customer queries.  It makes the tests very powerful because each test is directly addressing a particular problem being faced.
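The idea behind those living-documentation examples can be sketched in a few lines. This is a hypothetical illustration, not one of Gojko’s examples: the business rule and the `delivery_charge` function are invented, but they show how concrete examples double as both the specification and the test.

```python
# A hypothetical business rule expressed as an executable specification.
# The example table below IS the documentation: if the rule changes,
# a failing test forces the documentation to be brought up to date.

def delivery_charge(order_total):
    """Orders of 50.00 or more ship free; otherwise a flat 4.95 applies."""
    return 0.0 if order_total >= 50.0 else 4.95

# Specification by example: concrete input/output pairs that customer
# service staff can read as easily as programmers can.
EXAMPLES = [
    (49.99, 4.95),   # just under the threshold still pays the charge
    (50.00, 0.00),   # the threshold itself ships free
    (120.00, 0.00),
]

def test_delivery_charge_examples():
    for order_total, expected in EXAMPLES:
        assert delivery_charge(order_total) == expected

test_delivery_charge_examples()
```

Because the examples are executable, a customer service agent asking “does a £50 order ship free?” can be answered by pointing at a test that is guaranteed to match the current behaviour of the system.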

In the afternoon Fran O’Hara from Sogeti Ireland gave a talk on Scrum.  Included in the delegate pack for the conference was a Scrum cheat sheet illustrating the different components of a Scrum project and showing how they fit together.  I took it into work today and our Project Manager has found it very helpful. I thoroughly enjoyed Fran’s talk and particularly liked the idea of having two definitions of ‘done’: one definition describes what it means to be ‘potentially shippable’ and the other defines what ‘done’ means in terms of the current sprint.  There are many projects where it is not feasible to produce a potentially shippable product after one or two three-week cycles and this helps to deal with that.

Susan Windsor from Gerrard Consulting finished the day talking about how we develop ourselves and what it means to be a really good tester.  Susan challenged each of us to become testing gurus, super-testers in our organisations.  This will pay dividends because of the tremendous knowledge that we can bring to the table of how our projects are really going whether we are working in a traditional or more agile context.

Susan discussed the certification issue briefly and reminded us that although we can get a sense of achievement out of having a certificate one of the biggest problems with certification is the fact that it is used as a screening mechanism by people who really know very little (if anything) about testing when hiring staff.  Personally, I would add that the syllabus is too restricted in its scope and is based on very traditional testing processes which have been shown to be less efficient than the agile methods being adopted more and more.

Other talks included a career progression report from Erkki Poyhonen where he experienced a paradigm shift without a clutch (cue a Dilbert cartoon), a report of an entity-relationship modelling exercise for testing effectiveness from John Kent and a short talk from Geoff Thompson on the things that have influenced him in testing.

As always the day ended in the Volunteer where I enjoyed continuing chatting to fellow testers about our exciting craft.  The best thing for me about SIGiST is the networking and getting to know other testers.  As a result of attending these and other conferences I have built up a network of people with whom I communicate regularly and it has really expanded my knowledge of my chosen craft.  I would encourage everyone to get involved with testing conferences in their various locations because together we can learn a tremendous amount.

UK Test Management Forum – 27 October 2010

12 November, 2010

This is my write-up of the UK Test Management Forum meeting on 27 October 2010.  Sorry it’s been so long in coming but things have been pretty hectic of late.

As usual there were three tracks running in parallel with two talks apiece.

The first talk I went to was led by Gojko Adzic entitled “Continuous Validation, Living Documentation and other tales from the dark side”.  Gojko discussed the fact that we often use different names for the same thing or use the same word but mean something different each time.  He highlighted various examples of this and proposed some solutions which make the terms more meaningful for people.  Graham Thomas pointed out that although this process has happened before – most notably about 25 years ago in the structured software development world – we still need to keep reviewing our terminology.

Gojko is writing a book on this subject and has a website to run alongside it; see the site for more details.

We had great discussions within the session.  I think we could all see that there is a need to address the confusion we create by our use of terminology in the industry.   As Gojko pointed out, legacy technical names do confuse people and create barriers which can hold people back from embracing change and adopting new processes.

The second talk I went to was entitled “The testing challenges associated with distributed processing” and was given by Mike Bartley from TVS.  Mike was talking about the challenges we face with the rise of multi-core processors.  Whilst we can write parallel-savvy code, if the hardware and software platform on which the code is running is not using a distributed architecture, there will be few – if any – benefits from the parallel code.

Mike talked about two common paradigms for distributed computing: message passing (which can lead to race conditions) and shared memory.
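Races can arise under either paradigm. As a minimal Python sketch (my own illustration, not from Mike’s talk): under shared memory, an unsynchronised read-modify-write such as `counter += 1` can interleave between threads and lose updates, so a lock is needed; under message passing, workers only put messages on a queue and a single consumer owns the state, so no lock is required.

```python
import threading
from queue import Queue

# Shared-memory paradigm: two workers update one counter.
# "counter += 1" is a read-modify-write; without the lock, two threads
# can interleave between the read and the write and lose updates.
counter = 0
lock = threading.Lock()

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:          # serialises the read-modify-write
            counter += 1

workers = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(counter)  # 200000 with the lock; remove it and this can come up short

# Message-passing paradigm: producers only send messages; one consumer
# owns the running total, so no locking of shared state is needed.
q = Queue()

def producer(n):
    for _ in range(n):
        q.put(1)

producers = [threading.Thread(target=producer, args=(100_000,))
             for _ in range(2)]
for p in producers:
    p.start()
for p in producers:
    p.join()

total = 0
while not q.empty():
    total += q.get()
print(total)  # 200000
```

The message-passing version trades the lock for a queue: correctness comes from confining the mutable state to a single owner rather than from mutual exclusion.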

Mike recommended reading J.B. Pedersen’s Classification of Parallel Programming Errors book (seems to be out of print and unavailable on Amazon).

He recommended that we adopt diverse static analysis techniques and think about design patterns and policies in our tests.  From a tools perspective we should consider which tools we can use at an architectural level to gain the most benefit.

As always I thoroughly enjoyed the afternoon and felt I benefitted from the talks.  Things I will take away include thinking more about the language I use to describe the testing I am carrying out, and thinking more about static analysis as a technique for checking our distributed code.

Many thanks to Paul Gerrard and Susan Windsor from Gerrard Consulting for hosting the event.  After the main forum talks we had a discussion about the future of the Test Management Forum; more information about the things we talked about and the decisions that have subsequently been made can be found on the Test Management Forum website.

The Prezi and PowerPoint slides from the two talks I attended are also available online.

Comments welcome!

SIGiST 16 September 2010

17 September, 2010

I attended the SIGiST yesterday entitled “A Testing Toolbox” and found it to be, as usual, an excellent conference with lots of thought-provoking talks.

The Irrational Tester

The opening keynote from James Lyndsay focussed on the biases we all have built into us and need to guard against to be effective testers.

James used the headings:

Confirmation bias – where we find what we expect to find so don’t look for situations where we might find unexpected behaviour;

The “Endowment effect” – where people will often demand much more to give up something they have acquired than they got it for;

A “failure to commit” – if work is broken up into small chunks with deadlines set for each we are more likely to make progress on our projects than if there was a single deadline set for the end of the project;

Illusion of control – where we fool ourselves into thinking that we have found the only cause of a problem and don’t think about whether there may be anything else that might cause the same defect; and

Broken windows – where acceptance of minor bugs might lead to an acceptance of other much more serious bugs in the system.

Talks where human psychology is discussed – especially how it affects groups of people and how they interact and behave – are really interesting to me and I thoroughly enjoyed James’ talk.

Application Security Awareness

The second talk, entitled “Application Security Awareness”, was by Martin Knobloch from OWASP.  OWASP is a great starting point for getting information on security testing; it offers extensive documentation, code projects and conference details, and it is made up of over 100 Chapters worldwide (and still growing).

The main thrust of the talk was encouraging people to identify and thoroughly understand the weakest link in their systems.  Very often this is not a technical weakness: it can easily be a process or ‘people’ weakness that leads to systems being exploited.

We need to beware of the dangers of creating the illusion of security but not actually doing anything to really make our applications secure.

All applications have the same issues – the techniques discussed on the OWASP website apply just as much to ‘normal’ applications as to websites and web applications.

Delight Your Organisation, Increase Your Job Satisfaction and Maximise Your Potential

The next talk was entitled “Delight your organisation, increase your job satisfaction and maximise your potential” and was given by John Isgrove.  This talk focussed on what characterises an Agile project and what does not and then discussed a methodology called DSDM Atern.  I had previously heard of DSDM but I had not encountered the ‘Atern’ variation on the theme.

DSDM Atern provides a framework for the management and delivery of an entire project with guidance for managers.  Scrum provides a one-size-fits-all process but contains little guidance for managers – yes, there are the Scrum Masters, but are they always the decision makers?

There seem to be a lot of benefits for organisations adopting the approach and I intend to study it a bit more and find out what other people within my organisation know about it and whether any of its principles can be adopted by us.

What I found interesting was the way the Features, Quality, Time, Cost triangle is turned on its head.  In a traditional environment Features and, to a certain extent, Quality are fixed and the Time and Cost elements are flexible.  With DSDM Atern, Time, Cost and Quality are all fixed and the Features to be implemented are flexible.

I found myself agreeing with James Windle’s comment at the end that it was one of the best talks he had heard on agile methodologies and the difficulties that must be overcome.  I am just sorry that there is neither the time nor the space to put a lot of detail about the talk in this blog post.

The excellent SIGIST lunch followed this talk and, as usual, it was great to network with other testers and see the tools and services exhibition.

Lessons From Data Warehouse Testing

The session after lunch was interesting: Peter Morgan shared his experiences of testing data warehouse applications.

The New Role of The Tester: Becoming Agile

Stuart Taylor was next up with an inspirational experience report of how his organisation made the move to an agile process.

Wholesale changes were made to the working environment (even moving from curved desks to straight desks arranged so that paired working was easier and people could talk across the table); testers were involved throughout the design, development and delivery processes; and as much as possible was automated in Java using test-driven development techniques, which allowed the dedicated testers to get on with manual Exploratory Testing (the stuff we all love to do).

As a result of moving to an agile process they have seen improvements in the quality of their software and in their responsiveness to changes in business need, and there is much more negotiation over schedules.

How to Suspend Testing and Still Succeed – A True Story

Graham Thomas gave an account of his experiences when testing had to be suspended.  Testing was suspended on this project because there was no way anything was going to be delivered the way things were working at the time.  The biggest problems were the systems integration risk, which had been accepted at an earlier stage in the development process, and the automation infrastructure.

Initially progress was made: Systems Integration testing was successfully completed (or at least as scoped in the 50-page test strategy), but testing was then held up by slippage in code delivery from development, issues with the test automation infrastructure and a qualified exit on non-functional proving of the infrastructure (which also took 250% more time than it should have done).

Eight weeks into a 12-week schedule it was estimated that, at the current rate of progress, UAT was going to take over a year to complete, and that many of the issues being found were to do with the automation infrastructure and product configuration – i.e. the systems integration risk had matured.

They held a series of workshops with all stakeholders to get an idea of what was wrong and plan a resolution that would allow a resumption of testing.  Graham pointed out that it is very difficult to set effective Resumption Requirements without knowing the criteria by which testing was suspended.  It was also difficult to set Suspension Criteria without knowing what was going wrong.  This is at variance with IEEE 829 but, when you think about it, it is rather obvious!

So the remainder of the project was re-planned – bearing in mind that the go-live date was non-negotiable due to regulatory constraints – and amazingly the Resumption Requirements were met on time at the end of 4 weeks.

A daily war-room meeting was set up at 13:00.  Attendance was mandatory for all the decision-makers and those actually doing the work to achieve the project’s aims; no one else was permitted at these meetings.

Graham made it all sound very easy but it was clear that it was a very painful process which caused a lot of heartache and irretrievable breakdowns in the professional relationships between people.

Graham’s talk was fascinating and gave a real insight into Suspension Criteria and Resumption Requirements and the effects that suspension can have on a project.

UAT: A Game for Three Players

The final keynote talk of the afternoon was “Acceptance Testing: A Game for Three Players” by James Windle.  This was another excellent talk in which James gave us a run-down on how he approaches UAT going right back to the definition of the Acceptance Test Criteria.  Whilst there was nothing really ‘new’ about James’ talk it served as a very helpful reminder of this critical part of testing.

The day ended, as usual, at the Volunteer on Baker Street enjoying further networking with testers.

A big thank you to the SIGIST committee for organising the event once again and congratulations to Graham Thomas and Mohinder Khosla on their respective appointments as Programme Secretary and Secretary of SIGIST.

UK Test Managers’ Forum – 28 July 2010

28 July, 2010

The 26th Test Managers’ Forum was held this afternoon at Balls Brothers Minster Court in London EC3 and as before was a really good afternoon.  It was good to catch up with testers who I have met on previous occasions and at other gatherings and exchange knowledge.

As usual there were six sessions on the agenda.  The first session I attended, run by Jonathan Pearson from Original Software, was entitled “10 Black Holes to Avoid for a Successful Product Delivery” and was illustrated with examples from Star Wars.

The black holes we are to avoid are as follows:

  • Walking before you can crawl – before contemplating releasing a product we need to understand when we are finished, but we also need to avoid getting into a never-ending journey.  Jonathan asserted that there is a need for an Application Lifecycle Management (ALM) strategy, including a robust test strategy.  A centralised collaboration platform can give information about the progress of projects, which helps inform the decision-making process.  Early involvement of QA in requirements and business rule reviews was encouraged, as was automation where possible – particularly of regression tests.
  • Quality Assurance as a silo – this was an interesting one for me.  At what level in our organisations does testing have an influence?  I am very fortunate in that I have support at board level for testing and quality assurance but there is also a case from a reporting perspective that it can be better to have a reporting line into the business side of the organisation to aid decision making.
  • Lack of organisation – avoiding this requires tidiness (there is a need for centralised information); knowledge needs to be shared; and we should aim to reuse wherever possible, including test documents, data and environments.
  • Lack of control – the main point here was that avoiding a lack of control depends on taking care to address the previous three points.  Without them there is a danger of losing control.
  • Lack of visibility and out of date information – this section focussed on Application Quality Management (AQM) techniques and Jonathan asserted that there are a number of metrics that are essential to understanding how well things are going in a project.  Metrics and ‘bean counting’ are something that I am not really sold on as far as value is concerned, because I feel that so much time can be spent gathering metrics that the task of testing is overlooked.  I also worry that the metrics give something that can be grabbed hold of without an understanding of the context in which those figures were gathered and thus lead – inadvertently sometimes – to poor decisions being reached.  Tools like Concerto and Sonar were suggested as ways of gathering data from projects.
  • Unnecessary rework – examples of wastage in this area were suggested, including project outlines and test data.  We should seek to minimise the time we spend rebuilding test environments and test data.  It was suggested that we consider configuration management for test environments, and that an aim of regression testing could be to go to 100% automation.  I think we need to be very careful with the latter because we can easily get carried away with automation even when it is inappropriate in the context in which we work.
  • Hindering collaboration with overly technical tools – this was illustrated with the Keep It Simple, Stupid (KISS) principle.  It was recommended that we should aim for:
    • Central organisation
    • No coding
    • Flexibility
    • Scalability

We should avoid:

    • Technical expertise barriers
    • High maintenance processes
    • Use of disparate tools, because these can increase complexity.
  • Imposition of methodology – for example using a tool or technique that ties you into a V-model development method or mandates that you only follow Agile methods.
  • Lack of cross-project visibility – the main point of this was visibility at an organisational level.
  • Wasting knowledge and time – the encouragement was to share knowledge as much as possible.

During the talk there was good discussion amongst the group.  As always with sessions such as this it is great to get the reassurance that the vast majority of testers are working in the same way as you are and facing the same problems.  Sometimes, though, issues are flagged up which are truly mind-blowing.  One such instance arose during this talk and centred on the ability to roll back a test environment or roll back test data to a consistent state.  I have used VMware products for some time now and don’t really know how I could survive without the snapshotting facility.  It therefore amazed me that such a high proportion of testers do not seem to use such techniques.  I hope that they have some other way of achieving the same effect!
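For anyone curious what that workflow looks like in practice, the snapshot-and-revert cycle can be scripted.  This is a minimal sketch only: `vmrun` is VMware Workstation’s command-line tool, but the VM path and snapshot name below are invented for illustration.

```python
VMRUN = "vmrun"  # VMware Workstation's CLI; assumed to be on the PATH


def snapshot_cmd(vmx_path, name):
    """Build the vmrun command that takes a named snapshot of a VM."""
    return [VMRUN, "-T", "ws", "snapshot", vmx_path, name]


def revert_cmd(vmx_path, name):
    """Build the vmrun command that rolls the VM back to a named snapshot."""
    return [VMRUN, "-T", "ws", "revertToSnapshot", vmx_path, name]


vmx = "testenv/win-sut.vmx"  # hypothetical VM hosting the system under test
baseline = snapshot_cmd(vmx, "clean-baseline")
rollback = revert_cmd(vmx, "clean-baseline")
# On a real test rig you would execute these, e.g.:
#   import subprocess; subprocess.run(baseline, check=True)
print(baseline)
print(rollback)
```

Taking the baseline snapshot before each test run and reverting afterwards gives every run the same consistent starting state.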

The second talk I went to was by James Wilson from Secerno, entitled “Testing in an Agile with Scrum Environment”, which discussed difficulties associated with testing in that context.  It was a very lively session, as many of the points would be equally valid with any development cycle or project management technique.

One particular area of concern was a chart of quality, time and cost: it was asserted that because time and cost are fixed in a sprint on an agile project, the only thing that can move is quality, so the quality of the final product is likely to suffer.  James viewed the scope of changes made and the scope of testing as part of quality in this argument.  It was also pointed out that in an agile environment ‘quality’ is everyone’s ‘problem’, as it were.

There was also quite a bit of discussion on what constitutes ‘release quality’ and how that meaning can change during the lifetime of a project.  It was great to listen to the ideas and suggestions being put forward by other testing practitioners in this regard.

For example, there were three areas of concern for James: soak testing, stress testing and regression testing.  There was a lot of discussion about soak testing and stress testing and where in the cycle long-running tests like these should sit.  One approach suggested was performing such tests outside of a sprint cycle altogether: accept that a soak test is going to take three or four weeks, for example, to give meaningful results, so perhaps run it as a separate project in parallel with the main one developing the application.  It was also suggested that sometimes it is just as valid to run these sorts of tests after the software has gone live – but be careful to make sure the risks of doing this have been accepted.

Unfortunately James did not have a chance to get onto regression testing but it was a great talk nonetheless and I gained a lot from listening to the discussions around the points he raised.

After the talks finished as usual we went upstairs for the traditional networking session.  I always find this very valuable and enjoyed meeting up with people again.

A big thank you to Paul Gerrard for organising the afternoon.  Rob Lambert has also blogged about the event (with photos – eeek!).

UK Test Managers’ Forum – 28 April 2010

29 April, 2010

My first visit to the Test Managers’ Forum on 28 April was a positive experience for me.

There were six sessions on the programme with three running concurrently either side of a break.  The project work I am involved with at the moment involves a requirement to measure the performance of our systems under load so I went to the two load testing talks.  The first of these was entitled “Effective Load Testers. Who are they? How are they created?” and was presented by Gordon McKeown from Facilita.

We were a small group and the session quickly became a discussion with lots of viewpoints put forward.  The main points from my perspective were:

  • Load testing is a highly skilled, but wide ranging, aspect of testing.  The wider environment in which the system operates must be understood and its impact needs to be considered.  Examples of the things to consider included:
    • the effect of changes to network infrastructure;
    • the physical processing power of the computers running tests (CPU, memory, hard disk speed, etc.); and
    • the operating system and software configuration on the system under test.
  • There are essentially four parts to load testing and the skills needed are rarely found in one person:
    • planning the test strategy;
    • writing and executing the test scripts;
    • analysing the data coming out of the test; and
    • closure activities such as reporting the results.
  • At the analysis stage it is very easy to become bogged down by too many numbers being thrown at you.  A wealth of experience in interpreting the numbers is crucial.
  • Good load testers often start from a development background or some other specialism and then move into testing once they have acquired a great deal of experience.  Others’ experience suggested that it is impractical for a new starter straight from university to move directly into load testing, because they do not have the wide experience necessary to understand what they are seeing.
  • People moving into load testing need to have the right mindset to be successful.  They need to have an enquiring mind and know how to ask the right questions to get a feel for the risks they are trying to address.
  • A good way of getting into load testing is to get experience with performance monitoring tools built into Windows – for example Perfmon – and get an understanding of what the different counters mean and what effect different applications and/or processes have on those counters.  They should then seek to get experience in a company with a good training philosophy to build up the requisite knowledge of their chosen subject area – be it hardware diagnostics, network infrastructure or plain old development.  From there they will build up the experience needed to get into effective load testing.

After the break we met up, again a small group, for discussions under the heading of “Performance management: from black art to process”.  This discussion was facilitated by Peter Holditch from Dynatrace Software, whose product – also called Dynatrace – allows load testers to carry out end-to-end analysis of the path transactions take through their systems.

One of the main benefits of software like this is that it enables testers to see where the bottlenecks are in fine granularity – for example they can see whether there is a hold up in one particular server – and can drill down to see the individual services affected.

As before, there was a lot of debate about the process supported by Dynatrace’s software.  It was emphasised that care would need to be taken to avoid information overload.  This is, after all, a tool to help the load tester do his/her job.  The skill is in knowing which counters to start with and how to interpret them.  A starter for ten was suggested:

  • Memory usage
  • CPU utilisation
  • Network traffic
  • Disk queue

From here, testers could identify further avenues of exploration to home in on discrepancies.
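Those four starter counters also lend themselves to simple automated trend analysis during a long-running test.  As a hedged sketch (the counter names and readings below are invented; in practice the raw numbers would come from Perfmon on Windows or a tool such as Dynatrace), a least-squares slope over repeated samples can flag a counter that climbs steadily – the classic signature of a leak under soak:

```python
from statistics import mean


def trend(samples):
    """Least-squares slope per sample interval: positive means the counter is climbing."""
    n = len(samples)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(samples)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den


def flag_climbing(counters, threshold=2.0):
    """Return the names of counters rising faster than `threshold` per interval."""
    return [name for name, samples in counters.items() if trend(samples) > threshold]


# Hypothetical readings taken once a minute during a soak run:
readings = {
    "memory_mb":   [512, 530, 551, 570, 590, 612],  # steady climb: possible leak
    "cpu_percent": [35, 40, 33, 38, 36, 34],        # flat
    "net_kb_s":    [800, 790, 810, 805, 795, 802],  # flat
    "disk_queue":  [1, 2, 1, 1, 2, 1],              # flat
}
print(flag_climbing(readings))  # -> ['memory_mb']
```

A crude filter like this will not replace the interpretive experience discussed above, but it narrows the numbers down to the counters worth a closer look.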

I could see a great deal of benefit from having such software as part of a wider strategy to understand the performance quality attribute of our software.  It would be of limited use for me personally because I simply do not have the detailed knowledge of all the innards of our network infrastructure and computers to make good use of it.  It is something that I am slowly but surely learning, though, and I hope to become proficient enough to make use of such a tool.

After the talks we adjourned to the bar area upstairs where a tab had been set up and the networking continued.

I thoroughly enjoyed the afternoon and feel that I benefitted greatly from going.  I hope to make this a regular fixture in my calendar along with the SIGiST.  The TMF is a different type of event altogether from the SIGiST and is aimed primarily at test managers and experienced practitioners.  The format is much more geared towards networking and discussions, whereas the SIGiST tends to be that bit more formal – at least that is the impression I got from this time round!

For any test managers who have not been, I highly recommend attendance.  More information about the forum is available online, and I understand the slides from all the talks will be made available later on.

Thank you very much to all those who facilitated and especially to Paul Gerrard from Gerrard Consulting who organised the whole thing.