Archive for April, 2013

Some Thoughts on the “Anyone can test” Fallacy and Education about Testing

24 April, 2013

I would like to know what the testing community thinks about this:

At DEWT3 last weekend I was thinking a lot about what happens when human interactions with systems are ignored. If we think of software development as a system, we must think about how people are involved, and about the characteristics of those people, if we are to get a rounded view of our organisation’s software development system.

Most of the books written about software development – and in particular software testing – seem to me to ignore the characteristics of the people involved. By doing so they avoid having to deal with how people think, with ways of combating cognitive bias, with inattentional blindness, and so on. I wonder to what extent this might have contributed to the ‘anyone can test, can’t they?’ fallacy we sometimes hear bandied about.

The convenience of not having to think about the vagaries of people – how they think, how they behave, and what they need at particular points in time – may also lead people to over-simplify and then produce a load of ultimately meaningless metrics.

My own view is that a failure to recognise and consider the human aspects of software development projects leads to a lot of problems and faulty logic. Many of these shallow texts are used as course material in university Computer Science courses all over the world, and training material for would-be developers and project managers is drawn from the same sources.

What can we do about this? The professional things for us to do include reading and studying widely; continuously aiming for excellence in our craft; broadening our minds; taking responsibility for our own learning; and working on our credibility so that when we make statements we are taken seriously by our peers in the craft and by the colleagues we work with daily. Having done that we can lead by example and help others ‘see the light’, as it were, because they are not likely to learn about these things from most of the books currently available!

In the interests of balance I give you a list of some of the books that I have read recently or am currently reading and highly recommend to help get a good perspective on testing:

  • “Lessons Learned in Software Testing” by Cem Kaner, James Bach and Bret Pettichord;
  • “Secrets of a Buccaneer Scholar” by James Bach;
  • “An Introduction to General Systems Thinking” by Gerald M. Weinberg;
  • “Perfect Software and Other Illusions about Testing” by Gerald M. Weinberg; and
  • “The Black Swan – The Impact of the Highly Improbable” by Nassim Nicholas Taleb.

So, what do you think? Are the current crop of ‘official’ texts on software testing, and the shallowness of teaching about testing for would-be developers and project managers in Computer Science degrees and the like, contributing as much as I think they are to the debacle surrounding our craft? Comments appreciated!

Dutch Exploratory Workshop in Testing (DEWT) 3

22 April, 2013

This is an expansion of my post published on the Allies Computing Ltd blog: http://www.alliescomputing.com/blog/general-systems-thinking-dutch-exploratory-workshop-testing/


The third Dutch Exploratory Workshop in Testing took place over the weekend of 20–21 April 2013, after an informal gathering the previous evening for drinks and testing chat.

The theme for the weekend was Systems Thinking, and I was glad I had taken the time to read Gerald Weinberg’s book “An Introduction to General Systems Thinking” to prepare myself. I also had the opportunity to discuss General Systems Thinking with James Bach on the Friday evening before the conference and to reflect overnight on our conversation. This proved very useful mental preparation for the day ahead, so thank you, James!

The Saturday started with a check-in where we introduced ourselves and explained any impediments to our ability to contribute, or any concerns we had about the weekend. James Bach also gave everyone a primer on what General Systems Thinking is.

Having established ground rules for the weekend, appointed Joris Meerts as programme chair/content owner, and agreed facilitation points of order, Rik Marselis kicked off the main part of the conference with a discussion of what he had learned about General Systems Thinking and some examples of situations he had witnessed in different organisations, which prompted quite a lot of discussion.

After lunch Ruud Cox showed us an excellent model of the stakeholders in a couple of projects he has been involved with, their interactions, and their areas of interest. Ruud explained how the model helps him establish test ideas and shows him areas of the system where problems might be less tolerable (in someone’s opinion).

We also had an experience report from Derk-Jan de Grood on a force-field model he used to visualise the stakeholders in a project he was involved with and to remind himself whom it was important to stay in contact with.

James Bach followed this up with a further experience report showing how he and Anne-Marie Charrett have applied Systems Thinking to coaching testers. It was fascinating to see how quickly a model of something ‘simple’ could expose so many surfaces and interfaces of the system that could easily pass you by. One that struck me particularly was a feedback loop, labelled ‘Self Evaluation’, that applied to both the coach and the student. It could easily be overlooked, yet it is happening subconsciously all the time and is critical, in my view, to how well such a coaching system evolves.

After voting on topics to discuss in presentations on Sunday we broke off for the day finishing with dinner, some testing challenges and more drinks.

Sunday started with an experience report from Michael Phillips on some of the dogmas he has seen potentially arising in companies he has recent experience with. He gave two attitudes as examples:

  • Testers are seen as not being able to keep up with the pace of development; and
  • Testers are seen as a danger to a deployment because they might disrupt a continuous integration cycle.

James Bach suggested that the first could be countered strongly by turning the argument around: it is the developers who are not keeping up with the risks being introduced. The other important thing testers can do, in this and many other situations, is to work hard on their credibility and build their reputation.

Joris Meerts gave an excellent presentation on what it means to be a good tester and on ways we can know we are good testers. Much of this focussed again on reputation, skill, and influencing and impressing the right people.

This tied in very nicely to James Bach’s talk after lunch on how he built credibility by gaining repute as someone technical but also by upholding professionalism.

Next we had a report from Huib Schoots on recruiting testers and the things he looks for when hiring. For example, what are candidates doing in the wider testing community? Are they tweeting, blogging, or writing articles about testing? It was suggested that interesting insights might be gained by asking candidates what is on their minds at the moment.

All in all, these are the lessons I learned from the weekend:

  • The ability to do Systems Thinking is one of the most important skills for a tester to master;
  • Do not just strive to be good at something – go for excellence;
  • Think about the people involved in designing, building and using the systems we are working on;
  • Discussing testing with passionate people and getting to know them over a weekend is very valuable and rewarding for me personally; and
  • I need to spend more time reading and thinking about General Systems Thinking.

In conclusion, I would like to thank the Dutch and Belgian testers – particularly the founding DEWTs – for inviting me to Holland to join their discussions. It was a privilege to get to know you all and to gain some credibility amongst such a group. I hope you will consider inviting me again in the future!