Some Thoughts on the “Anyone can test” Fallacy and Education about Testing

24 April, 2013

I would like to know what the testing community thinks about this:

At DEWT3 last weekend I was thinking a lot about what happens when human interactions with systems are ignored. If we think of software development as a system we must think about how people are involved and the characteristics of those people if we are to get a rounded view of our organisation’s software development system.

Most of the books written about software development – and in particular software testing – seem to me to ignore the characteristics of the people involved. By doing so they can avoid having to deal with how people think, ways of combating cognitive bias, inattentional blindness, etc. I wonder to what extent this might have contributed to the ‘anyone can test, can’t they?’ fallacy we sometimes hear being bandied about.

The convenience of not having to think about the vagaries of people and how they think, behave and their needs at particular points in time might also lead people to over-simplify and then produce a load of, ultimately, meaningless metrics.

My own view is that a failure to recognise and consider the human aspects of software development projects leads to a lot of problems and faulty logic. Many of these shallow texts are used as course material in university Computer Science courses all over the world, and training material for would-be developers and project managers comes from the same sources.

What can we do about this? The professional things to do include reading and studying widely; continuously aiming for excellence in our craft; broadening our minds; taking responsibility for our own learning; and working on our credibility so that when we make statements we are taken seriously by our peers in the craft and by the colleagues we work with daily. Having done that, we can lead by example and help others ‘see the light’, as it were, because they are not likely to learn about these things from most of the books currently available!

In the interests of balance I give you a list of some of the books that I have read recently or am currently reading and highly recommend to help get a good perspective on testing:

  • “Lessons Learned in Software Testing” by Cem Kaner, James Bach and Bret Pettichord;
  • “Secrets of a Buccaneer-Scholar” by James Bach;
  • “An Introduction to General Systems Thinking” by Gerald M. Weinberg;
  • “Perfect Software and Other Illusions about Testing” by Gerald M. Weinberg; and
  • “The Black Swan: The Impact of the Highly Improbable” by Nassim Nicholas Taleb.

So, what do you think? Are the current crop of ‘official’ texts on software testing, and the shallowness of teaching about testing for would-be developers and project managers in Computer Science degrees and the like, contributing as much as I think they are to the debacle surrounding our craft? Comments appreciated!

Dutch Exploratory Workshop in Testing (DEWT) 3

22 April, 2013

This is an expansion on my post published on the Allies Computing Ltd blog: http://www.alliescomputing.com/blog/general-systems-thinking-dutch-exploratory-workshop-testing/

 

The third Dutch Exploratory Workshop in Testing took place over the weekend 20 – 21 April 2013 after an informal gathering the previous evening for drinks and testing chat.

The theme for the weekend was Systems Thinking and I was glad I had taken the time to read Gerald Weinberg’s book “An Introduction to General Systems Thinking” to prepare myself. I also had the opportunity to discuss General Systems Thinking with James Bach on Friday evening before the conference and to reflect overnight on our conversation. This proved very useful mental preparation for the day ahead so thank you James!

The Saturday started with a check-in where we introduced ourselves and explained any impediments there might be to our ability to contribute, or any concerns we had about the weekend. James Bach also gave us a primer explaining what General Systems Thinking is.

Having established ground rules for the weekend, appointed Joris Meerts as programme chair/content owner, and agreed facilitation points of order, Rik Marselis kicked off the main part of the conference with a discussion of things he had learned about General Systems Thinking and some examples of situations he had witnessed in different organisations, which led to quite a lot of discussion.

After lunch Ruud Cox showed us an excellent model of the stakeholders in a couple of projects he has been involved with, their interactions and their areas of interest. Ruud explained how the model helps him establish test ideas and shows him areas of the system where problems might be less tolerable (in someone’s opinion).

We also had an experience report from Derk-Jan de Grood on a force field model he used to visualise the stakeholders in a project he was involved with and to remind himself whom it is important to keep in contact with.

James Bach followed this up with a further experience report showing us how he and Anne-Marie Charrett have applied Systems Thinking to coaching testers. It was fascinating to see how quickly a model of something ‘simple’ could expose so many different surfaces of the system and interfaces that could easily pass you by. One that struck me particularly was a feedback loop that applied to both the coach and student labelled ‘Self Evaluation’. It is something that could easily be overlooked but it is happening subconsciously all the time and is critical, in my view, to how well such a coaching training system evolves.

After voting on topics to discuss in presentations on Sunday we broke off for the day finishing with dinner, some testing challenges and more drinks.

Sunday started off with an experience report from Michael Phillips on some of the dogmas he has seen arising in companies he has recently worked with. The attitudes he gave as examples were twofold:

  • Testers are seen as not being able to keep up with the pace of development; and
  • Testers are seen as a danger to a deployment because they might disrupt a continuous integration cycle.

James Bach suggested that the first could be countered strongly by turning the argument round and saying that the developers were not keeping up with the risks being introduced. The other important thing testers can do, in this and many other situations, is work hard on their credibility and build their reputation.

Joris Meerts gave an excellent presentation on what it means to be a good tester and ways we can know we are a good tester. Much of this focussed again on reputation, skill and influencing and impressing the right people.

This tied in very nicely to James Bach’s talk after lunch on how he built credibility by gaining repute as someone technical but also by upholding professionalism.

Next we had a report from Huib Schoots on the recruitment of testers and the things he looks for when hiring. For example, what are they doing in the wider testing community? Are they tweeting, blogging, or writing articles about testing? It was suggested that interesting insights might be gained by asking candidates what is on their minds at the moment.

All in all, these are the lessons I took from the weekend:

  • The ability to do Systems Thinking is one of the most important skills for a tester to master;
  • Do not just strive to be good at something – go for excellence;
  • Think about the people involved in designing, building and using the systems we are working on;
  • Discussing testing with passionate people and getting to know them over a weekend is very valuable and rewarding for me personally; and
  • I need to spend more time reading and thinking about General Systems Thinking.

In conclusion I would like to thank the Dutch and Belgian testers – particularly the founding DEWTs – for inviting me to Holland to join their discussions. It was a privilege to get to know you all and gain some credibility amongst such a group. I hope you will consider inviting me again in the future!

Rapid Software Testing with James Bach – Final Day

21 March, 2013

My employer, Allies Computing Ltd, has hosted a condensed version of this post: http://www.alliescomputing.com/blog/testing-psychology-rapid-software-testing-training-james-bach/

 

Well, that’s Rapid Software Testing concluded and I feel enthused about the future. In my previous post I explained some of the things I learned on the first two days of the course. Today we looked at so-called Exploratory Testing. In reality all testing by humans is exploratory to a degree and scripted to a degree, and this can be shown on a continuum. Even if we are running through a set of steps we are interpreting those steps in a certain way, and two people may well interpret them differently. (The term ‘Exploratory Testing’ was coined to distinguish the approach where test design and test execution happen simultaneously, so there is no need to write scripts, from ‘Scripted Testing’, where test design is separate from test execution and often happens at a different time, making scripting a necessity.)

I thoroughly enjoyed the various testing exercises that we undertook during the day. There was one in particular where I wished my normal laptop had not been broken, as Visual Studio would have been VERY useful: I couldn’t get Excel to generate the data I wanted quickly enough. Never mind; the important thing was the thought process that got me to that point.

In testing as in life there are many different ways of doing things. As a tester it is important that what we do stands up to scrutiny. We should equip ourselves to talk or write about our testing in a clear, coherent manner and that will help us gain the confidence and respect of our peers in the industry. Sometimes we take actions or make decisions which take us away from where we want to be and the important thing is that we learn from doing that and use that experience to get ourselves back on track.

I did not expect this course to have so many applications from the cognitive sciences – particularly psychology – but one of the things that I have been very occupied with for several months now has been understanding what makes me do the things that I do and, conversely, why I don’t do certain other things. What makes me see some things as blindingly obvious and miss other things completely? By practising the things I have learned on this course and absorbing the phenomenal amount of information I have received over the past three days, I hope to become a better tester and a better person.

Rapid Software Testing with James Bach – First Two Days

20 March, 2013

My employer, Allies Computing Ltd, has hosted a condensed version of this post: http://www.alliescomputing.com/blog/testing-psychology-rapid-software-testing-training-james-bach/

 

Rapid Software Testing is a course I have wanted to do for a few years now having heard really good things about how it has improved various people’s testing and the way they think. We are two days in now and I am really excited about the things I have learned.

Thinking like a tester

This is a particularly challenging area for me. I tend to build ideas and then narrow them down by critically analysing them; however, the ideas can be quite haphazard, and this could lead to an accusation that I’m not testing properly. I am going to have a go at categorising what I do in a similar way to how James does with the Heuristic Test Strategy Model – where he has headings such as:

  • Structure
  • Function
  • Data
  • Interface
  • Platform
  • Operations
  • Time

I will likely have different headings (some may be the same), but organising my thoughts and ideas in this way will give me confidence that I’m considering the right things, and a much more professional image.
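As a minimal sketch of the idea (the headings follow the list above, but the example test ideas and the helper function are my own invention, not part of the Heuristic Test Strategy Model itself), organising test ideas under headings might look like:

```python
# Organising test ideas under headings, loosely inspired by the
# Heuristic Test Strategy Model. The example ideas are illustrative.
test_ideas = {
    "Structure": ["Review module boundaries", "Check build artefacts"],
    "Function": ["Exercise each advertised feature"],
    "Data": ["Try empty, huge and malformed inputs"],
    "Interface": ["Tab through every screen", "Check error messages"],
    "Platform": ["Run on each supported browser"],
    "Operations": ["Simulate a typical day's usage"],
    "Time": ["Change the system clock", "Test around midnight"],
}

def uncovered_headings(ideas):
    """Return headings that have no test ideas yet -- a quick
    self-check that no area of thinking has been forgotten."""
    return [heading for heading, items in ideas.items() if not items]

print(uncovered_headings(test_ideas))  # [] means every heading has ideas
```

The point of the structure is exactly the confidence mentioned above: an empty heading is a visible gap in your thinking rather than an invisible one.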

The other way I think this will help me is that it will channel my thoughts and make me less inclined to get stressed about what I am doing and whether I am doing it right. Critical analysis is good; it is what makes a good tester, but sometimes striking a middle ground is a good thing. We spoke a lot about System 1 and System 2 thinking: System 1 is off-the-cuff and emotive, which is great for generating ideas quickly, while System 2 is the more measured, time-consuming approach.

I find this very useful because I often don’t recognise when I need to switch mode – and maybe even when the way I am thinking is counter-productive to the situation I am in.

Testability

Another big thing for me was the importance of testability and seeking things that will help me test very well. This was brought home to me during one particular exercise involving a sphere. James played a customer in a top secret problem domain and was unable to tell us things because we were not allowed to know these things! For me this is all part of honing our problem solving abilities.

A Bit of Magic

We had magic tricks to help us see the importance of keeping a broad mind and to stretch our imaginations. They showed me the importance of what is lurking in my blind spots, of which I have many. It is really important to recognise our limitations and the problems those limitations bring us as testers.

Oracles

It is really important to recognise who, what and where our oracles are, as they help us answer the question of whether or not we have a problem. I realised that I have oracles hidden away in all sorts of surprising places!

I am really looking forward to tomorrow when we will be looking at Exploratory Testing and I am excited to play the famous dice game!

SIGiST Conference 13/03/2013

14 March, 2013

Good session at the Special Interest Group in Software Testing (SIGiST) conference yesterday, run as usual by the BCS (British Computer Society), with a good representation of testers from across the different project lifecycles.

Matt Robson from Mastek gave an opening keynote under the title “Be Agile or Do Agile” and gave some salutary warnings on the dangers of testing becoming an ‘ology’. It is very easy to become set in our ways and dogmatic about our approaches to testing and that is harmful. To be ‘agile’ does not just mean that we adopt the Agile Manifesto (http://agilemanifesto.org/) and follow an agile approach to project management; it means we think and act in a way that embraces change and adapts to the situations we are in.

Very often we forget the ‘people’ side of software development. The example was given of a company where senior management turned round one day and said ‘we’re going to go agile and this is how you’re going to work in future’ but didn’t get the staff on board with them. The consequences for staff morale were horrendous and, as a result, software quality dipped.

One of the ways we can get people on board is to think in terms of business goals and outcomes, because that has meaning for people. For instance, instead of saying ‘the registration widget looks broken; I advise against going live’, approach it more as ‘we have found instances where sales staff might not be able to register new customers on our system; I advise against going live’.

What was particularly good was that the talk was given with no PowerPoint slides, so it concentrated the mind far more. I think this is an area testers really have to get good at, but it is also an area that can easily go horribly wrong.

Next we had a talk from George Wallace on the systems challenges of taking an R&D product straight into production. The project was for a very large and complex system, and it was being developed in a very traditional manner with testing entering the fray late in the product’s lifecycle. Suffice it to say that testing was supposed to take 3 months and they are already 9.5 months in and still going!

Sakis Ladopoulos from Intrasoft International talked to us about what he termed project agnostic test metrics for an independent test team. Essentially this was an attempt to measure the performance of testers working on projects, but to do so independently of how the whole project is performing. The approach was to normalise scores for whatever was being counted across all the projects the team was involved with. The ‘best tester’ was very often the one who found the highest number of bugs relative to the time they had taken.
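To illustrate the kind of normalisation described (this is my own sketch of the general idea, not the formula from the talk), scaling each tester’s raw count by the project average makes scores from buggy and stable projects comparable:

```python
# Hypothetical "project agnostic" metric: divide each tester's bug
# count by the project mean, so the score no longer depends on how
# bug-ridden the project as a whole happened to be.
def normalised_scores(bugs_per_tester):
    """bugs_per_tester: {tester: bugs found on one project}.
    Returns each tester's count divided by the project mean."""
    mean = sum(bugs_per_tester.values()) / len(bugs_per_tester)
    return {t: round(n / mean, 2) for t, n in bugs_per_tester.items()}

project_a = {"Alice": 30, "Bob": 10}   # a buggy project
project_b = {"Alice": 3, "Bob": 1}     # a stable project

# The relative scores come out identical despite very different raw counts.
print(normalised_scores(project_a))  # {'Alice': 1.5, 'Bob': 0.5}
print(normalised_scores(project_b))  # {'Alice': 1.5, 'Bob': 0.5}
```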

I was quite uncomfortable with the idea of detaching testing from the rest of the project team but I am just not used to working like that.

After lunch, Balaji Iyer from Mindtree gave a talk on website testing and the challenges faced by modern websites. In particular there was discussion of scripting challenges and how performance can be impacted by technologies such as Ajax (used extensively by Google), Flash (for instance YouTube) and JavaScript, which is often used for making sites look ‘pretty’.

Mindtree have a module currently in development that works with JMeter (a popular open-source load and performance testing tool maintained under the Apache banner) to help testers parameterise their requests and correlate them.
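For readers unfamiliar with the terms, a rough sketch of what ‘parameterise and correlate’ means in load testing: pull a dynamic value (such as a session id) out of one response and substitute it into the next request. The response body and URL template below are invented examples, not JMeter code:

```python
import re

def correlate(response_body, pattern):
    """Extract a dynamic value (e.g. a session id) from a previous response."""
    match = re.search(pattern, response_body)
    return match.group(1) if match else None

def parameterise(url_template, **params):
    """Fill placeholders in a request template with per-user values."""
    return url_template.format(**params)

# Hypothetical response from a login page:
body = '<input name="session" value="abc123">'
session = correlate(body, r'name="session" value="([^"]+)"')
url = parameterise("https://example.test/cart?session={session}", session=session)
print(url)  # https://example.test/cart?session=abc123
```

Without correlation, every simulated user would replay the same recorded session token and the load test would not exercise the server realistically.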

Chris Comey and Davidson Devadoss from TSG (Testing Solutions Group) then gave a fascinating talk on test strategies and how we can write a strategy which looks great on paper but does not work at all in practice because we have written our strategy looking in on testing and have not thought about dependencies on other parts of the business and how to deal with the issues that arise as a result.

It was a great talk because both test strategies that were used as examples were good strategies; they just weren’t the right ones for the job in hand. There is little point in doing a Post Project Review either if, as in the case of one of the projects, you are just going to type it up then stick it on a shelf somewhere and not learn the lessons. All the failings in one of the projects looked at had been raised in a previous ‘lessons learned’ document. Perhaps it would have been better to have called it a ‘Lessons NOT learned’ document!

The closing keynote from Martin Mudge of Bugfinders.com was great: he talked about crowd-sourcing services to get testing done more quickly and perhaps with greater coverage. In an audience-participation exercise, three people each took a different path through a functionality diagram.

The testers come in from all over the world via registration and are selected for projects based on their skills, experience and ability. Defects that are raised are all re-tested to verify that they are repeatable as recorded and genuine issues. Testers receive training materials if there are problems with their testing.

This seems a particularly good way for small teams to catch user interface issues quickly, but it would certainly be dangerous to rely on crowd-sourcing for deeper-level testing (and I can just see some companies getting the impression that is the way to go)!

Overall a good conference and plenty to take away and think about.

Of Marathons and Testers

19 March, 2012

A friend of mine, Simon Alexander, has recently started training to run in the London Marathon in just over a month’s time.

He has started a fund-raising page at VirginMoney, and any donations would be much appreciated, not just by Simon and me but by the staff and children at the hospices too.

It made me think about the relationship with my own chosen craft though and the comparisons that can be drawn:

Preparation

As testers it is important that we are prepared for the testing we are to carry out. We have to keep practising and honing our skills so that when we test we do so efficiently. It is important that we understand the problem domain we are testing (subject to the usual time constraints, of course), so we have to do our research. In this regard I love the story in James Bach’s Secrets of a Buccaneer-Scholar about how he applied himself to learning about patents in order to win a court case.

Similarly, Simon must adequately prepare himself for the task he has set himself. It is no good expecting to turn up and ‘just’ run the marathon; he has to understand the messages his body is giving him: can he run faster, does he need to slow down, is he going at a steady pace? How does he cope going up and down inclines? What should ‘steady going’ feel like? To monitor Simon’s progress see ReachforEACH.

The Marathon Itself

A marathon is a long, hard slog. Just like testing, persistence and dedication are key. Once we are on the trail of a bug, or something in the System under Test does not seem right, we need to be dogged in our determination to get to the bottom of what is going on. We need, as much as possible, to ensure we know enough to help the colleagues who have to make a go/don’t-go decision make that decision.

When we are testing we need to be organised in what we are doing. There are various approaches for this: we can have a mental model that we are working through; a detailed Gantt chart showing exactly how long to spend on each component of the System under Test; a mind map such as those created by Darren McMillan on a regular basis; a prose-based plan.

We also have to keep our concentration levels up. It is important that we keep our minds focussed on the task in hand, otherwise we risk missing serious defects.

If Simon does not apply his training and keep going on the day, he will fail in his mission to complete the marathon. He has various strategies at his disposal for completing the run: he could gradually build up speed across the distance; run steadily for the first 25 miles and then speed up; run at a steady speed throughout; and so on.

He also has to maintain high concentration levels to ensure he keeps up the pace and keeps up with his fellow runners.

Afterwards…

Once testing has finished on a project it is important to reflect and learn lessons from the exercise. Make sure any lessons are learned so that next time you test you do so more efficiently.

Simon needs to keep his fitness levels up so that he does not go back to square one in his training.

If you can, please give generously. May I take this opportunity to wish Simon the very best of luck. Good luck, Simon!

Observations from EuroSTAR 2011: Looking to the Future

27 November, 2011

I intend to write a few blog posts over the coming weeks following on from my experiences at EuroSTAR 2011 in Manchester. I want to start with a post addressing the general theme from the keynotes and my own thoughts on the matters raised.

The Speakers’ Views on The Future of Software Testing (with a few comments from me)

A recurring topic of conversation was the ‘death of software testing’. I do not think that software testing is dead at all – if anything it is growing in importance. Speedy information dissemination will become more important as project teams become better at agile practices.

This is where skilled Exploratory Testing comes into play. Note that word – skilled – testing is a highly skilled craft and not everyone has the mindset to apply those skills.

The first keynote on Tuesday, from Richard Sykes, told us that ‘quality assurance’ is all about giving management confidence in the product or service being produced. I dislike the term ‘quality assurance’ because I do not believe we ‘assure’ anything – that is the programmers’ job. To me testing is all about finding information and passing that on to the relevant decision makers for them to draw their own conclusions.

Gojko Adzic, in his keynote on Tuesday afternoon, made a very important point: he said that we run the risk of losing a very good tester and gaining a very poor coder if we insist on testers coding.

In his keynote on Wednesday morning, James Whittaker, from Google, disagreed and told us that at Google ‘Tester’ has disappeared from people’s job titles. People who were ‘testers’ are now ‘developers’ and are expected to code regularly. I feel this is a dangerous path to go down: developers and testers think about things differently. In my experience developers find different problems to testers and both are needed.

Wednesday afternoon’s keynote told the story of Deutsche Bank’s use of communities. Daryl Elfield explained how groups were formed in various parts of the world for the various divisions within the company. It had nothing to do with testing and was all about people making changes in a centralised way: people could not go off and build their own communities – they were joined to a community by management.

Ben Waters from Microsoft talked to us on Thursday morning about how Microsoft creates customer value through testing, and it started off as a very inspirational talk. Unfortunately it degenerated into treating testing as a phase.

Isabel Evans talked to us about the work she has been doing at Dolphin Computer Access where she has sought to improve quality processes throughout the organisation to enable better testing. I think we need to be careful not to make testing a ‘process’. Testing is a set of skills; it should happen naturally and not be something that is seen as a nuisance that has to be got through at some stage of the project.

My View of the Future of Software Testing

I see a bright future for software testing which is centred on people, skills, adaptability and passion.

Just as there are many aspects to a project – e.g. the product owner sees some, the developers see others, the infrastructure architects see some and the business users see yet more – so testing within a project has many aspects. We should be using techniques and tools appropriate to the individual project we are working on – and that will change from company to company. We have to adapt to the changing needs of our businesses.

I see different expertise being needed to test comprehensively which is why everybody needs to be involved. Testing is a hard job, though, and requires a lot of skill which needs to be honed and practiced. We need to use those skills to shed light on areas of the project nobody else has seen the significance of. We need to use the rest of the team’s knowledge to help our investigations.

We need to keep enhancing our skills and take responsibility for our own education. Having a network of people we can learn new skills from helps in this. Outside work I have been privileged to work alongside Rosie Sherry, Rob Lambert and Phil Kirkham at the Software Testing Club and the community that has been built up there is incredible.

We need to be passionate about our craft. We should seek out the skills that we need to best serve the projects we are working on. We need to have an interest in making the projects we work on great; do not ignore something you have noticed thinking it is someone else’s problem – bring it to their attention.

I am going to discuss some of the other things I learned at the conference in future posts. Specifically I want to write about automation and performance testing. I hope this generates some comment and discussion from the community!

Test Management Forum – 27 July 2011

27 July, 2011

The thirtieth Test Management Forum took place today (27 July 2011) at Balls Brothers in London, EC3 and as usual there was a varied programme.  The first talk I attended was by Andy Redwood, who told us about the psychology of testing and what makes a ‘tester’ tick.  Amongst the things Andy covered was the importance of what he called “Social Construction”: how society works in different cultures and the significance of this in today’s global business culture.

Andy also told us how he goes about forming role descriptions and an experiment where he asked people to describe themselves.  He found that the statements so generated fitted into five categories:  Social Role; Personality; Interests and Tastes; Attitudes; and Current State.

An issue I had with Andy’s talk, though, was that he seemed to adopt a narrow view of testing and appeared to ignore context in much of what he was saying.  One example was that he described testing as being just about ‘breaking things’.  There was an assertion that all tests should be designed so that they fail.  The idea behind this, I think, was to mitigate the ‘absence of defects’ fallacy, where people claim the software must be right because no defects have been found.  However, what about the different objectives of testing?  What if the objective of testing is to show that a feature works in a particular way?  What if you want to prove that a particular area of functionality is present?  I am not saying that you would not do some detailed tests to make sure the software is not behaving in certain ways, but I think you would major on simply proving the existence or behaviour of the function or feature to start with.

Andy also reminded us, though, that we find it difficult to find more than 30% of our own mistakes when we are proof-reading or checking our own work.  That is why it is very important to have peer reviews and solicit the opinions of others.

After the break I attended a talk by Steve Allott from Electromind entitled “Agile in the Large”.  Steve discussed a rebadged incarnation of DSDM (Dynamic Systems Development Method) called DSDM Atern, which has four underpinning principles:  Process; People; Products; and Practices.

The general consensus seemed to be that, while it is not impossible, it is difficult to port pure agile practices over to large projects.  Typically this is because large projects tend to face challenges such as geographical and cultural separation and much higher numbers of people involved.  It would be difficult, practically, to run Scrum with a team of one hundred people, for example.

I intend to do some more reading up on DSDM Atern because I know very little about it.

I thoroughly enjoyed the afternoon and my special thanks to Paul Gerrard and Susan Windsor from Gerrard Consulting for organising the event for us.

SIGiST 21 June 2011

21 June, 2011

SIGiST took place on 21 June 2011 in, as usual, the Royal College of Obstetricians and Gynaecologists in London.

The theme of the day was “What is testing?” and started with an opening keynote from Chris Ambler from Testing Stuff. In his talk he discussed whether we were guardians of quality, innovators, reporters, interpreters or problem solvers.  His conclusion was that we are all five.  In response to a challenge from Fiona Charles (@FionaCCharles) he conceded that testing and quality assurance were two different roles. After this yours truly got to plug http://www.softwaretestingclub.com to delegates and advertise the Testing Planet and local area meetups.

Like Fiona I found myself in agreement with all his other assertions but I really do feel that the role of testing is to act as a beacon highlighting issues to those who actually are in a position to make a release/don’t release decision. I do not believe testers should have responsibility for quality assurance except as part of a wider team. I am part of a Program Team myself so I have a responsibility for helping make a release/don’t release decision but my opinion is just one of several people’s opinions on the matter.

After the mid-morning break we had a short talk from ILEx on their latest on-line training courses and courseware. This was followed by a talk from Neil Thompson (@neilttweet) entitled “The Science of Software Testing”. There were a lot of slides to go with this talk but what I particularly valued were Neil’s slides and explanations on quality being value to some person(s) (quoting Gerry Weinberg) and his descriptions of Exploratory Testing being more than unstructured ad-hoc testing. I found it refreshing to be at a SIGiST conference and having such views expressed so clearly.

I think there was a lot of food for thought in this presentation and I will be looking over my notes again because I suspect I may have missed some of the points that were raised. I find the science behind how we test, and the psychology of testing, very interesting: it is not true that ‘anyone’ can test, so how can we make ourselves better testers? I believe that by understanding what makes testers tick we can use these insights in our own education and continuing professional development.

After Neil’s talk, Nathalie Van Delft (@FunTESTic) and Dorothy Graham (@DorothyGraham) co-hosted a Testing Ethics debate during which we debated five statements ‘House of Commons’ style:

  • You can break the law in order to meet your test goals
  • You must always tell the truth
  • You must always be able to use privacy-sensitive data to test
  • A tester may be responsible for acceptance
  • As a tester you should set aside your own standards and values to test thoroughly

It took a question or two for everyone to get into the swing of things, but the debate was great fun. It was fascinating to hear the views of people from different testing backgrounds, and it all added up to an extremely lively session.

After breaking for the excellent SIGiST lunch, Andy Glover, @CartoonTester, challenged us all to use word replacement to better understand a quotation from James Bach. [Added after initial publication – thanks for sending it to me, Andy: “Testing is the infinite process of comparing the invisible to the ambiguous so as to avoid the unthinkable happening to the anonymous”].

Following on from this we had a short series of lightning talks from Dot Graham (@DorothyGraham) on ‘What is Coverage?’, Neil Thompson (@neilttweet) on ‘What is Risk?’, Nathalie Van Delft on ‘What Else is Testing?’ and Stuart Reid on ‘What is a Testing Professional?’. The lightning talk format was great, with each talk lasting about 10 minutes. I felt it really suited the after-lunch spot because, with the rapid change-over of speakers, you were more inclined to stay awake. If I were to choose two talks that I particularly enjoyed I would have to go for Dot’s and Neil’s because both covered areas that are close to my heart. Dot’s talk, in particular, struck a nerve because ‘coverage’ is such a misused and misunderstood term, frequently bandied about by managers.

Before the afternoon break Stevan Zivanovic (@StevanZivanovic) gave an interesting talk on leadership in an agile context. In particular he focused on how we, as individuals, can and should take responsibility for leadership whether we have ‘manager’ in our job titles or not. He also emphasised that merely being obeyed does not make someone a ‘leader’; indeed, one of his points was that obedience has no place in an agile team.

After the break we had our closing keynote from Dot Graham (@DorothyGraham). The subject of this talk was “Things managers think they know about test automation – but don’t”. Many of the pitfalls she identified resonated with me because they are things that I am trying to avoid myself in trying to work with our developers to introduce more test automation into the business.

All in all I thought it was a great conference and I think Bernard Melson did a great job bringing the programme together. Given Bernard’s background there was, understandably, a fair bit of talk about training and tester education in a formal environment. Critically, though, it did not overpower the conference, as I had initially feared it might.

The next SIGiST conference will be in September 2011.

2010: A Pedant’s Review

29 December, 2010

I have been reminiscing this evening about the passing of another year.  It has been a year during which I have learned a great deal about myself and my chosen craft.  Many of my experiences have strengthened my understanding of why certain things work well for me in my situation and why other things don’t work so well.

I have been blogging and tweeting more and more during the year as I have realised that the thought processes I am going through seem to be of interest to others, which has both surprised me and inspired me further.

One of the subjects which has been giving me a great deal of food for thought has been how we come to understand requirements, and a lot of this boils down to the way in which we communicate with each other (as members of development teams) and with our customers and other stakeholders in the business.  I gained some insight into how to go about uncovering some of these hidden requirements from outside the software testing field: I went to the Old Bailey (for anyone unfamiliar with the term, the Central Criminal Court in London is in a building called the Old Bailey) and listened to some cases being heard.  I followed this up with a visit to the Royal Courts of Justice, one of the higher law courts in the English and Welsh legal system.  It was fascinating to me to listen to the proceedings and observe the way questioning was pursued.

The testing community throughout the world has been a tremendous source of encouragement and it has been great to read about the proceedings of the various conferences that have gone on during the year.  I have mainly centred my attendance on the Software Testing Special Interest Group (SIGiST) conferences arranged through the BCS (formerly known as the British Computer Society) and the UK Test Management Forums.  These events have all been very valuable to me in my learning and I am grateful to the SIGiST organising committee and Paul Gerrard of Gerrard Consulting respectively for continuing to arrange these events.

Besides the formally arranged conferences, a big thank you must go to Tony Bruce for his sterling work organising the London Tester Gatherings.  What tremendous events these are!  It is great to be able to meet up with fellow testing professionals to discuss our craft in an informal setting.

It has been a privilege during the year to help out with proof-reading and reviewing articles for The Testing Planet, the newspaper produced by the good folks at the Software Testing Club.  Again, this is another vibrant community of testers from all over the world and it has been great to be associated with this.

European Weekend Testing, organised by Anna Baik and Markus Gärtner, has provided a safe place in which to practice the software testing craft.  The missions on a weekly basis have always been challenging and a great way to hone existing skills and learn new ones.  Unfortunately time has not been available to keep these going on a weekly basis but I hope to get along to future sessions whenever I can.  The other weekend I attended a Weekend Testing Americas session hosted by Michael Larsen.  It is a great way to learn from others and become better craftspeople while we are at it.

It was from one of the European Weekend Testing sessions that I realised I needed more help with understanding a technique called ‘Transpection’ which I had read about on James Bach’s website.  I contacted James on Skype to ask for his help and I was able to add that to my armoury for further use.

Through all of these events and activities I have met and chatted online to some amazing people – you all know who you are – thank you for all your support over the past year.  I would like to take this opportunity to wish you all a very happy and prosperous 2011.

Happy testing!
