Archive for the ‘Testing’ Category

SIGiST 8 December 2010

9 December, 2010

The final SIGiST (Special Interest Group in Software Testing) conference of the year took place yesterday in London and, as usual, was well worth attending.  The theme for the day was “Keynotes – Six of the best” and on this occasion the programme consisted of talks only: six keynotes and one short talk after lunch.  Unlike other SIGiST conferences I have been to there were no workshops, which I always enjoy, but I still found the day inspiring.

Four talks stood out for me as being excellent:

Les Hatton from Kingston University gave a brilliant talk in which he cited the systems controlling space shuttles as an example of excellently engineered systems and then went on to talk about systems which “should never have been allowed to see the light of day”.

One of the ‘highlights’ (that should probably read ‘lowlights’) included the story of his passage through Heathrow Airport earlier this year.  He had printed his boarding card at self-check-in but the systems at security could not read the card; SAS (the airline he travelled with) could not issue a new boarding card because he already had one, unless he gave them the assurance that he was who he said he was (!); he was then unable to get through security because he had two boarding cards…  As if it could not get any worse, the public information displays in the departure lounge had crashed.  As a keen traveller I could really identify with Les’ frustrations here!

The final part of Les’ talk encouraged us to focus on systems thinking and take some of our cues from the laws of physics.  Once you find a bug in a particular area of the system you are likely to find more bugs in that same area.  Don’t give up was the message.

Gojko Adzic gave his excellent talk on Specification by Example.  Once again he made very good use of Prezi and encouraged us to use clear and concise language that our colleagues and customers will understand.   Too much time is wasted by misused terminology.  In a later talk mention was made of test cases and test conditions – actually they could both have been referring to the same thing – so why distinguish between them?

As usual Gojko had lots of examples to illustrate the success of the technique.  I find the concept of ‘living documentation’ particularly valuable and I liked the example of customer service staff referring to the tests that had been run to help answer customer queries.  It makes the tests very powerful because each test is directly addressing a particular problem being faced.

In the afternoon Fran O’Hara from Sogeti Ireland gave a talk on Scrum.  Included in the delegate pack for the conference was a Scrum cheat sheet illustrating the different components of a Scrum project and showing how they fit together.  I took it into work today and our Project Manager has found it very helpful. I thoroughly enjoyed Fran’s talk and particularly liked the idea of having two definitions of ‘done’: one definition describes what it means to be ‘potentially shippable’ and the other defines what ‘done’ means in terms of the current sprint.  There are many projects where it is not feasible to produce a potentially shippable product after one or two three-week cycles and this helps to deal with that.

Susan Windsor from Gerrard Consulting finished the day talking about how we develop ourselves and what it means to be a really good tester.  Susan challenged each of us to become testing gurus, super-testers in our organisations.  This will pay dividends because of the tremendous knowledge we can bring to the table about how our projects are really going, whether we are working in a traditional or a more agile context.

Susan discussed the certification issue briefly and reminded us that, although we can get a sense of achievement out of having a certificate, one of the biggest problems with certification is that it is used as a screening mechanism when hiring staff by people who really know very little (if anything) about testing.  Personally, I would add that the syllabus is too restricted in its scope and is based on very traditional testing processes which have been shown to be less efficient than the agile methods being adopted more and more.

Other talks included a career progression report from Erkki Poyhonen where he experienced a paradigm shift without a clutch (cue a Dilbert cartoon), a report of an entity-relationship modelling exercise for testing effectiveness from John Kent and a short talk from Geoff Thompson on the things that have influenced him in testing.

As always the day ended in the Volunteer where I enjoyed continuing to chat with fellow testers about our exciting craft.  The best thing for me about SIGiST is the networking and getting to know other testers.  As a result of attending these and other conferences I have built up a network of people with whom I communicate regularly and it has really expanded my knowledge of my chosen craft.  I would encourage everyone to get involved with testing conferences in their various locations because together we can learn a tremendous amount.

Transpection Explored

6 December, 2010

Transpection_Skype_Chat_20101205

 

I had a great learning experience last night.  Those of you who were at the European Weekend Testers session or who have read my write-up of the session (here) will know that I attempted a technique called Transpection which I had read about previously at http://www.satisfice.com/blog/archives/62.

I was not very happy with my attempt: it just did not feel right.  One of the great things about Weekend Testing sessions is that you can try new things in an environment where it does not really matter and everything becomes part of the learning process.

I decided to solicit the help and advice of James Bach to see where I went wrong and to understand what I should have been doing, so I contacted him on Skype.  He readily agreed to help and to demonstrate Transpection for me.  I am publishing the full transcript of that Skype session (see link above) at James’ suggestion because we think it will be of help to other testers.

I have only lightly edited the transcript, grouping some of the statements into paragraphs to aid readability, but the content is all there for you to see.  You will see my own learning process through this and, hopefully, those who want to know more about the technique will come to understand this really useful aid better.  You will even see where I mistakenly thought James was trying to bring proceedings to a close!

I would like to thank James for his time yesterday evening and for supporting me in this quest.

Feel free to make comments or ask me any questions…

European Weekend Testing – 4 December 2010

4 December, 2010

Due to various commitments over recent months I have not been as regular an attendee at the Weekend Testing sessions as I would have liked.  However, the session this afternoon was a great one to come back to.

Our mission was to devise a ‘cheat sheet’ to be used by Helpdesk staff to help them improve the quality of their defect reports.  A lot of questions were asked to clarify what the problems were at the moment, what sort of environments the Helpdesk staff had available to them, whether there were any language issues to be considered, etc.

Ajay Balamurugadas (http://www.enjoytesting.blogspot.com/ and http://twitter.com/ajay184f) suggested working on this in teams and I had the pleasure of working with him during the course of the afternoon.  We quickly got down to drafting our cheat sheet, starting off by each typing our ideas for what should go into the sheet using a brilliant tool which I had not seen before, http://typewith.me, which allowed us both to work on the same document and see what each of us was doing in real time.

Ajay asked me to note down the sort of information that I would ask for if he called me for technical support.  My answers spurred us both on and we became much more productive in thinking up the things that would need to go onto the cheat sheet.  Similarly I asked Ajay about how he deals with severity.  In some ways our conversation reminded me of James Bach and Michael Bolton’s work on Transpection (http://www.satisfice.com/blog/archives/62) which is a technique I am trying to master.  I really need more practice…

We categorised and smartened up the cheat sheet and Ajay prepared it all as a PDF that we could share with the other weekend testers.

Following this we had the de-briefing session which, as always, was as informative as – if not more so than – the actual mission itself.  I find I learn so much from hearing about how others have tackled problems and finding out how they have put their knowledge of the testing craft to good use.  We had all taken slightly different approaches depending on how we each viewed the audience and what they were trying to do.  Ajay and I had focussed on the Helpdesk staff writing bug reports but others had concentrated on helping the Helpdesk staff get the right information out of the customer in the first place.

The whole session was really enjoyable.  “Thank you” to Anna Baik for facilitating the session and to all the contributors for their help during the afternoon.  I look forward to joining future sessions as time and circumstances permit.

London Tester Gathering – 2 November 2010

12 November, 2010

I really enjoyed the London Tester Gathering on 2 November.  It was good to finally meet Darren McMillan (http://www.bettertesting.com) after several online conversations and Sharath Byregowda (http://testtotester.blogspot.com/).

Michael Bolton (http://www.developsense.com/blog) gave us a short talk entitled “Burning Issues in Software Testing” which was appropriate with Bonfire Night being just round the corner.  As always this was an inspirational talk full of the Michael Bolton sense of humour which I – and most of the audience – appreciated.

There have been many good blog posts about the night, including Darren McMillan’s write-up, so I will leave my own summary at that.  Can I just say, though, a big “thank you” once again to Tony Bruce for organising a great evening.  I am just sorry that I could not stay longer but I was staying in an unfamiliar part of town overnight.

Until next time…

UK Test Management Forum – 27 October 2010

12 November, 2010

This is my write-up of the UK Test Management Forum meeting on 27 October 2010.  Sorry it’s been so long in coming but things have been pretty hectic of late.

As usual there were three tracks running in parallel with two talks apiece.

The first talk I went to was led by Gojko Adzic entitled “Continuous Validation, Living Documentation and other tales from the dark side”.  Gojko discussed the fact that we often use different names for the same thing or use the same word but mean something different each time.  He highlighted various examples of this and proposed some solutions which make the terms more meaningful for people.  Graham Thomas pointed out that although this process has happened before – most notably about 25 years ago in the structured software development world – we still need to keep reviewing our terminology.

Gojko is writing a book on this subject and has a website to run alongside the book.  See http://specificationbyexample.com for more details.

We had great discussions within the session.  I think we could all see that there is a need to address the confusion that we create by our use of terminology in the industry.  As Gojko pointed out, legacy technical names do confuse people and create barriers which can hold people back from embracing change and adopting new processes.

The second talk I went to was entitled “The testing challenges associated with distributed processing” and was by Mike Bartley from TVS.  Mike was talking about the challenges we face with the rise of multi-core processors.  Whilst we can write parallel-savvy code, if the hardware and software platform on which the code is running is not using a distributed architecture, there will be few – if any – benefits from the parallel code.

Mike talked about two common paradigms for distributed computing: message passing (which can lead to Race conditions) and shared memory.
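As an aside, a race condition on shared state is easy to sketch in a few lines of code.  The toy example below is mine rather than anything shown in Mike’s talk: several threads increment one shared counter, and with a lock the result is deterministic, while without it updates can interleave and be silently lost (although CPython’s global interpreter lock masks the race much of the time, which only underlines how intermittent such failures are).

```python
import threading

def run_counter(n_threads=4, increments=100_000, use_lock=True):
    """Increment a shared counter from several threads.

    With use_lock=True each read-modify-write is done under a lock and the
    result is always n_threads * increments; with use_lock=False the
    threads race on the shared value and updates can be silently lost.
    """
    counter = 0
    lock = threading.Lock()

    def worker():
        nonlocal counter
        for _ in range(increments):
            if use_lock:
                with lock:
                    counter += 1
            else:
                # Unsynchronised read-modify-write: two threads can read the
                # same value, both add one, and one update is lost.
                counter += 1

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```

With the lock, `run_counter()` reliably returns 400,000; without it the result is timing-dependent and may fall short, which is exactly why defects like this are so hard to reproduce when testing.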

Mike recommended reading J.B. Pedersen’s book Classification of Parallel Programming Errors (which seems to be out of print and unavailable on Amazon).

He recommended that we adopt diverse static analysis techniques and think about design patterns and policies in our tests.  From a tools perspective we should consider which tools we can use at an architectural level to gain most benefit.

As always I thoroughly enjoyed the afternoon and felt I benefitted from the talks.  Things that I will take away from the talk include thinking more about the language I use to describe the testing that I am carrying out and thinking more about static analysis as a technique for checking out our distributed code.

Many thanks to Paul Gerrard and Susan Windsor from Gerrard Consulting for hosting the event.  After the main forum talks we had a discussion about the future of the Test Management Forum; more information about the things we talked about, and the decisions that have subsequently been made, can be found at http://www.uktmf.com.

The Prezi and PowerPoint slides from the two talks I attended are also available from http://www.uktmf.com.

Comments welcome!

“Define structure”: my thoughts on James Bach’s challenge

10 October, 2010

Yesterday evening (in the UK anyway) James Bach set a challenge for Rob Lambert:

jamesmarcusbach:  @Rob_Lambert Quick tester challenge for Rob Lambert: Define “structure.” You have 10 minutes. #softwaretesting

It got me thinking too and my brain went into overdrive so I thought I would set out my own thoughts on the matter in a blog post.

In his response Rob highlighted several different types of structure and brought out that structures are, in essence, a ‘system’.  I like to think of structures as providing a framework, or a set of boundaries, within which people or things should operate.

For example:

  • The laws of the land – a set of constraints governing how people in the country are to behave;
  • Buildings – the walls of the building define the space available to its occupants;
  • Skeletons – define the shape and provide the basis for growth of the person or animal they belong to;
  • DNA – defines the characteristics of the living organism containing that DNA sequence;
  • Roads – show us where we should and should not drive.

Some of these structures are more flexible than others; they are easier to change than others.  For example there are some fish and animals which, because of their DNA, can change their shape or their colouring to exactly match their surroundings and thus evade predators.  This change can happen in an instant.  Buildings can be extended but someone has to do something to make that happen and it requires hard work.  The impact of structural change can be quite dramatic.  If a road is re-routed the impact on the natural environment can be huge and get people quite upset.

Some structures are clearly vital for us: what we know of as the laws of physics attempt to document the way the universe works.  It would be horrendous if what we know as gravity stopped behaving in a constant fashion; if the earth’s orbit round the sun stopped we would have massive problems.

In computing we rely on certain structures.  For example networking: could you imagine trying to test a network application if there was no defined way of communicating between two computers and every manufacturer did something completely different?

There seemed to be a lot of emphasis at Agile Testing Days this year on education.  Our education provides us with a structure and I believe we are each responsible for ensuring that we keep ourselves up-to-date, that we self-educate and that we strive to be as flexible and adaptable to our environments as possible.  We will not do this by sticking rigidly to the requirements of a particular certification body; we have to go out there and do what is right for us and for our clients.  We need to be like those creatures that can change in an instant to blend into their surroundings.

Your thoughts and comments are, as always, welcome…

SIGiST 16 September 2010

17 September, 2010

I attended the SIGiST yesterday entitled “A Testing Toolbox” and found it to be, as usual, an excellent conference with lots of thought-provoking talks.

The Irrational Tester

The opening keynote from James Lyndsay focussed on the biases we all have built into us and need to avoid if we are to be effective testers.

James used the headings:

Confirmation bias – where we find what we expect to find so don’t look for situations where we might find unexpected behaviour;

The “Endowment effect” – where people will often demand much more to give up something they have acquired than they originally paid for it;

A “failure to commit” – if work is broken up into small chunks with deadlines set for each we are more likely to make progress on our projects than if there was a single deadline set for the end of the project;

Illusion of control – where we fool ourselves into thinking that we have found the only cause of a problem and don’t think about whether there may be anything else that might cause the same defect; and

Broken windows – where acceptance of minor bugs might lead to an acceptance of other much more serious bugs in the system.

Talks where human psychology is discussed – especially how it affects groups of people and how they interact and behave – are really interesting to me and I thoroughly enjoyed James’ talk.

Application Security Awareness

The second talk was by Martin Knobloch from OWASP.org entitled “Application Security Awareness”.  OWASP.org is a great starting point for getting information on security testing; it contains extensive documentation, code projects, conference details and is made up of over 100 Chapters worldwide (and still growing).

The main thrust of the talk was encouraging people to identify and thoroughly understand the weakest link in their systems.  Very often this is not a technical weakness: it can easily be a process or ‘people’ weakness that leads to systems being exploited.

We need to beware of the dangers of creating the illusion of security but not actually doing anything to really make our applications secure.

All applications have the same issues – the techniques discussed on the OWASP website can be applied equally to ‘normal’ applications as to websites and web applications.

Delight Your Organisation, Increase Your Job Satisfaction and Maximise Your Potential

The next talk was entitled “Delight your organisation, increase your job satisfaction and maximise your potential” and was given by John Isgrove.  This talk focussed on what characterises an Agile project and what does not and then discussed a methodology called DSDM Atern.  I had previously heard of DSDM but I had not encountered the ‘Atern’ variation on the theme.

DSDM Atern provides a framework for the management and delivery of an entire project with guidance for managers.  Scrum provides a one-size-fits-all process but contains little guidance for managers – yes, there are the Scrum Masters, but are they always the decision makers?

There seem to be a lot of benefits for organisations adopting the approach and I intend to study it a bit more and find out what other people within my organisation know about it and whether any of its principles can be adopted by us.

What I found interesting was the way the Features, Quality, Time, Cost triangle is turned on its head.  In a traditional environment Features and, to a certain extent, Quality are fixed and the Time and Cost elements are flexible.  With DSDM Atern, Time, Cost and Quality are all fixed and the Features to be implemented are flexible.

I found myself agreeing with James Windle’s comment at the end that it was one of the best talks I had heard on agile methodologies and the difficulties that must be overcome.  I am just sorry that there is neither the time nor the space to put a lot of detail on the talk in this blog post.

The excellent SIGIST lunch followed this talk and, as usual, it was great to network with other testers and see the tools and services exhibition.

Lessons From Data Warehouse Testing

The Sharepoint after lunch was interesting: Peter Morgan shared his experiences of testing data warehouse applications.

The New Role of The Tester: Becoming Agile

Stuart Taylor was next up with an inspirational experience report of how his organisation made the move to an agile process.

Wholesale changes were made to the working environment (even moving from curved desks to straight desks arranged so that paired working was easier and people could talk across the table); testers were involved throughout the design, development and delivery processes; and as much as possible was automated in Java using test-driven development techniques, which allowed the dedicated testers to get on with manual Exploratory Testing (the stuff we all love to do).

As a result of moving to an agile process they have seen improvements in the quality of their software and in their responsiveness to changes in business need, and there is much more negotiation over schedules.

How to Suspend Testing and Still Succeed – A True Story

Graham Thomas gave an account of his experiences when testing had to be suspended.  Testing was suspended on this project because there was no way anything was going to be delivered with the way things were working at the time.  The biggest problems were with the systems integration risk, which had been accepted at an earlier stage in the development process, and with the automation infrastructure.

Initially progress was made: Systems Integration testing was successfully completed (or at least as it was scoped in the 50-page test strategy) but testing was then held up by slippage in code delivery from development, issues with the test automation infrastructure and a qualified exit on non-functional proving of the infrastructure (which also took 250% more time than it should have done).

Eight weeks into a 12-week schedule it was estimated that, at the current rate of progress, UAT was going to take over a year to complete, and many of the issues being found were to do with the automation infrastructure and product configuration – i.e. the systems integration risk had matured.

They held a series of workshops with all stakeholders to get an idea of what was wrong and plan a resolution that would allow a resumption of testing.  Graham pointed out that it is very difficult to set effective Resumption Requirements without knowing the criteria by which testing was suspended.  It was also difficult to set Suspension Criteria without knowing what was going wrong.  This is at variance with IEEE 829 but, when you think about it, it is rather obvious!

So the remainder of the project was re-planned – bearing in mind that the go-live date was non-negotiable due to regulatory constraints – and amazingly the Resumption Requirements were met on time at the end of 4 weeks.

A daily war room meeting was set up at 13:00 at which attendance was mandatory; only the decision-makers and those directly grinding out the work to achieve the project’s aims were permitted at these meetings.

Graham made it all sound very easy but it was clear that it was a very painful process which caused a lot of heartache and irretrievable breakdowns in the professional relationships between people.

Graham’s talk was fascinating and gave a real insight into Suspension Criteria and Resumption Requirements and the effects that suspension can have on a project.

UAT: A Game for Three Players

The final keynote talk of the afternoon was “Acceptance Testing: A Game for Three Players” by James Windle.  This was another excellent talk in which James gave us a run-down on how he approaches UAT going right back to the definition of the Acceptance Test Criteria.  Whilst there was nothing really ‘new’ about James’ talk it served as a very helpful reminder of this critical part of testing.

The day ended, as usual, at the Volunteer on Baker Street enjoying further networking with testers.

A big thank you to the SIGIST committee for organising the event once again and congratulations to Graham Thomas and Mohinder Khosla on their respective appointments as Programme Secretary and Secretary of SIGIST.

European Weekend Testing – 31 July 2010

31 July, 2010

The 29th European Weekend Testers session was very enjoyable.  We were testing an infuriating application on the website http://cohere.open.ac.uk/ which is supposed to allow you to generate ideas and show connections between them.  To me it was very similar to mind mapping but with the added twist of incorporating social networking as well.

We had a great discussion afterwards which went slightly off-track, talking about testing conferences and the Software Testing Club (linked on the right).  What I think is great about Weekend Testing is that it brings together testers – and today a wannabe-tester – with a common purpose to test an application, and we can all learn from each other and share our knowledge and experience.

Today’s chat transcript has been posted up at http://weekendtesting.com/archives/1361.  It is well worth a read through.

I highly recommend that testers get involved with Weekend Testing if they have the time because it is a very rewarding couple of hours.  To get involved all you need to do is ping EuropeTesters on Skype at about 15:30 UTC on a Saturday.  If you are new it is a good idea to let the facilitators know you intend to join either by e-mail or tweet @europetesters.

UK Test Managers’ Forum – 28 July 2010

28 July, 2010

The 26th Test Managers’ Forum was held this afternoon at Balls Brothers Minster Court in London EC3 and as before was a really good afternoon.  It was good to catch up with testers who I have met on previous occasions and at other gatherings and exchange knowledge.

As usual there were six sessions on the agenda.  The first session I attended, run by Jonathan Pearson from Original Software, was entitled “10 Black Holes to avoid for a successful product delivery” and was illustrated with examples from Star Wars.

The black holes we are to avoid are as follows:

  • Walking before you can crawl – before contemplating releasing a product we need to understand when we are finished, but we also need to avoid getting into a never-ending journey.  Jonathan asserted that there is a need for an Application Lifecycle Management (ALM) Strategy including a robust Test Strategy.   A centralised collaboration platform can give information about the progress of the projects which helps inform the decision making process.  Early involvement of QA in requirements and business rule reviews was encouraged as was automation where possible – particularly of regression tests.
  • Quality Assurance as a silo – this was an interesting one for me.  At what level in our organisations does testing have an influence?  I am very fortunate in that I have support at board level for testing and quality assurance but there is also a case from a reporting perspective that it can be better to have a reporting line into the business side of the organisation to aid decision making.
  • Lack of organisation – to avoid this requires tidiness (there is a need for centralised information); knowledge needs to be shared; and we should aim to reuse wherever possible, including test documents, data and environments.
  • Lack of control – the main point emphasised here was that avoiding this black hole depends on taking care to address the previous three points; without them there is a danger of a lack of control.
  • Lack of visibility and out of date information – this section focussed on Application Quality Management (AQM) techniques and Jonathan asserted that there are a number of metrics that are essential to understanding how well things are going in a project.  Metrics and ‘bean-counting’ are something that I am not really sold on as far as value is concerned because I feel that so much time can be spent gathering metrics that the task of testing is overlooked.  I also worry that the metrics give something that can be grabbed hold of without an understanding of the context in which those figures were gathered and thus lead – inadvertently sometimes – to poor decisions being reached.  Examples of tools like Concerto and Sonar were suggested as ways of gathering data from projects.
  • Unnecessary rework – examples of wastage in this area were suggested, including project outlines and test data.  We should seek to minimise the time we spend rebuilding test environments and test data.  It was suggested that we consider configuration management for test environments, and an aim of regression testing could be to go to 100% automation.  I think we need to be very careful with the latter because we can easily get carried away with automation even when it is inappropriate in the context in which we work.
  • Hindering collaboration with overly technical tools – this was illustrated with the Keep It Simple Stupid (KISS) mnemonic.  It was recommended that we should aim for:
    • Central organisation
    • No coding
    • Flexibility
    • Scalability

We should avoid:

  • Technical expertise barriers
  • High maintenance processes
  • Use of disparate tools, because these could increase complexity.

The remaining three black holes were:

  • Imposition of methodology – for example using a tool or technique that ties you into a V-model development method or mandates that you only follow Agile methods.
  • Lack of cross-project visibility – the main point here was visibility at an organisational level.
  • Wasting knowledge and time – the encouragement was to share knowledge as much as possible.

During the talk there was good discussion amongst the group.  As always with sessions such as this it is great to get the reassurance that the vast majority of testers are working in the same way as you are and facing the same problems.  Sometimes, though, issues are flagged up which are truly mind-blowing.  One such instance arose during this talk and centred on the ability to roll back a test environment or roll back test data to a consistent state.  I have used VMware products for some time now and don’t really know how I could survive without the snapshotting facility.  It therefore amazed me that such a high proportion of testers do not seem to use such techniques.  I hope that they have some other way of achieving the same effect!

The second talk I went to was by James Wilson from Secerno entitled “Testing in an Agile with Scrum environment” which discussed difficulties associated with testing.  It was a very lively session as many of the points would be equally valid with any development cycle or project management technique.

One particular area of concern was a chart showing quality, time and cost: it was asserted that, because time and cost are fixed in a sprint in an agile project, the only thing that can move is quality, and therefore the quality of the final product is likely to suffer.  James viewed the scope of the changes made and the scope of testing as part of quality in this argument.  It was pointed out that in an agile environment ‘quality’ is everyone’s ‘problem’ as such.

There was also quite a bit of discussion on what constitutes ‘release quality’ and how that meaning can change during the lifetime of a project.  It was great to listen to the ideas and suggestions being put forward by other testing practitioners in this regard.

For example there were three areas of concern for James: soak testing, stress testing and regression testing.  There was a lot of discussion about soak testing and stress testing and where in the cycle long-running tests like these should sit.  One approach that was suggested was performing such tests outside the sprint cycle altogether: accept that a soak test is going to take three or four weeks – for example – to give meaningful results, so perhaps run it as a separate project in parallel to the main one developing the application.  It was also suggested that sometimes it is just as valid to run these sorts of tests after the software has gone live – but be careful to make sure the risks of doing this have been accepted.

Unfortunately James did not have a chance to get onto regression testing but it was a great talk nonetheless and I gained a lot from listening to the discussions around the points he raised.

After the talks finished as usual we went upstairs for the traditional networking session.  I always find this very valuable and enjoyed meeting up with people again.

A big thank you to Paul Gerrard for organising the afternoon.  Rob Lambert has also blogged (with photos – eeek!): http://thesocialtester.posterous.com/july-uktmf

Testing Lessons from England’s Courthouses

18 July, 2010

I had an interesting day on Friday (16 July).  I decided to stay on in London for an extra day after the London Testers Gathering the previous evening and was glad I did.  During the course of the day I learned much that can be applied to the software testing craft.

I have always been interested in the law but it is many years since I sat in a Courthouse listening to cases.  I took the opportunity on Friday morning to visit the Old Bailey (the Central Criminal Court in London, famous because of the number of notable cases brought before it).

In the first case I sat in on, the Counsel for the Prosecution was summing up for the jury.  Note-taking would prove vital for recalling the facts when the jury came to deliberate its verdict, and it was encouraging to see the number of people jotting notes.  I think if I were sat on that jury I, too, would have needed to take copious notes to aid concentration and counter the dry monotones being used by the barrister!

The second case was also very instructive for me as a tester.  An expert witness was being cross-examined by the Counsel for the Defence but the answers being given were unclear.  I was fascinated listening to the way the barrister dealt with this.  He kept rephrasing the same question but probing slightly different angles.  I was reminded of the persistence with which we must explore the questions we try to answer by testing.  Do we just accept an answer which does not quite fit or do we explore other avenues of enquiry to understand what we have observed?

After the expert witness was allowed to stand down from the witness box, a further witness was called.  The questioning style was altered to suit the witness and less technical language was used in the phrasing of the questions.  When we are testing do we go for a ‘one size fits all’ approach and test everything in the same way, or are we careful to tailor our approach to the situation?  I trust that for all of us it is the latter.

I then spent an enjoyable afternoon at the Royal Courts of Justice where the higher Courts of the legal system in England and Wales sit.  I went into three cases there and it was interesting being reminded again of how persistent we must be to get to the bottom of some of the questions we need to answer by testing.  In one of the cases brought before the Court of Appeal an adjournment was being sought because new ways of interpreting and dealing with a piece of evidence had come to light.  Are we careful to re-consider our approaches to testing in the light of new ideas and evidence?

Whilst it is very good to read testing literature extensively it is also good to explore other disciplines and see what we can learn from them to help us in our day-to-day testing.  For me my day in London’s Courthouses was very educational and I will look forward to visiting again some time.

