Archive for the ‘Testing’ Category

Some Thoughts on the “Anyone can test” Fallacy and Education about Testing

24 April, 2013

I would like to know what the testing community thinks about this:

At DEWT3 last weekend I was thinking a lot about what happens when human interactions with systems are ignored. If we think of software development as a system, we must consider how people are involved, and what characteristics those people have, if we are to get a rounded view of our organisation’s software development system.

Most of the books written about software development – and in particular software testing – seem to me to ignore the characteristics of the people involved. By doing so they avoid having to deal with how people think, how to combat cognitive biases, inattentional blindness, and so on. I wonder to what extent this has contributed to the ‘anyone can test, can’t they?’ fallacy we sometimes hear bandied about.

The convenience of not having to think about the vagaries of people – how they think and behave, and what they need at particular points in time – might also lead people to over-simplify and then produce a load of ultimately meaningless metrics.

My own view is that a failure to recognise and consider the human aspects of software development projects leads to a lot of problems and faulty logic. Many of these shallow texts are used as course material in university Computer Science courses all over the world, and the training material for would-be developers and project managers is drawn from the same sources.

What can we do about this? The professional things for us to do include reading and studying widely and continuously aiming for excellence in our craft; broadening our minds; taking responsibility for our own learning; and working on our credibility, so that when we make statements we are taken seriously by our peers in the craft and by the colleagues we work with daily. Having done that, we can lead by example and help others ‘see the light’, as it were, because they are not likely to learn about these things from most of the books currently available!

In the interests of balance I give you a list of some of the books that I have read recently or am currently reading and highly recommend to help get a good perspective on testing:

  • “Lessons Learned in Software Testing” by Cem Kaner, James Bach and Bret Pettichord;
  • “Secrets of a Buccaneer Scholar” by James Bach;
  • “An Introduction to General Systems Thinking” by Gerald M. Weinberg;
  • “Perfect Software and Other Illusions about Testing” by Gerald M. Weinberg; and
  • “The Black Swan: The Impact of the Highly Improbable” by Nassim Nicholas Taleb.

So, what do you think? Are the current crop of ‘official’ texts on software testing, and the shallowness of teaching about testing for would-be developers and project managers in Computer Science degrees and the like, contributing as much as I think they are to the debacle surrounding our craft? Comments appreciated!

Dutch Exploratory Workshop in Testing (DEWT) 3

22 April, 2013

This is an expansion on my post published on the Allies Computing Ltd blog: http://www.alliescomputing.com/blog/general-systems-thinking-dutch-exploratory-workshop-testing/


The third Dutch Exploratory Workshop in Testing took place over the weekend of 20–21 April 2013, after an informal gathering the previous evening for drinks and testing chat.

The theme for the weekend was Systems Thinking and I was glad I had taken the time to read Gerald Weinberg’s book “An Introduction to General Systems Thinking” to prepare. I also had the opportunity to discuss General Systems Thinking with James Bach on the Friday evening before the conference and to reflect overnight on our conversation. This proved very useful mental preparation for the day ahead, so thank you, James!

The Saturday started with a check-in where we introduced ourselves and explained any impediments to our ability to contribute, or any concerns we had about the weekend. James Bach also gave us a primer explaining what General Systems Thinking is.

Having established ground rules for the weekend, appointed Joris Meerts as programme chair/content owner, and agreed facilitation points of order, Rik Marselis kicked off the main part of the conference with a discussion of things he had learned about General Systems Thinking and some examples of situations he had witnessed in different organisations, which led to quite a lot of discussion.

After lunch Ruud Cox showed us an excellent model of the stakeholders in a couple of projects he has been involved with, their interactions and their areas of interest. Ruud explained how the model helps him establish test ideas and shows him areas of the system where problems might be less tolerable (in someone’s opinion).

We also had an experience report from Derk-Jan de Grood on a force field model he used to visualise the stakeholders in a project he was involved with and to remind himself whom it was important to maintain contact with.

James Bach followed this up with a further experience report showing us how he and Anne-Marie Charrett have applied Systems Thinking to coaching testers. It was fascinating to see how quickly a model of something ‘simple’ could expose so many different surfaces and interfaces of the system that could easily pass you by. One that struck me particularly was a feedback loop, labelled ‘Self Evaluation’, that applied to both coach and student. It could easily be overlooked, yet it is happening subconsciously all the time and is, in my view, critical to how well such a coaching system evolves.

After voting on topics to discuss in presentations on Sunday, we broke off for the day, finishing with dinner, some testing challenges and more drinks.

Sunday started with an experience report from Michael Phillips on some of the dogmas he has seen potentially arising in companies he has recent experience with. The attitudes he gave as examples were twofold:

  • Testers are seen as not being able to keep up with the pace of development; and
  • Testers are seen as a danger to a deployment because they might disrupt a continuous integration cycle.

James Bach suggested that the first could be countered strongly by turning the argument round and saying that the developers were not keeping up with the risks being introduced. The other important thing testers can do in this and many other situations is to work hard on their credibility and build their reputation.

Joris Meerts gave an excellent presentation on what it means to be a good tester and how we can know we are one. Much of this focussed again on reputation, on skill, and on influencing and impressing the right people.

This tied in very nicely with James Bach’s talk after lunch on how he built credibility by gaining a reputation as someone technical while also upholding professionalism.

Next we had a report from Huib Schoots on the recruitment of testers and the things he looks for when hiring. For example, what are they doing in the wider testing community? Are they tweeting, blogging or writing articles about testing? It was suggested that interesting insights might be gained by asking candidates what is on their minds at the moment.

All in all, these are the lessons I have learned from the weekend:

  • The ability to do Systems Thinking is one of the most important skills for a tester to master;
  • Do not just strive to be good at something – go for excellence;
  • Think about the people involved in designing, building and using the systems we are working on;
  • Discussing testing with passionate people and getting to know them over a weekend is very valuable and rewarding for me personally; and
  • I need to spend more time reading and thinking about General Systems Thinking.

In conclusion I would like to thank the Dutch and Belgian testers – particularly the founding DEWTs – for inviting me to Holland to join their discussions. It was a privilege to get to know you all and gain some credibility amongst such a group. I hope you will consider inviting me again in the future!

Rapid Software Testing with James Bach – Final Day

21 March, 2013

My employer, Allies Computing Ltd, has hosted a condensed version of this post: http://www.alliescomputing.com/blog/testing-psychology-rapid-software-testing-training-james-bach/


Well, that’s Rapid Software Testing concluded and I feel enthused about the future. In my previous post I explained some of the things I learned on the first two days of the course. Today we looked at so-called Exploratory Testing. In reality all testing by humans is exploratory to a degree and scripted to a degree, and this can be shown on a continuum. Even if we are running through a set of steps we are interpreting those steps in a certain way, and two people may well interpret them differently. (The term ‘Exploratory Testing’ was coined to distinguish the approach where test design and test execution happen simultaneously, so there is no need to write scripts, from ‘Scripted Testing’, where test design is a separate process from test execution and often happens at a different time, making scripting a necessity.)

I thoroughly enjoyed the various testing exercises we undertook during the day. There was one in particular where I wished my normal laptop had not been broken, as Visual Studio would have been VERY useful: I could not get Excel to generate the data I wanted quickly enough. Never mind – the important thing was the thought process that got me to that point.

In testing, as in life, there are many different ways of doing things. As testers it is important that what we do stands up to scrutiny. We should equip ourselves to talk and write about our testing in a clear, coherent manner; that will help us gain the confidence and respect of our peers in the industry. Sometimes we take actions or make decisions that take us away from where we want to be; the important thing is that we learn from doing so and use the experience to get ourselves back on track.

I did not expect this course to draw so much on the cognitive sciences – particularly psychology – but one of the things that has occupied me for several months now is understanding what makes me do the things that I do and, conversely, why I don’t do certain other things. What makes me see some things as blindingly obvious and miss other things completely? By practising the things I have learned on this course and absorbing the phenomenal amount of information I have received over the past three days, I hope to become a better tester and a better person.

Rapid Software Testing with James Bach – First Two Days

20 March, 2013

My employer, Allies Computing Ltd, has hosted a condensed version of this post: http://www.alliescomputing.com/blog/testing-psychology-rapid-software-testing-training-james-bach/


Rapid Software Testing is a course I have wanted to do for a few years, having heard really good things about how it has improved various people’s testing and the way they think. We are two days in and I am really excited about the things I have learned.

Thinking like a tester

This is a particularly challenging area for me. I tend to build ideas and then narrow them down by critically analysing them; however, the ideas can be quite haphazard, and this could lead to an accusation that I’m not testing properly. I am going to have a go at categorising what I do in a similar way to how James does it with the Heuristic Test Strategy Model, where he has headings such as:

  • Structure
  • Function
  • Data
  • Interface
  • Platform
  • Operations
  • Time

I will likely have different headings (some may be the same), but organising my thoughts and ideas in this way will give me confidence that I’m considering the right things, and a much more professional image.
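
To show what I mean, here is a minimal sketch in Python – purely illustrative, with my own placeholder questions under each heading rather than anything from James’s actual model – of test ideas organised under such headings, plus a quick check for headings I have not thought about yet:

```python
# Illustrative only: placeholder headings and questions, not the real
# content of the Heuristic Test Strategy Model.
test_ideas = {
    "Structure": ["What components make up the product?"],
    "Function": ["What does each feature claim to do?"],
    "Data": ["What inputs, outputs and lifecycles does the data have?"],
    "Interface": ["How do users and other systems reach the product?"],
    "Platform": ["Which OS, browser or hardware does it depend on?"],
    "Operations": ["How will it really be used day to day?"],
    "Time": [],  # nothing here yet - the gap check below will flag it
}

def gaps(ideas):
    """Return headings with no test ideas yet, i.e. holes in my thinking."""
    return [heading for heading, items in ideas.items() if not items]

for heading, items in test_ideas.items():
    print(heading)
    for item in items:
        print("  -", item)

print("Headings still to think about:", gaps(test_ideas) or "none")
```

Even something this simple forces me to notice which headings I have left empty – which is exactly the confidence-building check I am after.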

The other way I think this will help me is that it will channel my thoughts and make me less inclined to get stressed about what I am doing and whether I am doing it right. Critical analysis is good – it is what makes a good tester – but sometimes striking a middle ground is a good thing. We spoke a lot about System 1 and System 2 thinking: System 1 is much more off-the-cuff and emotive, which is great for generating ideas quickly, while System 2 is the more measured, time-consuming approach.

I find this very useful because I often don’t recognise when I need to switch mode – and maybe even when the way I am thinking is counter-productive to the situation I am in.

Testability

Another big thing for me was the importance of testability and of seeking out things that will help me test very well. This was brought home to me during one particular exercise involving a sphere: James played a customer in a top-secret problem domain and was unable to tell us things because we were not allowed to know them! For me this is all part of honing our problem-solving abilities.

A Bit of Magic

We had magic tricks to help us see the importance of keeping a broad mind and to give us the imagination to conceive new ideas. They showed me the importance of what is lurking in my blind spots, of which I have many. It is really important to recognise our limitations and the problems those limitations bring us as testers.

Oracles

It is really important to recognise who, what and where our oracles are, as they are what help us answer the question of whether or not we have a problem. I realised that I have oracles hidden away in all sorts of surprising places!

I am really looking forward to tomorrow when we will be looking at Exploratory Testing and I am excited to play the famous dice game!

Observations from EuroSTAR 2011: Looking to the Future

27 November, 2011

I intend to write a few blog posts over the coming weeks following on from my experiences at EuroSTAR 2011 in Manchester. I want to start with a post addressing the general theme from the keynotes and my own thoughts on the matters raised.

The Speakers’ Views on The Future of Software Testing (with a few comments from me)

A recurring topic of conversation was the ‘death of software testing’. I do not think software testing is dead at all – if anything it is growing in importance. Speedy dissemination of information will become more important as project teams become better at agile practices.

This is where skilled Exploratory Testing comes into play. Note that word – skilled. Testing is a highly skilled craft and not everyone has the mindset to apply those skills.

The first keynote on Tuesday, from Richard Sykes, told us that ‘quality assurance’ is all about giving management confidence in the product or service being produced. I dislike the term ‘quality assurance’ because I do not believe we ‘assure’ anything – that is the programmers’ job. To me testing is all about finding information and passing it on to the relevant decision makers so they can draw their own conclusions.

Gojko Adzic, in his keynote on Tuesday afternoon, made a very important point: he said that we run the risk of losing a very good tester and gaining a very poor coder if we insist on testers coding.

In his keynote on Wednesday morning, James Whittaker of Google disagreed, telling us that at Google ‘Tester’ has disappeared from people’s job titles. People who were ‘testers’ are now ‘developers’ and are expected to code regularly. I feel this is a dangerous path to go down: developers and testers think about things differently, and in my experience they find different problems – both are needed.

Wednesday afternoon’s keynote told the story of Deutsche Bank’s use of communities. Daryl Elfield explained how groups were formed in various parts of the world for the various divisions within the company. It had nothing to do with testing and was all about making changes in a centralised way: people could not go off and build their own communities – they were assigned to a community by management.

Ben Waters from Microsoft talked to us on Thursday morning about how Microsoft creates customer value through testing, and it started off as a very inspirational talk. Unfortunately it degenerated into treating testing as a phase.

Isabel Evans talked to us about the work she has been doing at Dolphin Computer Access, where she has sought to improve quality processes throughout the organisation to enable better testing. I think we need to be careful not to make testing a ‘process’. Testing is a set of skills; it should happen naturally and not be seen as a nuisance to be got through at some stage of the project.

My View of the Future of Software Testing

I see a bright future for software testing which is centred on people, skills, adaptability and passion.

Just as there are many aspects to a project – e.g. the product owner sees some, the developers see others, the infrastructure architects see some and the business users see yet more – so testing within a project has many aspects. We should be using techniques and tools appropriate to the individual project we are working on – and that will change from company to company. We have to adapt to the changing needs of our businesses.

I see different expertise being needed to test comprehensively which is why everybody needs to be involved. Testing is a hard job, though, and requires a lot of skill which needs to be honed and practiced. We need to use those skills to shed light on areas of the project nobody else has seen the significance of. We need to use the rest of the team’s knowledge to help our investigations.

We need to keep enhancing our skills and take responsibility for our own education. Having a network of people we can learn new skills from helps in this. Outside work I have been privileged to work alongside Rosie Sherry, Rob Lambert and Phil Kirkham at the Software Testing Club and the community that has been built up there is incredible.

We need to be passionate about our craft. We should seek out the skills that best serve the projects we are working on. We need to have an interest in making those projects great: do not ignore something you have noticed on the assumption that it is someone else’s problem – bring it to the attention of the people who can act on it.

I am going to discuss some of the other things I learned at the conference in future posts. Specifically I want to write about automation and performance testing. I hope this generates some comment and discussion from the community!

Test Management Forum – 27 July 2011

27 July, 2011

The thirtieth Test Management Forum took place today (27 July 2011) at Balls Brothers in London EC3, and as usual there was a varied programme.  The first talk I attended was by Andy Redwood, who told us about the psychology of testing and what makes a ‘tester’ tick.  Amongst the things Andy covered was the importance of what he called “Social Construction” – how society works in different cultures and the significance of this in today’s global business culture.

Andy also told us how he goes about forming role descriptions and an experiment where he asked people to describe themselves.  He found that the statements so generated fitted into five categories:  Social Role; Personality; Interests and Tastes; Attitudes; and Current State.

An issue I had with Andy’s talk, though, was that he seemed to adopt a narrow view of testing and largely ignored context.  One example was that he described testing as just being about ‘breaking things’.  There was an assertion that all tests should be designed so that they fail.  The idea behind this, I think, was to mitigate the ‘absence of defects’ fallacy, where people claim the software must be right because there are no defects.  However, what about the different objectives of testing?  What if the objective is to show that a feature works in a particular way?  What if you want to prove that a particular area of functionality is present?  I am not saying you would not do some detailed tests to make sure the software is not behaving in certain ways, but I think you would major on simply demonstrating the existence or behaviour of the function or feature to start with.

Andy also reminded us, though, that we find it difficult to spot more than 30% of our own mistakes when proof-reading or checking our own work.  That is why it is very important to have peer reviews and to solicit the opinions of others.

After the break I attended a talk by Steve Allott from Electromind entitled “Agile in the large”.  Steve discussed a rebadged incarnation of DSDM (Dynamic Systems Development Method) called DSDM Atern, which has four underpinning principles:  Process; People; Products; and Practices.

The general consensus seemed to be that, while it is not impossible, it is difficult to port pure agile practices over to large projects.  Typically this is because large projects tend to face challenges such as geographical and cultural separation and much higher numbers of people involved.  It would be difficult, practically, to run a Scrum team of one hundred people, for example.

I intend to do some more reading up on DSDM Atern because I know very little about it.

I thoroughly enjoyed the afternoon and my special thanks to Paul Gerrard and Susan Windsor from Gerrard Consulting for organising the event for us.

SIGiST 21 June 2011

21 June, 2011

SIGiST took place on 21 June 2011 at its usual venue, the Royal College of Obstetricians and Gynaecologists in London.

The theme of the day was “What is testing?” and it started with an opening keynote from Chris Ambler of Testing Stuff, in which he discussed whether we are guardians of quality, innovators, reporters, interpreters or problem solvers.  His conclusion was that we are all five.  In response to a challenge from Fiona Charles (@FionaCCharles) he conceded that testing and quality assurance are two different roles. After this, yours truly got to plug http://www.softwaretestingclub.com to delegates and advertise the Testing Planet and local area meetups.

Like Fiona, I found myself in agreement with all his other assertions, but I really do feel that the role of testing is to act as a beacon, highlighting issues to those who are actually in a position to make a release/don’t-release decision. I do not believe testers should have responsibility for quality assurance except as part of a wider team. I am part of a Program Team myself, so I have a responsibility for helping make a release/don’t-release decision, but my opinion is just one of several on the matter.

After the mid-morning break we had a short talk from ILEx on their latest on-line training courses and courseware. This was followed by a talk from Neil Thompson (@neilttweet) entitled “The Science of Software Testing”. There were a lot of slides to go with this talk, but what I particularly valued were Neil’s slides and explanations on quality being value to some person(s) (quoting Gerry Weinberg) and his descriptions of Exploratory Testing as far more than unstructured ad-hoc testing. I found it refreshing to be at a SIGiST conference and hear such views expressed so clearly.

I think there was a lot of food for thought in this presentation and I will be looking over my notes again, because I suspect I may have missed some of the points raised. I find the science behind how we test and the psychology of testing very interesting: it is not true that ‘anyone’ can test, so how can we make ourselves better testers? I believe that by understanding what makes testers tick we are likely to be able to use those insights in our own education and continuing professional development.

After Neil’s talk, Nathalie Van Delft (@FunTESTic) and Dorothy Graham (@DorothyGraham) co-hosted a Testing Ethics debate during which we debated five statements ‘House of Commons’ style:

  • You can break the law in order to meet your test goals
  • You must always tell the truth
  • You must always be able to use privacy-sensitive data to test
  • A tester may be responsible for acceptance
  • As a tester you should set aside your own standards and values to test thoroughly

It took a question or two for everyone to get into the swing of this, but the debate was great fun. Hearing the differing views of people from different testing backgrounds added up, in the end, to an extremely lively debate.

After breaking for the excellent SIGiST lunch, Andy Glover, @CartoonTester, challenged us all to use word replacement to better understand a quotation from James Bach. [Added after initial publication – thanks for sending it to me, Andy: “Testing is the infinite process of comparing the invisible to the ambiguous so as to avoid the unthinkable happening to the anonymous”].

Following on from this we had a short series of lightning talks from Dot Graham (@DorothyGraham) on ‘What is Coverage?’, Neil Thompson (@neilttweet) on ‘What is Risk?’, Nathalie Van Delft on ‘What Else is Testing?’ and Stuart Reid on ‘What is a Testing Professional?’.  The lightning talk format was great, with each talk lasting about 10 minutes. I felt it really suited the after-lunch spot because, with the rapid change-over of speakers, you were more inclined to stay awake. If I had to choose two talks that I particularly enjoyed I would go for Dot’s and Neil’s, because both are areas close to my heart. Dot’s talk, in particular, struck a nerve because ‘coverage’ is such a misused and misunderstood term, frequently bandied about by managers.

Before the afternoon break Stevan Zivanovic (@StevanZivanovic) gave an interesting talk on leadership in an agile context. In particular he focused on how we, as individuals, can and should take responsibility for leadership whether we have ‘manager’ in our job titles or not. He also emphasised that merely being obeyed does not make someone a ‘leader’; indeed, one of his points was that obedience has no place in an agile team.

After the break we had our closing keynote from Dot Graham (@DorothyGraham) on “Things managers think they know about test automation – but don’t”. Many of the pitfalls she identified resonated with me because they are things I am trying to avoid myself as I work with our developers to introduce more test automation into the business.

All in all I thought it was a great conference, and I think Bernard Melson did a great job bringing the programme together. Given Bernard’s background there was, understandably, a fair bit of talk about training and tester education in formal settings. Critically, though, it did not overpower the conference, as I had initially feared it might.

The next SIGiST conference will be in September 2011.

2010: A Pedant’s Review

29 December, 2010

I have been reminiscing this evening about the passing of another year.  It has been a year during which I have learned a great deal about myself and my chosen craft.  Many of my experiences have strengthened my understanding of why certain things work well for me in my situation and why other things don’t work so well.

I have been blogging and tweeting more and more during the year as I have realised that the thought processes I am going through seem to be of interest to others, which has both surprised me and inspired me further.

One of the subjects giving me a great deal of food for thought has been how we come to understand requirements, and a lot of this boils down to the way we communicate with each other (as members of development teams), with our customers, and with other stakeholders in the business.  I gained some insight into how to uncover hidden requirements from outside the software testing field: I went to the Old Bailey (for anyone unfamiliar with the term, the Central Criminal Court in London is housed in a building called the Old Bailey) and listened to some cases being heard.  I followed this up with a visit to the Royal Courts of Justice, one of the higher courts in the English and Welsh legal system.  It was fascinating to listen to the proceedings and observe the way questioning was pursued.

The testing community throughout the world has been a tremendous source of encouragement and it has been great to read about the proceedings of the various conferences that have gone on during the year.  I have mainly centred my attendance on the Software Testing Special Interest Group (SIGiST) conferences arranged through the BCS (formerly known as the British Computer Society) and the UK Test Management Forums.  These events have all been very valuable to me in my learning and I am grateful to the SIGiST organising committee and Paul Gerrard of Gerrard Consulting respectively for continuing to arrange these events.

Besides the formally arranged conferences, a big thank you must go to Tony Bruce for his sterling work organising the London Tester Gatherings.  What tremendous events these are!  It is great to be able to meet up with fellow testing professionals to discuss our craft in an informal setting.

It has been a privilege during the year to help out with proof-reading and reviewing articles for The Testing Planet, the newspaper produced by the good folks at the Software Testing Club.  Again, this is another vibrant community of testers from all over the world and it has been great to be associated with this.

European Weekend Testing, organised by Anna Baik and Markus Gärtner, has provided a safe place in which to practise the software testing craft.  The weekly missions have always been challenging and a great way to hone existing skills and learn new ones.  Unfortunately time has not been available to keep this up on a weekly basis, but I hope to get along to future sessions whenever I can.  The other weekend I attended a Weekend Testing Americas session hosted by Michael Larsen.  It is a great way to learn from others and become better craftspeople while we are at it.

It was from one of the European Weekend Testing sessions that I realised I needed more help with understanding a technique called ‘Transpection’ which I had read about on James Bach’s website.  I contacted James on Skype to ask for his help and I was able to add that to my armoury for further use.

Through all of these events and activities I have met and chatted online to some amazing people – you all know who you are – thank you for all your support over the past year.  I would like to take this opportunity to wish you all a very happy and prosperous 2011.

Happy testing!

The Testing Planet – Annual Subscriptions Available Now

21 December, 2010

A great and fun way to advance your education in the software testing craft is to read well-written articles by well-respected testers.

The Software Testing Club has introduced an annual subscription priced at £15 for UK residents; £21 for residents in the rest of Europe; and £25 for the rest of the world.

For more information go to http://www.thetestingplanet.com/annual-subscription/ or e-mail thetestingplanet@softwaretestingclub.com.

Digital copies continue to be freely available for download, but there is something nice about a printed copy!

User Experience Testing: Communicating Through the User Interface

20 December, 2010

One of my many interests is how the individual parts of systems – whether they be software-driven or not – communicate with each other.  A lot of time and energy is spent on making sure that the software components work together in different situations but how much time do we devote to making sure that the systems all work together cohesively to form an entire process?  How much time is spent making sure that the process itself can work correctly?

One of the things I think we need to plan more time for is testing the way humans interact with systems.  I know this is not easy – time is short, and there is a lot of pressure to make sure the computer software side of the system is working correctly, while the rest, so the argument goes, can be handled with training – but as testers we should keep bringing the human side of systems to the table in meetings and discussions about the projects we are involved in.

The human side of systems is something that ‘just happens’ when everything is going well, but when it goes wrong the results can be spectacular.  My favourite example is the London Heathrow Terminal 5 opening debacle.  A lack of familiarity among staff and passengers with car park locations meant people were not in place at the right times to move bags around the baggage system, so baggage built up.  This in turn put a heavy load on the baggage belts, leading to a failure of the automated baggage delivery system, and so on…  Testers, as the eyes and ears of a project, should be vigilant for situations no-one else has thought of and raise them.  Of course it is possible that the testers on this project asked these questions and nothing was done to mitigate the risks, but everybody did seem taken by surprise by the turn of events on T5’s opening day…

Let’s move on now to another aspect of human-computer interaction: messages and warnings.  I am sure we have all been bemused by the sight of an error message that just says: “An error occurred.” However, put yourself in a user’s shoes for a moment and think about how you would react to seeing the following (I have pulled this from my Application Event Log, but the text is pretty much as I remember it appearing on screen in the form of an error message):

Faulting application name: OUTLOOK.EXE, version: 12.0.6539.5000, time stamp: 0x4c12486d
Faulting module name: olmapi32.dll, version: 12.0.6538.5000, time stamp: 0x4bfc6ad9
Exception code: 0xc0000005
Fault offset: 0x00051c7c
Faulting process id: 0x1a18
Faulting application start time: 0x01cb9c9e50018c07
Faulting application path: C:\Program Files\Microsoft Office\Office12\OUTLOOK.EXE
Faulting module path: c:\progra~1\micros~2\office12\olmapi32.dll
Report Id: 0c733b39-0896-11e0-b5bf-00197ed8b39d

The practice of delivering such ‘techie’ messages to end users is commonplace, but in my opinion it is a bad approach.  Receiving messages like this is completely bewildering for novice computer users, who are likely to panic and do something that really messes things up.  In my case I knew it was caused by an add-in I had installed that was incompatible with Outlook 2007, so I did not panic – I understood what I had to do and got on with it.  But it made me think of my less experienced friends and colleagues and how disconcerting such a message would be for them.

Let me give you another example from my Application Event Log:

The application (Microsoft SQL Server 2005, from vendor Microsoft) has the following problem: After SQL Server Setup completes, you must apply SQL Server 2005 Service Pack 3 (SP3) or a later service pack before you run SQL Server 2005 on this version of Windows.

I would like to encourage you all to think carefully about the wording of error messages and warnings that are displayed to users.  The above is not a ‘problem’ at all; I simply need to do something else before I can run SQL Server 2005, and there is no cause for alarm.  It might be argued that someone seeking to use SQL Server 2005 is bound to be a competent computer user and therefore does not need much help, but I beg to differ.  I might have been given a task for which I am completely out of my depth, and I do not need to be panicked further.

There is a fine balance to be struck between giving enough information for support professionals and developers to debug and understand how to fix a problem (which Microsoft may have achieved with the Outlook message above – assuming they are all well versed in hexadecimal) and being informative to users.  One way of getting both is sketched below.
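
As a minimal sketch of what I mean – illustrative Python, where the message wording, log file name and reference-id scheme are my own assumptions rather than anything from the products mentioned above – the technical detail can go to a log for support staff while the user sees something calm and actionable:

```python
import logging
import uuid

# Full diagnostics go to a log file that support professionals can read...
logging.basicConfig(filename="app.log", level=logging.ERROR)
log = logging.getLogger("demo")

def handle_failure(exc):
    """Log the technical detail; return a calm, actionable user message."""
    report_id = uuid.uuid4()
    log.error("Report %s: %s", report_id, exc, exc_info=True)
    # ...while the user is told what happened, what to do next, and is
    # given a reference to quote to support if the problem persists.
    return ("Sorry - your document could not be saved. Your work is "
            "unchanged, so please try again. If this keeps happening, "
            f"quote reference {report_id} to your support team.")

try:
    raise OSError("disk full")  # stand-in for a real failure
except OSError as exc:
    print(handle_failure(exc))
```

The point is not the specific wording but the separation: the hexadecimal detail still exists for those who need it, and the person at the screen gets something they can actually act on.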

If we get the user experience right, we stand a much better chance of designing and implementing a system that really does work efficiently: people will not waste countless hours trying to understand cryptic messages coming back from the system; they will be less frustrated; and everyone will have a better perception of the system and the organisation using it.

I was in a supermarket a few weeks ago and overheard a fellow customer remark that “there are always problems at the tills here – nobody seems able to work them”.  Standing in the queue I could see where that perception came from: two till operators and a supervisor were needed to make sense of a message that had come up on the screen.  For the future good of our craft, testers should be making more noise about the human-computer interaction and user experience problems they can foresee.

This is an area that I am striving to get better at and I hope there are other testers out there who give serious consideration to the user experience they are giving in their systems.

