Electronic surveys: pleasures and pitfalls

“Let’s just do a quick survey” and other misleading statements

Elizabeth Hodson

 

With the advent of web-based electronic surveys it is now possible, with just a few clicks, to create a professional-looking survey and send it out to hundreds of people all around the world, effortlessly garnering their views on a whole variety of topics.  This is a great boon for evaluators, who no longer have to trudge to the remotest corners of the world to find out what an aid worker thinks of the latest global initiative, but can, from the comfort of their offices in Oxford, contact a whole group of people simultaneously and request their opinions in an entirely consistent manner.  Those opinions are then automatically collated and presented in useful graphs that, theoretically, can go straight into a report.

However, the seeming ease of making and distributing the survey can lead the unwary to think that the whole process is speedy and simple.  In my recent experience of running two global electronic surveys for major UN agencies, the process was subject to numerous delays, and significant time was spent in drafting and re-drafting the survey, waiting for late responses, cleaning the data, presenting the material and analysing the results – nothing like the sort of time that would have been spent on a paper-based survey, but then a paper-based survey would probably never have been contemplated.

In this article, I’ll examine a few misconceptions and misguided approaches in the hope that others can avoid some of these pitfalls.

Writing a survey is easy – just jot down a few quick questions, it shouldn’t take more than twenty minutes.

You draft the questions, you edit them, and pass them to a colleague, who re-writes them and adds a couple more questions; then another colleague does the same; and then the client themselves, who has their own idea about what the survey should look like, requests numerous edits and additions. This process takes time – days rather than hours.  And so it should: getting the questions right is important.

However, the danger of multiple people drafting the questions is that the final product is longer than it should be and is written in a language that is comprehensible to the team, who are immersed in the evaluation, but may not be easily understood by the recipients.

Writing a jargon-laden survey that addresses evaluation questions may gain valuable answers from people already embedded in the specific area of that sector, but if you need a wider response then greater care must be taken to ensure the questions are easy to understand and elicit relevant information.

In addition, piloting the survey with a limited pool of recipients can be very useful, but allow at least another week in the timeline to do this.

We’ll write the survey first, then we’ll come up with a list of people to send it to.

As with a presentation, the audience is arguably the single most important factor, but, also as with a presentation, the tendency is to devote time to researching the subject matter rather than considering who will be on the receiving end.

Before you start drafting the survey, consider who will actually receive it and what they are likely to know or be in a position to comment on.  There is little point in having questions which only a minority can answer – those sorts of questions may be better dealt with in other ways, such as in an interview; a survey is designed to take a broad look at the subject, and will generally be complemented by more specific, detailed and evidence-based methods.

The survey should ideally cover everything. Let’s add a few questions on Topic B to confirm our findings from elsewhere.

The survey is likely to be just one instrument used, and it should be treated as such.  If you already have hard evidence for something, whether it is financial data or written documentation, then there is no need to add a question on it just because it is part of what you are assessing.

In contrast to interviews, which people may prepare for, take time to reflect on, and be called to account over if an answer is specious or inaccurate, the very ease and anonymity of electronic surveys, while conferring great benefits, also mean that they may be approached with less responsibility.  There is less incentive to think hard about the questions, to reflect on their true meaning, and to seek out evidence for any response given.  If the wrong tick-box is accidentally selected, the respondent will not be required to justify their choice.  Thus, surveys are useful for certain sorts of responses – good for getting an instinctive response, and perhaps even a more honest one due to the anonymity, but less likely to elicit a reflective response based on hard facts.

Where questions require research or referral to data for answers, I have found survey responses to be at odds with other evidence, and the other evidence to be more robust.  Survey results, although they may sound impressively quantitative with their percentages and, on occasion, confidence intervals, are often essentially an aggregate of people’s opinions.  Thus, it is perhaps better to stick to questions that people can answer readily, and save the “hard” questions for individual interviews or other forms of research.
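To illustrate the point, here is a minimal sketch (in Python, with entirely hypothetical numbers) of how the percentage and confidence interval behind a typical survey finding are produced: the arithmetic is sound, but the input is still just a count of ticked boxes.

    import math

    def proportion_ci(ticks, n, z=1.96):
        # Normal-approximation 95% confidence interval for a survey proportion.
        # The interval describes sampling noise around an aggregate of opinions,
        # not the accuracy of those opinions.
        p = ticks / n
        half_width = z * math.sqrt(p * (1 - p) / n)
        return p, max(0.0, p - half_width), min(1.0, p + half_width)

    # Hypothetical example: 84 of 120 respondents ticked "agree".
    p, low, high = proportion_ci(84, 120)
    print(f"{p:.0%} agree (95% CI roughly {low:.0%} to {high:.0%})")

However precise that range looks, it says nothing about whether the opinions themselves reflect what actually happened.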

Electronic surveys have lots of useful features which make your job much easier.

Questions can be made to appear only if a certain box or combination of boxes on prior questions has been selected, thereby tailoring the questionnaire to the answers that have gone before.  This is called “skip logic”, and appears to enhance the relevance of the questionnaire to the audience.

However, it means that only a subset of the respondents will answer such a question, so the sample size for that particular question is smaller – generally a bad thing.

There are cases where functions such as this are genuinely useful, but where it is possible to instead phrase a question in a more general way and have all respondents answer it, this is likely to be preferable, both in terms of having a larger sample, and in terms of analysis.  You can always disaggregate the responses later at the analysis stage, if you wish.
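As a rough sketch of what that looks like in practice (in Python and pandas, with made-up column names and data), the breakdown that skip logic would have produced can be recovered at the analysis stage from a question that everyone answered:

    import pandas as pd

    # Hypothetical export in which every respondent answered the general question,
    # so the breakdown is done at the analysis stage rather than via skip logic.
    responses = pd.DataFrame({
        "role":   ["field", "field", "hq", "hq", "field", "hq"],
        "rating": [4, 3, 5, 2, 4, 3],  # 1-5 agreement with the general question
    })

    # Overall view, using the full sample ...
    print(responses["rating"].mean())

    # ... and the same question disaggregated by an earlier answer.
    print(responses.groupby("role")["rating"].agg(["count", "mean"]))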

Similarly, retain low expectations about the automatically generated graphics and results.  They can provide an at-a-glance check for your own analysis, but are rarely useful as client-ready outputs – data cleaning and thoughtful presentation of results are still required.
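As an indication of the sort of tidying that is usually still needed (again a sketch in Python and pandas, with invented data and labels), even a single tick-box question tends to need blank responses dropped and labels normalised before the results are fit to present:

    import pandas as pd

    # Hypothetical raw export from a survey tool: blanks and inconsistent labels.
    raw = pd.DataFrame({
        "q1": ["Agree", "agree ", None, "Disagree", "AGREE"],
    })

    cleaned = (
        raw.dropna(subset=["q1"])  # drop respondents who skipped the question
           .assign(q1=lambda d: d["q1"].str.strip().str.capitalize())  # normalise labels
    )

    # A simple percentage table is often more client-ready than the auto-generated chart.
    print(cleaned["q1"].value_counts(normalize=True).mul(100).round(1))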

Because they’re administered over the web, you don’t need to worry about bias.

Bias is always an issue in questionnaires: sampling bias, framing bias – almost all the usual issues that occur in conventional surveys can occur in electronic surveys, but because it is so “quick and easy” to make questionnaires, they can be overlooked.  Bias, however unintended, can render the results meaningless.  There are entire graduate-level courses on this subject, but to simplify: be aware of how you are sampling, be aware of the audience, use clear, audience-appropriate language which is neutral in tone and structure, and consider carefully how to present the options.

People don’t mind spending half an hour filling in a survey.

However quick or slow it is to create the survey, give consideration to how long it takes to complete.  Ticking the first few boxes of a survey can almost be fun, but after a couple of minutes it begins to grate, and after seven minutes doing something else, anything else, appeals more.  It can feel like “wasted time” to the recipient, who, if they manage to complete the survey, may become increasingly haphazard and “click happy” in their responses, or may simply give up.

Thus, the length of time to answer a survey may impact both the response rate and the quality of the answers – cut the survey as ruthlessly as you dare, then cut it some more.

Doing a survey is just too administratively burdensome – we should not even consider it.

Having emphasised the idea that a survey is not a quick thing to do, it is only fair to point out how much quicker it is now than it was only a few years ago.  Writing the survey should take as long as it did before, and drawing up a list of recipients still requires thought and consideration, but the process of distributing it and collating the results, which previously would have taken weeks, now takes virtually no time.  Data cleaning can be minimal if the survey is simple, e.g. does not involve skip logic or multiple versions in different languages.

Any other comment?

Web-based surveys are indeed a great asset to an evaluator, at a single blow allowing access to people whom time and distance would never have allowed you to reach before.  The automation of results gathering is also a great strength of electronic surveys, taking away much of the monotonous data entry which was required with paper or telephone-based surveys, and the associated risk of clerical error.

However, an effective survey requires thought to make sure that the right questions are asked of the right people and in a way that elicits honest and useful responses to produce relevant and robust data.  The whole process remains quite an undertaking and, as in so many other areas, technology alone cannot substitute for a well thought out approach.
