Research methods unit IV Scholarly Activity and DQ question

Challenging the reported disadvantages of e-questionnaires and addressing methodological issues of online data collection

Louise Hunter MA (Oxon), BSc, RM, midwifery lecturer, University of West London, Brentford, UK

Correspondence to: louise.hunter@uwl.ac.uk

Peer review: this article has been subject to double-blind review and checked using antiplagiarism software.

Author guidelines: www.nurseresearcher.co.uk

Cite this article as: Hunter L (2012) Challenging the reported disadvantages of e-questionnaires and addressing methodological issues of online data collection. Nurse Researcher. 20, 1, 11-20.

Accepted: October 7 2010.

Abstract

Aim To review the advantages and disadvantages of e-questionnaires, and question whether or not reported disadvantages remain valid or can be limited or circumvented.

Background The internet is likely to become the dominant medium for survey distribution, yet nurses and midwives have been slow to use online technology for research involving questionnaires. Relatively little is known about optimal methods of harnessing the internet's potential in health studies.

Data source A small e-questionnaire of health workers.

Review methods The Medline and Maternity and Infant Care databases were searched for articles containing the words 'web', 'online', or 'internet' and 'survey' or 'questionnaire'. The search was restricted to articles in English published since 2000. The reference lists of retrieved articles were also searched.

Discussion Reported disadvantages of online data collection, such as sample bias, psychometric distortions, 'technophobia' and lower response rates are discussed and challenged. The author reports her experience of conducting a survey with an e-questionnaire to contribute to the limited body of knowledge in this area, and suggests how to maximise the quantity and quality of responses to e-questionnaires.

Conclusion E-questionnaires offer the researcher an inexpensive, quick and convenient way to collect data.

Many of the reported disadvantages of the medium are no longer valid.

The science of conducting the perfect e-survey is emerging. However, the lessons learned in the author's study, together with other research, seem to suggest that satisfactory response rates and data quality can be achieved in a relatively short time if certain tactics are used.

Implications for research/practice To get the best results from e-questionnaires, it is suggested that the questionnaire recipients should be targeted carefully and that the value of their potential contribution to the project should be emphasised. E-questionnaires should be convenient, quick and easy to access, and be set out in a way that encourages full and complete responses.

Keywords E-questionnaires, questionnaires, web-based methodology

Introduction

Questionnaire surveys are arguably the most popular method of acquiring data in midwifery and nursing research, and their use has been escalating (Brindle et al 2005, Rattray and Jones 2007).

The internet is increasingly being used to recruit research participants and to distribute and complete questionnaires, and is set to become the dominant method of surveying individuals (Velez et al 2004, Greenlaw and Brown-Welty 2009).

Table 1 Reported advantages and disadvantages of e-questionnaires for researchers compared with paper questionnaires

Advantages:
- Less expensive (Ahern 2005, Douglas et al 2005, Sue and Ritter 2007).
- Quicker to disseminate and respond to (Ahern 2005, Beling et al 2011).
- Returned surveys can be seen immediately by the whole research team and are already in electronic form, avoiding data-entry errors and assisting with analysis. Web-based survey hosts (such as SurveyMonkey or Zoomerang) usually provide analysis tools and allow data to be exported into spreadsheet software such as Excel or survey authoring and deployment software such as SPSS (Sue and Ritter 2007, Jones et al 2008b).
- Questionnaires can be customised and respondents can be guided to different questions, depending on their previous responses (Sue and Ritter 2007, Jones et al 2008, Loescher et al 2011).
- No problems deciphering handwriting (Stewart 2003).
- Easy to send reminder emails to non-responders (Wharton et al 2003).
- Increased pool of study participants (Ahern 2005).
- Allows access to hard-to-reach groups such as rural populations, shift workers, the house-bound and those who find reading common forms of print difficult (Whitehead 2007).

Disadvantages:
- Expensive and time-consuming to set up (Jones et al 2008a, 2008b).
- May yield lower response rates (Aitken et al 2008).
- Responding sample unlikely to be representative of the general population, as respondents need to be computer-literate (Brindle et al 2005).
- Survey instruments may not work in the same way online as on paper (Whitehead 2007).
- Respondents with older computers or slower internet connections may not take part if it takes too long to download the questionnaire (Stewart 2003).
- Inability to control the responding population: the survey cannot be prevented from being passed on to others (Whitehead 2007).
- Possibility of repeat participation, either deliberately or through pressing the 'send' button more than once (Whitehead 2007).
- Possibility of misrepresentation of self: people may adopt fictional identities on the web and use these when replying to surveys, although there is little evidence to support this (Whitehead 2007).
- May increase the likelihood that not all questions will be answered (Velez et al 2004).
- May elicit shorter answers than paper questionnaires (Velez et al 2004).

For some time experienced market researchers have been finding that responses to paper- and phone-based surveys are decreasing (Roos 2002).

However, nurses and midwives have been slow to realise the potential of the internet as a tool for research (Stewart 2003, Whitehead 2007). To help prospective researchers overcome their reservations about using e-questionnaires, this paper revisits reported disadvantages of the method and questions whether they remain valid or are surmountable, as online use, technology and knowledge grow and develop. Furthermore, I will share my experience of conducting a small-scale e-survey of health professionals to add to the limited body of knowledge in this area and suggest further ways of maximising the quantity and quality of responses.

Online versus paper

There are several ways the internet can be used to distribute and collect questionnaires: sending the questionnaire as a link or an attachment to an email; sending a letter by post and inviting people to access the questionnaire online; and advertising the questionnaire using links, banners and pop-ups on other sites (Roos 2002). Responses can be returned by email or via the web.

All these alternatives are generally considered together when discussing the advantages and disadvantages of the medium.

Published literature outlines the advantages and disadvantages for the researcher and the respondent of e-questionnaires compared with paper questionnaires (Table 1 and Table 2). The main advantages for the respondent are convenience and ease of access (Brindle et al 2005, Douglas et al 2005, Sue and Ritter 2007, Jones et al 2008b, Katz et al 2008). The anonymity and sense of social distance created by the internet also appear to help with the discussion of sensitive subjects (Wharton et al 2003, Cantrell and Lupinacci 2007).

For example, Mackellar et al (2011) recruited 946 men who had sex with men but who had not been tested for HIV, for an internet study of the reasons for not having the test.

Researchers benefit from the ease with which the whole team can share digital information and from no longer needing to enter or analyse numerical data manually (Sue and Ritter 2007, Jones et al 2008a). In addition, the web has made certain hard-to-reach and special interest groups much easier and cheaper to access, particularly through online forums (Sue and Ritter 2007, Whitehead 2007).
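As an illustration of this convenience, the short Python sketch below shows how a file exported from a survey host might be loaded and tabulated for the whole team. The file name and column names are hypothetical rather than taken from any particular survey tool, and the pandas library is assumed to be available:

import pandas as pd  # assumes pandas is installed

# Load a hypothetical export from the survey host (file and column names are invented)
responses = pd.read_csv("responses.csv")

print(len(responses), "responses received")
print(responses["job_title"].value_counts())                    # frequency table for a closed question
print(responses["caseload_band"].value_counts(normalize=True))  # the same question as proportions

# Save a cleaned copy that colleagues can open in Excel or import into SPSS
responses.to_csv("responses_clean.csv", index=False)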

However, Brindle et al (2005) and Jones et al (2008a) argue that data from e-questionnaires are less generalisable and less reliable because respondents need to have access to a computer and be computer literate, and are not therefore representative of the population as a whole.

Additionally, it is suggested that e-questionnaires generate lower response rates than paper questionnaires, are complicated and time-consuming to set up and complete, and that survey instruments can yield different results when completed online rather than on paper (Jones et al 2008a, 2008b, Whitehead 2007). A lack of direct interaction with respondents can also create problems for the researcher: for example, if the survey is forwarded to other people, or if respondents reply more than once (Whitehead 2007).

As technological capabilities and the use and availability of computers continue to develop, it seems appropriate to question whether any of these objections remain valid and, if so, whether they are necessarily inevitable.

Sample bias

Concern has been expressed that the demographics of those responding to an online survey may differ from those of the sample population as a whole because of the need for the respondent to be computer literate (Brindle et al 2005, Jones et al 2008b). Sue and Ritter (2007) point out that the online population in the United States contains a greater proportion of individuals of higher socioeconomic status than the total population, and does not reflect the ethnic make-up of the population as a whole. Adams and White (2008) compared the health behaviours of people responding to a web-based survey advertised on local TV and radio in the north east of England with the regional results from a national paper-based survey. The authors found that those responding to the web-based survey were younger, lived in less deprived areas, had a higher mean body mass index and ate more healthily (Adams and White 2008). However, in a survey of US college students, Velez et al (2004) found no demographic differences between those who responded online and those who responded using pencil and paper. It is also important to remember that the web-literate population grows and changes constantly as more of our working and leisure lives are acted out online. In 2009, 178 million people in Europe went online each week, and in the US, 73.9 per cent of the population were internet users (Touvier et al 2010).

Table 2 Reported advantages and disadvantages of e-questionnaires for the respondent compared with paper questionnaires

Advantages:
- Minimal effort required: respondents can click on a button to access and send the survey, which may enhance response rates and generate responses quickly (Douglas et al 2005, Sue and Ritter 2007).
- Respondents can access the survey at a time convenient to them (Jones et al 2008a, 2008b).
- Respondents can ask the researcher questions and get answers by email more easily than by post or phone (Im and Chee 2003).
- Online anonymity may increase response rates and encourage more honest answers, as participants feel more comfortable in responding to sensitive questions (Brindle et al 2005, Katz et al 2008, Beling et al 2011). This is thought to be because the internet creates a sense of social distance (Wharton et al 2003).
- A novel and creative method may increase interest in participating (Ahern 2005).

Disadvantages:
- Some people believe they lack the skills necessary to complete a survey online (Roos 2002).
- Some people find it difficult to fill in surveys using a mouse (Jones et al 2008a).
- If respondents access the internet by a dial-up connection, they are paying for the time it takes them to complete the questionnaire (Roos 2002).
- Unsolicited surveys might be considered intrusive or offensive (Stewart 2003, Wharton et al 2003).

It is often claimed that adults aged over 65 are less likely to use the internet and respond to e-questionnaires, but they have been identified as the fastest growing group of internet users (Whitehead 2007). Stopponi et al (2009) found that initial enquiries about their US online dietary-intervention trial were less likely to come from older people, but that older people were more likely to enrol in the programme. Furthermore, Touvier et al (2010) asked a group of 49-75 year olds living in France to complete the same questionnaire online and on paper: 92.2 per cent of the participants preferred the online version, even though 25 per cent of the sample considered themselves to be novices or inexperienced in the use of computers, including use of the web.

People from lower socioeconomic classes are also considered hard to engage in use of the internet. However, statistics about this group can be deceptive. For example, Im and Chee (2011) applied a quota sampling method when recruiting participants to a US online study of menopausal symptoms. These authors found it difficult to fill their quota of women from lower socioeconomic groups. This was because the poorer women did not define themselves as being from a lower socioeconomic background and were therefore included in a different quota, not necessarily because they did not participate in the research.

We are moving towards a situation where online respondents could be more representative of the target population than a sample that uses pencil and paper. Stewart (2003) claimed that the population of internet users had become similar to the general population of developed countries, and that is even more the case today (Internet World Stats 2012).

Moreover, paper questionnaires are also subject to sample bias, relying on researchers to have access to addresses or for participants to be listed on electoral rolls.

Psychometric distortions

This refers to the concern that self-report instruments validated for use with paper and pencil may not work in the same way when delivered online (Whitehead 2007). Some researchers have found that web respondents completing Likert scales registered more severe or clinically significant symptoms compared with the norms generated by offline respondents (Whitehead 2007).

We do not, however, know whether or not the symptoms recorded online or on paper were more accurate.

Ritter et al (2004) compared responses to existing self-report instruments completed online to those completed using paper and pencil. The authors randomised 462 people with chronic disease to receive either a questionnaire by post or a link to a website where the participants could complete the questionnaire. The demographics of the two groups were similar. Of the 16 instruments tested, none showed any significant differences in responses between the two groups.

Ahern (2005) concluded in her review of the literature that 'numerous researchers across disciplines... have found no differences in data collected from internet research compared with paper-and-pencil data'. It would appear, therefore, that this supposed disadvantage of internet questionnaires is no longer true. It might even seem reasonable to assert that data collected from internet questionnaires are more reliable than data gathered on paper because they are not subject to processing errors. In a comparison of internet and paper questionnaires, Touvier et al (2010) found 82 data-entry mistakes, 60 missing values, 57 inconsistent values and three abnormal values in the paper arm of the trial. Although these errors represented only approximately 1 per cent of total data entries, there were no mistakes in the internet data, which were processed electronically.

'Technophobia'

There are problems cited with e-surveys that stem from a fear of using technology or a mistrust of its capabilities. Concern that participating in an e-questionnaire would be complicated and require specific knowledge was one of the primary reasons for non-response to e-questionnaires sent out by Roos (2002). Beling et al (2011) surveyed patients with a history of colorectal cancer to find out whether they preferred paper or online surveys:

half of respondents chose to receive paper surveys at home, while one quarter did not express a preference. However, the researchers distributed the survey on paper, so it seems reasonable to assume that people preferring paper-based surveys were more likely to respond.

Touvier et al (2010) also found that older people who tried online and paper versions of a survey preferred the online version. This would appear to suggest that as people become more familiar with the internet, technophobia will stop being a problem. Researchers might also pre-empt such technophobia by making responding as simple as possible, and reassuring potential participants that taking part requires them to do nothing more complicated than click on a button. Roos (2002) also suggested that researchers should emphasise the advantages of e-questionnaires in their accompanying emails to create a positive attitude towards participation.

Jones et al (2008a, 2008b) partly attributed their e-questionnaire's low response rate to the respondents' (unfounded) fears that their identities would not be protected. This particular hurdle can largely be overcome if potential respondents are sent a link to the survey, rather than an attachment that the respondent has to download, complete and send back. The link option is simpler and more confidential because the respondent's email address is not disclosed (Sue and Ritter 2007).

A concern that a link might download computer viruses has also been cited as a reason for potential respondents ignoring e-surveys (Sue and Ritter 2007, Whitehead 2007). However, as familiarity with and confidence in the internet grow, these concerns are likely to diminish.

Health professionals may be deterred from developing e-questionnaires after reading reports that they are costly and time-consuming to set up and require a high level of computer literacy (Whitehead 2007, Jones et al 2008a, Loescher et al 2011).

However, there are several sites, such as SurveyMonkey.com, where a novice can be provided with a template and guided through the process of setting up, distributing and collating the results of an e-survey in a matter of minutes, often for free (Greenlaw and Brown-Welty 2009).

Low response rates

Aitken et al (2008) concluded that online surveys are ineffective, after their e-questionnaire of Australian medical practitioners achieved a response rate of 8.7 per cent. However, when Ritter et al (2004) randomised participants in the US to complete a survey either online or with paper and pencil, they found that the online group achieved a slightly higher response rate and required a great deal less follow-up than those who completed the paper version; nearly two thirds of the paper and pencil group needed to be sent an initial reminder postcard, whereas only one quarter of the online group required an initial reminder email. Other studies comparing online and paper questionnaires disagree about which achieves higher response rates (Wharton et al 2003, Ahern 2005).

Differing results may of course reflect different levels of computer use in different communities and cultures, although increasing use of the internet is a trend shared by all developed countries (Internet World Stats 2012).

Additionally, distribution method is not the only factor controlling the level of response: the subject matter, the clarity of the covering letter and the ease and convenience of the chosen response method are examples of other factors that will play a part. In Aitken et al's (2008) survey, the researchers sent invitations to participate in the project by post; participants then had to access the web and find the relevant site to respond electronically. The authors may have achieved a higher response rate by approaching the medics by email and including a link to their questionnaire, making it simpler and easier for people to respond.

Just as postal questionnaires can generate different response rates, so can those conducted online. However, whereas the methods of maximising response rates from paper questionnaires are well understood (see, for example, Alexander et al 2008), internet questionnaires are still in their infancy and relatively little is known about how to achieve the best possible response (Kaplowitz et al 2004).

People can be recruited to e-questionnaires directly by email or indirectly by responding to online advertisements. The latter method can reach a vast pool of potential respondents and is capable of generating a more than adequate number of responses. However, when expressed as a percentage of the total potential number of respondents, the response rate may still appear quite low. For example, Sadeh et al (2009) recruited 5,006 participants to a US trial in one month by placing a pop-up screen on BabyCenter.com. Although this response represented a small percentage of the 4.5 million unique visitors to the site during the recruitment period, it was large enough for their findings to achieve statistical significance at P<0.001.

Similarly, Katz et al (2008) used an e-survey to investigate non-medical opioid use in the US. The authors posted a link on an informational drug website and received 896 valid responses in a one-month recruitment period (Katz et al 2008).

Some researchers have found indirect online recruitment to be unsuccessful. Cantrell and Lupinacci (2007) constructed a website for their research into survivors of childhood cancer. They posted a link to the site on six related sites. Over six months, 90 respondents entered the authors' site and began filling in the questionnaire, but many did not complete the survey. We are not told what percentage of the total visitors to the hosting sites the 90 respondents represented, but the authors were anticipating a better response. It may be that their link did not stand out among other links posted on the sites or that a single advertisement for the study was not enough. Other opportunities to highlight the authors' work, such as an email to site subscribers from the website administrator, might have increased the response rate (Cantrell and Lupinacci 2007).

Table 3 E-survey questions

Question: What is your job title?
Response options: Open question

Question: In your role, about how many pregnant teenagers or teenage mothers do you look after each year?
Response options: <5, 5-10, 11-20, >20

Question: What sort of care do you provide? (Select all that apply)
Response options: Antenatal, Intrapartum, Postnatal

Question: Where do you work?
Response options: Oxfordshire, London, Scotland, Wales, Ireland, England (north), England (south), England (midlands), Non-UK

Question: What, in your view, are the obstacles to pregnant teenagers who state an intention to breastfeed being able to initiate breastfeeding in hospital in the early days after giving birth?
Response options: Open question

Question: Please tell us about any initiatives you have come across that you think might enable more teenage mothers to breastfeed in hospital.
Response options: Open question

Question: Can you suggest any additional ways in which inpatient care might be changed so that more young women who wanted to breastfeed could successfully initiate breastfeeding in hospital?
Response options: Open question

Question: Is there anything else you would like to add?
Response options: Open question

Roos (2002) highlighted so-called 'multiple contact moments' - opportunities to generate awareness of the researcher's questionnaire - as important to the success of e-questionnaires sent directly to potential respondents by email or post.

In addition to sending a link to the questionnaire to members of email lists and forums, researchers can collect email addresses face to face and use email pre-notices, posters or articles in newsletters to raise the profile of - and therefore hopefully interest in - their work (Roos 2002, Whitehead 2007, Greenlaw and Brown-Welty 2009). Greenlaw and Brown-Welty (2009) compared responses to the same survey distributed by post, email and both methods together. The mixed approach achieved the highest response rate, followed by the web, then the postal groups. In the mixed group, more people chose to respond online, indicating that the multiple contacts received about the study may have increased the response rate and that, when given the choice, people prefer to respond online. Kaplowitz et al (2004) also found that mailed 'pre-notices' increased the response rate to online and paper questionnaires. In the Greenlaw and Brown-Welty (2009) study, it is possible that some respondents may have replied by post and email - researchers choosing to provide paper and online options would have to incorporate checks to guard against participants responding via both methods.
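A minimal sketch of such a check is given below in Python. It assumes that responses from both modes can be matched on a shared identifier such as an email address, which will not always be collected; the example addresses are invented.

# Hypothetical identifiers collected with the online and postal responses
online_ids = {"a.smith@example.org", "b.jones@example.org", "d.khan@example.org"}
postal_ids = {"b.jones@example.org", "c.patel@example.org"}

# People appearing in both sets may have responded twice and need checking by hand
possible_duplicates = online_ids & postal_ids
print("Possible duplicate respondents:", sorted(possible_duplicates))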

Online 'snowballing' - asking email recipients to forward details of the project to friends and colleagues to attract further participants - has also been successful (Stewart 2003). This shows that a recipient forwarding a questionnaire need not be a disadvantage: it may increase the pool of potential respondents. However, researchers using e-questionnaires need to incorporate demographic questions that prove respondents meet the inclusion criteria for the study.
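One way such screening might look in practice is sketched below in Python; the field names, example records and inclusion criteria are invented for illustration rather than taken from any particular study.

# Invented responses gathered after snowballing
responses = [
    {"id": 1, "role": "Midwife", "country": "UK"},
    {"id": 2, "role": "Health visitor", "country": "UK"},
    {"id": 3, "role": "Midwife", "country": "Australia"},
]

# Hypothetical inclusion criteria: UK-based midwives and maternity care assistants
eligible_roles = {"Midwife", "Maternity care assistant"}
included = [r for r in responses
            if r["role"] in eligible_roles and r["country"] == "UK"]

print(f"{len(included)} of {len(responses)} forwarded responses meet the inclusion criteria")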

Direct recruitment to e-surveys also seems to be more successful when emails are targeted at groups who will find the subject matter relevant, comprehensible and interesting (Greenlaw and Brown-Welty 2009). For example, Im and Chee (2003) struggled to recruit oncology nursing experts to their international survey. The authors sent emails to the members of university faculty lists, but not all the recipients spoke English or were oncology specialists. Im and Chee's experience led them to recommend that e-questionnaires should not be distributed at the weekend or during major holidays. It might also be the case that people are more motivated to respond when they feel they have been targeted because of their interests, experiences or expertise, and when this is made clear in the covering letter.

Additionally, response rates tend to be higher when researchers make it easy for people to take part (Roos 2002). Sending a questionnaire link merely requires the respondent to follow the link to access the questionnaire; sending an attachment requires respondents to download, save, respond, and attach their response to another email. Sending a link by post involves the respondent going through even more stages before he or she can access the questionnaire. It is also recommended that researchers include as few graphics and video attachments in their emails and questionnaires as possible to avoid problems with access and reduce download times (Cantrell and Lupinacci 2007, Loescher et al 2011).

There is still some disagreement over whether or not the length of the questionnaire or the use of incentives affects response rates online (Stewart 2003). Monetary incentives can encourage participants to take part more than once under different names (Im and Chee 2011). The issue of e-questionnaires generating shorter answers than paper questionnaires is covered below.

A science of conducting e-questionnaires is emerging and, as knowledge increases and potential respondents become more familiar with online questionnaires, response rates for e-questionnaires are likely to continue to grow, and completed e-questionnaires are likely to outnumber those completed using paper and pencil. Many of the reported disadvantages of e-questionnaires compared with paper methods are now no longer tenable, or can be limited or circumvented if e-questionnaires are well planned and well executed.

My experience of conducting a small-scale e-questionnaire provided me with an opportunity to reflect further on possible ways in which response rate, completeness and length might be improved when distributing questionnaires directly to potential respondents by email.

E-survey background

I used an e-questionnaire to elicit the views of health workers about the breastfeeding support provided to teenage mothers in hospital. The questionnaire was part of a larger project aiming to develop a support intervention to enable more teenage mothers to breastfeed. Ethical approval for the study was obtained from the University of West London and the NHS National Research Ethics Service (Oxfordshire Research Ethics Committee C).

The questionnaire was sent to two groups: 159 NHS staff - 134 midwives and 25 maternity care assistants (MCAs) working for a trust in south-central England - and the 91 members of the Teenage Pregnancy Midwives' online forum, a national forum for health professionals (principally midwives but also MCAs and health visitors) involved in the care of pregnant and parenting teenagers in the UK.

I selected these groups to ascertain the local views of staff who will be involved in implementing a planned intervention and to gain a national perspective.

The response rate from the trust was poor - n=17 (11 per cent). The response rate from the online forum initially appeared high - n=85 (93 per cent) - but some respondents had not answered any of the survey questions, so the usable response rate was n=67 (74 per cent).

Table 4 E-questionnaire distribution to NHS trust
Total number of email addresses: 46
Email addresses belonging to individuals: 35
Email addresses belonging to groups: 11
Combined maximum number of individuals in groups: 124
Maximum total number of recipients: 159

Table 5 E-questionnaire distribution to the National Teenage Pregnancy Forum
Total number of email addresses: 95
Individuals with two email addresses: -
Void addresses: -
Maximum total number of recipients: 91

Research approach used

The e-questionnaire comprised four demographic and four open questions (see Table 3).

I chose open questions to identify principal issues in a relatively unresearched area (Douglas et al 2005).

I piloted the questionnaire in paper form with midwives and MCAs attending an NHS trust study day. I then created an e-questionnaire version at SurveyMonkey.com, which generated a link that respondents could follow to access the e-questionnaire. I added the link to an email giving information about the project and inviting people to participate.

Survey distribution and response rates

The email was sent to midwives and MCAs with an intranet address at a hospital trust in south-central England. The 46 addresses, 11 of which were group addresses (generic department or community group addresses), came from a list held by the head of midwifery's personal assistant. The sizes of the groups meant that the maximum number of individual recipients of the emails was 159. I put up posters in staff rooms in the largest maternity unit in the trust advertising the e-questionnaire address and inviting midwives and MCAs not on the intranet to take part.

A local newsletter for maternity staff carried an article containing the e-questionnaire link.

I sent a reminder email after two weeks that included the original email and link, thanking those who had already responded and asking more people to do so. The response rates are shown in Table 6.

Table 6 Response rates from NHS trust
Three weeks: 13 responses
Five weeks: 16 responses
Total responses: 17 (11%)
Unusable responses: 1
Final usable responses: 16 (10%)

Table 7 Response rates from forum
Three days: 38 responses
One week: 55 responses
Three weeks: 83 responses
Total responses: 87 (96%)
Unusable responses: 20
Final usable responses: 67 (74%)

I also sent the initial email to members of the Teenage Pregnancy Midwives' online forum. The forum is run by the National Teenage Pregnancy Midwifery Network, which is funded by the Teenage Pregnancy Unit (part of the Department for Education). The high and prompt response rate meant that it was not necessary to send any reminders. This part of the survey distribution is summarised in Table 5.

A maximum of 250 individuals were contacted across both groups.

I closed the survey after three months. There were 17 responses from the NHS trust and 87 from the online forum. One trust respondent and 18 forum respondents had answered the demographic questions but had not answered the open questions. I received two responses from the online forum that were from outside the UK and so discounted them. The final usable response rate was n=16 (10 per cent) from the trust and n=67 (74 per cent) from the forum. The response rates are summarised in Tables 6 and 7.
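For transparency, the percentages quoted above and in Tables 6 and 7 are simple proportions of the maximum number of people contacted in each group; the short Python sketch below reproduces the arithmetic.

def rate(responses, contacted):
    """Response rate as a whole-number percentage."""
    return round(100 * responses / contacted)

print(rate(17, 159), "per cent total response from the trust")    # 11
print(rate(16, 159), "per cent usable response from the trust")   # 10
print(rate(87, 91), "per cent total response from the forum")     # 96
print(rate(67, 91), "per cent usable response from the forum")    # 74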

I received a far greater number of responses from midwives than from MCAs.

This was to be expected from the online forum, the membership of which consists principally of midwives. MCAs may have been discouraged from responding to the trust questionnaire as they were less likely to have a personal intranet address or to access or assume ownership of a group address.

Discussion

Disparity in response rates

The response to the e-questionnaire from the national online forum was much higher, more immediate and required a great deal less follow-up effort compared with emails sent within the hospital trust. Possible reasons for the disparity in the response rates between the two sample groups might be that:

- Group emails are less likely to elicit a response than emails sent to individuals. Many groups share a computer and not everyone in the group necessarily accesses group emails regularly.
- One individual may respond on behalf of the entire group.
- The trust intranet can only be accessed at work and people are perhaps less likely to reply during busy shifts (although many of the e-forum members also used work email addresses).
- The recipients of the trust email included health workers who did not work with teenagers, whereas the recipients of the web-forum mailing had expressed an interest in caring for young women by joining the group in the first place.
- The members of the e-forum may feel more comfortable responding to a questionnaire online.
- The initial approaches to the two groups were assigned different subject headings by the PA and site moderator who sent them out. The trust mailing was headed 'Teenage Breastfeeding Research' whereas the forum mailing was headed 'Survey on teenagers and breastfeeding - please help!' It may be that this direct appeal for assistance boosted the response rate.

The number of partial responses is also worthy of comment: 19 respondents (18 from the online forum) answered the demographic questions but none of the open questions. This could be the result of placing the demographic questions at the beginning of the survey. Oppenheim (1992) argued that demographic questions should always be put at the end of questionnaires because respondents, who are keen to engage with the questionnaire topic, find them frustrating and off-putting. Velez et al (2004) found that partial non-response rates were higher in the e-response group when they compared electronic and paper responses to a survey of college students.

However, Hanscom et al (2002) argued that the missing-value rate for an e-survey is about half that for a paper survey. In any case, it is perfectly possible to set up an e-questionnaire so that respondents have to answer particular questions.

This may deter people from responding if some questions are sensitive to answer or they do not understand them - it seems that researchers must decide whether or not responses to all questions are necessary, and whether completeness or response rate is worth more to their projects.
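Where questions are not made mandatory, partial responses can at least be identified automatically at the analysis stage. The Python sketch below shows one way of flagging respondents who skipped every open question; the question keys and example data are hypothetical.

# Hypothetical keys for the four open questions
open_questions = ["obstacles", "initiatives", "care_changes", "other_comments"]

def answered_no_open_questions(response):
    """True if the respondent left every open question blank or missing."""
    return all(not response.get(q, "").strip() for q in open_questions)

# Invented example responses
responses = [
    {"job_title": "Midwife", "obstacles": "Lack of privacy on the postnatal ward"},
    {"job_title": "MCA", "obstacles": "", "initiatives": ""},
]

partial = [r for r in responses if answered_no_open_questions(r)]
print(f"{len(partial)} of {len(responses)} respondents answered none of the open questions")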

Velez et al (2004) also found that e-responses to open questions were generally shorter than those offered in paper and pencil responses. Similarly, in my questionnaire, not many responses were more than a sentence long. This could be because the response boxes for the questions were small and expanded as people wrote - a larger response box may have encouraged people to write more to fill the space provided (Oppenheim 1992).

It may be that the online format does not offer respondents the same opportunities as the paper and pencil version to go back over responses and add further comments. However, the number of usable responses was enough to compensate for these shortcomings and justify the use of the online method.

Maximising response rates

With my e-questionnaire, I hoped that a covering email providing information and my contact details might encourage people to respond.

I invited potential participants to contact me to find out more information, seek help if they had any problems completing the survey, or request a copy of the study results. I anticipated that this would make the e-survey more personal, and make respondents feel valued and more involved in the project. Three respondents contacted me to request help accessing the survey and five asked for copies of the results.

I used a plain design for the e-questionnaire, because this avoids confusion and has been found to elicit better results than online surveys using complicated graphics (Brindle et al 2005, Cantrell and Lupinacci 2007).

The final question in the survey was an invitation to participants to add any further comments or information they considered relevant. This was to prevent respondents becoming frustrated that the questionnaire had not allowed them to express all their views on the survey topic (Walliman 2005).

The different response rates from the two groups in my survey reinforce the importance of motivation. The members of the forum were appealed to directly for help with a survey that was relevant to their experience and interests, whereas many of the trust recipients received, at a generic email address, a questionnaire that did not necessarily deal with a subject in which they were particularly interested. Perhaps the response rate from the trust workers might have been improved if the relevance of their contribution to the project had been highlighted, if the subject heading of the email had included a direct appeal for help, and if I had approached potential respondents in person and asked them to supply a personal email address to which the survey could be sent.

Additionally, the appeal for help from the forum was sent by the moderator of the site, a trusted and respected colleague. The trust workers might have been more motivated to respond following an appeal by a known leader such as the head of midwifery or consultant midwife.

Box 1 Strategies for maximising response rates and data quality in e-questionnaires
- Target recipients carefully to ensure that the questionnaire is received by people who will find it relevant and interesting.
- In your advert or covering letter, explain why the recipients have been targeted and emphasise the value of their contribution.
- Use subject headings that will catch the attention of potential respondents. A direct appeal for help may increase the likelihood of the respondent opening an email or clicking on a survey link.
- Ensure that the survey is sent by a known and trusted individual, because this might increase the response rate.
- Approach potential participants in person in the first instance, then follow up with online contact, as this might increase response rates.
- Employ other multiple contact strategies, such as posters, pre-notices and articles in newsletters, to increase knowledge of, and interest in, the questionnaire.
- Snowballing - asking email recipients to forward details of the project to friends and colleagues to attract further participants - can be employed successfully online. However, the questionnaire would need to include demographic questions that ensure respondents meet the inclusion criteria.
- Make accessing the survey as simple and convenient as possible. If a respondent is required to respond online, he or she should be contacted online. Attempts to recruit online respondents through the post have not been successful.
- E-questionnaires should be set out clearly and not crowded with graphics or pop-ups that might cause confusion.
- The size of a response box for an open question should reflect the length of the response you hope to receive, even if the box expands as respondents type.
- Positioning demographic questions at the end of the questionnaire may help limit partial response rates.

Conclusion

E-questionnaires offer the researcher an inexpensive, quick and convenient way to collect data. Many of the reported disadvantages of the medium are no longer valid. E-questionnaires are likely to largely replace paper questionnaires, and the online population is likely to become more representative of the general population.

The science of conducting the perfect e-questionnaire is emerging, but the lessons learned in my study and other research seem to suggest that satisfactory response rates and data quality can be achieved in a relatively short time if certain tactics are employed (Box 1).

The questionnaire must be distributed to a group to whom it is relevant and interesting, and the value of the group's particular contribution to the project needs to be emphasised. There are many special-interest online forums that researchers can approach.

E-questionnaires also need to catch the attention of potential respondents, be set out clearly and be easy for participants to access and complete. Attempts to recruit people by post, asking them to access an online survey, have not been successful. A direct appeal for help from a known and trusted individual can also encourage more people to take part.

Response rates from a more general population might be improved if the initial approach was made in person where possible. Careful positioning of demographic questions may help limit partial responses, and larger response boxes may encourage longer replies to open questions.

Acknowledgement
The author wishes to acknowledge Dr Julia Magill-Cuerden for her invaluable assistance in preparing this manuscript for publication.

Conflict of interest
None declared.

References

Adams J, White M (2008) Health behaviours in people who respond to a web-based survey advertised on regional news media. European Journal of Public Health. 18, 3, 335-338.

Ahern NR (2005) Using the internet to conduct research. Nurse Researcher. 13, 2, 55-70.

Aitken C, Power R, Dwyer R (2008) A very low response rate in an on-line survey of medical practitioners. Australian and New Zealand Journal of Public Health. 32, 3, 288-289.

Alexander GL, Divine GW, Couper MP et al (2008) Effect of incentives and mailing features on online health program enrollment. American Journal of Preventive Medicine. 34, 5, 382-388.

Beling J, Libertini LS, Sun Z et al (2011) Predictors for electronic survey completion in healthcare research. Computers, Informatics, Nursing. 29, 5, 297-301.

Brindle S, Douglas F, van Teijlingen E et al (2005) Midwifery research: questionnaire surveys. Midwives. 8, 4, 156-158.

Cantrell MA, Lupinacci P (2007) Methodological issues in online data collection. Journal of Advanced Nursing. 60, 5, 544-549.

Douglas F, van Teijlingen E, Brindle S et al (2005) Designing questionnaires for midwifery research. Midwives. 8, 5, 212-215.

Greenlaw C, Brown-Welty S (2009) A comparison of web-based and paper-based survey methods: testing assumptions of survey mode and response cost. Evaluation Review. 33, 5, 464-480.

Hanscom B, Lurie JD, Homa K et al (2002) Computerized questionnaires and the quality of survey data. Spine. 27, 16, 1797-1801.

Im EO, Chee W (2003) Issues in internet research. Nursing Outlook. 51, 1, 6-12.

Im EO, Chee W (2011) Quota sampling in internet research: practical issues. Computers, Informatics, Nursing. 29, 7, 381-385.

Internet World Stats (2012) Internet Growth Statistics. http://tinyurl.com/6aaxc2 (Last accessed: July 30 2012.)

Jones S, Murphy F, Edwards M et al (2008a) Using online questionnaires to conduct nursing research. Nursing Times. 104, 47, 66-69.

Jones S, Murphy F, Edwards M et al (2008b) Doing things differently: advantages and disadvantages of web questionnaires. Nurse Researcher. 15, 4, 15-26.

Kaplowitz MD, Hadlock TD, Levine R (2004) A comparison of web and mail survey response rates. Public Opinion Quarterly. 68, 1, 94-101.

Katz N, Fernandez K, Chang A et al (2008) Internet-based survey of nonmedical prescription opioid use in the United States. The Clinical Journal of Pain. 24, 6, 528-535.

Loescher LJ, Hibler E, Hiscox H et al (2011) Challenges of using the internet for behavioral research. Computers, Informatics, Nursing. 29, 8, 445-448.

Mackellar DA, Hou SI, Whalen CC et al (2011) Reasons for not HIV testing, testing intentions, and potential use of an over-the-counter rapid HIV test in an internet sample of men who have sex with men who have never tested for HIV. Sexually Transmitted Diseases. 38, 5, 419-428.

Oppenheim AN (1992) Questionnaire Design, Interviewing and Attitude Measurement. Continuum International Publishing, London.

Rattray J, Jones MC (2007) Essential elements of questionnaire design and development. Journal of Clinical Nursing. 16, 2, 234-243.

Ritter P, Lorig K, Laurent D et al (2004) Internet versus mailed questionnaires: a randomized comparison. Journal of Medical Internet Research. 6, 3, e29.

Roos M (2002) Methods of internet data collection and implications for recruiting respondents. Statistics Journal of the United Nations. 19, 3, 175-186.

Sadeh A, Mindell JA, Luedtke K et al (2009) Sleep and sleep ecology in the first 3 years: a web-based study. Journal of Sleep Research. 18, 1, 60-73.

Stewart S (2003) Casting the net: using the internet for survey research. British Journal of Midwifery. 11, 9, 543-546.

Stopponi MA, Alexander GL, McClure JB et al (2009) Recruitment to a randomized web-based nutritional intervention trial: characteristics of participants compared to non-participants. Journal of Medical Internet Research. 11, 3, e38.

Sue VM, Ritter LA (2007) Conducting Online Surveys. Sage Publications, Thousand Oaks CA.

Touvier M, Méjean C, Kesse-Guyot E et al (2010) Comparison between web-based and paper versions of a self-administered anthropometric questionnaire. European Journal of Epidemiology. 25, 5, 287-296.

Velez P, Buletd JD, Volz S (2004) Respondent Differences Between Web-Based Surveys and Paper/Pencil Surveys: A Comparison of Response Rates, Respondents, and Responses. http://tinyurl.com/chbiwsj (Last accessed: July 19 2012.)

Walliman N (2005) Your Research Project. Second edition. Sage Publications, London.

Wharton CM, Hampl JS, Hall R et al (2003) PCs or paper-and-pencil: online surveys for data collection. Journal of the American Dietetic Association. 103, 11, 1458, 1460.

Whitehead LC (2007) Methodological and ethical issues in Internet-mediated research in the field of health: an integrated review of the literature. Social Science and Medicine. 65, 4, 782-791.
