The Survey/Anyplace blog attempts to measure the "average" response rates that researchers can expect. There is great celebration in the office when a survey actually gets above that "average".
In-person surveys are always likely to get above-"average" response rates. They are targeted and difficult to avoid when a respondent is accosted in the street, although chuggers (charity muggers) have taught people to navigate around folks brandishing clipboards. They are also costly.
Mail surveys can be targeted too, and will involve some cost, although FREEPOST can prevent costs from being incurred for non-responses. But, like some in-person surveys, they will involve the researcher in manual input of data prior to analysis.

Does anyone actually do telephone surveys with random respondents any more? Call barring, opting out and GDPR will have shielded many from this. Telephone polls using panels of respondents fare better, even if they consistently fail to forecast the result of a Referendum or Election.
We all know that surveys arriving by email can be caught by spam filters or simply ignored; ignoring low-cost on-line or in-app surveys is even easier.
Bribery, personalisation, careful targeting and selection, persistence and even advertising a worthy cause can all help to increase response rates. Ultimately, however, the sample will be biased, self-selecting, obliging (what is the right answer?) and probably unrepresentative.
So why do we get so excited when a 20% response rate for on-line module feedback indicates "below average" performance from a lecturer and reveals that the lecturer had egg on his tie or was wearing mismatched earrings?