Survey methods

What is a survey?

A survey is a method of collecting information. It may collect information about a population’s characteristics, self-reported and observed behaviour, awareness of programs, attitudes or opinions, and needs. Repeating surveys at regular intervals can assist in the measurement of changes over time. Information collected using surveys is invaluable in planning and evaluating policies and programs.

Unlike a census, where all members of a population are studied, sample surveys gather information from only a portion of a population of interest. The size of the sample depends on the purpose of the study.

In a statistically valid survey, the sample is objectively chosen so that each member of the population will have a known non-zero chance of selection. Only then can the results be reliably projected from the sample to the population. The sample should not be selected haphazardly or only from those who volunteer to participate.
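As a minimal, hypothetical illustration of this principle, the sketch below draws a simple random sample from an invented population frame so that every unit has the same, known chance of selection (the frame size, sample size and seed are made up for the example).

import random

# Minimal sketch: simple random sampling from a population frame,
# where every unit has a known, equal chance of selection.
population_frame = [f"household_{i}" for i in range(1, 10001)]  # 10,000 hypothetical units
sample_size = 400

random.seed(42)  # fixed seed so the selection is reproducible and auditable
sample = random.sample(population_frame, sample_size)

# Because the design is known, so is each unit's chance of selection:
selection_probability = sample_size / len(population_frame)  # 0.04
print(f"{len(sample)} households selected; each had a "
      f"{selection_probability:.2%} chance of selection.")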

When to use a survey

When determining the need for a survey, first check that the required information is not already available (for example, conduct library searches, check Open Data, refer to Queensland Government Statistician's Office (QGSO)).

The option of collecting the required information using existing administrative records should also be explored. Using existing data or records provides considerable advantages in terms of cost, time, and the absence of respondent burden. The major disadvantage is the lack of control over the data collected.

If existing data are not available or suitable, a number of factors must then be considered when determining which type of survey, if any, is appropriate.

(Please see Pre-survey scoping considerations (link at bottom of page) for a printable version.)

Overall objectives

  • What are the objectives of the project?
  • Are any existing data collections, research outputs, or sources of information related to the objectives already available?
  • Are any of the objectives measurable through the process of asking questions?
  • Can any of the objectives be met by gathering information using a quantitative survey?

Ethical considerations

  • Do you need identifying information (for example, names, addresses, telephone numbers) relating to respondents for follow-up research, or matching with other data? If so, you will need to clearly explain why you need such details, and obtain the respondents' consent.
  • Will respondents be adversely affected or harmed as a direct result of participating in the survey?
  • Are procedures in place for respondents to check the identity and bona fides of the researchers?
  • Is the survey to be conducted on a voluntary basis? If so, respondents must not be misled into believing it is compulsory when they are asked for their cooperation.
  • Is it necessary to interview children under 14 years? If so, consent must first be obtained from their parent/guardian/responsible adult.

Legislative powers

  • Do you have authority to collect the information through either a compulsory or voluntary survey? (See Legislative framework on this website.)

Survey design and target population

  • Are responses likely to be varied and diverse? If so, qualitative research may be required first, to inform the design of quantitative questions.
  • Is the survey likely to be repeated, to measure change over time?
  • Is the target population for the survey clearly identified or identifiable? What contact information is available?
  • What particular respondent characteristics need to be quantified?

Data collection

  • How complex and how sensitive is the topic?
  • Do respondents have access to the required information?
  • Will they be willing to supply the information?
  • What is the most appropriate mode of delivery for survey questions (e.g. telephone, mail, web, face-to-face, observation, or a combination), with regard to data quality requirements and cost-effectiveness? Each mode of administration has strengths and weaknesses when asking about specific topics. They may be associated with very different response rates and potential for bias, so expert advice is desirable.
  • How important is it to have information collected:
    • by an independent source, to capture both positive and negative responses?
    • securely or under legislation which protects confidentiality?
    • efficiently and effectively by professional statisticians?
    • by an organisation with rigorous quality assurance processes, certified to international standards?

Expected outputs

  • How will the data and information derived from the survey be used?
  • What level of error can be tolerated? This will depend on how you intend to use the survey results.
  • Who is the target audience for the survey results?
  • How might the survey results be best presented to the intended audience (e.g. commentary, tables, graphs etc.) to ensure that they’re understood?

Timing and cost

  • Are the necessary financial, staffing, computing and other resources available to conduct the survey and to have the results analysed and reported?
  • When is the best time to conduct the survey? (For example, need to allow for seasonality, impact of school holiday periods etc.)
  • When are the outputs required? Developing and conducting a quantitative survey, and producing results that can be analysed and reported, takes an irreducible amount of time. Is enough time available to ensure that data of sufficient quality can be collected and analysed?
  • Is the survey to be repeated? How often?

Survey process

The following is an outline of the general process to be followed once the need for a survey has been determined. Some steps will not be necessary in all cases, and some processes can be carried out at the same time (for example, data collection and preparation for data entry and processing).

A sample survey is usually cheaper and timelier than a census, but still requires significant resources, effort and time. The survey process is complex and the stages are not necessarily sequential. Pilot testing of at least key elements, such as the questionnaire and survey operations, is a strongly recommended part of the development stage. It may be necessary to go through more than one cycle of development, testing, evaluation and modification before a satisfactory solution is reached.

The entire process should be planned ahead, including all critical dates. It is always beneficial to approach QGSO or prospective consultants as early as possible during the planning stage.

The time required from initial planning to the completion of a report or publication may vary from several weeks to several months according to the size and type of survey.

Key steps in the survey process

Planning and designing

  • Define the purpose and objectives of the survey and the required outputs. Experience has shown that well-defined output requirements at the outset minimise the risk of the survey producing invalid results.
  • Design collection methodology (see below) and sample selection method.
  • Develop survey procedures. Design and produce questionnaires and any other documentation (for example, instructions for interviewers and introductory letters or emails).

Testing and modifying

  • Test data collection systems (for example web survey programs or programs used by interviewers).
  • Pilot test all aspects of the survey if possible. As a minimum, a small-scale pre-test of questionnaires can reveal problems with question wording, layout, understanding or respondent reaction.
  • Analyse test results (completed questionnaires, response rate etc.). Obtain feedback from respondents and/or interviewers.
  • Modify procedures, questionnaires and documentation according to test evaluation.
  • Repeat steps as required.

Pre-survey

  • Finalise procedures, questionnaires and documentation.
  • Select sample. There are many methods of selecting a sample, varying in complexity. Some of these are discussed further below, and a simple selection sketch follows this list. For additional advice on choosing an appropriate sample, refer to QGSO.
  • Train interviewers (if interviewer-based).
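
The sketch below illustrates one common probability method, proportionate stratified random selection, using an invented frame and strata; a real selection would be based on a properly constructed survey frame and the design advice referred to above.

import random
from collections import defaultdict

random.seed(7)

# Hypothetical frame: (unit identifier, stratum) pairs
frame = [(f"unit_{i}", random.choice(["Brisbane", "Rest of Queensland", "Remote"]))
         for i in range(1, 5001)]
total_sample_size = 500

# Group frame units by stratum
by_stratum = defaultdict(list)
for unit, stratum in frame:
    by_stratum[stratum].append(unit)

# Proportionate allocation: each stratum receives its share of the total sample
sample = []
for stratum, units in by_stratum.items():
    n_stratum = round(total_sample_size * len(units) / len(frame))
    sample.extend(random.sample(units, n_stratum))

print(f"{len(sample)} units selected across {len(by_stratum)} strata")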

Conducting the survey

  • Conduct the survey, including follow-up of refusals and non-contacts, supervision and checks of interviewers’ work.

Processing and analysing

  • Enter (if required), check and clean data.
  • Process data—calculate population estimates (if required) and confidence intervals, and prepare output tables (a simple estimation sketch follows this list).
  • Conduct data analysis and prepare report of survey results.
  • Prepare technical report—evaluate and document all aspects of the survey for use when designing future surveys.
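
As an illustration of the estimation step, the sketch below applies hypothetical design weights to invented responses to produce a weighted estimate of a proportion and an approximate 95% confidence interval. Real surveys would use weights derived from the sample design and non-response adjustment, and a variance method matched to that design.

import math

# Minimal sketch of a weighted population estimate for a proportion.
# Responses and design weights below are invented for illustration.
responses = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]                     # 1 = aware of the program
weights   = [120, 95, 110, 130, 90, 105, 115, 100, 125, 110]   # design weights

population_total = sum(weights)                                 # estimated population size
estimated_aware = sum(w for r, w in zip(responses, weights) if r == 1)
proportion = estimated_aware / population_total

# Simple approximation of the standard error, ignoring weighting and design effects
n = len(responses)
se = math.sqrt(proportion * (1 - proportion) / n)
ci_low, ci_high = proportion - 1.96 * se, proportion + 1.96 * se

print(f"Estimated proportion aware: {proportion:.1%} "
      f"(95% CI approx. {ci_low:.1%} to {ci_high:.1%})")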

Data collection method

Commonly used methods for collecting quantitative data include telephone and face-to-face interviews, self-completion questionnaires (such as web-based, mail, email, or SMS) or combinations of these.

Each has advantages and disadvantages in terms of the cost, time, response rate and the type of information that can be collected.

QGSO primarily uses telephone interview (Computer Assisted Telephone Interviewing (CATI)) and web‑based survey methods, or a combination of these.

Telephone surveys

A survey frame or list which contains telephone numbers is required to conduct a telephone survey. For general population surveys, such lists are not readily available or they have limitations that can lead to biased results.

The official electronic White Pages list can be used to select a sample of households, but the sample will not include households with silent numbers. In addition, it may exclude households with recent new connections, recent changes to existing numbers, or mobile-only households. Research conducted by the Australian Communications and Media Authority (ACMA) shows that an increasing number of Australian households do not have a landline telephone connected, and rely solely on mobile phones.

Random digit dialling, using landline and/or mobile numbers, may address some of the under-coverage associated with an electronic White Pages or electoral roll list, but it is inefficient for sampling at a low geographic level, and does not allow for communicating (via pre-approach letter, for example) with households prior to the commencement of telephone interviewing.

When QGSO conducts telephone surveys that need to reflect a representative cross-section of the Queensland public, households are randomly selected based on information from databases kept by QGSO for official statistical purposes under the authority of the Statistical Returns Act 1896.

Interviewer-based surveys

Interviewer-based surveys, such as face-to-face or telephone surveys, can allow more data to be gathered than self-completion surveys.

Interviewers can reduce non-response by answering respondents’ queries or concerns. They can often pick up and resolve respondent errors.

Face-to-face surveys are usually more expensive than other methodologies, and poor interviewers can introduce additional errors. Although the face-to-face approach may be unsuitable for some sensitive topics, it may sometimes be appropriate, with specially trained interviewers.

Telephone surveys are generally cheaper and quicker than face-to-face surveys, and are well suited to situations where timely results are needed. However, non-response may be higher than for face-to-face surveys as it is harder for interviewers to prove their identity, assure confidentiality and establish rapport.

Telephone surveys are not suited for situations where the respondents need to refer to records extensively. Also, the questionnaires must be simpler and shorter than for face-to-face surveys and prompt cards cannot be used.

Computer Assisted Telephone Interviewing (CATI)

Computer Assisted Telephone Interviewing (CATI) is a particular type of telephone survey technique that helps to resolve some of the limitations of general telephone-based surveying. With CATI, interviewers use a computer terminal. The questions appear on the computer screen and the interviewers enter responses directly into the computer. The interviewer’s screen is programmed to show questions in the planned order. Interviewers cannot inadvertently omit questions or ask them out of sequence. Online messages warn interviewers if they enter invalid values or unusual values.

Most CATI systems also allow many aspects of survey operations to be automated, e.g. rescheduling of call-backs, engaged numbers and “no answers”, and allow automatic dialling and remote supervision of interviewer/respondent interaction.
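
The toy script below is not based on any particular CATI product; under invented question wording and validation rules, it simply illustrates the two ideas described above: a fixed question sequence and immediate checking of entered values.

# Toy illustration of CATI-style scripting: questions appear in a fixed order
# and responses are range-checked as the interviewer enters them.
QUESTIONS = [
    ("age", "How old are you?", lambda v: 15 <= int(v) <= 110),
    ("household_size", "How many people usually live in this household?", lambda v: 1 <= int(v) <= 20),
    ("satisfaction", "Overall satisfaction, from 1 (low) to 5 (high)?", lambda v: 1 <= int(v) <= 5),
]

def run_interview():
    answers = {}
    for key, wording, is_valid in QUESTIONS:   # fixed sequence: nothing skipped or reordered
        while True:
            value = input(wording + " ")
            try:
                if is_valid(value):
                    answers[key] = int(value)
                    break
            except ValueError:
                pass
            print("Invalid or unusual value - please re-enter.")  # on-screen warning
    return answers

if __name__ == "__main__":
    print(run_interview())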

Self-completion surveys

Self-completion surveys via mail, email, the internet or SMS are generally the least expensive, particularly for a widespread sample. They allow respondents time to consider their answers, refer to records or consult with others (which can be helpful or unhelpful, depending on the survey’s objectives). They also eliminate interviewer errors and reduce the incidence of selected people (or units) being unable to be contacted.

A major disadvantage of self-completion surveys is the potentially high non-response. In such cases, substantial bias can result if people who do not complete the survey have different characteristics from those who do. However, response may be improved using techniques such as well-written introductory letters, incentives for timely completion of questionnaires, and follow-up for those initially not responding. Item non-response is another disadvantage of self-completion surveys.

In self-completion surveys, there is no opportunity to clarify answers or supplement the survey with observational data. In mail surveys, the questionnaire usually has to be simple and reasonably short, particularly when surveying the general community. Internet and email-based surveys are commonly used for surveying clients or staff within organisations. They are a cheaper option than mail or interviewer-based surveys, and they allow more complex questionnaires to be used than mail surveys do.

Online panel surveys

Online panel surveys are another option for data collection. However, the panel members almost always represent a non-probability sample of the population, and the issues associated with this are covered below, under Bias and accuracy.

Sources of error

Whether a survey is being conducted by departmental/agency staff or by consultants, it is important to be aware of potential sources of error, and strategies to minimise them.

Errors arising in the collection of survey data can be divided into two types—sampling error and non‑sampling error.

Sampling error

Sampling error occurs when data are collected from a sample rather than the entire population. The sampling error associated with survey results for a particular sub-group of interest depends mainly on the number of achieved responses for that sub-group, rather than on the percentage of units sampled. Estimates of sampling error, such as standard errors, can be calculated mathematically. They are affected by factors such as:

  • sample size—increasing the sample size will decrease the sampling error (a simple illustration follows this list)
  • population variability—a larger sampling error will be present if the items of interest vary greatly within the population
  • sample design—standard errors cannot be calculated if the probability of selection is not known (for example, quota sampling (see Bias and accuracy, below)).
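
The short calculation below illustrates the effect of sample size, using a purely hypothetical proportion of 30% and assuming simple random sampling.

import math

p = 0.30  # hypothetical estimated proportion
for n in (100, 400, 1600):
    # Standard error of a proportion under simple random sampling;
    # quadrupling the sample size halves the standard error.
    se = math.sqrt(p * (1 - p) / n)
    print(f"n = {n:4d}: standard error = {se:.3f} (95% CI approx. +/- {1.96 * se:.3f})")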

Non-sampling error

All other error associated with collecting survey data is called non-sampling error, and it can occur at any stage of the survey process. Such error is not easily identified or quantified, and therefore cannot be measured in the same way as sampling error. It is, however, just as important. The following list outlines common sources of non-sampling error and some strategies to minimise them.

  • Planning and interpretation
    • Examples: Undefined survey objectives; inadequate definitions of concepts, terms or populations.
    • Strategies to minimise error: Ensure survey objectives are outlined, and all concepts, terms and populations are defined precisely, through consultation between data users and survey designers.
  • Sample selection
    • Examples: Inadequate list from which the sample is selected; biased sample selection.
    • Strategies to minimise error: Check the list for accuracy, duplicates and missing units; use appropriate selection procedures (see Bias and accuracy, below).
  • Survey methods
    • Examples: Inappropriate method (e.g. mail survey for a very complicated topic).
    • Strategies to minimise error: Choose an appropriate method and test thoroughly.
  • Questionnaire
    • Examples: Loaded, misleading or ambiguous questions; poor layout or sequencing.
    • Strategies to minimise error: Use plain English, clear questions and logical layout; test thoroughly.
  • Interviewers
    • Examples: Leading respondents, making assumptions, misunderstanding or misrecording answers.
    • Strategies to minimise error: Provide clear interviewer instructions and appropriate training, including exercises and field supervision.
  • Respondents
    • Examples: Refusals, memory problems, rounding answers, protecting personal interests or integrity.
    • Strategies to minimise error: Promote the survey through public media appropriate to the target population; ensure confidentiality; if interviewer-based, use well-trained, impartial interviewers and probing techniques; if mail-based, use a well-written introductory letter.
  • Processing
    • Examples: Errors in data cleaning.
    • Strategies to minimise error: Adequately train and supervise processing staff; check a sample of each person's work.
  • Estimation
    • Examples: Incorrect weighting; errors in calculation of estimates.
    • Strategies to minimise error: Ensure that skilled statisticians undertake estimation.

If a consultant conducts the survey, you, as a client, should have input into the questionnaire design, participate in testing, and attend interviewer training and debriefing. Details of techniques used to minimise non-sampling error should be requested.

Bias and accuracy

Non-response occurs in virtually all surveys through factors such as refusals, non-contact and language difficulties.

It is of particular importance if the characteristics of non-respondents differ from those of respondents. For example, if high-income earners are more likely to refuse to participate in an income survey, the results will be biased towards lower incomes.

For this reason, all surveys should aim for the maximum possible response rate, within cost and time constraints, by using techniques such as call-backs to non-contacts, and follow-up of refusals. The level of non-response should always be measured.

Bias can also arise from inadequate sampling frames, the lists from which respondents are selected. Household and business mobile and landline telephone listings, and electoral rolls, are often used as sampling frames, but they all have limitations, as detailed earlier in Data collection method. Once again, if people/businesses omitted from the frame have different characteristics from those included, bias will be introduced.

One selection method often used by researchers is quota sampling. Interviewers are instructed to obtain a certain number of interviews, often with respondents in particular categories: for example, 30 interviews with females aged 18 to 25 years and 20 interviews with males aged 18 to 25 years. Interviewers may interview anyone fitting these criteria. Unfortunately, people who are most easily contacted or most approachable may have different opinions or behaviour to those not interviewed, introducing potential bias. As each person's chance of selection is not known, standard errors cannot, strictly speaking, be calculated. Consequently, the accuracy of the survey results cannot be determined.

Hint
For this reason, QGSO strongly recommends that probability sampling, where each person or unit has a known non-zero chance of selection, be used in preference to quota sampling. In probability sampling, the sample is selected by objective methods and, when properly carried out, there is no risk of bias arising from subjective judgements in the selection of the sample.

If constraints are such that a probability sample is impractical, other research methods—such as focus groups or purposive sampling—should be considered. It must be remembered, however, that results from these procedures cannot be assumed to be representative of a broader population.

When a probability sample has been used, standard errors should be calculated to check the accuracy of all results, and these may be used to calculate relative standard errors (RSEs) or confidence intervals.

Hint
QGSO recommends that estimates with an RSE (that is, the standard error divided by the estimate multiplied by 100) which exceeds 50% should not be used. Estimates with an RSE from 25% to 50% inclusive should be used with caution, as should estimates with large confidence intervals.
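
The sketch below applies these thresholds to a few invented estimates and standard errors, purely to illustrate the calculation.

def rse(estimate, standard_error):
    """Relative standard error: the standard error divided by the estimate, multiplied by 100."""
    return standard_error / estimate * 100

def reliability_flag(rse_value):
    # Thresholds as described above: over 50% should not be used; 25% to 50% used with caution
    if rse_value > 50:
        return "should not be used"
    if rse_value >= 25:
        return "use with caution"
    return "acceptable"

# Hypothetical estimates and their standard errors
for estimate, se in [(5200, 400), (1200, 380), (300, 190)]:
    r = rse(estimate, se)
    print(f"Estimate {estimate}: RSE = {r:.1f}% -> {reliability_flag(r)}")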

Engaging consultants/contractors

QGSO can provide assistance to Queensland Government agencies needing to conduct surveys. (See Surveys on this website for details.)

The following information on preparing briefs may be of particular assistance in the engagement of consultants or contractors to conduct all or part of a survey. Queensland Government agencies should also refer to the Market research buyers' guide (GovNet) for information on the general process and requirements for procuring external consultants or contractors to perform survey work for Queensland Government.

To obtain the highest quality proposals, it is important to provide contenders with the maximum amount of relevant information. Concentrate on clearly stating aims, objectives and requirements.

The following (additional) information should be included in a brief for the conduct of a survey:

Background

Give relevant background details such as previous research and where the survey fits into the department’s/agency’s program.

Objectives

Outline the specific purpose and objectives of the survey.

Population

Indicate the population and/or sub-populations of interest. For example, all Queensland women aged 18 years or over.

The consultancy

Provide details of the survey content and preferred method (if appropriate). Include a list of data items and output specifications (for example, tables and analyses, including accuracy requirements). Attach a draft questionnaire, if prepared. Specify reporting requirements, including the calculation of standard errors or confidence intervals, and details of response rates and techniques used to minimise non-sampling error. Clearly indicate the parts of the project which will be the responsibility of the consultant.

Timing

Include dates, such as:

  • receipt of proposals
  • engagement of consultant
  • commencement and completion of pilot testing, if known
  • commencement and completion of survey, if known
  • presentation of results or report.

Proposals

Indicate proposal requirements, such as:

  • details of proposed method
  • full breakdown of costs (list categories of interest)
  • details of interviewers, if appropriate (for example, number, location, training, experience)
  • details of previous relevant work
  • names and backgrounds of staff who would be responsible for the project
  • details of any part of the project which would be subcontracted to another organisation
  • proposed timetable.

Selection criteria

Selection criteria should cover:

  • quality, clarity and relevance of proposed survey design
  • expertise in technical and operational aspects of sample surveys
  • demonstrated ability in undertaking similar work.

Where can I get further assistance?

For advice or further information from the Queensland Government Statistician's Office about conducting surveys, please contact us.


Pre-survey scoping considerations
pdf (36.11 KB)