CERTIFICATE IN GLOBAL HEALTH RESEARCH
Course 9: Survey Methodologies
“Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling and high response rates will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions.” (1) There are three main types of survey questions, and each has its own risks and benefits.
Open-ended Questions
Open-ended questions ask participants to come up with their own responses and allow the researcher to document the opinions of the respondent in his or her own words. These questions are useful for obtaining in-depth information on topics with which the researcher is not very familiar, on opinions, attitudes and suggestions, or on sensitive issues. Completely open-ended questions allow the researcher to probe more deeply into issues, thus providing new insights, bringing to light new examples or illustrations, and allowing for different interpretations and a variety of responses. Researchers who use open-ended questions must be skilled interviewers, since all responses must be recorded to avoid losing important information, and the analysis is time-consuming.(2) In addition, open-ended questions can be difficult to analyze statistically because the data are not uniform and must be coded in some manner.(3)
Examples of open-ended questions:
“What do you think are the reasons some adolescents in this area start using drugs?”
“What would you do if you noticed that your daughter (school girl) had a relationship with a teacher?”
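To illustrate how such responses might be coded for analysis, the short Python sketch below simply tallies thematic codes after a researcher has read each answer and assigned labels by hand; the themes and responses shown are invented for illustration only.

# Minimal sketch: tallying researcher-assigned thematic codes for an
# open-ended question (themes and responses below are hypothetical).
from collections import Counter

# Each respondent's answer has already been read and coded by the researcher.
coded_responses = [
    ["peer pressure", "curiosity"],
    ["peer pressure"],
    ["family problems", "curiosity"],
]

theme_counts = Counter(theme for codes in coded_responses for theme in codes)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")

The coding step itself remains a manual, interpretive task; the sketch only shows how coded data become countable.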
Partially Categorized Questions
Partially categorized questions are similar to open-ended questions, but some answers have already been pre-categorized to facilitate recording and analysis. There is also usually an alternative titled “other” with a blank space next to it. The advantages of these types of questions are that answers can be recorded quickly and the analysis is often easier. One of the major risks is that the interviewer will categorize a response too quickly, resulting in a potential loss of interesting and valuable information. In addition, interviewers may try to force the information into the listed categories instead of exploring the question more thoroughly. If the respondent hesitates when answering a question, the interviewer may be tempted to present possible answers, causing bias. Therefore, the researcher must always avoid presenting possible answers to the study participant.
Example of a pre-categorized open-ended question:
“How did you become a member of the Village Health Committee?” (4)
Categorize the response into these options:
Volunteered
Elected at a community meeting
Nominated by community leaders
Nominated by the health staff
Other ____
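As a rough illustration of how partially categorized answers might be tallied, the Python sketch below counts answers recorded in the pre-set categories above and keeps the verbatim text of any “Other” answers for later review; the recorded answers are hypothetical.

# Minimal sketch (hypothetical data): tallying partially categorized answers.
# Answers recorded in a pre-set category are counted directly; answers
# recorded as "Other" keep their verbatim text for later review and coding.
from collections import Counter

CATEGORIES = [
    "Volunteered",
    "Elected at a community meeting",
    "Nominated by community leaders",
    "Nominated by the health staff",
]

# (category, verbatim text for "Other") pairs as the interviewer recorded them
recorded = [
    ("Volunteered", None),
    ("Other", "Took over from my mother"),
    ("Elected at a community meeting", None),
]

counts = Counter(category for category, _ in recorded)
other_texts = [text for category, text in recorded if category == "Other"]
for category in CATEGORIES + ["Other"]:
    print(f"{category}: {counts[category]}")
print("Verbatim 'Other' answers to review:", other_texts)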
Closed Questions
Closed questions have a list of possible answers from which respondents must choose. They can include yes/no questions, true/false questions, multiple choice, or scaled questions. Closed questions can be categorized into 5 different types:(5)
Multiple Choice- this question type is useful when the researcher would like participants to select the most relevant response.
Likert Scale- this question type is appropriate when the researcher would like to identify how respondents feel about a certain issue. The scale typically ranges from not at all important, not important, neutral, important, to extremely important, or from strongly disagree, disagree, neutral, agree, to strongly agree (a brief coding sketch of a Likert item follows this list).
Numerical- these questions are used when possible responses are numeric in form. For example, these questions are useful for asking someone’s age.
Ordinal- these questions are useful when participants are asked to rank a series of responses.
Categorical- this question type is appropriate when respondents are asked to place themselves in a specific category. For example, they may be asked whether they are male or female.
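As a minimal sketch of how a Likert-scale item might be coded numerically for analysis, the Python fragment below maps the agreement labels to a 1-5 scale and computes a mean score; the mapping and sample responses are assumptions for illustration, not a prescribed scheme.

# Minimal sketch: coding Likert responses as integers and summarizing them.
# The 1-5 mapping and the sample responses are hypothetical.
LIKERT_SCALE = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

responses = ["agree", "neutral", "strongly agree", "agree", "disagree"]
scores = [LIKERT_SCALE[r] for r in responses]
mean_score = sum(scores) / len(scores)
print(f"Mean agreement score: {mean_score:.2f} (n={len(scores)})")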
Closed questions are commonly used for obtaining data on background information such as age, marital status, or education. Closed questions may also be used to assess a respondent’s opinions or attitudes by choosing a rating on a scale. Additionally, closed questions may be used to elicit specific information in an efficient manner. For example, a researcher who is only interested in the sources of protein in a person’s diet may ask:
Did you eat any of the following foods yesterday? (6)
Peas, beans, lentils (yes/no)
Fish or meat (yes/no)
Eggs (yes/no)
Milk or cheese (yes/no)
Insects (yes/no)
Closed questions are time efficient, and the responses are simple to compare across different groups or within the same group over time. However, closed questions often yield data that are biased or invalid. For example, the uniformity in ratings may be deceptive and create bias. In addition, when respondents are illiterate, the interviewer may have to read the list of possible answers in a given sequence, thus introducing additional bias into the study. (7) Though closed questions are easier to analyze statistically, they seriously limit the range of participant responses. (8)
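For instance, the short Python sketch below (with invented counts) compares yes/no answers to the protein question above between two hypothetical survey rounds, the kind of group-over-time comparison that closed questions make straightforward.

# Minimal sketch (hypothetical counts): comparing yes/no answers to the
# protein-source question between two survey rounds.
rounds = {
    "baseline":  {"Fish or meat": 62, "Eggs": 40, "Milk or cheese": 25},
    "follow-up": {"Fish or meat": 71, "Eggs": 55, "Milk or cheese": 30},
}
n_per_round = 100  # assumed number of respondents in each round

for food in ["Fish or meat", "Eggs", "Milk or cheese"]:
    base = rounds["baseline"][food] / n_per_round * 100
    follow = rounds["follow-up"][food] / n_per_round * 100
    print(f"{food}: {base:.0f}% -> {follow:.0f}% answered yes")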
The Impact of Choosing Open-ended or Closed-ended Questions
The validity of a research study depends on a researcher’s selection of survey questions. The researcher’s decision to utilize open-ended or closed questions is therefore critical. A question asked in an open-ended format can yield drastically different results from the same question asked in a closed format. For example, a poll conducted after the 2008 presidential election asked respondents, “What one issue mattered most to you in deciding how you voted for president?” After asking the question in an open-ended manner and in a closed-ended manner (where the options were the economy, the war in Iraq, health care, terrorism, energy policy, and other), researchers found that “when explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy. Moreover, among those asked the closed-ended version, fewer than one-in-ten (8.5%) provided a response other than the five they were read; by contrast fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question.” (9) This example illustrates how open-ended questions elicit a larger variety of responses and how closed-ended questions may potentially influence people into choosing a certain answer.(10)
KAP Surveys
Appropriateness and Challenges
KAP surveys are focused evaluations that measure changes in human knowledge, attitudes and practices in response to a specific intervention. The KAP survey was first used in the fields of family planning and population studies in the 1950s. KAP studies use fewer resources and tend to be more cost-effective than other social research methods because they are highly focused and limited in scope. KAP studies tell us what people know about certain things, how they feel, and how they behave. Each study is designed for a specific setting and issue.(11) “The attractiveness of KAP surveys is attributable to characteristics such as an easy design, quantifiable data… concise presentation of results, generalisability of small sample results to a wider population, cross-cultural comparability, speed of implementation, and the ease with which one can train enumerators.” (12) In addition, KAP studies bring to light the social, cultural and economic factors that may influence health and the implementation of public health initiatives. “There is increasing recognition within the international aid community that improving the health of poor people across the world depends upon adequate understanding of the socio-cultural and economic aspects of the context in which public health programmes are implemented. Such information has typically been gathered through various types of cross-sectional surveys, the most popular and widely used being the knowledge, attitude, and practice (KAP) survey.” (13)
KAP Research Protocols
The basic elements of a KAP survey include: (14)
Domain identification: the domain is the subject of the KAP study, including the knowledge, attitudes and practices of a community in regard to that subject.
Identification of the target audience
Determination of sampling and data collection methods: KAP data are usually gathered with a survey or questionnaire administered through interviews.
Analysis and reporting: KAP surveys strive to collect the minimum amount of information needed to determine whether the knowledge, attitudes and practices of a community have changed from one time period to another. For large sample sizes, computer software such as SPSS or Excel is recommended to organize and analyze the data. The findings are usually presented using descriptive statistics, such as a table or histogram for each section (knowledge, attitudes and practices). (15) A brief sketch of this kind of summary follows this list.
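The Python sketch below computes mean scores for each KAP section before and after an intervention; Python is used here only as an alternative illustration alongside the SPSS or Excel workflow mentioned above, and the section scores are invented for the example.

# Minimal sketch (hypothetical scores): descriptive summary of KAP sections
# at two time points, the kind of table described above.
import statistics

kap_data = {
    "knowledge": {"before": [4, 6, 5, 7, 3], "after": [7, 8, 6, 9, 7]},
    "attitudes": {"before": [3, 4, 4, 5, 2], "after": [4, 5, 5, 6, 4]},
    "practices": {"before": [2, 3, 2, 4, 3], "after": [4, 4, 3, 5, 4]},
}

print(f"{'Section':<12}{'Mean before':>12}{'Mean after':>12}")
for section, scores in kap_data.items():
    before = statistics.mean(scores["before"])
    after = statistics.mean(scores["after"])
    print(f"{section:<12}{before:>12.1f}{after:>12.1f}")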
KAP Surveys and Public Health
KAP survey data are essential for informing public health work. For example, with regard to tuberculosis, a KAP survey can gather information about what respondents know about TB, what they think about people who have TB, the health system response to TB, and what care someone with TB should seek. KAP surveys are very helpful for identifying knowledge gaps, cultural beliefs, or behavioral patterns that may facilitate or create barriers to TB control or other public health efforts. In addition, the data collected from KAP surveys enable program managers to set TB priorities, to establish baseline levels, and to measure change resulting from interventions. KAP studies are also useful for studying other diseases, such as malaria. For example, a KAP study in Swaziland was used to provide baseline data about community-level malaria knowledge, attitudes and practices prior to implementing a malaria elimination strategy. The results showed that most participants exhibited a reasonable knowledge of malaria. However, the study found a need to improve the availability of information about malaria through community channels. (16) A KAP survey can be conducted at any point during a public health intervention, but this type of survey is most useful when conducted in the early phases of the project and again after the intervention is completed.(17)
KAP surveys have also been used to assess and improve the condition of reproductive health in developing countries. For example, a KAP survey was implemented in Kabul, Afghanistan, in order to contribute to a better understanding of the way Afghan women perceive their reproductive health and their reproductive health needs. This is important because a socially integrated and culturally accepted approach is essential for public health initiatives involving reproductive health. The survey found that knowledge levels about family planning methods and STIs were very limited. The study also found that most women preferred institutional delivery and assistance by qualified health staff at birth, though the number of women who receive this care is very low.(18) Additionally, KAP surveys are relevant to public health awareness campaigns. “Before beginning the process of creating awareness in any given community, it is first necessary to assess the environment in which awareness creation will take place… Understanding the levels of Knowledge, Attitude and Practice will allow for a more efficient process of awareness creation as it will allow the program to be tailored more appropriately to the needs of the community.” (19) KAP surveys are also “useful tools for identifying the technological interventions which are important in an area and which are likely to create a significant impact. By analyzing the words farmers use to communicate their knowledge, attitudes, and practices in regard to specific elements of a farming system, it is possible to identify those elements which may be good, those which may need to be improved, or those which may need to be discouraged.” With this information, interventions can be more effectively designed. (20)
The Shortcomings of KAP Surveys
Data Can Be Hard to Interpret Accurately
One of the main shortcomings of KAP surveys is that it is difficult to ensure an accurate interpretation of the data, so researchers should be very cautious when interpreting results. The reliability of the data can frequently be affected by underlying contextual and cultural factors. For example, a study on the Yao women of Malawi asked women to agree or disagree with a variety of statements, and a very high proportion of the women chose the “agree” answers. “One explanation could be that there is indeed strong agreement and cultural homogeneity among the Yao women. However, when taking into account the Yao women’s socio-cultural background, which often includes little formal (Western-style) schooling and which emphasizes the value of being non-confrontational, it is also possible that the question formulation can influence attitudes towards favourable, “agreeing” answers.”(21) This example illustrates the importance of being aware of the respondents’ cultural backgrounds when interpreting their responses.
Lack of Standardized Approach to Validate Findings
Most KAP surveys rely on household surveys. It is also important to consider that social norms and pressures may bias reporting and that conducting household surveys may systematically exclude portions of the population. For example, when AIDS-related KAP surveys were conducted in rural Africa, the household surveys did not accurately reflect casual sexual activity, as prostitutes were not captured in a representative manner. (22) Moreover, in the same study, “data were found to be accurate at the aggregate level. However, accuracy of reporting at the individual level was found to be low. The gender difference in reporting of casual partners may be due to female underreporting, to not having captured prostitutes or to a different perception of the meaning of casual partnership.” (23) Therefore, it is necessary for all KAP surveys to include a validity analysis, so as to ensure the accuracy of the surveys and allow for comparison of the quality of different KAP surveys. As the researchers of that study concluded, “There is an urgent need for a standardized approach to validating the findings from AIDS-related KAP surveys.”(24)
Analyst Biases in KAP Surveys
KAP surveys have not undergone extensive methodological scrutiny relative to the number of surveys conducted and their importance for social policy. Though KAP surveys are administered in many countries, the results are almost exclusively analyzed by Western researchers. “This fact suggests that technical proficiency and therefore, the quality of data generated clearly are considered more important to the process of analysis than familiarity with culture of data origin. That given survey data may derive from cultures and languages different from that of the analyst’s own has been ignored as a potential methodological problem.”(25) This is problematic since the interpretation of KAP survey data likely varies with the analyst’s degree of familiarity with the cultures and practices in the place of data origin. A study conducted in Bangladesh focused on this issue and analyzed how interpretations of KAP survey data differed depending on the analyst’s exposure to Bangladeshi culture. The researchers found that “the Bengali analyses tended to be more directly relevant to program and policy development. The Bengali groups, in contrast to the Western groups, gave interpretations of the observed empirical relationships that dug beneath the superficial and external features of the problem (e.g. rates and facilities) to lay bare the basic causes of the problem.” (26) Thus, this study illustrates the importance of having analysts who are familiar with the culture and language of the country where the KAP surveys take place. “The findings of this study demonstrate that indigenous analysts tend to provide analyses that not only encompass the more context-free analyses provided by foreign analysts but also contain information more directly related to the culture, which provide a flavor of the context.” (27)
Other Criticisms
A main criticism of KAP surveys is that their findings generally lead to prescriptions for mass behavior modification instead of targeting interventions towards individuals. For example, a study which used KAP surveys to study the AIDS epidemic found that “these unfocused inquiries into diffuse behaviors in undifferentiated populations are not productive in low-seroprevalence populations, especially when the objective is to design interventions to avert further infection. The failure of KAP surveys to distinguish conceptually between the relevance of AIDS-related behavioral data for individuals and for populations makes them fundamentally flawed for such purposes.” (28)
Another major problem with KAP surveys is that investigators use them to explain health behavior under the assumption that there is a direct relationship between knowledge and action. (29) A study on malaria control in Vietnam found that though respondents had a surprisingly high level of knowledge and awareness regarding malaria, “the findings are of limited value because of the lack of detail about and corroboration of self-reported adherence to preventive actions and health-seeking behaviors. Anecdotal evidence suggests there are deficiencies in these important practices, but the study design did not permit us to explore these.” (30) In addition, though KAP surveys provide descriptive data about practices and knowledge, they fail to explain why and when certain treatment practices are chosen. In other words, they fail to explain the logic behind treatment-seeking practices. (31)
The Alternative
KAP surveys can be useful when the research plan is to obtain general information about public health knowledge and sociological variables. However, “if the objective is to study health-seeking knowledge, attitudes and practices in context, there are suitable ethnographic methods available, including focus group discussions, in-depth interviews, participant observation, and various participatory methods.” (32) The case for qualitative methods is corroborated by a study on malaria control in Vietnam, which found that though the KAP “survey generated useful findings, an initial, qualitative investigation (eg. observation and focus group discussions) to explore the large numbers of potential influences on behavior and exposure risk would have provided a more robust underpinning for the design of survey questions. This would have strengthened its validity and generated additional information.” (33) A study conducted by Agyepong and Manderson (1999) also confirms this notion, arguing “that truly qualitative methods, such as observation, individual semi-structured interviews, or focus group discussions, are vital foundations for exploratory investigations at the community level, and should precede and underpin population-level approaches, such as KAP surveys.”(34)
Conclusion
Surveys are critical to designing public health interventions and assessing their impact. A variety of different methodologies can be used when designing surveys: open-ended questions, partially categorized questions, and closed-ended questions. Each type of question has its own benefits and drawbacks, though partially categorized questions are considered to yield the most accurate and reliable data. KAP surveys explore respondents’ knowledge, attitudes and practices towards a particular topic. They are typically used for documenting community characteristics, knowledge, attitudes and practices that may serve to explain health risks and behaviors. Though they are very useful for obtaining general information about sociological and cultural variables, they are of limited validity if not grounded in an initial qualitative research study or survey.(35)
Footnotes
(1) “Questionnaire Design.” Accessed on 10 December 2010.
(2) “Module 10B: Design of Research Instruments; Interview Guides and Interview Skills.” Accessed on 10 December 2010.
(3) Jackson, Sherri. Research Methods: A Modular Approach. (Belmont, California: Thomson Wadsworth, 2008).
(4) “Module 10B: Design of Research Instruments; Interview Guides and Interview Skills.” Accessed on 10 December 2010.
(5) “Crafting Your Survey Questions: Open-Ended Versus Closed-Ended.”
(6) “Module 10B: Design of Research Instruments; Interview Guides and Interview Skills.” Accessed on 10 December 2010.
(7) Ibid.
(8) Jackson, Sherri. Research Methods: A Modular Approach. (Belmont, California: Thomson Wadsworth, 2008).
(9) “Questionnaire Design.” Accessed on 10 December 2010.
(10) Ibid.
(11) “Knowledge, Attitudes and Practices (KAP) Studies for Water Resources Projects.” Accessed on 10 December 2010.
(12) Launiala, A. “How much can a KAP survey tell us about people’s knowledge, attitudes and practices? Some observations from medical anthropology research on malaria in pregnancy in Malawi.” Anthropology Matters. 11.1 (2009).
(13) Ibid.
(14) “Knowledge, Attitudes and Practices (KAP) Studies for Water Resources Projects.” Accessed on 10 December 2010.
(15) Ibid.
(16) Hlongwana, K., et al. “Community knowledge, attitudes and practices (KAP) on malaria in Swaziland: a country earmarked for malaria elimination.” Malar J. 8.29 (2009).
(17) “Advocacy, communication and social mobilization for TB control. A guide to developing knowledge, attitude and practice surveys.” Accessed on 10 December 2010.
(18) “KAP Survey regarding reproductive health”. Accessed on 13 December 2010.
(19) “KAP Study Protocol.” Accessed on 13 December 2010.
(20) “Participatory survey methods for gathering information.” Accessed on 13 December 2010.
(21) Launiala, A. “How much can a KAP survey tell us about people’s knowledge, attitudes and practices? Some observations from medical anthropology research on malaria in pregnancy in Malawi.” Anthropology Matters. 11.1 (2009).
(22) Schopper, D., Doussantousse, S., Orav, J. “Sexual Behaviors Relevant to HIV Transmission in a Rural African Population. How much can a KAP survey tell us?” Soc. Sci. Med. 37.3 (1993):401-412.
(23) Ibid.
(24) Ibid.
(25) Ratcliffe, J. “Analyst Biases in KAP Surveys: A Cross-Cultural Comparison.” Studies in Family Planning.
(26) Ibid.
(27) Ibid.
(28) Smith, H. “On the limited utility of KAP-style survey data in the practical epidemiology of AIDS, with reference to the AIDS epidemic in Chile.” Health Transit Rev. 3.1 (1993):1-16.
(29) Launiala, A. “How much can a KAP survey tell us about people’s knowledge, attitudes and practices? Some observations from medical anthropology research on malaria in pregnancy in Malawi.” Anthropology Matters. 11.1 (2009).
(30) Quy Anh, N. “KAP Surveys and Malaria Control in Vietnam: Findings and Cautions about Community Research.” Southeast Asian J Trop Med Public Health. 36.3 (2005):572-577.
(31) Launiala, A. “How much can a KAP survey tell us about people’s knowledge, attitudes and practices? Some observations from medical anthropology research on malaria in pregnancy in Malawi.” Anthropology Matters. 11.1 (2009).
(32) Ibid.
(33) Quy Anh, N. “KAP Surveys and Malaria Control in Vietnam: Findings and Cautions about Community Research.” Southeast Asian J Trop Med Public Health. 36.3 (2005):572-577.
(34) Ibid.
(35) Ibid.