
Sampling for Patient Exit Interviews: Assessment of Methods Using Mathematical Derivation and Computer Simulations.

This is an open access article under the terms of the Creative Commons Attribution-Noncommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes.

Patient exit interviews--interviews at the point of patients' exit from a clinical consultation or health care facility--are an important data collection approach in health services research (Turner et al. 2001; Hrisos et al. 2009). They are commonly used to assess patients' satisfaction with the health care services received (Ejigu, Woldie, and Kifle 2013; Alonge et al. 2014; Asfaw et al. 2014; Chimbindi, Barnighausen, and Newell 2014; Sando et al. 2014; Islam et al. 2015), patients' out-of-pocket expenditures (Peabody et al. 2010; Chimbindi et al. 2015; Opwora et al. 2015), health care utilization (The Demographic and Health Surveys Program 2015; Etiaba et al. 2015), provider behavior during the clinical consultation (Stange et al. 1998; Ostroff, Li, and Shelley 2014), and patients' knowledge about their condition (Senarath et al. 2007; Anya, Hydara, and Jaiteh 2008; Israel-Ballard, Waithaka, and Greiner 2014). A number of standardized patient exit questionnaires have been developed for use by researchers, including the EUROPEP instrument (Wensing 2006), the RAND Patient Satisfaction Questionnaire (RAND Health 2015), the Patient Experiences Questionnaire (Steine, Finset, and Laerum 2001), and the patient exit questionnaires that form part of the Demographic and Health Surveys (DHS) Program's Service Provision Assessments (The Demographic and Health Surveys Program 2015). Patient exit interviews are popular, particularly in low- and middle-income countries, because it is operationally more efficient to identify patients at clinics than through population-based surveys. Exit interviews also allow researchers to collect data about patients' experiences with health care services with a minimum recall period.

If the group of eligible participants is large, such as in studies interviewing patients who have accessed a common clinical service or patients with a common condition or symptom, it will often not be operationally feasible to interview all patients of interest who are exiting a health care facility. Instead, a subset of patients is interviewed. How this subset is chosen (i.e., the sampling method) is of central importance to achieving both unbiased estimates and a sufficiently large sample size (operational efficiency).

Table 1 provides a summary of possible sampling methods for patient exit interviews. "Simple random sampling" refers to selecting patients for an interview by subjecting all eligible patients to a randomization (e.g., through a coin flip or smartphone application). "Systematic random sampling," on the other hand, uses a sampling interval (i.e., selecting every xth patient) with a random start point. We elaborate on each of these methods and outline their advantages and disadvantages in the discussion section.

In this paper, we (1) assess the frequency of use of different sampling methods for patient exit interviews; (2) evaluate each method's operational efficiency using simulation; (3) discuss each method's probability of yielding an unbiased (i.e., representative) sample; and (4) describe a novel method of sampling patients for exit interviews that is both unbiased (under one assumption) and operationally efficient.

METHODS

Literature Review

We conducted a review of studies that employed patient exit interviews as one of their data collection methods to gauge the frequency with which different sampling methods were used. We searched Medline via PubMed for studies published between May 23, 2014 and May 23, 2015 using variations of terms for patients, exit, and interview. The abstracts and full-text versions of all retrieved articles were analyzed using the following inclusion criteria: the interviews were (1) conducted with the users of a health care service; (2) administered after the health care service was used; and (3) performed at a health care facility. We excluded studies that used only self-administered questionnaires. We did not restrict our search to certain geographic regions. All search terms were in the English language.

Simulation Study

We built a simulation in the Stata 13.0 statistical package to evaluate the operational efficiency of each sampling method for patient exit interviews. A method was judged to be the most operationally efficient if it (1) maximized the percentage of all eligible participants that were interviewed; and (2) did not result in unacceptably high waiting times for patients until the next interviewer became available.

The simulation assumed that all patients seen at the facility were eligible to be interviewed, entered the consultation room in random order, spent a random length of time in the consultation room and, if selected for interview, a random length of time with the interviewer. We varied the number of consultation rooms and the number of interviewers from one to ten, and ran the simulation for each possible combination of the number of rooms and the number of interviewers. In addition, we varied the mean consultation length from 5 to 15 minutes, and the mean interview length from 10 to 40 minutes. The standard deviation varied from 18.7 to 74.8 percent of the mean consultation length, and from 8.0 to 60.0 percent of the mean interview length. We varied the threshold for an unacceptably high patient waiting time until an interviewer is available from 0 to 20 minutes; patients whose waiting time exceeded this threshold were not interviewed. Each simulation assumed that a total of 10,000 patients were seen at the facility during the data collection period. The outcomes recorded were (1) the proportion of all patients seen at the health care facility during the data collection period that were interviewed; (2) the mean number of patients interviewed per day; and (3) the percentage of patients who were not interviewed because they waited longer than the threshold time for an interviewer to become available.
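The simulation itself was implemented in Stata; for illustration, a minimal sketch of the same logic in Python is shown below. The truncated-normal length draws (floored at 30 seconds, as in the notes to Table 2), back-to-back consultations, and an 8-hour workday reflect the assumptions described in this section, while the function names, event-handling details, and default parameter values are simplifying choices made for this sketch rather than the exact implementation.

import random

def tnorm(mean, sd, minimum=0.5):
    # Truncated-normal draw in minutes, floored at 30 seconds (see the notes to Table 2).
    return max(minimum, random.gauss(mean, sd))

def gen_patients(rooms, consult_mean, consult_sd, day_minutes):
    # Return (entry, exit) times for every patient seen during one clinic day,
    # assuming each room sees patients back to back until the day ends.
    patients = []
    for _ in range(rooms):
        t = 0.0
        while True:
            entry, exit_ = t, t + tnorm(consult_mean, consult_sd)
            if exit_ > day_minutes:
                break
            patients.append((entry, exit_))
            t = exit_
    return patients

def simulate_day(rooms=2, interviewers=2, rule="next_entering", interval=3,
                 consult=(10.7, 6.7), interview=(25.0, 7.0),
                 day_minutes=480, max_wait=5.0):
    # Simulate one 8-hour clinic day; return (patients seen, patients interviewed).
    patients = gen_patients(rooms, consult[0], consult[1], day_minutes)
    free_at = [0.0] * interviewers             # time at which each interviewer is next free
    counter = random.randrange(interval)       # random start point for systematic sampling
    interviewed = 0
    key = 0 if rule == "next_entering" else 1  # select on entry times or on exit times
    for entry, exit_ in sorted(patients, key=lambda p: p[key]):
        decision_time = entry if rule == "next_entering" else exit_
        i = min(range(interviewers), key=lambda k: free_at[k])
        if rule in ("next_entering", "next_exiting"):
            if free_at[i] > decision_time:     # no interviewer is idle; patient not sampled
                continue
            start = exit_                      # the interview begins when the patient exits
        elif rule == "systematic":
            counter += 1
            if counter % interval:
                continue
            start = max(exit_, free_at[i])
            if start - exit_ > max_wait:       # selected patient waits too long and is missed
                continue
        else:
            raise ValueError(rule)
        free_at[i] = start + tnorm(interview[0], interview[1])
        interviewed += 1
    return len(patients), interviewed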

Simulating a Typical Scenario. While we ran the simulation for a variety of scenarios and assumptions, we defined one particular scenario as typical. For this typical scenario, we chose a mean consultation length of 10.7 minutes (standard deviation of 6.7 minutes), which was the mean consultation length and variance reported by an assessment of primary care consultation lengths across six countries (Deveugele et al. 2002). The interview length for this scenario was 25 minutes (standard deviation of 7 minutes), which is a typical interview length and variance of patient exit interviews we have conducted in various primary care settings in sub-Saharan Africa (Chimbindi, Barnighausen, and Newell 2014; Chimbindi et al. 2015; and several ongoing studies). Keeping track of the interval of patients to be selected with systematic random sampling requires additional time and effort by the data collection team. Because (1) our simulation does not take into account this additional cost, and (2) our hypothesis was that sampling the next patient entering the consultation room is the operationally most efficient unbiased sampling method, we set the interval for systematic random sampling at the highest possible number that would result in a percentage of patients selected for interview at least 10 percent higher than the proportion of patients interviewed with sampling the next patient entering the consultation room. The maximum acceptable patient waiting time until an interviewer is available was set to be 5 minutes.
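Using the sketch above, the typical scenario can be explored, for example, as follows; the interval of 3 for systematic random sampling and the choice of two rooms and two interviewers are arbitrary illustrations rather than the optimized values reported in Table 2.

random.seed(2018)
for rule in ("next_exiting", "next_entering", "systematic"):
    seen = done = 0
    while seen < 10000:                        # 10,000 patients per simulation run
        s, d = simulate_day(rooms=2, interviewers=2, rule=rule, interval=3)
        seen, done = seen + s, done + d
    print(f"{rule:>13}: {100 * done / seen:.1f}% of patients interviewed")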

RESULTS

Frequency of Use of Each Sampling Method

The literature search retrieved 56 records, and after removing duplicates, screening abstracts, and full-text reviews, 24 studies were included in this rapid review. Seven studies were excluded because they used self-administered questionnaires; five of these were from high-income countries. Appendix SA2 summarizes the included studies by sampling method used. All identified studies were carried out in a low- or middle-income country with the majority (16 studies) being from sub-Saharan Africa. Nine studies did not describe the sampling methodology used for the patient exit interviews. The remaining studies employed one of four sampling methods: (1) interviewing all eligible participants (seven studies); (2) systematic random sampling (four studies); (3) consecutive sampling (i.e., interviewing all eligible patients at a health care facility until a sample target is met; two studies); or (4) interviewing the next patient exiting the consultation room (one study).

Operational Efficiency of Each Sampling Method

The simulations resulted in the following ranking of sampling methods ordered by decreasing operational efficiency: (1) sampling the next patient exiting the consultation room; (2) sampling the next patient entering the consultation room; (3) systematic random sampling; and (4) simple random sampling. This order was generally consistent across all scenarios assessed in the simulation. Exceptions were scenarios in which the interview length was considerably shorter than the consultation length and/or both the interview and consultation length had a very small variance. In these settings, assuming that the selection interval is set at an (near) optimal level and if the need for additional human resources to monitor the selection interval is ignored, systematic random sampling tended to be operationally more efficient than sampling the next patient entering the consultation room. Sampling all patients and consecutive sampling resulted in unfeasibly long waiting times for interviewees except in scenarios in which the consultation length was consistently higher than the interview length and/or the ratio of the number of data collectors to consultation rooms was high.

Table 2 summarizes the results of the simulations run for the "typical scenario" described in the methods. Across the 16 consultation room-to-interviewer combinations, sampling the next patient entering the clinical consultation room resulted on average in 21.7 percent fewer patients being interviewed than with sampling the next patient exiting the consultation room. Using 5 minutes as the maximum acceptable patient waiting time until an interviewer becomes available, systematic random sampling resulted in an average of 9.0 percent of selected patients not being interviewed. This measure decreases to an average of 3.4 percent when a maximum acceptable waiting time of 20 minutes is used. The resulting missingness is not random because the probability of exceeding the maximum acceptable patient waiting time until an interviewer became available increased with decreasing time spent in the consultation room. In 11 of the 16 room-to-interviewer combinations simple random sampling resulted in a higher percentage of selected patients exceeding the maximum acceptable patient waiting time than with systematic random sampling.

DISCUSSION

With Table 1 serving as a summary, this section will briefly describe each sampling method, discuss the method's probability of yielding a representative (i.e., unbiased) sample, and elaborate on its operational efficiency using the findings of our simulations.

Interviewing All Eligible Participants and Consecutive Sampling

Consecutive sampling, as used by the studies included in this review, refers to the data collection team interviewing all eligible patients at a facility until a target sample size for the facility is reached. Thus, the approaches of consecutive sampling and interviewing all eligible participants are conceptually similar because they both interview all eligible patients (i.e., a census) during the data collection period.

Bias. This approach results in a sample that is the same as, and therefore with certainty representative of, eligible participants who attended the facility during the data collection period. Thus, the degree to which the results are representative of all patients of interest at the health care facility depends on the degree to which the data collection period is representative of the larger time frame of interest. One means of increasing the representativeness of the data collection period for this larger time frame might be to select a sample of multiple (shorter) data collection periods.

Operational Efficiency. This sampling method will result in eligible patients queuing up to be interviewed, and thus unfeasibly high patient waiting times, when the consultation length is not consistently longer than the interview length. Thus, this approach is generally only feasible in settings with low volumes of eligible patients or if the patient exit interview is very short compared to the consultation length.

Sampling the Next Patient Exiting the Consultation Room

In this sampling method, the data collector arrives at the health care facility, or returns from a previous interview, and selects the next eligible participant exiting the clinical consultation. We suspect that at least some of the studies in our review that did not state which sampling method was used, or that claimed to have sampled all eligible participants, simply selected the next patient exiting the consultation room.

Bias. Sampling the next patient exiting the consultation room results in a nonrepresentative sample. To explain the reasons for this claim, we assume that all patients fall into one of two categories: quick patients or slow patients, whereby slow patients spend more time in the clinical consultation than quick patients. If it takes a clinician, on average, M times as long to see a slow patient as compared to a quick patient, and the proportion of all patients who are quick patients is given by α, then the total treatment time T is given by

T = αNt + (1 - α)NMt, where N equals the total number of patients seen during the workday, and t equals the time required to see a quick patient. Then, the proportion of time clinicians spend seeing quick patients can be written as:

αNt / (αNt + (1 - α)NMt) = α / (α + (1 - α)M) = α / (M - α(M - 1))

If the data collector selects patients for exit interviews at a random time point (arriving in the morning, or after finishing another interview), this proportion must always be the same as the proportion of quick patients in the interview sample in order for the interview sample to be representative of the patient population. In other words, a representative sample of interview participants would require that the share of quick patients in the sample is α, that is, that α / (M - α(M - 1)) = α. This would only be possible if the clinician spent as much time with each slow patient as with each quick patient, which by assumption is not the case. For any setting in which some patients take more time (i.e., M > 1), α / (M - α(M - 1)) < α. In this situation, quick patients will always be underrepresented.
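The same result can be checked numerically. The short Python snippet below (an illustration only; α = 0.5, M = 2, and the sample sizes are arbitrary choices) draws a population of quick and slow patients and then samples patients with probability proportional to their consultation length, which is effectively what sampling the next patient exiting the consultation room does.

import random

random.seed(1)
alpha, M, t, n = 0.5, 2.0, 1.0, 200_000
durations = [t if random.random() < alpha else M * t for _ in range(n)]
alpha_hat = durations.count(t) / n

# An interviewer arriving at a random moment finds each patient with probability
# proportional to that patient's consultation length (length-biased sampling).
sampled = random.choices(durations, weights=durations, k=100_000)
observed = sum(d == t for d in sampled) / len(sampled)
predicted = alpha_hat / (M - alpha_hat * (M - 1))

print(f"share of quick patients in the population: {alpha_hat:.3f}")
print(f"share among sampled patients: observed {observed:.3f}, predicted {predicted:.3f}")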

The intuition for this result is relatively straightforward: if two patients, one slow and one quick, start at the same point in time, the probability that the slow patient will still be around when the interviewer returns from an interview (or arrives at the facility) is larger than for the quick patient. Quick patients will thus be systematically missed, and average responses systematically biased toward patients with whom the clinician spent more time. An attempt could be made to reduce this bias through sampling weights that account for consultation length. However, this would require that the consultation times are recorded either by a designated study team member (which will usually lead to reduced operational efficiency because the team member could instead conduct interviews) or by the clinical team (which will not be feasible in many cases).
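As an illustration of this weighting idea, and under the assumption that a patient's probability of selection is proportional to his or her consultation length (as the derivation above implies), each interviewed patient could be weighted by the inverse of the recorded consultation length. The field names and values in the following sketch are hypothetical.

def weighted_mean(records):
    # records: list of dicts with 'consult_minutes' and 'satisfaction' keys (hypothetical names)
    weights = [1.0 / r["consult_minutes"] for r in records]   # inverse-length sampling weights
    return sum(w * r["satisfaction"] for w, r in zip(weights, records)) / sum(weights)

interviewed = [
    {"consult_minutes": 5.0, "satisfaction": 3.0},    # quick patient
    {"consult_minutes": 20.0, "satisfaction": 4.5},   # slow (over-sampled) patient
]
print(weighted_mean(interviewed))   # down-weights the over-sampled slow patient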

Operational Efficiency. Our simulations found that this method is almost always the most operationally efficient sampling method, and it excludes the possibility of patients having to wait until an interviewer is available. It is also logistically simple to implement.

Simple Random Sampling

This sampling method was not used by any of the studies identified by our literature review. A sampling frame is usually not available for patient exit interviews as many patients may not have an appointment, and a significant portion of those patients with an appointment may not attend. Thus, a randomization device (e.g., a coin or a smartphone with a randomization application) is likely required to randomly select patients. Table 3 outlines options for selecting patients when using simple (or systematic) random sampling.
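For illustration, the randomization step itself can be as simple as the following sketch, in which each eligible patient is independently selected with a pre-specified probability; the probability of 0.2 is an arbitrary example, not a recommendation.

import random

def select_patient(selection_probability=0.2):
    # Returns True if the eligible patient should be approached for an interview.
    return random.random() < selection_probability

# Run once for every eligible patient as he or she is identified.
print(select_patient())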

Bias. With the exception of a census (i.e., sampling all eligible participants), this is the most rigorous method of sampling patients for exit interviews because it is the only approach that is entirely independent of the order in which patients wait in the waiting area, or exit the consultation room. For this method to yield an unbiased sample, all eligible patients at the health care facility need to be subjected to the random selection. If only patients who leave the consultation room while an interviewer is available are subject to randomization, the same bias will be introduced as with sampling the next patient exiting the consultation room.

Operational Efficiency. Ensuring that each eligible patient is randomized tends to add considerable operational complexity, the precise nature of which depends on the setting and who (interviewers, clinicians, or a designated study team member) randomizes patients to being interviewed (Table 3). Furthermore, this method is generally less operationally efficient than systematic random sampling and sampling the next patient entering the consultation room.

Systematic Random Sampling

In the case of systematic random sampling, the first patient to be interviewed is selected at random, and subsequently every xth patient is interviewed, whereby the interval (x) is determined prior to data collection. Table 3 outlines typical operational options for ensuring that the interval (x) is maintained.
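A minimal sketch of this selection rule is shown below; the interval of 5 is an arbitrary example, and in practice the interval would be fixed before data collection as described above.

import random

def systematic_selector(interval=5):
    # Yields True/False for successive exiting patients: the first selected patient
    # is chosen at random within the interval, and every xth patient thereafter.
    start = random.randrange(interval)
    count = 0
    while True:
        yield count % interval == start
        count += 1

selector = systematic_selector(interval=5)
flags = [next(selector) for _ in range(12)]   # call next(selector) once per exiting patient
print(flags)                                  # exactly every 5th patient is flagged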

Bias. Systematic random sampling will result in a random sample as long as the order in which patients exit the clinical consultation is random. While patterns in the order in which patients exit consultation rooms are fairly likely to exist at most facilities (e.g., patients without appointments are only seen at certain times of the day), the probability of being in the systematic random sample is the same for any one eligible patient. Thus, these patterns will only affect the representativeness of the interview sample if they occur in a periodic way throughout the data collection period, such that the pattern systematically coincides with the interval of the systematic random sample.

Operational Efficiency. Systematic random sampling requires the data collection team to monitor the interval with which patients are selected for interview. This can be accomplished in several ways, each of which has drawbacks (Table 3). Additionally, in most simulation scenarios, systematic random sampling was unable to achieve a higher operational efficiency than sampling the next patient entering the consultation room without resulting in patients having to wait until the next interviewer becomes available (Table 2). These patient waiting times are likely to compromise the representativeness of the sample, because some patients may leave the facility rather than wait for an interviewer.

It is important to bear in mind that the operational efficiency achieved with systematic random sampling in Table 2 assumes that the interval of selection is set at or near the optimal level. However, optimal interval setting is difficult to accomplish without considerable pilot testing. Ignoring the human resource needs to monitor the selection interval and assuming that the interval is set at or near the optimal level, systematic random sampling was the operationally most efficient unbiased sampling method in our simulations when the interview length was substantially shorter than the consultation length and/or the variances of both the consultation and interview lengths were considerably smaller than in the typical scenario shown in Table 2. Systematic random sampling performed poorly when the consultation and/or the interview length had a high variance.

Sampling the Next Patient Entering the Consultation Room

When the interviewer returns from an interview or arrives at the health care facility, he/she does not select the next patient exiting the consultation room, but instead selects the next patient entering the consultation room. In the case of multiple consultation rooms, the interviewer selects the next patient entering any of the consultation rooms.

Bias. We have shown mathematically that patients with longer consultation lengths are more likely to be interviewed when sampling the next patient exiting the consultation room (see the section entitled "Sampling the Next Patient Exiting the Consultation Room"). This bias is eliminated if interviewers do not select the next patient exiting, but rather wait for the next patient entering the consultation room. It is important to note that this sampling method is only unbiased under the assumption that the interviewer's completion time for the previous interview (or arrival time at the facility) is random with respect to the characteristics of the next patient who will enter the consultation room. This will be the case if the order in which patients enter the consultation room is unrelated to the length of time patients spend with the clinician and the interviewer.

In high volume settings where patients exit the consultation room at fairly regular intervals, sampling the next patient entering the consultation room will, in practice, be similar to systematic random sampling with the sampling interval being determined by both the interview and the consultation length. A disadvantage of sampling the next patient entering the consultation room compared to systematic random sampling is that researchers employing the latter method have somewhat more control over their sample size (by adjusting the sampling interval). This can sometimes be leveraged to create a self-weighting sample, such as when sampling the same number of patients from facilities that were chosen with probability proportional to size. In contrast, researchers employing the method of sampling the next patient entering the consultation room will tend to sample more patients at busier facilities (or those with comparatively shorter consultation lengths) and may therefore need to weight their observations after data collection is completed.
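One simple way to construct such post-hoc weights, assuming the facility-level patient volumes during the data collection period are known, is to let each interview represent the number of patients seen at that facility divided by the number interviewed there. The facility names and counts below are hypothetical.

facilities = {
    # facility: (patients seen during the data collection period, patients interviewed)
    "clinic_A": (800, 60),
    "clinic_B": (200, 45),
}
weights = {name: seen / done for name, (seen, done) in facilities.items()}
print(weights)   # each clinic_A interview represents more patients than a clinic_B interview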

Operational Efficiency. Our simulations demonstrate that sampling the next patient entering the consultation room is, in the majority of scenarios, a more operationally efficient method than systematic and simple random sampling. Important additional advantages of this sampling approach over systematic and simple random sampling are as follows: (1) it can be easily implemented in any setting without pilot testing; (2) it is simple to implement for data collection and clinical teams; (3) it eliminates the possibility of burdening patients with a waiting time until an interviewer is available; and (4) it does not require any time or effort on the part of the clinical team. While sampling the next patient entering the consultation room is operationally less efficient than sampling the next patient exiting the consultation room, this loss of operational efficiency is relatively minor. For instance, across the scenarios shown in Table 2, the mean percentage of patients interviewed is 46.6 percent with sampling the next patient exiting the consultation room, and 39.2 percent with sampling the next patient entering the consultation room. Similarly, across the scenarios, the mean number of patients interviewed per data collection day is 59.6 versus 46.5 for sampling the next patient exiting and the next patient entering the consultation room, respectively.

CONCLUSIONS

We have proposed a new, simple sampling method for patient exit interviews (sampling the next patient entering the consultation room) and demonstrated the relative advantages of this approach for typical primary health care settings. We show that sampling the next patient entering the consultation room tends to be the most operationally efficient unbiased sampling method as long as one assumption is met: the order in which patients are seen by the clinician is random with respect to the time spent in the consultation room and with the interviewer.

Our analysis and simulation results also allow for the following additional conclusions. First, sampling the next patient exiting the consultation room should only be used if either of the following two conditions is met: (1) the researcher is not concerned about having a sample in which patients who spent a longer time in the consultation room are overrepresented, or (2) it is feasible to time consultation lengths so that observations can be weighted. Second, a number of assumptions have to be met for systematic random sampling to be unbiased and more operationally efficient than sampling the next patient entering the consultation room: (1) there is no periodicity in the order in which patients enter the consultation room; (2) the interview length is considerably shorter than the consultation length, or both the interview and consultation lengths do not differ significantly between patients; (3) the sampling interval is set at or near the optimal level; and (4) the researchers find a reliable way to monitor the sampling interval without reducing the number of available interviewers. Lastly, simple random sampling (i.e., using a randomization device to randomize all eligible patients to being interviewed or not) is the only sampling method that will always yield an unbiased sample without any additional assumptions.

ACKNOWLEDGMENTS

Joint Acknowledgment/Disclosure Statement: The authors gratefully acknowledge financial support from the Wellcome Trust, the NIH (NICHD R01-HD084233, NIAID R01-AI124389, R01-AI112339, NIA P01-AG041710), the International Initiative for Impact Evaluation (3ie), and the Clinton Health Access Initiative (CHAI).

Disclosures: None.

Disclaimers: None.

REFERENCES

Alonge, O., S. Gupta, C. Engineer, A. S. Salehi, and D. H. Peters. 2014. "Assessing the Pro-Poor Effect of Different Contracting Schemes for Health Services on Health Facilities in Rural Afghanistan." Health Policy Plan 30 (10): 1229-42.

Anya, S. E., A. Hydara, and L. E. Jaiteh. 2008. "Antenatal Care in The Gambia: Missed Opportunity for Information, Education and Communication." BMC Pregnancy Childbirth 8: 9.

Asfaw, E., S. Dominis, J. G. Palen, W. Wong, A. Bekele, A. Kebede, and B. Johns. 2014. "Patient Satisfaction with Task Shifting of Antiretroviral Services in Ethiopia: Implications for Universal Health Coverage." Health Policy Plan 29 (Suppl 2): ii50-8.

Chimbindi, N., T. Barnighausen, and M. L. Newell. 2014. "Patient Satisfaction with HIV and TB Treatment in a Public Programme in Rural KwaZulu-Natal: Evidence from Patient-Exit Interviews." BMC Health Services Research 14: 32.

Chimbindi, N., J. Bor, M. L. Newell, F. Tanser, R. Baltusen, J. Hontelez, S. de Vlas, M. Lurie, D. Pillay, and T. Barnighausen. 2015. "Time and Money: The True Costs of Health Care Utilization for Patients Receiving 'Free' HIV/TB Care and Treatment in Rural KwaZulu-Natal." Journal of Acquired Immune Deficiency Syndromes 70 (2): e52-60.

The Demographic and Health Surveys Program. 2015. "SPA Overview" [accessed on June 29, 2015]. Available at http://dhsprogram.com/What-We-Do/SurveyTypes/SPA.cfm

Deveugele, M., A. Derese, A. van den Brink-Muinen, J. Bensing, and J. De Maeseneer. 2002. "Consultation Length in General Practice: Cross Sectional Study in Six European Countries." British Medical Journal 325 (7362): 472.

Ejigu, T., M. Woldie, and Y. Kifle. 2013. "Quality of Antenatal Care Services at Public Health Facilities of Bahir-Dar Special Zone, Northwest Ethiopia." BMC Health Services Research 13: 443.

Etiaba, E., O. Onwujekwe, B. Uzochukwu, and A. Adjagba. 2015. "Investigating Payment Coping Mechanisms Used for the Treatment of Uncomplicated Malaria to Different Socio-Economic Groups in Nigeria." African Health Sciences 15 (1): 42-8.

Hrisos, S., M. P. Eccles, J. J. Francis, H. O. Dickinson, E. F. Kaner, F. Beyer, and M. Johnston. 2009. "Are There Valid Proxy Measures of Clinical Behaviour? A Systematic Review." Implementation Science 4: 37.

Islam, E., A. Rahman, A. Halim, C. Eriksson, F. Rahman, and K. Dalai. 2015. "Perceptions of Health Care Providers and Patients on Quality of Care in Maternal and Neonatal Health in Fourteen Bangladesh Government Healthcare Facilities: A Mixed-Method Study." BMC Health Services Research 15: 237.

Israel-Ballard, K., M. Waithaka, and T. Greiner. 2014. "Infant Feeding Counselling of HIV-infected Women in Two Areas in Kenya in 2008." International Journal of STD and AIDS 25 (13): 921-8.

Opwora, A., E. Waweru, M. Toda, A. Noor, T. Edwards, G. Fegan, S. Molyneux, and C. Goodman. 2015. "Implementation of Patient Charges at Primary Care Facilities in Kenya: Implications of Low Adherence to User Fee Policy for Users and Facility Revenue." Health Policy Plan 30 (4): 508-17.

Ostroff, J. S., Y. Li, and D. R. Shelley. 2014. "Dentists United to Extinguish Tobacco (DUET): A Study Protocol for a Cluster Randomized, Controlled Trial for Enhancing Implementation of Clinical Practice Guidelines for Treating Tobacco Dependence in Dental Care Settings." Implementation Science 9: 25.

Peabody, J. W., J. Florentino, R. Shimkhada, O. Solon, and S. Quimbo. 2010. "Quality Variation and Its Impact on Costs and Satisfaction: Evidence from the QIDS Study." Medical Care 48 (1): 25-30.

RAND Health. 2015. "Patient Satisfaction Questionnaire from RAND Health" [accessed on August 19, 2015]. Available at http://www.rand.org/health/surveys_tools/psq.html

Sando, D., P. Geldsetzer, L. Magesa, I. Andrew, L. M.-S. Machumi, N. Mary, D. M. Li, E. Spiegelman, H. Siril, P. Mujinja, H. Naburi, G. Chalamilla, C. Kilewo, E. Anna-Mia, W. W. Fawzi, and T. Barnighausen. 2014. "Evaluation of a Community Health Worker Intervention and the World Health Organization's Option B versus Option A to Improve Antenatal Care and PMTCT Outcomes in Dar es Salaam, Tanzania: Study Protocol for a Cluster-Randomized Controlled Health Systems Implementation Trial." Trials 15: 359.

Senarath, U., D. N. Fernando, G. Vimpani, and I. Rodrigo. 2007. "Factors Associated with Maternal Knowledge of Newborn Care among Hospital-Delivered Mothers in Sri Lanka." Transactions of the Royal Society of Tropical Medicine and Hygiene 101 (8): 823-30.

Stange, K. C., S. J. Zyzanski, T. F. Smith, R. Kelly, D. M. Langa, S. A. Flocke, and C. R. Jaen. 1998. "How Valid Are Medical Records and Patient Questionnaires for Physician Profiling and Health Services Research? A Comparison with Direct Observation of Patients Visits." Medical Care 36 (6): 851-67.

Steine, S., A. Finset, and E. Laerum. 2001. "A New, Brief Questionnaire (PEQ) Developed in Primary Health Care for Measuring Patients' Experience of Interaction, Emotion and Consultation Outcome." Family Practice 18 (4): 410-8.

Turner, A. G., G. Angeles, A. O. Tsui, M. Wilkinson, and R. Magnani. 2001. Sampling Manual for Facility Surveys. MEASURE Evaluation Manual Series. Chapel Hill, NC: MEASURE Evaluation, Carolina Population Center, University of North Carolina at Chapel Hill.

Wensing, M. 2006. EUROPEP 2006: Revised Europep Instrument and User Manual. Nijmegen: Centre for Quality of Care Research.

SUPPORTING INFORMATION

Additional supporting information may be found in the online version of this article:

Appendix SA1: Author Matrix.

Appendix SA2: Study Characteristics and Sampling Methodology Used.

Address correspondence to Pascal Geldsetzer, M.B.Ch.B., Department of Global Health and Population, Harvard T.H. Chan School of Public Health, 665 Huntington Avenue, Boston, MA 02115; e-mail: pgeldsetzer@mail.harvard.edu. Gunther Fink, Ph.D., Maria Vaikath, M.Sc., and Till Barnighausen, M.D., Sc.D., are with the Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Boston, MA. Till Barnighausen is also with the Institute of Public Health, Heidelberg University, Heidelberg, Germany; and Africa Health Research Institute, KwaZulu-Natal, South Africa.

DOI: 10.1111/1475-6773.12611
Table 1: Summary of Sampling Methods for Patient Exit Interviews

Sampling Method         Description of Method        Bias

(1) Sampling all        All eligible patients        Unbiased
eligible patients       exiting the consultation
                        room are interviewed.
(2) Sampling the next   After returning from         Biased (*)
patient exiting the     an interview, the
consultation room       data collector selects
                        the next patient exiting
                        the consultation room.
(3) Simple random       All eligible patients are    Unbiased
sampling                randomized to being          ([double dagger])
                        interviewed or not.
                        ([dagger])
(4) Systematic random   Every xth patient            Unbiased in the
sampling                exiting the consultation     absence of a
                        room is selected for         cyclical pattern in
                        interview.                   the order in which
                                                     patients exit the
                                                     consultation room
(5) Sampling the next   After returning from         Unbiased
patient entering the    an interview, the            ([paragraph])
consultation room       data collector selects
                        the next patient entering
                        the consultation room.

Sampling Method         Operational Efficiency

(1) Sampling all        Generally only feasible if the ratio of data
eligible patients       collectors to eligible patients is high and/or
                        the interview is consistently shorter than the
                        consultation length
(2) Sampling the next   Maximum operational efficiency
patient exiting the
consultation room
(3) Simple random       Logistically complex to implement; generally
sampling                less operationally efficient than methods (2),
                        (4), and (5)
(4) Systematic random   Difficult to set a feasible interval ([section])
sampling                without pilot testing; generally less
                        operationally efficient than (2) and (5)
(5) Sampling the next   Generally more efficient than (3) and (4);
patient entering the    somewhat less efficient than (2)
consultation room

(*) The probability of selection into the sample is inversely related
to the time spent in the consultation room.
([dagger]) Either a research team member or the clinician(s) selects
eligible patients for interview using a randomization device, such as a
coin flip or a smartphone application.
([double dagger]) For this method to be unbiased, all eligible patients
at the facility must be subjected to the randomization.
([section]) "Interval" refers to x when every .xth patient is selected
for interview.
([paragraph]) This method is unbiased as long as the order in which
patients enter the consultation room is random with respect to both the
time spent in the consultation room and the time spent with the
interviewer.

Table 2: Simulation Results--The Operational Efficiency of Possible
Sampling Methods for Patient Exit Interviews (*)

                            Sampling the Next Patient
                            Exiting the Consultation
                            Room

Consultation  Interviewers  % of All      No. of             % of All
Rooms                       Patients      Patients           Patients
([dagger])                  Interviewed   Interviewed        Interviewed
                                          per Day
                                          ([double dagger])
                                          Mean (SD)

 1             1             33.6         14.9 (1.1)          25.0
 1             2             62.3         27.7 (1.9)          48.7
 1             5             98.7         43.3 (3.9)          94.8
 1            10            100.0         44.1 (4.1)         100.11
 2             1             18.8         16.2 (1.3)          13.6
 2             2             36.3         31.8 (2.4)          27.0
 2             5             81.0         71.1 (4.4)          64.0
 2            10            100.0         88.5 (6.4)          98.6
 5             1              8.2         17.7 (1.4)           6.0
 5             2             16.2         34.4 (4.8)          11.6
 5             5             39.4         85.6 (6.5)          28.5
 5            10             74.7        162.4 (11.4)         55.9
10             1              4.3         18.5 (1.5)           3.2
10             2              8.8         36.2 (5.3)           6.1
10             5             21.1         87.9 (16.4)         15.3
10            10             41.6        173.2 (33.2)         29.9

              Sampling the Next  Systematic Random
              Patient Entering   Sampling ([section])
              the Consultation
              Room
Consultation  No. of             Interval       % of All
Rooms         Patients           ([paragraph])  Patients
([dagger])    Interviewed                       Interviewed
              per Day
              ([double dagger])
              Mean (SD)

 1             11.1 (0.9)         3              28.2
 1             21.4 (1.4)         1              69.9
 1             42.1 (3.2)         1              99.6
 1             44.4 (3.8)         1             100.0
 2             11.9 (1.0)         6              14.7
 2             23.5 (1.5)         3              30.1
 2             55.6 (4.6)         1              87.1
 2             85.7 (7.6)         1             100.0
 5             12.7 (1.6)        15               6.2
 5             25.2 (1.8)         7              13.0
 5             61.9 (5.0)         3              32.5
 5            121.4 (7.6)         1              81.4
10             13.2 (2.8)        28               3.2
10             25.4 (4.7)        14               6.7
10             63.8 (8.7)         5              18.4
10            124.5 (25.1)        3              33.1

              Systematic Random                   Simple Random Sampling
              Sampling ([section])                ([section],**)

Consultation  No. of              % of Selected   % of All
Rooms         Patients            Patients        Patients
([dagger])    Interviewed         ([section])     Interviewed
              per Day             Missed
              ([double dagger])
              Mean (SD)

 1             12.4 (1.3)         15.5             18.3
 1             30.9 (1.7)         30.1             42.2
 1             44.1 (4.3)          0.4             94.2
 1             43.9 (4.8)          0.0            100.0
 2             13.0 (1.2)         12.1              9.3
 2             26.7 (1.8)          8.8             22.2
 2             76.4 (4.1)         13.0             62.3
 2             86.9 (9.6)          0.0             98.6
 5             13.4 (1.8)          7.7              3.9
 5             27.6 (4.6)          9.2              9.2
 5             69.1 (10.9)         2.5             26.8
 5            173.2 (26.5)        18.6             56.0
10             14.1 (1.5)          9.2              2.1
10             27.7 (5.5)          7.0              4.6
10             76.6 (16.5)         8.3             13.6
10            137.8 (31.9)         0.8             28.0

              Simple Random Sampling ([section],**)
Consultation  No. of              % of
Rooms         Patients            Selected
([dagger])    Interviewed         Patients
              per Day             ([section])
              ([double dagger])   Missed
              Mean (SD)

 1              8.0 (1.7)         27.6
 1             18.5 (2.9)         12.1
 1             41.5 (4.0)          0.5
 1             44.2 (4.1)          0.0
 2              8.0 (1.9)         31.2
 2             19.1 (3.7)         17.5
 2             54.7 (5.6)          3.1
 2             85.7 (9.4)          0.1
 5              8.4 (2.1)         32.8
 5             19.6 (4.4)         20.5
 5             57.0 (10.8)         6.1
 5            119.1 (21.2)         1.2
10              8.7 (2.5)         31.8
10             19.0 (5.6)         21.8
10             59.1 (6.1)          9.0
10            120.3 (28.2)         2.6

(*) The simulations were run for a total of 10,000 patients being seen
at the health care facility, a mean consultation length of 10.7 minutes
(SD: 6.7 minutes), and a mean interview length of 25.0 minutes (SD: 7.0
minutes). The minimum consultation and interview lengths are 30 seconds.
([dagger]) This is the number of rooms in which patients are being seen.
([double dagger]) The simulation assumes a workday of 8 hours without
breaks.
([section]) The simulation assumes that the maximum acceptable time for
participants to wait until an interviewer becomes available is 5
minutes. If this waiting time is exceeded, the patient will have been
missed by the interviewer(s).
([paragraph]) This is the interval set for the systematic random
sampling (e.g., an interval of three signifies that every third patient
is selected for interview). Where possible, the interval was set at the
highest number needed to achieve at least a 10% higher proportion of
patients interviewed than with sampling the next patient entering the
consultation room (assuming no selected patients are missed).
(**) The probability of selecting a given patient for an interview was
set at the probability of all patients interviewed with sampling the
next patient entering the consultation room.
% = Percentage; No. = number; SD = standard deviation.

Table 3: Typical Options for Selecting Patients When Using Simple or
Systematic Random Sampling

Who Selects         When Are                Advantages
Patients?           Patients Selected?

Interviewer         Prior to consultation   All study team members can
                    (in waiting area)       conduct interviews (*)
                                            Does not place burden of
                                            patient selection on the
                                            clinical team
Clinician           During the              All study team members can
                    consultation            conduct interviews (*)
                                            May increase clinical team's
                                            interest in the study
Designated          At exit from the        Third person to monitor
study team          consultation            adherence to patient
member                                      selection ([section])
([double dagger])                           Does not place the burden
                                            of patient selection on the
                                            clinical team

Who Selects         When Are                Disadvantages
Patients?           Patients Selected?

Interviewer         Prior to consultation   Biased if seating order in
                    (in waiting area)       the waiting area is not
                                            random
                                            Possibly biased if
                                            interviewer fails to keep
                                            track of the patient flow
                                            through the waiting area
                                            Unethical if enquiring about
                                            eligibility criteria in the
                                            waiting area violates
                                            patient confidentiality
Clinician           During the              Biased if clinician fails to
                    consultation            reliably conduct the
                                            randomization or to adhere
                                            to the sampling interval
                                            ([dagger])
                                            Requires buy-in from
                                            clinical team
Designated          At exit from the        Loss of operational
study team          consultation            efficiency because the study
member                                      team member selecting
([double dagger])                           patients could be conducting
                                            interviews instead

(*) All study team members can both select and interview patients.
([dagger]) A clinician may forget to randomize or fail to correctly
execute the randomization process.
([double dagger]) Necessary because the interviewer would miss patients
leaving the consultation room while he/she is conducting interviews.
([section]) The presence of a third person responsible for selecting
patients may make it more difficult for the interviewer to skip certain
patients (e.g., because they are perceived to be difficult
interviewees).
