A promising method for identifying cross-cultural differences in patient perspective: the use of Internet-based focus groups for content validation of new Patient Reported Outcome assessments

Abstract

Objectives

This proof of concept (POC) study was designed to evaluate the use of an Internet-based bulletin board technology to aid parallel cross-cultural development of thematic content for a new set of patient-reported outcome measures (PROs).

Methods

The POC study, conducted in Germany and the United States, utilized Internet Focus Groups (IFGs) to assure the validity of new PRO items across the two cultures – all items were designed to assess the impact of excess facial oil on individuals' lives. The on-line IFG activities were modeled after traditional face-to-face focus groups and organized around a common 'Topic Guide' designed with input from thought leaders in dermatology and health outcomes research. The two sets of IFGs were professionally moderated in the native language of each country. IFG moderators coded the thematic content of transcripts, and a frequency analysis of code endorsement was used to identify areas of content similarity and difference between the two countries. Based on this information, draft PRO items were designed, and a majority (80%) of the original participants returned to rate the relative importance of the newly designed questions.

Findings

The use of parallel cross-cultural content analysis of IFG transcripts permitted identification of the major content themes in each country as well as exploration of the possible reasons for any observed differences between the countries. Results from coded frequency counts and transcript reviews informed the design and wording of the test questions for the future PRO instrument(s). Subsequent ratings of item importance also deepened our understanding of potential areas of cross-cultural difference, differences that would be explored over the course of future validation studies involving these PROs.

Conclusion

The use of IFGs for cross-cultural content development received positive reviews from participants and was found to be both cost and time effective. The novel thematic coding methodology provided an empirical platform on which to develop culturally sensitive questionnaire content using the natural language of participants. Overall, the IFG responses and thematic analyses provided a thorough evaluation of similarities and differences in cross-cultural themes, which in turn acted as a sound base for the development of new PRO questionnaires.

Article overview

We begin this article with two brief literature reviews: one identifying how Internet focus groups (IFGs) have been used in health and social science research, and a second examining current approaches to cross-cultural validation of PROs. Based on these growing bodies of knowledge, there appeared to be compelling reasons to extend IFG-based methods to assist with the cross-cultural adaptation of new patient-reported outcome measures. As a result, a proof of concept (POC) study was specifically designed to assess the usefulness of IFG-based inquiry for detecting and exploring thematic differences across linguistically and culturally different populations. This POC study was conducted in Germany and the United States and involved persons experiencing problems with oily skin of the face and scalp.

More specifically, the qualitative IFG methods involved the thematic coding of multi-lingual transcripts, which in turn provided comparative thematic data between countries; these results were used to adapt the content of candidate items for a series of new PRO measures. The moderators' implementation of coding and thematic analysis activities involved a significant change in their traditional roles and required their more formal involvement as members of the PRO design team. Greater use of moderators in PRO content development activities is a good use of expertise, given their deep immersion in, and understanding of, the concerns and cultural perspectives expressed by participants.

Review 1: Internet focus groups – a new technology

The use of Internet technologies as a medium for social 'dialogue' has become tremendously popular over the last decade. The transformation of text-based bulletin-board services into multimedia 'blogs' and virtual community networks has led to a proliferation of both formal and informal discussion groups addressing almost any topic imaginable. A specialized form of virtual interest group, the Internet Focus Group (IFG), also known in the US as the bulletin board focus group, is used for consumer research [1]. IFGs first appeared in the late 1990s and have since been used by educators, clinicians, researchers and marketing specialists to research stakeholder values [2], explore cross-cultural differences [3], and provide supportive and educational on-line environments [4, 5]. Within healthcare delivery research, IFGs have also been used to better understand patients' perspectives on, and knowledge of, their disease conditions and medical treatments [1]. This growth has given rise to various research organizations specializing in the use of virtual methodologies (see, for example: [6–9]).

Despite some sampling concerns associated with the use of IFG technology among less affluent or older persons, the use of IFGs as a marketing and research tool continues to grow. This is likely due to a number of practical reasons, three of the most important being: 1) the ability to overcome geographical and physical restrictions to participation; 2) the ease and speed of participant engagement, facilitation and surveying; and 3) the automated management of resulting transcripts and survey data [4]. Demonstration that virtual methods provide qualitative results equivalent to those of both traditional face-to-face and telephone methodologies has also furthered the use of IFGs in mainstream research [10, 11]. Moreover, the quality of results from IFGs may be greater than that of face-to-face methods when addressing topics of a sensitive nature, and participants often report feeling freer to provide candid responses (with less social desirability bias) than would be the case in face-to-face settings [12–15]. Table 1 presents a more detailed summary of the potential advantages and some limitations of IFG use.

Table 1 Benefits and Limitations of Internet based Focus Groups

Review 2: Cross-cultural validation of patient reported outcomes

Borrowing psychometric methods developed in psychology, Outcomes Research (OR) scientists develop reliable and valid measures to assess the impact of clinical conditions and medical interventions from the patient's perspective. Early in the design phase of new Patient Reported Outcome (PRO) measures, patients are involved in content validation activities to identify meaningful themes and dimensions of future measurement. Typically, patient focus groups or interviews help assure that: 1) the content of new measures adequately covers concerns and issues which are important to patients/consumers; 2) the wording of new questions and instructions is based in the natural language and phraseology of respondents; and 3) the instructions, item pool, and response options are understandable and acceptable to the persons who will be completing the surveys.

Over the years, the essential process of content validation has been included as a central topic in various PRO guidance documents authored by PRO outcomes working groups and drug regulatory agencies [16–27]. More recently, an additional set of recommendations regarding PRO content was made by the membership of the 1999 Health Outcomes Methodology Symposium: "...that measurement tools be... more culturally appropriate for diverse populations and more conceptually and psychometrically equivalent across such groups" [28]. In response to such calls for cultural sensitivity and relevance, instrument developers have begun to address cultural content issues when designing new patient-reported measures; examples include epidemiological surveys [29], clinical assessment and screening tools [30, 31], and community health surveys [32].

Various methods have been tried to reduce the cultural content bias of PROs. By far the most common is to follow rigorous procedures to adapt an instrument designed in one culture for use in other cultural contexts. Guidelines for such cross-cultural adaptation activities are well defined (see IQOLA and ERIQA guidelines [33, 34]) and rely on a rigorous forward and backward translation methodology [35, 36], followed by the use of psychometric replication (or bridging) studies to examine the internal and external validity of the 'adapted' translation in the target culture [37]. A much less frequently used approach involves the use of thematic review and harmonization of content between focus groups conducted concurrently in different cultures, a method known as parallel cross-cultural PRO content validation [38]. This approach has been tried by relatively few instrument developers [39–41], largely due to the time and budgetary resources associated with the initial stages of questionnaire design.

Unfortunately, re-validation of content coverage in the target culture is rarely included in the cultural adaptation of PRO measures. While biological and clinically assessed indicators are often considered more universal in nature, the manifestation and impact of disease and disability on the lives of individuals are typically culturally bound. Nevertheless, an implicit assumption is often made that the original thematic content and scale dimensions are equally relevant across all cultures. As a result, various academics have argued that culturally unique content may be missed during the adaptation process, and that input from patients in different target cultures is necessary to design instruments with adequate coverage of unique cultural meaning [36, 42]. The failure to assess the cultural limitations of existing item content can result in culturally adapted measures that have poor 'ecological validity' (i.e., the measure is ill suited to the context) and that do not address culturally-specific concerns [43–45].

When cultural differences in content or content relevance are identified after the fact, there are several approaches for handling such discrepancies. Some instrument developers have chosen to use only those items which are relevant across all cultural contexts, so that the re-validated measure is intended to possess a universal scale structure. An example of such an approach was taken during recent revisions to the Women's Health Questionnaire (WHQ), where developers made a decision to remove items that exhibited signs of cultural specificity [46]. Another approach is to use more general wording for items, which removes references to culturally specific content and allows individuals greater latitude when interpreting which situations the questions refer to [47, 48]. The EQ-5D is a well-known example of a PRO that uses general summary items to assure perceived relevance across cultures and across illness conditions [49]. Another, rarely used, solution is to allow the specific item content to vary in each culture [31]. This approach requires significant content redevelopment activities for each country in which the PRO is applied. Table 2 presents an overview of the various ways instrument designers help ensure the cross-cultural validity of PRO content.

Table 2 Cross-cultural content development solutions used during PRO development

Internet Focus Group technologies may provide a way to address long-standing concerns about PRO content development based on geographically and culturally limited sampling. A major advantage of IFGs over traditional face-to-face focus groups is that they extend the researcher's ability to span geographical barriers within the constraints of limited project resources. Moreover, they may provide a way to use a set of standardized procedures and tools for cross-cultural harmonization of content during early PRO development. As yet, however, the usefulness of IFGs for cross-cultural work has not been systematically evaluated.

Proof of concept study: IFGs and cross-cultural PRO content development

This POC study was part of a larger project to develop and validate a new set of PROs that assess the symptomatic impact of oily skin on the face (and scalp) among patients in the US and Germany. The concepts we sought to demonstrate were that IFG methods can be used to identify differences in thematic content between countries and that such inquiry can lead to a better understanding of the various reasons for those differences. It was anticipated that prior knowledge of thematic differences could be fruitfully applied during the cross-cultural development of new PROs. Figure 1 presents a diagrammatic overview of the major activities occurring over the course of the POC study.

Figure 1

A flow diagram of the stages of the IFG cross-cultural content validation process.

Recruitment of participants

US and German IFG participants were recruited using standard methods, namely, from patient/consumer databases of individuals willing to take part in market research. These databases are maintained by market research companies specifically for such purposes. Some additional participants were recruited by asking database referrals to suggest others they knew with similar problems (oily skin). In the US, a small number of participants (n = 4) were recruited from prior face-to-face focus groups addressing patients' concerns and experiences with oily skin.

Potential recruits between the ages of 18 and 65 years were screened by telephone using a Recruiting Questionnaire (i.e., the Screener) and those who met the following criteria were invited to participate:

  1. All participants were required to:

    • Perceive portions of their face or their scalp to be oily

    • Experience their oily skin/scalp as bothersome

    • Actively and regularly attempt to control the level of facial/scalp oiliness

  2. A proportion of each sample also included individuals who:

    • Had mild or moderate acne

    • Had seen a dermatologist in the past 2 years for their acne

    • Had an oily scalp and were also balding (males only)

    • Represented Asian, Black, Latino/Hispanic, and White/Caucasian peoples

    • Represented various regions of the country (US only)

IFG methods and thematic analysis

The current consumer-based POC study used an on-line IFG application called FocusForums™ to explore how individuals with oily skin characterize and evaluate both the symptoms and impact of their condition on their daily lives. This IFG application contains a number of functions to assist with development and refinement of content for the new PRO item pool (see Table 3).

Table 3 IFG functions and their use during PRO content development

A Topic Guide was developed to flexibly guide the lines of inquiry within the IFGs. This guide was based on a conceptual model arising from a literature review and input from dermatology thought leaders. Over the course of four days, focus group members participated on-line for approximately 45 minutes each day – during which they provided written responses to questions contained in the Topic Guide, follow-up probes from moderators, and the comments of other participants. The thematic content of these responses (i.e., the transcripts) was independently coded by the US and German moderators using a draft Thematic Coding Schedule. When a response did not seem to fit any of the existing coding categories, the moderator created a new coding category to categorize and tag the new thematic content. The primary purpose of this modifiable Coding Schedule was to identify content differences between the sets of IFGs conducted in the two countries. Once content differences were identified, reasons for these differences could be explored, some of which might be attributable to the effects of culture.

Table 4 presents a truncated example of the frequency counts of the number of unique individuals who made comments in each of the thematic coding categories. Great skill and patience were required of the moderators to read and code the large number of responses (over 770 US and 1,040 German responses); each response often contained a number of subtly inter-related themes, and in such cases multiple codes were applied. The involvement of moderators in this coding task was a significant alteration of their usual qualitative activities.

Table 4 Frequency counts of unique respondents making comments in various coding categories related to the daily management of skin oiliness*
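To illustrate the kind of tally presented in Table 4, the short sketch below counts the number of unique respondents endorsing each thematic code, by country. The respondent identifiers, code names, and data layout are hypothetical; they are not drawn from the study's coding schedule or from the FocusForums application.

from collections import defaultdict

# Hypothetical coded-response records: (respondent id, country, set of codes applied).
# Names and codes are illustrative only; they do not reflect the study's actual data.
coded_responses = [
    ("US-01", "US", {"blotting_dry", "makeup_reapply"}),
    ("US-02", "US", {"wash_soap_water"}),
    ("DE-01", "DE", {"blotting_wet", "facial_powder"}),
    ("DE-02", "DE", {"facial_powder"}),
]

def unique_endorsers(records):
    """Count the number of unique respondents endorsing each code, by country."""
    seen = defaultdict(set)  # (country, code) -> set of respondent ids
    for respondent, country, codes in records:
        for code in codes:
            seen[(country, code)].add(respondent)
    return {key: len(ids) for key, ids in seen.items()}

counts = unique_endorsers(coded_responses)
for (country, code), n in sorted(counts.items()):
    print(f"{country}  {code:<18} {n}")

Because the tally counts unique respondents rather than individual comments, a participant who mentions the same theme on several days is counted only once per code.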

As indicated by the '**' coding categories in Table 4, some thematic codes were applied more frequently in one of the two countries. These differences were discussed during teleconferences between the IFG moderators and the PRO development team. The moderators, drawing on their first-hand experience within the IFG sessions, led the discussion of how such differences in thematic endorsement might be explained. Table 5 presents the possible reasons for observed differences in the coding frequencies between the two countries and the questions that needed to be addressed in order to evaluate each of these reasons.

Table 5 Potential reasons for observed differences in the numbers of people endorsing a particular theme

Sample selection

Differences in sample characteristics of the focus groups could have led to differences in how the participants elaborated and explored topical issues. In turn, such differences could have affected how responses were ultimately coded. Although a standardized recruitment screener was used to help assure that the composition of IFG membership was consistent across countries, some sampling differences may have been culturally unavoidable. For example, in this study, the samples of US and German IFGs differed on their medical treatment histories. IFG participants in Germany reported more medical consultations for their condition than those in the US. This may have been due to differences in access to and use of health service delivery systems in the two countries or to differences in the severity of the condition itself.

Session dynamics

During cross-cultural harmonization discussions, it was determined that some differences in coding frequency arose from variation in the number and types of probing questions used by the IFG moderators. While the moderators used the same Topic Guide to facilitate the IFGs, they used additional probes to develop a more comprehensive understanding of certain issues and behaviors. The practice of spontaneous probing is wholly consistent with qualitative research methodologies [50]. These probing questions were not prearranged, but rather emanated from the unique dynamics and flow of discussion within a particular IFG. In response to supplemental questioning, IFG members likely made additional comments, and because these probes were not applied equivalently across groups and countries, the frequencies of certain thematic categories were unequally represented. An example of differential probe use can be seen in the Distress/Interruption sub-section of Table 5, where US and German coding frequencies differed on "preoccupation with appearance". Such differences should not automatically be assumed to represent a true cultural difference.

Transcript coding

Other differences in content frequencies may have been due to how moderators decided to code participants' responses. Decisions about how to classify a particular response were not always clear-cut and were based on coder interpretation. In such instances, moderators made independent judgments about which coding categories to assign to responses. Since coding categories were occasionally changed in response to what was observed within the response transcripts, reliance on inter-rater reliability analyses and coder retraining (an approach often used in exploratory research) was not considered a useful focus in this study. Moreover, the primary purpose of the content coding activity was to highlight areas for discussion, not to focus on the reliability of the coding schedule itself [51]. An example occurred when the German coding schedule was modified to account for a distinction between oiliness of the 'side of nose' versus the 'nose'; the US moderator, on the other hand, used only the 'nose' code to characterize both types of responses. When such distinctions were encountered during harmonization discussions, moderators evaluated the potential reasons for the distinctions and typically agreed to collapse categories where differences were not thought to be culturally determined.
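To make the collapsing step concrete, the sketch below applies a harmonization mapping to a set of codes before the frequency counts are re-run. The category names and the mapping are invented for illustration; they are not the study's actual coding schedule.

# Hypothetical harmonization map: country-specific codes are collapsed into the
# shared category agreed upon during the harmonization discussions.
harmonization_map = {
    "oily_side_of_nose": "oily_nose",  # German-only distinction collapsed into 'oily_nose'
}

def harmonize(codes, mapping):
    """Replace country-specific codes with their agreed shared category."""
    return {mapping.get(code, code) for code in codes}

print(harmonize({"oily_side_of_nose", "facial_powder"}, harmonization_map))
# prints the set {'oily_nose', 'facial_powder'} (set ordering may vary)

Re-running the frequency counts on the harmonized codes then removes differences that arise from the coding schedule itself rather than from the participants' responses.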

Cross-cultural differences

A final explanation for the differences in thematic frequency counts relates to the distinctive linguistic, conceptual, and experiential differences which exist between the two cultures. For example, differences in the use of the dry blotting versus wet blotting codes led to a further review of the original transcripts in this area. It was determined that dry blotting was preferred by US females because, unlike wet blotting, this method of facial oil control did not require them to reapply their make-up foundation. On the other hand, German females, who mentioned fewer make-up concerns and a greater reliance on facial powder to control the appearance of oily skin (shine), seemed less concerned about washing, possibly because of the relatively straightforward task of reapplying facial powder. Possibly providing some support for this notion, both US and German males (who did not report using make-up) indicated that they washed the face with soap and water more often than did female participants.

Another potential area of cultural difference was the mention of eating behaviors as a way of reducing skin oiliness. The moderators suggested that the German culture may foster a mindset of "avoidance" of things that might be harmful, while those in the US may tend to believe they can prompt favorable outcomes by being proactive and engaging in positive behavior. This working hypothesis arose out of the observation that German participants more frequently indicated they attempted to control excess sebum by avoiding "bad" things such as chocolate and sweets, whereas US participants more frequently indicated that their skin would be less oily if they did "good" things such as eating "healthy foods." Such differences may reflect cultural differences in how individuals understood and approached the daily management of their condition.

PRO item design

Following harmonization discussions to identify potential areas of cultural differences, PRO item pools were developed based on the most commonly occurring coding themes. During item design, the original IFG transcripts were revisited to assure that wording, phraseology and concepts in the new assessments reflected those used by the focus group participants in each country. Once the questions for the new oily skin scales were drafted, the IFG participants were invited back to provide cognitive debriefing feedback and to rate the degree to which the proposed items addressed important aspects of their condition. The item importance ratings provided yet another opportunity to assess cultural differences in the relative importance of item content and how items might perform differently between the two countries in the future. Table 6 provides an example of importance rating results for a new set of "Symptom Bother" rating scales.

Table 6 Importance rating of symptom bother items by country (ordered from most to least important)

The largest difference in importance ratings of these rating scales occurred on the 'self-conscious' item, with German IFG participants rating the term as much less important than did the US participants. This 'relevancy' or 'importance' rating difference suggests that the cross-cultural performance of this item in particular should be subject to closer inspection during later construct validation activities. Interestingly, self-consciousness was also singled out by a professional PRO translation service as a term that is difficult to translate into German.
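To show how such ratings can be screened for cross-cultural differences, the sketch below computes mean item importance by country and orders the items by the size of the between-country gap. The item names and rating values are hypothetical and are not the results reported in Table 6.

from statistics import mean

# Hypothetical importance ratings on a 1 (not important) to 5 (very important) scale.
ratings = {
    "US": {"oily feeling": [5, 4, 5], "shine": [4, 4, 3], "self-conscious": [5, 4, 4]},
    "DE": {"oily feeling": [5, 5, 4], "shine": [4, 3, 4], "self-conscious": [2, 3, 2]},
}

# Mean rating per item within each country.
means = {
    country: {item: mean(values) for item, values in items.items()}
    for country, items in ratings.items()
}

# Order items by the absolute US-DE gap to flag candidates for closer inspection
# during later construct validation.
for item in sorted(means["US"], key=lambda i: abs(means["US"][i] - means["DE"][i]), reverse=True):
    print(f"{item:<15} US={means['US'][item]:.2f}  DE={means['DE'][item]:.2f}")

Items with the largest gaps are natural candidates for closer psychometric scrutiny in subsequent validation studies.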

Discussion

The use of IFGs for parallel cross-cultural PRO content development was both time/cost effective and received very positive reviews from participants. The thematic frequency analysis of IFG transcripts highlighted a number of areas of difference between countries, which led to fruitful discussion within the content harmonization sessions. Various explanations which could account for the observed differences were explored, including both non-cultural factors (e.g., the effects of sampling, probing, and coding) and cultural factors. Occasionally, the discussions prompted a re-review of the original transcripts as new cultural and gender issues were raised and considered. Information about the most commonly endorsed thematic categories and potential areas of thematic difference between cultures provided a solid basis on which to draft PRO questions – a draft that reflected the common concerns and issues of IFG participants. The proposed questions were then reviewed by participants and rated as to their importance. The resulting importance ratings provided further clues as to which items might perform differentially across cultures in future studies.

IFGs and the changing roles of the professional moderator

In the past, the role of professional moderators has been limited to the largely independent mandate of conducting qualitative inquiry within focus group sessions. Once moderators identified the major focus group themes and issues that seemed important, these were summarized in a final focus group report. Typically, the involvement of moderators ended as they passed this report on to the PRO development teams responsible for preparing the draft PRO item pools and for construct validation activities. In the current study, moderators were much more active in instrument design activities, particularly the thematic coding and frequency analyses. It is informative to review some of the philosophic and methodological tensions that moderators may encounter as they take on this new role – tensions which also seem to exist between various schools of thought about research methodologies in the health sciences, social sciences, and the field of applied marketing [52–55].

When qualitative focus groups are used to validate the content of new PRO measures, either explicitly or implicitly, the investigative methods of two different epistemologies come into contact. These ways of gleaning 'truth' can be characterized as belonging to either a qualitative tradition, based on an inductive and phenomenological approach, or a quantitative tradition, based on a deductive and positivistic approach [53, 54]. By nature, qualitative focus group research is inductive, open-ended and flexible, responding to the flow of each unique session, rather than closed-ended and fixed. Consistent with various qualitative research methods, focus group inquiry allows patients the freedom to provide information that does not necessarily fit with any expectations or hypotheses going into the research. It is precisely this openness to new and unexpected information that allows measurement designers to more fully "ground" the content of new Patient Reported Outcomes in the concerns and issues that patients think are relevant [56].

In turn, PRO design specialists use this deeper understanding of patient themes and issues to design pools of questions that measure the relevant content [57], and the performance of the new assessment scales is evaluated in subsequent psychometric studies. These later psychometric studies utilize quantitative (statistical) methods to reduce the length and detail of surveys so that they measure only the concepts most important to most respondents. The resulting measurement scales allow for the quantitative assessment of predetermined concepts – an approach which appears to run counter to the principles of qualitative inquiry. Supporting the distinction between qualitative and quantitative methods, Brookes suggests that qualitative methods are used to validate conceptual meaning using phenomenological data (an inductive approach), whereas quantitative validation activities focus on measurement and operational activities associated with the hypothetical-deductive approaches of positivistic science [58].

When qualitative and quantitative activities meet

The apparent duality between qualitative and quantitative methods, however, may not be clear cut, and some have argued that inductive and hypothetical-deductive methods of inquiry may complement each other [59–64], or at least provide similar results [65]. Supporting a blending of traditions, advocates of most qualitative schools of thought acknowledge that any inquiry is influenced to some degree by the interests and understanding of the interviewer, as well as by the objectives of the qualitative work. In order to account for such influences, qualitative research methods often include self-reflective activities in which the interviewer identifies their own influences on the processes of qualitative exploration and interpretation [66].

Parallels can be drawn between the influence of the moderator's personal knowledge on the direction of qualitative inquiry and the influence of a body of knowledge in a particular field on what is explored within a focus group ([67], pp. 92–4). Indeed, current PRO development guidelines recommend that instrument design start by defining a clear 'conceptual framework', developed with input from key clinical opinion leaders who have experience understanding patient perspectives and a good understanding of applied outcomes research [68, 69]. The 'conceptual framework' should not be confused with a 'conceptual' or 'theoretical model', whose organization is based on a set of predefined and empirically testable relationships. The conceptual framework is a way of sketching out the current understanding in a particular area of interest and forms the basis for development of the Discussion/Topic Guide used to guide IFG inquiry. The conceptual framework is then modified through qualitative inquiry according to what does and does not make sense to patients, as well as what aspects of patients' experiences, perspectives and behaviors have not been taken into account by the initial framework.

Early in the current study, moderators expressed concerns that the Topic Guide and coding activities led to a quantitative reduction and over-simplification of qualitative findings. Questions arose such as: 'Do we lead the lines of inquiry too much?', 'Do the detailed coding activities focus too much on the detail versus the bigger picture?' and 'Is it really necessary for the moderators to perform the coding functions?'. Such initial concerns are exemplified by the following statement made by one of the moderators:

"I personally have struggled considerably with the coding. I found it an enormous "stretch" as a qualitative researcher, who tries to see the "big picture in the constellation in the stars," versus focusing on small details."

Over time, however, moderators began to see practical value in thematic analysis as they explored the reasons for differences in thematic endorsement between the two countries. Recursive discussion about the various thematic differences led to more expansive ways of describing observations and distinguishing cultural differences from other sources of thematic variation – resulting in a deeper understanding of cultural issues and perspectives. The re-review of previously coded transcript materials provided substantive examples of how in-depth analysis of responses was stimulated by the analysis of coding frequency results.

What could have been done differently?

Moderators had a number of suggestions about what could be done differently in the future, and these suggestions provide some direction for refinement of the methodology.

Keep coding activities simple

The complexity of the themes covered in this proof of concept study presented a particular challenge. The coding schedule was too long and required subdividing across the four different IFG sessions. This gave rise to concerns that responses to open-ended questions asked on one day contained information that should have been coded in a different part of the schedule. If the same schedule were used across all sessions, the topical coverage might have to be reduced.

Implement the coding schedule in a timely fashion

It was recommended that the coding activities be performed at the end of each day, and that the results of the frequency analyses be used by moderators to ask follow-up questions during the following IFG session. This would have provided a more informative way to directly probe participants' views on thematic differences. In order to speed the coding activities, it was suggested that an independent bilingual coder be employed to reduce the interpretive demands placed on the moderators.

Alternatives to a coding approach

In order to reduce problems associated with differential probing patterns between moderators, a 'cross-reader' approach was suggested as an alternative to the thematic coding and frequency analysis. This reader could simply read responses, looking for differences, or alternatively read and code responses in a more consistent manner. One of the moderators stated:

"Based on our experiences, if you have moderators who speak at least 2 of the languages, you can cross read. Then you can do the real qualitative work with the guide only, and let it flow, probe and dig deeper. These issues could then be communicated and synchronized <harmonized>."

A less popular alternative to cross-reading was the suggestion that more structure be imposed on the use of probes within the IFG sessions. Since moderators often used in-session probes to address questions raised by session observers, the spontaneous use of probes in one group would have to be implemented across the other sessions, a proposition which was thought to be unwieldy.

Concluding remarks

In summary, the qualitative activities of IFGs appear to be enhanced through the use of thematic analyses which help to focus moderator discussion on topics associated with cross-cultural differences in thematic content. In this proof of concept study, the methods were shown to work, although some refinement of approaches may help simplify the tasks without compromising the usefulness of IFGs for cross-cultural harmonization.

Coding is an additional tool that can help moderators summarize and quickly compare the level of thematic endorsement between countries and between IFGs within a country. If applied in a timely manner (same day and subsequently), the thematic coding results can facilitate further exploration within the next IFG session. Such results also support the process of cross-cultural harmonization of issues, as facilitators re-visit responses and compare similar statements of different respondents in light of new information about potential group and cultural differences. The method, however, is not intended as a substitute for qualitative inquiry itself, or for the process of understanding the thoughts, experiences and values of the customer. IFGs and thematic analysis are additional tools in the professional toolbox of focus group moderators.

References

  1. Sevastik J, Burwell RG, Dangerfield PH: A new concept for the etiopathogenesis of the thoracospinal deformity of idiopathic scoliosis: summary of an electronic focus group debate of the IBSE. European Spine Journal 2003, 12: 440–450. 10.1007/s00586-002-0489-4

  2. Kevern J, Webb C: Focus groups as a tool for critical social research in nurse education. Nurse Educ Today 2001, 21: 323–333. 10.1054/nedt.2001.0563

  3. Calderón JL, Baker RS, Wolf KE: Focus Groups: A Qualitative Method Complementing Quantitative Research for Studying Culturally Diverse Groups. Journal of Education for Health 2000, 1: 19–21.

  4. Adler CL, Zarchin YR: The "virtual focus group": using the Internet to reach pregnant women on home bed rest. J Obstet Gynecol Neonatal Nurs 2002, 31: 418–427. 10.1111/j.1552-6909.2002.tb00064.x

  5. Glitz B, Hamasu C, Sandstrom H: The focus group: a tool for programme planning, assessment and decision-making--an American view. Health Info Libr J 2001, 18: 30–37. 10.1046/j.1365-2532.2001.00310.x

  6. Deutschen Gesellschaft für Online-Forschung (Germany): http://www.dgof.de/ 2006.

  7. NEON (Germany): http://www.neon.bvm.org/ 2006.

  8. Interactive marketing Research Organization (USA): http://www.imro.org/ 2006.

  9. ESOMAR, AQR (UK): http://www.esomar.org/ 2006.

  10. Kramish CM, Meier A, Carr C, Enga Z, James AS, Reedy J, Zheng B: Health behavior changes after colon cancer: a comparison of findings from face-to-face and on-line focus groups. Fam Community Health 2001, 24: 88–103.

  11. Atkinson MJ, Sinha A, Hass SL, Colman SS, Kumar RN, Brod M, Rowland CR: Validation of a general measure of treatment satisfaction, the Treatment Satisfaction Questionnaire for Medication (TSQM), using a national panel study of chronic disease. Health Qual Life Outcomes 2004, 2: 12. 10.1186/1477-7525-2-12

  12. Desai P: Methods Beyond Interviewing in Qualitative Market Research. Sage Publications; 2002.

  13. Buchanan T, Smith JL: Research on the Internet: validation of a World-Wide Web mediated personality scale. Behav Res Methods Instrum Comput 1999, 31: 565–571.

  14. Buchanan T, Smith JL: Using the Internet for psychological research: personality testing on the World Wide Web. Br J Psychol 1999, 90 ( Pt 1): 125–144. 10.1348/000712699161189

  15. Atkinson MJ, Sinha A: Use of Internet technologies during the early development of PRO instrumentation: Experience from two PRO development projects. In From Quality of Life to Patient Outcomes Assessment: Research Agenda for a Paradigm Shift. Edited by: Procedings DIAC. Baltimore; 2003.

  16. Santanello NC, Baker D, Cappelleri JC, Copley-Merriman K, DeMarinis R, Gagnon JP, Hsuan A, Jackson J, Mahmoud R, Miller D, Morgan M, Osterhaus J, Tilson H, Willke R: Regulatory issues for health-related quality of life--PhRMA Health Outcomes Committee workshop, 1999. Value Health 2002, 5: 14–25. 10.1046/j.1524-4733.2002.51047.x

  17. Willke RJ, Burke LB, Erickson P: Measuring treatment impact: a review of patient-reported outcomes and other efficacy endpoints in approved product labels. Control Clin Trials 2004, 25: 535–552. 10.1016/j.cct.2004.09.003

  18. Szende A, Leidy NK, Revicki D: Health-related quality of life and other patient-reported outcomes in the European centralized drug regulatory process: a review of guidance documents and performed authorizations of medicinal products 1995 to 2003. Value Health 2005, 8: 534–548. 10.1111/j.1524-4733.2005.00051.x

  19. Papanicolaou S, Sykes D, Mossialos E: EMEA and the evaluation of health-related quality of life data in the drug regulatory process. Int J Technol Assess Health Care 2004, 20: 311–324.

  20. Stewart KA, Neumann PJ: FDA actions against misleading or unsubstantiated economic and quality-of-life promotional claims: an analysis of warning letters and notices of violation. Value Health 2002, 5: 389–396. 10.1046/j.1524-4733.2002.55146.x

  21. Apolone G, De Carli G, Brunetti M, Garattini S: Health-related quality of life (HR-QOL) and regulatory issues. An assessment of the European Agency for the Evaluation of Medicinal Products (EMEA) recommendations on the use of HR-QOL measures in drug approval. Pharmacoeconomics 2001, 19: 187–195. 10.2165/00019053-200119020-00005

  22. Wiklund I: Quality of life and regulatory issues. Scand J Gastroenterol Suppl 1996, 221: 37–38.

  23. Turner S: Economic and quality of life outcomes in oncology: the regulatory perspective. Oncology (Williston Park) 1995, 9: 121–125.

  24. Bech P: Issues of concern in the standardization and harmonization of drug trials in Europe: health-related quality of life, ESCT meeting, Strasbourg, 23–24 May 1991. Qual Life Res 1992, 1: 143–145. 10.1007/BF00439722

  25. Mastaglia B, Toye C, Kristjanson LJ: Ensuring content validity in instrument development: challenges and innovative approaches. Contemp Nurse 2003, 14: 281–291.

  26. Haynes SN, Lench HC: Incremental validity of new clinical assessment measures. Psychol Assess 2003, 15: 456–466. 10.1037/1040-3590.15.4.456

  27. Gotay CC, Lipscomb J, Snyder CF: Reflections on findings of the Cancer Outcomes Measurement Working Group: moving to the next phase. J Natl Cancer Inst 2005, 97: 1568–1574.

  28. Lohr KN: Health outcomes methodology symposium: summary and recommendations. Med Care 2000, 38: II194-II208.

  29. Globe DR, Schoua-Glusberg A, Paz S, Yu E, Preston-Martin S, Azen S, Varma R: Using focus groups to develop a culturally sensitive methodology for epidemiological surveys in a Latino population: findings from the Los Angeles Latino Eye Study (LALES). Ethn Dis 2002, 12: 259–266.

  30. Huer MB, Saenz TI: Challenges and strategies for conducting survey and focus group research with culturally diverse groups. Am J Speech Lang Pathol 2003, 12: 209–220. 10.1044/1058-0360(2003/067)

  31. Lansdown RG, Goldstein H, Shah PM, Orley JH, Di G, Kaul KK, Kumar V, Laksanavicharn U, Reddy V: Culturally appropriate measures for monitoring child development at family and community level: a WHO collaborative study. Bull World Health Organ 1996, 74: 283–290.

  32. Clark MJ, Cary S, Diemert G, Ceballos R, Sifuentes M, Atteberry I, Vue F, Trieu S: Involving communities in community assessment. Public Health Nurs 2003, 20: 456–463. 10.1046/j.1525-1446.2003.20606.x

  33. Bullinger M, Alonso J, Apolone G, Leplege A, Sullivan M, Wood-Dauphinee S, Gandek B, Wagner A, Aaronson N, Bech P, et al.: Translating health status questionnaires and evaluating their quality: the IQOLA Project approach. International Quality of Life Assessment. Journal of Clinical Epidemiology 1998, 51: 913–923. 10.1016/S0895-4356(98)00082-1

  34. Conway K, Mear I, Acquadro C: An attempt to develop minimal requirements for the 1st step of cross-cultural adaptation of patient reported outcomes (PROs) measures. Quality of Life News Letter 2001, 27: 6.

  35. Eremenco SL, Cella D, Arnold BJ: A comprehensive method for the translation and cross-cultural validation of health status questionnaires. Eval Health Prof 2005, 28: 212–232. 10.1177/0163278705275342

  36. Jones PS, Lee JW, Phillips LR, Zhang XE, Jaceldo KB: An adaptation of Brislin's translation model for cross-cultural research. Nurs Res 2001, 50: 300–304. 10.1097/00006199-200109000-00008

  37. Anderson RT, Aaronson NK, Bullinger M, McBee WL: A review of the progress towards developing health-related quality-of-life instruments for international clinical studies and outcomes research. Pharmacoeconomics 1996, 10: 336–355.

  38. Van de Vijver FJR: Towards a Theory of Bias and Equivalence. Nachrichten Spezial Band 3. In Cross-Cultural Survey Equivalence Edited by: J.Harkness H. ZUMA; 1998, 41–65. [http://www.gesis.org/Publikationen/Zeitschriften/ZUMA%5FNachrichten%5Fspezial/#zn-3]

  39. Saxena S, Carlson D, Billington R: The WHO quality of life assessment instrument (WHOQOL-Bref): the importance of its items for cross-cultural research. Qual Life Res 2001, 10: 711–721. 10.1023/A:1013867826835

  40. Skevington SM, Lotfy M, O'Connell KA: The World Health Organization's WHOQOL-BREF quality of life assessment: psychometric properties and results of the international field trial. A report from the WHOQOL group. Qual Life Res 2004, 13: 299–310. 10.1023/B:QURE.0000018486.91360.00

  41. WHOQOL: A cross-cultural study of spirituality, religion, and personal beliefs as components of quality of life. Soc Sci Med 2006, 62: 1486–1497. 10.1016/j.socscimed.2005.08.001

  42. Vogt DS, King DW, King LA: Focus groups in psychological assessment: enhancing content validity by consulting members of the target population. Psychol Assess 2004, 16: 231–243. 10.1037/1040-3590.16.3.231

  43. Bernal G, Bonilla J, Bellido C: Ecological validity and cultural sensitivity for outcome research: issues for the cultural adaptation and development of psychosocial treatments with Hispanics. J Abnorm Child Psychol 1995, 23: 67–82. 10.1007/BF01447045

  44. Willgerodt MA: Using focus groups to develop culturally relevant instruments. West J Nurs Res 2003, 25: 798–814. 10.1177/0193945903256708

  45. Sperber AD: Translation and validation of study instruments for cross-cultural research. Gastroenterology 2004, 126: S124-S128. 10.1053/j.gastro.2003.10.016

  46. Girod I, de la Loge C, Keininger D, Hunter MS: Development of a revised version of the Women's Health Questionnaire. Climacteric 2006, 9: 4–12. 10.1080/13697130500487372

  47. Yao G, Wu CH: Factorial invariance of the WHOQOL-BREF among disease groups. Qual Life Res 2005, 14: 1881–1888. 10.1007/s11136-005-3867-7

  48. Atkinson MJ, Stewart WC, Fain JM, Stewart JA, Dhawan R, Mozaffari E, Lohs J: A new measure of patient satisfaction with ocular hypotensive medications: The Treatment Satisfaction Survey for Intraocular Pressure (TSS-IOP). Health Qual Life Outcomes 2003, 1: 67. 10.1186/1477-7525-1-67

  49. Rabin R, de Charro F: EQ-5D: a measure of health status from the EuroQol Group. Ann Med 2001, 33: 337–343.

  50. Goldman AE, McDonald SS: The Group Depth Interview: Principles & Practice.. Englewood Cliffs, NJ, Prentice Hall; 1987.

  51. Weinberger M, Ferguson JA, Westmoreland G, Mamlin LA, Segar DS, Eckert GJ, Greene JY, Martin DK, Tierney WM: Can raters consistently evaluate the content of focus groups? Social Science & Medicine 1998, 46: 929–933. 10.1016/S0277-9536(97)10028-4

  52. Kosny A: Joint Stories and Layered Tales: Support, Contradiction and Meaning Construction in Focus Group Research. The Qualitative Report 2006, 8: 538–548. [http://www.nova.edu/ssss/QR/QR8–4/kosny.pdf]

  53. Morgan AK, Drury VB: Legitimising the subjectivity of human reality through qualitative research method. The Qualitative Report 2003, 8: 70–80. [http://www.nova.edu/ssss/QR/QR8–1/morgan.html]

  54. Bate P, Robert G: Studying health care "quality" qualitatively: the dilemmas and tensions between different forms of evaluation research within the U.K. National Health Service. Qualitative Health Research 2002, 12: 966–981. 10.1177/104973202129120386

  55. Catterall M, Maclaran P: Focus group data and qualitative analysis programs: Coding the moving picture as well as the snapshots. Sociological Research Online 1997., 2:

  56. Sofaer S: Qualitative methods: what are they and why use them? Health Services Research 1999, 34: 1101–1118.

  57. Powell RA, Single HM, Lloyd KR: Focus groups in mental health research: enhancing the validity of user and provider questionnaires. International Journal of Social Psychiatry 1996, 42: 193–206.

  58. Brookes CE: On the nature of psychodynamic science. J Am Acad Psychoanal Dyn Psychiatry 2004, 32: 541–550. 10.1521/jaap.32.3.541.44772

  59. Brooks SA: Re: Reconcilable differences: the marriage of qualitative and quantitative methods. Canadian Journal of Psychiatry - Revue Canadienne de Psychiatrie 1997, 42: 529–530.

  60. Langhout RD: Reconceptualizing quantitative and qualitative methods: a case study dealing with place as an exemplar. American Journal of Community Psychology 2003, 32: 229–244. 10.1023/B:AJCP.0000004744.09295.9b

  61. Kroll T, Neri MT, Miller K: Using mixed methods in disability and rehabilitation research. Rehabilitation Nursing 2005, 30: 106–113.

  62. Abusabha R, Woelfel ML: Qualitative vs quantitative methods: two opposites that make a perfect match. Journal of the American Dietetic Association 2003, 103: 566–569. 10.1053/jada.2003.50129

  63. Stoop AP, Berg M: Integrating quantitative and qualitative methods in patient care information system evaluation: guidance for the organizational decision maker. Methods of Information in Medicine 2003, 42: 458–462.

  64. Brinton B, Fujiki M: Blending quantitative and qualitative methods in language research and intervention. American Journal of Speech-Language Pathology 2003, 12: 165–171. 10.1044/1058-0360(2003/063)

  65. Arborelius E, Timpka T: General practitioners' comments on video recorded consultations as an aid to understanding the doctor-patient relationship. Fam Pract 1990, 7: 84–90.

  66. F K: The Focused Group Interview and Moderator Bias. Marketing Review 1976, 31: 19–21.

  67. J W: Analysis and Interpretation. In Qualitative Research in Action. London, Edward Arnold; 1989.

  68. Verdugo MA, Schalock RL, Keith KD, Stancliffe RJ: Quality of life and its measurement: important principles and guidelines. J Intellect Disabil Res 2005, 49: 707–717. 10.1111/j.1365-2788.2005.00739.x

  69. Ventegodt S, Hilden J, Merrick J: Measurement of quality of life I. A methodological framework. ScientificWorldJournal 2003, 3: 950–961. 10.1100/tsw.2003.75

  70. Miller TW, Walkowski J: Qualitative Research Online. Madison, Research Publishers; 2004.

Acknowledgements

Our thanks to Dr. Y. Bolkan (Adjunct Assistant Professor, Department of Chemical Engineering at the University of British Columbia and University of Calgary) for her timely assistance with German translation activities associated with preparation of the Topic Guide and various participant surveys.

Author information

Corresponding author

Correspondence to Mark J Atkinson.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Atkinson, M.J., Lohs, J., Kuhagen, I. et al. A promising method for identifying cross-cultural differences in patient perspective: the use of Internet-based focus groups for content validation of new Patient Reported Outcome assessments. Health Qual Life Outcomes 4, 64 (2006). https://doi.org/10.1186/1477-7525-4-64
