
A pilot Internet "Value of Health" Panel: recruitment, participation and compliance



To pilot using a panel of members of the public to provide preference data via the Internet


A stratified random sample of members of the general public was recruited and familiarised with the standard gamble procedure using an Internet based tool. Health states were periodically presented in "sets" corresponding to different conditions during the study. The following were described: Recruitment (the proportion of people approached who were trained); Participation ((a) the proportion of people trained who provided any preferences and (b) the proportion of panel members who contributed to each "set" of values); and Compliance (the proportion, per participant, of preference tasks which were completed). The influence of covariates on these outcomes was investigated using univariate and multivariate analyses.


A panel of 112 people was recruited. 23% of those approached (n = 5,320) responded to the invitation, and 24% of respondents (n = 1,215) were willing to participate (net = 5.5%). However, eventual recruitment rates, following training, were low (2.1% of those approached). Recruitment from areas of high socioeconomic deprivation and among ethnic minority communities was low. Eighteen sets of health state descriptions were considered over 14 months. 74% of panel members carried out at least one valuation task. People from areas of higher socioeconomic deprivation and unmarried people were less likely to participate. An average of 41% of panel members expressed preferences on each set of descriptions. Compliance ranged from 3% to 100%.


It is feasible to establish a panel of members of the general public to express preferences on a wide range of health state descriptions using the Internet, although differential recruitment and attrition are important challenges. Particular attention to recruitment and retention in areas of high socioeconomic deprivation and among ethnic minority communities is necessary. Nevertheless, the panel approach to preference measurement using the Internet offers the potential to provide specific utility data in a responsive manner for use in economic evaluations and to address some of the outstanding methodological uncertainties in this field.


Although concerns have been expressed about the use of cost utility analyses (CUA)[1, 2], the number of such analyses has increased in the past ten years[3]. Guidelines in the UK and Canada, and those proposed by the Washington Panel on cost effectiveness in the USA, promote CUA where the purpose of the analysis is informing public resource allocation[4–6]. The UK's National Institute for Health and Clinical Excellence (NICE) has made cost utility an explicit aspect of policy making[6]. The UK and Washington Panel reference cases suggest that the perspective for the valuation of benefits in CUA should be that of the general public[5, 6]. The arguments around adopting this perspective are beyond the scope of this article, but are described elsewhere[5, 7–14].

A wide range of approaches has been taken to obtain utility data for economic evaluations[15]. Although the widespread use of standard measures such as the EQ5D and SF6D[16] may address some of this inconsistency, this approach will not be appropriate in all situations and there remains a case for developing alternative methods for obtaining health state-specific utility data. We have piloted one approach, using the Internet to obtain preferences on written health state descriptions from a "standing panel" of members of the public.

Computer-based preference elicitation tools have been available for more than 15 years[17–23], with later use of the Internet[24–28]. Many preference elicitation tools, and studies employing them, are concerned with the psychology of preference elicitation[29, 30] and are therefore less concerned with selection bias than Internet-based epidemiological[31, 32], behavioural[33, 34] or therapeutic studies[35, 36]. While Internet-based research faces many of the same challenges encountered in more traditional approaches, additional concerns are legitimate, in particular: sampling and sampling representativeness, competition for the attention of respondents, and barriers to participation related to literacy or disability[37]. Reported experience varies, with some studies reporting very disappointing results for recruitment and retention[38], and others showing rates which are comparable to traditional methods[39, 40]. However, despite possible exceptions[31], it seems reasonably consistent that research participants in Internet-based studies are likely to be different from those recruited by other means[41–44]. Whether these differences matter in the context of preference elicitation studies remains uncertain.

In this paper we describe recruitment and participation in the pilot panel study and discuss the potential for extension of this approach to fulfil the need for eliciting utilities from the general public for research purposes and to support the need for these values to inform allocation policy decisions.


Recruitment and training

We recruited panel members from a convenience sample of four UK cities: Exeter, Sheffield, Glasgow and Aberdeen. A random sample was chosen from the electoral rolls for these cities in January 2004, stratified for socio-economic status using tertiles of the Index for Material Deprivation (IMD2000)[45]. We assumed a 15–20% response rate to the invitation to attend panel training based on the authors' previous experience with preference elicitation studies using face to face interviews and aimed for an arbitrary target sample size for the panel of 100.

Participants were invited by letter to express interest in joining the panel, accompanied by a short questionnaire seeking reasons for non-participation. Positive respondents were then invited to a three hour training session in each of the cities involved. Panel members were recruited and trained in two phases during summer and autumn 2004, involving eight training sessions.

Training sessions covered the following areas as background: research and policy making; role of modelling in estimating cost effectiveness; limitations of existing methods for utility assessment. Participants were familiarised with the standard gamble, using formats appropriate to whether the health states were considered better or worse than death, with one-to-one support from facilitators.

Health state descriptions were placed on the website for at least three weeks. Descriptions were posted on the website in sets containing different health states within the same condition (e.g. levels of severity or treatment side effects). States within a set were presented in random order. Sets included health states depicting the following diseases: congestive heart failure; eczema; hip osteoarthritis; Crohn's disease; colorectal cancer; depression; glioma; prostate cancer; insomnia; ovarian cancer; opiate abuse; and chronic obstructive pulmonary disease. Descriptions were developed from reports of quality of life measured with patient-based disease-specific outcome measures, together with clinical expert opinion, and were presented in bullet point rather than narrative format[46].

We encouraged participants by email to provide preference values in this period and issued email reminders. Panel members who valued at least one description within the three week period were entered into a lottery for £50 Internet gift vouchers, held after each set of descriptions was taken off the Internet site. A regular newsletter was sent to participants reporting participation, results, website developments and other news regarding the project.

Preference elicitation

Panel members were asked to imagine themselves in the described health state for at least twenty years or, if they felt their life expectancy was likely to be less than that, for the rest of their life[47]. The standard gamble method was used, based on the axiomatic advantage that it reflects choices made under conditions of uncertainty[48]. This was carried out using bottom-up "titration", in which respondents work through choices with increasing probability of a good outcome in the gamble option. We used this approach rather than an iterative approach, in which responses "ping-pong" between options with high and low probabilities of the worst outcome in the gamble[49], in order to overcome reported difficulties with completion of the iterative approach[46].
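The bottom-up titration logic can be sketched as follows. This is a minimal illustration, not the project's actual code; the function and variable names are our own, and the probability increments follow those described later for the interface (5% steps, refined to 1% steps between 0.95 and 1.0):

```python
def titration_probabilities():
    """Probabilities of full health offered in the gamble option, in
    ascending (bottom-up) order: 5% steps, refined to 1% steps between
    0.95 and 1.0, as described for this interface."""
    coarse = [i / 100 for i in range(0, 95, 5)]   # 0.00, 0.05, ..., 0.90
    fine = [i / 100 for i in range(95, 101)]      # 0.95, 0.96, ..., 1.00
    return coarse + fine

def elicit_utility(prefers_gamble):
    """Bottom-up standard gamble titration (illustrative sketch).

    `prefers_gamble(p)` returns True if the respondent would take the
    gamble (full health with probability p, death with probability
    1 - p) rather than remain in the described health state.  The
    utility is taken as the lowest p at which the respondent switches
    to the gamble: at the switch point the respondent is approximately
    indifferent, so U(state) is about p.
    """
    for p in titration_probabilities():
        if prefers_gamble(p):
            return p
    # Respondent never takes the gamble, even when it offers certain
    # full health: treat the state as equivalent to full health.
    return 1.0

# A hypothetical respondent who values the state at about 0.8:
u = elicit_utility(lambda p: p >= 0.80)
```

At each step the probability of the good outcome rises, so a respondent's answers naturally form the monotone chain the interface enforces.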

Internet site development

The website was created in 2004 and piloted by the project team and panel members from the first phase of recruitment. It includes the standard gamble interface, information on the project, and a bulletin board for sharing questions and information on the project.

The standard gamble interface (Figure 1) has several features of interest:

Figure 1
figure 1

Standard gamble interface.

  • It is not possible for participants to enter responses which are fundamentally illogical, e.g. preferring the gamble at a given probability of restoration of full health, but then preferring the health state of interest when this probability increases.

  • Participants who indicate that they would take the gamble where the probability of death is 1.0 must confirm that they consider the health state description worse than being dead. They are then automatically taken to an interface which presents the options appropriately for the elicitation of negative utility values.

  • As the probabilities in the risky choice change, they are represented graphically as a bag of different coloured balls, each representing the potential outcomes of full health and death.

  • Participants had three possible responses to each choice in the standard gamble: choose to remain in the described health state; choose the risky option (with varying chance of death or full health); or "uncertain". Illogical chains of response (e.g. "remain in health state", followed by "uncertain", followed by "remain in health state") were not permitted and participants were required to repeat the choice which resulted in the illogical response. Choices at all levels of risk had to be completed before the response was accepted.

  • The increments for changing probability in the gamble are set at 1% between probabilities of full health of 0.95 and 1.0 in the gamble option and 5% otherwise.

Responses were downloaded into a database with automatic calculation of each respondent's utility for each health state description.
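The conversion from indifference probability to utility can be sketched as below. The paper does not give its exact formulae; the worse-than-death case uses one standard formulation of the negative standard gamble, so this is an assumption for illustration:

```python
def sg_utility(p_indifference, worse_than_dead=False):
    """Convert a standard-gamble indifference probability to a utility
    (illustrative sketch; the paper does not specify its formulae).

    Better than dead: the respondent is indifferent between the state
    and a gamble giving full health with probability p and death with
    probability 1 - p, so
        U = p * 1 + (1 - p) * 0 = p.

    Worse than dead (one standard formulation): the respondent is
    indifferent between immediate death and a gamble giving full health
    with probability p and the state with probability 1 - p, so
        0 = p * 1 + (1 - p) * U  =>  U = -p / (1 - p).
    """
    if not worse_than_dead:
        return p_indifference
    if p_indifference >= 1.0:
        raise ValueError("p must be < 1 for states worse than dead")
    return -p_indifference / (1.0 - p_indifference)
```

For example, under this formulation an indifference probability of 0.5 on the worse-than-death interface corresponds to a utility of -1.0.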


Recruitment was described and the demographic characteristics of the pilot panel compared to data from the UK National Census carried out in 2001.

Completion of preference elicitation tasks was described in three ways. First, participation by panel member, defined as the proportion of panel members who carried out at least one valuation task during the study period. Second, for each set of health state descriptions, the proportion of panel members who responded was calculated – participation by health state description set. Third, for each panel member who carried out at least one valuation task (participant), we calculated compliance, defined as the proportion of health states valued by each participant.
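The three outcome measures above can be expressed as a short computation. This is an illustrative sketch with our own names; it simplifies by treating every panel member as eligible for every set, whereas the study adjusted for varying panel membership over time:

```python
def panel_metrics(valuations, panel, sets):
    """Compute the three outcomes described above (illustrative sketch).

    valuations: dict mapping (member, set_id) -> number of states valued
    panel: list of member ids
    sets: dict mapping set_id -> number of health states in the set
    """
    # Members who carried out at least one valuation task.
    participants = {m for (m, s) in valuations if valuations[(m, s)] > 0}

    # 1. Participation by panel member.
    participation_by_member = len(participants) / len(panel)

    # 2. Participation by health state description set
    #    (simplified: denominator is the full panel for every set).
    participation_by_set = {
        s: sum(1 for m in panel if valuations.get((m, s), 0) > 0) / len(panel)
        for s in sets
    }

    # 3. Compliance: per participant, proportion of eligible health
    #    states valued (simplified: all states assumed eligible).
    total_states = sum(sets.values())
    compliance = {
        m: sum(valuations.get((m, s), 0) for s in sets) / total_states
        for m in participants
    }
    return participation_by_member, participation_by_set, compliance
```

A member who valued every state in every posted set would have compliance 1.0; one who valued a single state out of five eligible would have compliance 0.2.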

Potential determinants of participation by panel member and compliance were explored through univariate and multivariate analyses using SPSS for Windows version 11. Age, marital status, occupation and ethnicity were collected from panel members at recruitment. Socioeconomic status was attributed according to place of residence, using the Scottish Index of Material Deprivation (SIMD) for Aberdeen and Glasgow[50], calculated at postcode sector level, and the 2004 version of the Index of Material Deprivation for Exeter and Sheffield, calculated at Lower Super Output Area (LSOA) level[51]. LSOAs contain populations of 1000–1500 people. For the purposes of the analysis, SIMD and IMD were treated as a single scale. Other variables considered were city of residence, nationality (Scottish or English) and training session.


Recruitment and retention

Recruitment was carried out in two rounds. Initially, people in Exeter, Sheffield and Aberdeen were recruited and trained. It became clear that the target panel size would not be met from this sample and a further round of recruitment took place in Exeter and Glasgow to increase panel size. Overall, recruitment and training took about seven months. The panel carried out valuation tasks from August 2004 through March 2006, and we met our membership (n = 112) goal in November 2004. In Autumn 2004, therefore, we were recruiting new panel members while existing members were participating in valuation tasks.

Overall, 5,320 people were contacted through the electoral roll. Only 1215 (23%) of those approached responded to the initial invitation letter. Of this group, 286 (23.6%) expressed willingness to participate in the project and 112 (39% of those who agreed) attended a training session. Only people who attended a training session were considered part of the panel. Thus, the net final recruitment was 2.1% of those initially approached.

Residents from Exeter were more willing to participate (see Table 1: χ2 = 41.18, P < 0.001) and were more willing to give reasons for declining (see Table 2: χ2 = 12.86, P < 0.001) compared to residents from the other cities. Lack of Internet access was more frequently reported among respondents in cities other than Exeter (χ2 = 62.0, P < 0.001). Lack of time and Internet access were the most common reasons given for declining the initial invitation to participate. Other reasons included illness or disability, impending travel, and the letter not reaching the intended recipient because of incorrect address details or because the recipient had died.

Table 1 Recruitment by City
Table 2 Reasons for declining initial invitation

Panel member characteristics

The age range of panel members was 18 to 79 years with mean 48 years. The panel included a higher proportion of people in middle age than the UK population as a whole, and fewer younger and older people (see Figure 2).

Figure 2
figure 2

Value of Health Panel age structure vs UK population.

Table 3 shows the demographic characteristics of the panel members. There were more women (51.8%) than men (48.2%) (P = NS). Men were, on average, slightly older than women, though the difference was not significant. The panel had a higher proportion of married and retired people, with correspondingly lower proportions of unmarried people and those in employment, than the national population. However, only the proportions of married and single people and those from ethnic minorities differed significantly from national data.

Table 3 Panel member personal characteristics

Table 4 shows the proportion of panel members from each city whose area of residence fell into tertiles of IMD or SIMD scores ranked at a national level for Scotland or England. The distribution differed significantly from the national pattern (χ2 = 16.8, P < 0.025). If the panel reflected the national distribution of socioeconomic status as measured by the IMD/SIMD, the samples from each city would contain 33% of people in each national tertile. People from areas of high deprivation are under-represented in the panel, particularly in Exeter and Sheffield. The numbers of people recruited from Scotland were low, making this comparison imprecise.

Table 4 Panel compared to national distribution of socioeconomic status

Participation and compliance

During the first year of the project (October 2004 to October 2005), 25 members (22%) of the panel formally withdrew. Most of these panellists had completed some valuations before withdrawal. Having insufficient time, moving house, losing Internet access and personal or family illness were the main reasons cited. There was no statistical association between age, sex or socioeconomic status and this explicit withdrawal from the project.

Overall, 83 panel members (74.1%) participated in the project i.e. carried out at least one valuation. In almost all cases (94.5%), panellists who completed one health state in a set of health states, went on to complete the entire set. Most valuation tasks were carried out in one sitting: in only 13 (2.3%) were responses from a set received on more than one day. In these cases, respondents carried out valuations in no more than two sittings separated by 1 to 28 days (mean 6.9 days, median 6 days).

Panel members were asked to complete the valuation tasks within an arbitrary three week period, although in some cases descriptions were posted for longer. Figure 3 shows the cumulative probability of obtaining a set of values within 21 days. Where respondents carried out valuation on more than one day, the date of completion (i.e. the second date) was used in this calculation.

Figure 3
figure 3

Probability of participation within 21 days of a set of health state descriptions being posted.

Taking variations in panel membership into account, overall average participation by health state description set was 41% (range 24%–65%). This is the proportion of available panel members who completed each set of health state descriptions (see Figure 4). The drop in participation around presentation of the third set of descriptions results from a combination of (a) increased panel membership following the second round of recruitment and (b) initial access problems experienced by new panel members, mostly related to incorrect email addresses and incorrect or forgotten logins and passwords. Resolution of these problems resulted in an increase in participation, although this was followed by a gradual decline.

Figure 4
figure 4

Participation over time.

Univariate analysis showed no significant association between participation and age, sex, nationality, city, retirement status or training session. Data on ethnicity were incomplete and excluded from further analysis.

People with lower socioeconomic status were less likely to participate (t test, t = 3.713, P = 0.013) and those who were married were more likely to participate; 86% of married people participated versus 52.5% of unmarried people (χ2 = 13.90, P < 0.001).

Logistic regression confirmed the independent effects of socioeconomic status and marital status on participation. The odds ratio (95% confidence interval) for marital status was 0.57 (0.36 to 0.91), although odds ratios for specific categories were not significant. This analysis is therefore akin to a χ2 test for trend. In the same model, the odds ratio for participation according to IMD score was 0.94 (0.91 to 0.98) i.e. the odds of participation fell slightly as IMD (socioeconomic deprivation) increased. Pseudo-R2 for the model was low, at 0.12.
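As a rough aid to interpretation (our own arithmetic, not reported in the study): a per-point odds ratio compounds multiplicatively across a difference in deprivation score, so a modest per-point effect implies a substantial gap between areas at opposite ends of the scale.

```python
# With an odds ratio of 0.94 per IMD point, a 10-point difference in
# deprivation score multiplies the odds of participation by 0.94**10,
# i.e. residents of the more deprived area have roughly half the odds.
or_per_point = 0.94
or_10_points = or_per_point ** 10
print(round(or_10_points, 2))  # about 0.54
```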

Compliance, defined as the proportion of health state valuations provided by each member as a percentage of the total for which they were eligible to complete, ranged from 3–100% (see Figure 5). A quarter of the panel carried out less than 20% of the elicitation tasks. There was no association between compliance and age (Spearman correlation, P = 0.92); sex (t test, P = 0.422); nationality (ANOVA, P = 0.23); city (ANOVA, P = 0.631); marital status (t test, P = 0.568); occupation (ANOVA, P = 0.19) or IMD/SIMD score (Spearman correlation, P = 0.40).

Figure 5
figure 5

Distribution of compliance.


This is the first attempt, of which we are aware, to collect new utility data repeatedly from members of the public for the specific purpose of informing ongoing cost utility analyses. Although we have demonstrated basic feasibility, in so far as the panel was established as planned and utility data obtained within the required period, recruitment was very low and retention limited. This was, in part, driven by the need for attendance at a training session. Initial positive response to the invitation to participate was similar to that shown in studies previously carried out by one of the authors (JB) aiming to recruit for a single episode of health state valuation using face to face interviews.

Across health state description sets, participation was around 40%, giving a sample size range for each health state description of 28 to 62. Participation by health state description set declined during the study period, demonstrating the need for ongoing recruitment and training. However, around 30% of the panel continued to participate at one year, and appeared to stabilise, consistent with other accounts of Internet research[52]. It is perhaps not surprising that recruitment and retention were limited given the burden placed on respondents: to attend face to face training and respond to 18 sets of preference measurement with limited rewards (a small cash lottery).

Reips identified 25 advantages and disadvantages (and proposed solutions) of the Internet for psychological experiments[52]. Our study avoided the problem of multiple submissions by requiring logging into the standard gamble and checking the timing of submissions, and the potential for misunderstanding through lack of interaction was addressed by initial training sessions. However, drop out remained high despite the use of financial incentives, reminders, some personalisation and limited feedback. Feedback from the panel suggested that more detailed and personalised feedback on their utility data and the purposes to which they were put, and a certain payment rather than a lottery may have improved compliance.

The three week period chosen for valuation tasks was arbitrary but appears appropriate. The probability of completion by that time was very high, even where health state descriptions were available on the website for longer. This issue has not been addressed in previous studies. The shape of the curve for completion was surprising. We expected there would be an initial surge of responses after descriptions were posted which would quickly tail off, with smaller responses following reminders. Reminders were sent at varying points while each health care description set was posted on the Internet and this may account for the overall pattern shown i.e. that panellists carried out the valuation tasks fairly evenly throughout the three week period.

The demographic make-up of this pilot panel does not reflect Scotland and England as a whole. This was not unexpected: one of the purposes of the pilot study was to understand better the determinants of recruitment, participation and compliance so as to inform the establishment of a larger, more representative panel. Representation of people from more deprived areas, and from ethnic minority groups, was particularly low, demonstrating the challenge for engagement which is shown in other types of study[53]. This was despite stratification by socioeconomic status.

In addition to the low initial recruitment from areas of higher socioeconomic deprivation, lack of participation amongst people recruited to the panel was also associated with lower socioeconomic status. The association between marital status and participation is not explained by covariance with the other limited independent variables. Surprisingly, compliance was not associated with socioeconomic status, suggesting either that the number of participants was insufficiently large to demonstrate an effect, or that the principal impact of socioeconomic status is on participation. Lack of adequate access to the Internet or lack of effectiveness in training sessions would be consistent with the latter hypothesis. The association between participation and marital status was not shown for compliance, which showed no association with any of the other covariates measured.

The importance of the panel's lack of representativeness depends on the influence of demographic factors on utilities for hypothetical conditions, which is an area of limited previous study. Age[54–56], sex[54, 56], marital status[54], nationality[57], educational level[58] and ethnicity[59] have been demonstrated as being significant predictors of utility. Experience of illness appears to be a particularly important determinant of variation in preferences for hypothetical states[60–62].

The underlying reasons for variation in utilities for hypothetical states are not clear, but may relate to risk attitude[63], the distribution of which in the general population is unclear, or to numeracy[64].

The extent to which the panel's utilities represent what would be obtained from a demographically representative panel is therefore unclear and may not, relative to other concerns, be of paramount importance. Firstly, most research to date has focussed on the effect of demographic factors on the absolute utilities for health states, rather than on the impact of these factors on the effect of moving between states. It is not clear, therefore, with the possible exception of current illness[62], whether demographic imbalance would result in different estimates of incremental effectiveness between health technologies competing for scarce health care resources. Secondly, variation in utilities arising from methodological factors (e.g. choice of rating task, perspective of rater) appears to be more influential. This suggests that, while analysts might be cautious about using utilities from a source which is not demographically balanced, they should be more averse to combining utilities from sources which use different methods in the same evaluation.

The use of computer-based preference elicitation is not new[17]. Sumner et al developed the U-Titer programme in 1991[18]. This was followed by U-Maker[19], Gambler[20], iMPACT[21, 22] and, more recently, ProSPEQT[23]. In addition, "one-off" computer based utility assessment has been used in a wide range of studies[65–67] and as a teaching tool[68]. Computer based utility measurement has potential advantages over interviewer-based methods: lower cost once software has been developed; elimination of interviewer variation; avoidance of transcription errors in data entry; potential to address logical errors automatically[22]; and increased flexibility over the time required to complete the task. Acceptability among members of the general public is reasonable, although the standard gamble has been rated as less acceptable than visual analogue scaling or time trade off in one study[69].

The use of the Internet is a logical extension to the development of computer-based utility measurement tools. The most technically sophisticated approach is iMPACT3, developed by Lenert and colleagues. This uses an object-orientated approach to facilitate the depiction of health states using written descriptions or multi-media presentations[24] and includes automatic error correction[70]. Ubel and colleagues have also developed a series of Internet-based tools, including the person trade off (PTO)[71], for use in a range of experiments[25–28].

Lenert[72] suggests web based preference elicitation may reduce interviewer bias, although we are not aware of studies which have addressed this using the standard gamble. However, Damschroder et al[71] have compared computer based preference measurement using PTO to face to face interview and found no significant differences in: values obtained; occurrence of non-trading; or measures of logical consistency between the two modes.

Although the Value of Health Panel project shares many of the features of other Internet based preference measurement systems, it is unique in having recruited and maintained a group of members of the public who have expressed preferences on a wide range of health state descriptions. Recruitment was, however, not Internet-based. There are no published accounts of recruitment to preference studies using the Internet, although Ubel et al have reported obtaining a large representative sample of US citizens for one study[26]. Validation of the data obtained from such panels remains important, and logical consistency and procedural invariance are methods which may be applied[73]. Although some work has been carried out in this project[74], the area remains relatively under-studied in general.

The establishment of Internet panels for market research has increased dramatically in the past five years. Harris Interactive advertises a global panel of 1 million members, with 600,000 in the USA[75]. In the UK, YouGov has recruited a panel of 89,000 people through Internet advertising and floated on the Stock Exchange in 2005[76]. However, Internet penetration in the UK is only around 52%, and people who are likely to join Internet panels are more likely to be politically interested and knowledgeable than those less likely to participate[77].

Nevertheless, it seems likely that the upward trend in Internet access will continue, as will access to broadband technology. This presents important opportunities for preference measurement and research with, potentially, advantages over one-to-one interviews. For example, large numbers of people can be involved; alternatives to written descriptions can be used; costs are likely to be less than one to one interviews; automatic checks for illogical responses can be integrated; and various approaches to representing risk (or time) in preference measurement can be explored. In short, the potential for using the Internet in this field, to improve the application of cost utility analyses and address some of the important methodological challenges that exist in preference measurement, is only beginning to be exploited.


NHS R&D Programme; National Institute for Health and Clinical Excellence (NICE); NHS Quality Improvement Scotland (NHSQIS).


  1. Hutton J, Brown R: Use of economic evaluation in decision-making: What needs to change? Value Health 2002, 5: 65–66. doi:10.1046/j.1524-4733.2002.52115.x

  2. Neumann PJ: Why don't Americans use cost-effectiveness analysis? American Journal of Managed Care 2005, 10: 308–312.

  3. Sonnad S, Greenberg D, Rosen A, Neumann P: Diffusion of published cost-utility analyses in the field of health policy and practice. International Journal of Technology Assessment in Health Care 2005, 21: 399–402.

  4. Glennie J, Torrance GW, Baladi J, Berka C, Hubbard E, Menon D, Otten N, Riviera M: The revised Canadian Guidelines for the Economic Evaluation of Pharmaceuticals. Pharmacoeconomics 1999, 15: 459–468. doi:10.2165/00019053-199915050-00004

  5. Weinstein MC, Siegel JE, Gold MR, Kamlet MS, Russell LB: Recommendations of the Panel on Cost-Effectiveness in Health and Medicine: consensus statement. JAMA 1996, 276: 1253–1258. doi:10.1001/jama.276.15.1253

  6. National Institute for Clinical Excellence: Guide to the Methods of Technology Appraisal. London, National Institute for Clinical Excellence; 2003.

  7. Dolan P: Whose Preferences Count? Med Decis Making 1999, 19: 482–486.

  8. Brazier J, Akehurst R, Brennan A, Dolan P, Claxton K, McCabe C, O'Hagan T, Sculpher M, Tsuchyia A: Should patients have a greater role in valuing health states: whose well-being is it anyway? [04/3]. Sheffield, School of Health and Related Research, University of Sheffield. Discussion Paper Series; 2004.

  9. Torrance G, Feeny D, Furlong W, Barr R, Zhang Y, Wang Q: Multiattribute utility functions for a comprehensive health status classification. Medical Care 1996, 34: 702–722. doi:10.1097/00005650-199607000-00004

  10. Gafni A: Willingness to pay as a measure of benefits: relevant questions in the context of public decision making about health care programmes. Medical Care 1991, 29: 1246–1252. doi:10.1097/00005650-199112000-00007

  11. De Wit GA, Busschbach JJ, De Charro FT: Sensitivity and perspective in the valuation of health status: whose values count? Health Econ 2000, 9: 109–126. doi:10.1002/(SICI)1099-1050(200003)9:2<109::AID-HEC503>3.0.CO;2-L

  12. Buckingham K: A note on HYE (healthy years equivalent). Journal of Health Economics 1993, 12: 301–309. doi:10.1016/0167-6296(93)90013-5

  13. Ubel P, Richardson J, Menzel P: Societal value, the person trade-off, and the dilemma of whose values to measure for cost-effectiveness analysis. Health Economics 2000, 9: 127–136. doi:10.1002/(SICI)1099-1050(200003)9:2<127::AID-HEC500>3.0.CO;2-Y

  14. Furlong W, Oldridge N, Perkins A, Feeny D, Torrance GW: Community or Patient Preferences for Cost-Utility Analyses: Does it Matter? International Society for Pharmacoeconomics, ISPOR Conference, Arlington, Virginia 2003.

  15. Stein K, Fry A, Round A, Milne R, Brazier J: What value health? A review of health state values used in early technology assessments for NICE. Applied Health Economics and Policy 2006.

  16. Dolan P: The measurement of health related quality of life for use in resource allocation in health care. In Handbook of Health Economics. Edited by: Culyer A, Newhouse J. London: Elsevier Science; 2002.

  17. Brennan P, Strombom I: Improving health care by understanding patient preferences. Journal of the American Medical Informatics Association 1998, 5: 257–262.

  18. Sumner W, Nease R, Littenberg B: U-titer: a utility assessment tool. Proceedings of the Annual Symposium on Computer Application in Medical Care 1991, 701–705.

  19. Sonnenberg FA: UMaker. New Jersey, Clinical Informatics Research Group, University of Medicine and Dentistry; 1993.

  20. Gonzalez B, Eckman G, et al.: Gambler: a computer workstation for patient utility assessment. Medical Decision Making 1992, 12: 350.

  21. Lenert L, Michelson D, Flowers C, Bergen M: IMPACT: an object-orientated graphical environment for construction of multimedia patient interviewing software. Proceedings of the Annual Symposium on Computer Application in Medical Care 1995, 319–323.

    Google Scholar 

  22. Lenert LA, Sturley A, Watson ME: iMPACT3: Internet-Based Development and Administration of Utility Elicitation Protocols. Med Decis Making 2002, 22: 464–474. 10.1177/0272989X02238296

    Article  CAS  PubMed  Google Scholar 

  23. McFarlane P, Bayoumi A, Pierratos A, Redelmeier D: The quality of life and cost utility of home nocturnal and conventional in-center haemodialysis. Kidney International 2003, 64: 1004–1011. 10.1046/j.1523-1755.2003.00157.x

    Article  PubMed  Google Scholar 

  24. Goldstein MK, Clarke AE, Michelson D, Garber AM, Bergen MR, Lenert LA: Developing and Testing a Multimedia Presentation of a Health-state Description. Med Decis Making 1994, 14: 336–344.

    Article  CAS  PubMed  Google Scholar 

  25. Damschroder L, Zikmund-Fisher B, Kulpa J, Ubel P: Considering adaptation in preference elicitations. Society for Medical Decision Making Annual Conference; San Francisco 2005.

    Google Scholar 

  26. Damschroder L, Muroff J, Smith D, Ubel P: A reversal in the public/patient discrepancy: utility ratings for pain from pain patients are lower than from non-patients. Society for Medical Decision Making Annual Conference; San Francisco 2005.

    Google Scholar 

  27. Damschroder L, Ubel P, Zikmund-Fisher B, Kim S, Johri M: A randomized trial of a web-based deliberation exercise: improving the quality of healthcare allocation preference surveys. Society for Medical Decision Making Annual Conference; San Francisco 2005.

    Google Scholar 

  28. Baron J, Ubel P: Types of inconsistency in health-state utility judgements. Organizational Behaviour and Human Decision Processes 2002, 89: 1100–1118. 10.1016/S0749-5978(02)00019-5

    Article  Google Scholar 

  29. Lenert LA, Goldstein MK, Bergen MR, Garber AM: The Effects of the Content of Health State Descriptions on the Between-Subject Variability in Preferences. California, USA, Stanford University; 2005:1–21.

    Google Scholar 

  30. Lenert LA, Ziegler J, Lee T, Unfred C, Mahmoud R: The Risks of Multimedia Methods: Effects of Actor's Race and Gender on Preferences for Health States. J of the American Informatics Assn 2000, 7: 177–185.

    Article  CAS  Google Scholar 

  31. Marquet RL, Bartelds AI, van Noort SP, Koppeschaar CE, Paget J, Schellevis FG, van der ZJ: Internet-based monitoring of influenza-like illness (ILI) in the general population of the Netherlands during the 2003–2004 influenza season. BMC Public Health 2006, 6: 242. 10.1186/1471-2458-6-242

    Article  PubMed Central  PubMed  Google Scholar 

  32. Hubbard PA, Broome ME, Antia LA: Pain, coping, and disability in adolescents and young adults with cystic fibrosis: a Web-based study. Pediatr Nurs 2005, 31: 82–86.

    PubMed  Google Scholar 

  33. Bowen A, Williams M, Horvath K: Using the internet to recruit rural MSM for HIV risk assessment: sampling issues. AIDS Behav 2004, 8: 311–319. 10.1023/B:AIBE.0000044078.43476.1f

    Article  PubMed Central  PubMed  Google Scholar 

  34. Fernandez MI, Varga LM, Perrino T, Collazo JB, Subiaul F, Rehbein A, Torres H, Castro M, Bowen GS: The Internet as recruitment tool for HIV studies: viable strategy for reaching at-risk Hispanic MSM in Miami? AIDS Care 2004, 16: 953–963. 10.1080/09540120412331292480

    Article  CAS  PubMed  Google Scholar 

  35. Clarke G, Reid E, Eubanks D, O'Connor E, DeBar LL, Kelleher C, Lynch F, Nunley S: Overcoming depression on the Internet (ODIN): a randomized controlled trial of an Internet depression skills intervention program. J Med Internet Res 2002, 4: E14. 10.2196/jmir.4.3.e14

    Article  PubMed Central  PubMed  Google Scholar 

  36. Formica M, Kabbara K, Clark R, McAlindon T: Can clinical trials requiring frequent participant contact be conducted over the Internet? Results from an online randomized controlled trial evaluating a topical ointment for herpes labialis. J Med Internet Res 2004, 6: e6. 10.2196/jmir.6.1.e6

    Article  PubMed Central  PubMed  Google Scholar 

  37. Rhodes SD, Bowie DA, Hergenrather KC: Collecting behavioural data using the world wide web: considerations for researchers. J Epidemiol Community Health 2003, 57: 68–73. 10.1136/jech.57.1.68

    Article  CAS  PubMed Central  PubMed  Google Scholar 

  38. Koo M, Skinner H: Challenges of internet recruitment: a case study with disappointing results. J Med Internet Res 2005, 7: e6. 10.2196/jmir.7.1.e6

    Article  PubMed Central  PubMed  Google Scholar 

  39. Formica M, Kabbara K, Clark R, McAlindon T: Can clinical trials requiring frequent participant contact be conducted over the Internet? Results from an online randomized controlled trial evaluating a topical ointment for herpes labialis. J Med Internet Res 2004, 6: e6. 10.2196/jmir.6.1.e6

    Article  PubMed Central  PubMed  Google Scholar 

  40. Clarke G, Reid E, Eubanks D, O'Connor E, DeBar LL, Kelleher C, Lynch F, Nunley S: Overcoming depression on the Internet (ODIN): a randomized controlled trial of an Internet depression skills intervention program. J Med Internet Res 2002, 4: E14. 10.2196/jmir.4.3.e14

    Article  PubMed Central  PubMed  Google Scholar 

  41. Scholle SH, Peele PB, Kelleher KJ, Frank E, Jansen-McWilliams L, Kupfer D: Effect of different recruitment sources on the composition of a bipolar disorder case registry. Soc Psychiatry Psychiatr Epidemiol 2000, 35: 220–227. 10.1007/s001270050231

    Article  CAS  PubMed  Google Scholar 

  42. Etter JF, Perneger TV: A comparison of cigarette smokers recruited through the Internet or by mail. Int J Epidemiol 2001, 30: 521–525. 10.1093/ije/30.3.521

    Article  CAS  PubMed  Google Scholar 

  43. Im EO, Chee W: Methodological issues in the recruitment of ethnic minority subjects to research via the Internet: a discussion paper. Int J Nurs Stud 2005, 42: 923–929. 10.1016/j.ijnurstu.2005.01.002

    Article  PubMed  Google Scholar 

  44. Ross MW, Mansson SA, Daneback K, Cooper A, Tikkanen R: Biases in internet sexual health samples: comparison of an internet sexuality survey and a national sexual health survey in Sweden. Soc Sci Med 2005, 61: 245–252. 10.1016/j.socscimed.2005.01.019

    Article  PubMed  Google Scholar 

  45. Index Multiple Deprivation 2000 []

  46. Schunemann H, Stahl H, Austin P, Akl E, Armstrong D, Guyatt G: A comparison of narrative and table formats for presenting hypothetical health states to patients with gastrointestinal or pulmonary disease. Medical Decision Making 2004, 24: 53–60. 10.1177/0272989X03261566

    Article  PubMed  Google Scholar 

  47. Dolan P, Gudex C: Time Preference, Duration and Health State Valuations. Health Economics 1995, 4: 289–299.

    Article  CAS  PubMed  Google Scholar 

  48. von Neumann J, Morganstern O: THeory of Games and Economic Behaviour. 2nd edition. Princeton: Princeton University Press; 1947.

    Google Scholar 

  49. Lenert LA, Cher DJ, Goldstein MK, Bergen MR, Garber A: The Effect of Search Procedures on Utility Elicitations. Med Decis Making 1998, 18: 76–83.

    Article  CAS  PubMed  Google Scholar 

  50. Scottish Index of Multiple Deprivation: Summary Technical Report. Edinburgh, Scottish Executive 2004.

    Google Scholar 

  51. Noble M, Wright G, Dibben C, Smith GAN, McLennan D, Anttila C, Barnes H, Mokhtar C, Noble S, Avenell D, Gardner J, Covizzi I, Lloyd M: Indices of Deprivation 2004: Report to the Office of the Deputy Prime Minister. London, Neighbourhood Renewal Unit; 2004.

    Google Scholar 

  52. Reips UD: Standards for Internet-based experimenting. Exp Psychol 2002, 49: 243–256. 10.1026//1618-3169.49.4.243

    Article  PubMed  Google Scholar 

  53. Bartlett C, Doyal L, Ebrahim S, DAvey P, Bachmann M, Egger M, Dieppe P: The causes and effects of socio-demographic exclusions from clinical trials. Health Technology Assessment 2005., 9:

    Google Scholar 

  54. Dolan P, Roberts J: To what extent can we explain time trade-off values from other information about respondents? Soc Sci Med 2002, 54: 919–929. 10.1016/S0277-9536(01)00066-1

    Article  PubMed  Google Scholar 

  55. Dolan P: Effect of age on health state valuations. J Health Serv Res Policy 2000, 5: 17–21.

    CAS  PubMed  Google Scholar 

  56. Ashby J, Hanlon M, Buxton MJ: The time trade-off technique: how do the valuations of breast cancer patients compare to those of other groups? Quality of Life Research 1994, 3: 257–265. 10.1007/BF00434899

    Article  CAS  PubMed  Google Scholar 

  57. Badia X, Roset M, Herdman M, Kind P: A comparison of United Kingdom and Spanish general population time trade-off values for EQ-5D health states. Medical Decision Making 2001, 21: 7–16.

    Article  CAS  PubMed  Google Scholar 

  58. Sims T, Garber A, Goldstein M: Does education really matter? Examining the role of education in health preferences among older adults. Society for Medical Decision Making, Annual Meeting; San Francisco 2005.

    Google Scholar 

  59. Cykert S, Joines JD, Kissling G, Hansen CJ: Racial differences in patients' perceptions of debilitated health states. J Gen Intern Med 1999, 14: 217–222. 10.1046/j.1525-1497.1999.00320.x

    Article  CAS  PubMed Central  PubMed  Google Scholar 

  60. Dolan P: The Effect of Experience of Illness on Health State Valuations. J Clin Epidemiol 1996, 49: 551–564. 10.1016/0895-4356(95)00532-3

    Article  CAS  PubMed  Google Scholar 

  61. King JT, Tsevat J, Roberts MS: Positive Association between Current Health and Health Values for Hypothetical Disease States. Medical Decision Making 2004, 24: 367–378. 10.1177/0272989X04267692

    Article  PubMed  Google Scholar 

  62. Lenert L, Treadwell JR, Schwartz C: Associations Between Health Status and Utilities Implications for Policy – Impact of Illness. Med Care 1999, 37: 479–489. 10.1097/00005650-199905000-00007

    Article  CAS  PubMed  Google Scholar 

  63. Rosen A, Tsai J, DOwns S: Variations in risk attitude across race, gender and education. Medical Decision Making 2003, 23: 511–517. 10.1177/0272989X03258431

    Article  PubMed  Google Scholar 

  64. Woloshin S, Schwartz L, Moncur M, Gabriel S, Tosteson A: Assessing values for health: Numeracy matters. Med Decis Making 2001, 21: 382–390. 10.1177/02729890122062686

    Article  CAS  PubMed  Google Scholar 

  65. Gerson L, Ullah N, Hastie T, Triadafilopoulos G, Goldstein M: Patient derived health state utilities for gastroesophageal reflux disease. American Journal of Gastroenterology 2005, 100: 524–533. 10.1111/j.1572-0241.2005.40588.x

    Article  PubMed  Google Scholar 

  66. Munakata J, Woolcott J, Anis A, Sculpher M, Yu W, Sanders G, et al.: Design of a prospective economic evaluation for a tri-national clinical trial in HIV patients (OPTIMA). Society for Medical Decision Making Annual Conference; San Francisco 2005.

    Google Scholar 

  67. Tosteson A, Kneeland T, Nease R, Sumner W: Automated Current Health Time-Trade-Off Assessments in Women's Health. Value in Health 2002, 5: 98–105. 10.1046/j.1524-4733.2002.52102.x

    Article  PubMed  Google Scholar 

  68. Utility Assessment []

  69. Lenert LA, Sturley AE: Acceptability of Computerized Visual Analog Scale, Time Trade-off and Standard Gamble Rating Methods in Patients and the Public. AMI Association Proceedings 2001.

    Google Scholar 

  70. Lenert L, Sturley A, Rupnow M: Toward improved methods for measurement of utility: automated repair of errors in elicitation. Medical Decision Making 2003, 23: 67–75. 10.1177/0272989X02239649

    Article  PubMed  Google Scholar 

  71. Damschroder L, Baron J, Hershey J, Asch D, Jepson C, Ubel P: The validity of person tradeoff measurements: randomized trial of computer elicitation versus face-to-face interview. Medical Decision Making 2004, 24: 170–180. 10.1177/0272989X04263160

    Article  PubMed  Google Scholar 

  72. Lenert L: Web-based Assessment of Patients' Preferences. California, USA, University of California, San Diego; 2006.

    Google Scholar 

  73. Lenert L: Validity and interpretation of preference-based measures of quality of life. 2006.

    Google Scholar 

  74. Stein K, Ratcliffe J, Milne R, Round A, Brazier J: Construct validity of utility data obtained from an internet panel of members of the public. Society for Medical Decision Making Annual Meeting Annual Meeting; Boston 2006.

    Google Scholar 

  75. Harris Interactive []

  76. YouGov: Polling for a Profit []

  77. Baker K, Curtice J, Sparrow N: Internet Poll Trial: Research Report. London, ICM Research; 2003.

    Google Scholar 

Download references


Acknowledgements

We are extremely grateful to the following for their help:

The members of the Value of Health Panel for their hard work and patience;

The patients and clinicians who provided help in the development of health state descriptions;

Joanne Perry for her role as project administrator throughout the project;

Dan Fall (University of Sheffield) and Stephen Elliott (Llama Digital) for website development;

Sam Ballani and Pam Royle for providing IMD and SIMD data.

Author information


Corresponding author

Correspondence to Ken Stein.

Additional information

Authors' contributions

KS, RM, JB, JR and AR conceived the study and participated in its design.

KS coordinated the study, participated in the statistical analyses and drafted the manuscript.

MD and TC participated in recruitment, health state development and statistical analyses.

All authors read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Stein, K., Dyer, M., Crabb, T. et al. A pilot Internet "Value of Health" Panel: recruitment, participation and compliance. Health Qual Life Outcomes 4, 90 (2006).
