
Absence of social desirability bias in the evaluation of chronic disease self-management interventions



Bias due to social desirability has long been of concern to evaluators relying on self-report data. It is conceivable that health program evaluation is particularly susceptible to social desirability bias as individuals may be inclined to present themselves or certain health behaviors in a more positive light and/or appease the course leader. Thus, the influence of social desirability bias on self-report outcomes was explored in the present study.


Data were collected from 331 participants of group-based chronic disease self-management interventions using the highly robust eight-scale Health Education Impact Questionnaire (heiQ) and the 13-item short form Marlowe-Crowne Social Desirability Scale (MC-C). Most self-management courses were run by community-based organizations across Australia between February 2005 and December 2006; in these courses, groups of 6 to 12 individuals had the opportunity to develop considerable rapport with course leaders and each other over about six weeks. Pre-test data were collected on the first day of courses, while post-test and social desirability scores were assessed at the end of courses. A model of partial mediation was developed within the framework of structural equation modeling, with social desirability as the mediating variable between pre-test and post-test.


The ‘Defensiveness’ factor of the MC-C showed a clear association with heiQ pre-test data, a prerequisite for investigating mediation; however, when the eight full pre-test/post-test models were investigated, ‘Defensiveness’ was associated with only one heiQ scale. This effect was small, explaining 8% of the variance in the model. No other mediational effects through social desirability were observed.


The overall lack of association of social desirability with heiQ outcomes was surprising as it had been expected that it would explain at least some of the variance observed between pre-test and post-test. With the assumption that the MC-C captures the propensity for an individual to provide socially desirable answers, this study concludes that change scores in chronic disease self-management program evaluation are not biased by social desirability.


There has long been a concern that considerable bias in survey research can stem from respondents providing answers that are partly determined by social influences, in particular social desirability [1]. While the influence of social desirability bias has been found to vary according to the survey method, telephone and personal interviews have been found to be particularly prone to socially desirable responding [1–3]. Hence, social desirability bias may be a major threat to the validity of self-report outcomes data. Although there are several elements to its conceptualization, social desirability bias can generally be described as a response style exhibited by respondents who endorse items that represent traits and/or behaviors that they think stand for a socially acceptable or endorsed position [4]. Further, two dimensions can be differentiated: 1) the need for social approval, i.e. the creation of a positive impression of oneself to receive approval from others (impression management), and 2) self-deception or defensiveness, i.e. the avoidance of disapproval by denying socially undesirable traits and/or behaviors [5–8]. Social desirability has been found to be related to demographic variables; it is more likely to be identified in older women [9, 10], women of lower socio-economic status [10, 11], and older respondents [12, 13]. Finally, social desirability has been found to be strongly related to the positive rating of the personal qualities of self, family and friends but not of ‘people in general’, the so-called ‘better than average’ effect [14].

While social desirability bias has been a general concern in evaluations based on self-reports [3], it may play a particularly important role in chronic disease health education interventions, in particular those that are offered to groups of people with chronic conditions who were initially unknown to each other. First, it is likely that individuals would be inclined to present themselves or certain health behaviors in a more positive light. This phenomenon would generally apply to any health-related outcomes assessment. Second, in the specific context of group-based interventions, it is intended that participants and course leaders build strong rapport during the intervention, which may last several weeks or months. As a result, at the end of courses, participants may be inclined to provide socially desirable answers to endorse course leaders regardless of whether they truly benefited from the intervention. That is, participants may be aware that they are indirectly evaluating the performance of both the course leader and the organization and therefore provide socially desirable responses to appease leaders rather than showing how they really felt after graduating from the self-management course. Finally, in this setting, participants often fill out questionnaires in the presence of leaders and their peers, which again may trigger socially desirable responses as they may feel pressured to endorse the leaders’ performance. Hence, social desirability bias may have a particular influence on post-test scores and thus apparent change scores.

To measure the influence of potential socially desirable responses, several scales have been developed [5, 15–18]. Of these, the Marlowe-Crowne (MC) Social Desirability scale [16] is one of the most widely used indices [19]. It is commonly described as a measure of a person’s need for approval. Although the original authors defined the concept of social desirability in terms of two dimensions, i.e. need for approval and avoidance of disapproval [6, 20], they conceptualized the MC scale as a measure of a single dimension [6, 21]. However, subsequent studies found little support for this hypothesis, with results ranging from two-factor [5, 22] to multi-factor solutions [19, 21, 23–26]. While such findings cast some doubt on the measurement properties of the MC scale, these studies should be treated with caution. Only two studies applied rigorous psychometric statistical techniques to investigate the properties of the MC scale [19, 21]. Moreover, the generalizability of studies is questionable as almost all samples consisted of students [19, 24, 25, 27–29].

The original MC scale consists of 33 items. Therefore, for some respondents it may be a burden to complete, particularly if the scale is among a panel of scales. As a consequence, short forms have been developed, with Reynolds’ (1982) and Strahan and Gerbasi’s (1972) short forms being most frequently applied [19, 21]. Commentaries on the usefulness of the short forms vary substantially. While some suggest that all are unsatisfactory [19, 24], others show that they are improvements over the original [25, 26, 28]. However, these studies should also be treated with caution. Apart from one study [19] none applied rigorous statistical methods. Further, factor analyses on the short forms were generally aimed at confirming/rejecting the one-factor hypothesis, whereas none tested the scales for a potential two- or multi-factor solution. Of all short forms, Reynolds’ MC-C [30] has been explored extensively [31] and is one of the most frequently used short forms [32–34]. It has generally been described as a reliable alternative to the full scale [30, 31, 35] with acceptable internal consistency [24, 25, 30, 31, 34].

In summary, social desirability bias has received frequent attention in the literature [20, 36]. However, in view of its potential threat to the validity of scores derived from participants of health interventions, it is surprising that this bias has rarely been explored in contexts where it is likely to be important. Only two out of more than 100 controlled trials of chronic disease self-management courses considered social desirability as a potential covariate [37]. The aim of this study was to explore the influence of social desirability bias on change scores derived from data collected from groups of participants taking part in chronic disease self-management courses.


Courses and participants

Data were collected from 331 participants of chronic disease self-management courses implemented mainly by community-based organizations across Australia between February 2005 and December 2006. As shown in Table  1, three quarters of respondents were female (74.2%), the mean age was 62.2 years (range 19 to 90 years), and the majority reported being affected by osteoarthritis (45.5%), depression (29.9%), diabetes (22.1%), and asthma (21.5%). The predominant course type (71.2%) was a generic intervention [38], while the remaining disease-specific interventions were mostly aimed at people with arthritis.

Table 1 Demographic characteristics of respondents

Participant recruitment was undertaken at a course level where leaders were recruited through established networks and snowball recruitment as previously described [39, 40]. Pre-test data were provided at the start of courses (T1), while post-test and social desirability data were collected at the end of courses (T2), on average six weeks after pre-test. The 13-item short form MC-C was applied [30]. Questions were answered using a ‘true-false’ response scale in the same manner as in the original scale [16]. The Health Education Impact Questionnaire (heiQ), a widely used measure of impacts of self-management interventions, was used to collect patient-reported outcomes data [41, 42]. The version of the heiQ that was applied comprised 38 items, each uniquely associated with one of the following eight factors: Positive and active engagement in life, Health directed activities, Skill and technique acquisition, Constructive attitudes and approaches, Self-monitoring and insight, Health service navigation, Social integration and support, and Emotional distress. All items were measured on a 6-point Likert response scale ranging from “strongly disagree” to “strongly agree”.

Statistical model

As described in the introduction, previous research on the validity of the MC scale lacked both statistical sophistication and samples including people with chronic disease [5, 7, 19, 22, 24]. Consequently, it was deemed necessary to determine the psychometric properties of the MC-C before embarking upon the analyses. This was approached in an exploratory way. Data were first analyzed in CEFA [43], a computer program for unrestricted factor analyses [44]. As the MC-C was assumed to measure one underlying construct, i.e. social desirability, multi-factor structures were analyzed with oblique rotation to allow for correlations between factors. For this GEOMIN was used [44, 45]. Due to the scaling of the MC-C, the input matrix was based on polychoric correlations and the ordinary least squares method was used for parameter estimation [43]. Once the factor structure was determined, it was again tested in LISREL version 8.72 [46], using Robust Maximum Likelihood (RML), to both confirm the model and estimate model parameters [47].

For the evaluation of the model resulting from the confirmatory factor analysis, a combination of qualitatively different fit statistics was chosen to allow a comprehensive assessment of model fit [48–50]. First, the χ2 statistic [51] was used. It compares the model covariance matrix with the sample covariance matrix; a non-significant χ2 indicates that the two matrices do not differ significantly, i.e. that the model fits well [52]. Second, the root mean square error of approximation (RMSEA) was chosen, with values of < 0.05 indicating close fit and those of < 0.08 indicating acceptable fit [53]. Third, for the standardized root mean square residual (SRMR), a value of up to 0.08 was considered acceptable. Finally, the comparative fit index (CFI) was selected, with a cut-off value of 0.95 or above [54, 55].
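The RMSEA and CFI have standard closed-form definitions based on the model χ2 and degrees of freedom; a minimal sketch of these calculations (the χ2 values below are hypothetical, not those of the present study):

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation (Steiger-Lind form)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2, df, chi2_null, df_null):
    """Comparative fit index: improvement over the independence (null) model."""
    d_model = max(chi2 - df, 0.0)
    d_null = max(chi2_null - df_null, 0.0)
    return 1.0 - d_model / d_null

# Hypothetical chi-square values with n = 318 (the study's sample size)
print(round(rmsea(80.0, 64, 318), 3))      # 0.028 -- below the 0.05 'close fit' cut-off
print(round(cfi(80.0, 64, 900.0, 78), 3))  # 0.981 -- above the 0.95 cut-off
```

Both indices penalize the discrepancy χ2 − df rather than raw χ2, which is why a model can fit "closely" even with a significant chi-square in large samples.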

In a second step, a model of partial mediation was developed in the framework of structural equation modeling (SEM) again using LISREL [46]. Social desirability was included as a mediating variable between predictor (pre-test) and outcome (post-test) following Kenny and colleagues [56–58]. To establish whether social desirability was a mediator between heiQ pre-test and post-test data, the following conditions had to be established [56, 57]:

  1. Mediator and predictor must correlate, i.e. the predictor must affect the mediating variable for the latter to be a mediator between predictor and outcome. This was tested by regressing mediator (MC-C) on predictor (heiQ pre-test).

  2. The predictor must affect the outcome. This was tested by regressing outcome (heiQ post-test) on predictor (heiQ pre-test).

  3. The mediator must affect the outcome, i.e. it had to be established that the regression of outcome on mediator was significant. In this model, MC-C was included as a second endogenous variable, i.e. both heiQ post-test and MC-C were regressed on heiQ pre-test.

  4. Once conditions (1) to (3) were met, the statistical significance of the mediational effect was tested, i.e. the statistical significance of the product of the paths from a) predictor to mediator, and b) mediator to outcome [59–61].

  5. Finally, while steps (1) to (4) are necessary and sufficient conditions to establish mediation, the mediational effect must be interpreted in the overall context of the model [61]. Thus, the proportion of the total effect being mediated was assessed.
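Under the simplifying assumption of single-indicator variables, the five steps above reduce to a series of ordinary least squares regressions; a sketch with simulated stand-in data (all variable names and effect sizes are hypothetical, and this is not the LISREL latent-variable model):

```python
import numpy as np

# Simulated stand-in data for illustration only
rng = np.random.default_rng(0)
n = 318
pre = rng.normal(size=n)                                      # heiQ pre-test (predictor)
mcc = 0.3 * pre + rng.normal(scale=0.5, size=n)               # MC-C (candidate mediator)
post = 0.6 * pre + 0.2 * mcc + rng.normal(scale=0.5, size=n)  # heiQ post-test (outcome)

def ols(y, *xs):
    """OLS coefficients of y on the given predictors (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols(mcc, pre)[0]               # step 1: predictor -> mediator
c = ols(post, pre)[0]              # step 2: total effect of predictor on outcome
c_prime, b = ols(post, pre, mcc)   # step 3: direct effect c' and mediator -> outcome b
indirect = a * b                   # step 4: mediated (indirect) effect
proportion = indirect / c          # step 5: share of the total effect mediated

# In linear models the total effect decomposes exactly: c = c' + a*b
assert abs(c - (c_prime + indirect)) < 1e-10
```

The final assertion reflects a general property of linear mediation models: the total effect always equals the direct effect plus the indirect effect.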

An example of the model using one hypothetical heiQ scale is visualized in Figure  1 where both MC-C and heiQ post-test are regressed on heiQ pre-test, and heiQ post-test is regressed on the MC-C.

Figure 1

Structural equation model, following LISREL notation, with the short form Marlowe-Crowne social desirability scale MC-C as a partial mediating variable.

Before analyzing heiQ and MC-C data, some preparatory steps were undertaken. First, each case with more than 50% missing items was deleted. Second, due to the alternate keying of the MC-C items, it could easily be detected if participants exhibited an acquiescent response style [62]. Consequently, respondents who had provided either only ‘true’ or only ‘false’ answers were discarded. It was assumed that they had filled out the MC-C regardless of item content. Once this preparation was finalized, all remaining missing values were replaced using the EM Algorithm [63], leading to a final sample size of n = 318.
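The two screening rules just described can be sketched as follows, assuming MC-C answers coded 1 = ‘true’, 0 = ‘false’, and NaN = missing (the EM imputation step itself is omitted):

```python
import numpy as np

def keep_mask(mcc):
    """mcc: n x 13 matrix of MC-C answers (1 = 'true', 0 = 'false', NaN = missing).
    Returns a boolean mask of respondents to retain for analysis."""
    mcc = np.asarray(mcc, dtype=float)
    answered = ~np.isnan(mcc)
    # Rule 1: discard cases with more than 50% missing items
    too_missing = answered.mean(axis=1) < 0.5
    # Rule 2: discard uniform responders (only 'true' or only 'false' answers),
    # assumed to have answered regardless of item content
    n_true = np.nansum(mcc, axis=1)
    n_answered = answered.sum(axis=1)
    uniform = (n_true == n_answered) | (n_true == 0)
    return ~(too_missing | uniform)

# Toy example with 4 items: mixed answers kept, uniform and mostly-missing dropped
nan = float("nan")
print(keep_mask([[1, 0, 1, 0], [1, 1, 1, 1], [0, 0, 0, 0], [nan, nan, nan, 0]]))
# [ True False False False]
```

Note that the uniform-responder rule only works because the MC-C is alternately keyed: a genuine responder should not endorse every item in the same direction.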


Exploratory factor analyses of the MC-C using CEFA suggested that a one-factor solution did not fit the data well. With two eigenvalues clearly above one (3.4 and 1.9, respectively) and two further eigenvalues at 1.1, factor solutions ranging between two factors and four factors were explored. While fit statistics improved in all multi-factor solutions, models beyond two factors were not superior to the two-factor solution. Therefore, a two-factor solution – labeled SD1 ‘defensiveness’ and SD2 ‘self-presentation’ – appeared most suitable for the MC-C, with a moderate correlation between the two factors (0.48). As shown in Table  2, this solution was confirmed in LISREL. While fit statistics were excellent (non-significant Satorra-Bentler chi-square [64, 65], RMSEA = 0.023 [90% CI 0.000–0.043], CFI = 0.99, and SRMR = 0.079), some small factor loadings were obtained, ranging from 0.33 to 0.76. Reliability was also relatively low, with coefficient alpha at 0.59 for SD1 and 0.56 for SD2. As the validation of the MC-C was of an exploratory nature [54], these values were deemed acceptable for the present study.
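Coefficient alpha, as reported here for SD1 and SD2, follows directly from the item variances and the variance of the scale total; a minimal sketch (illustrative data, not the study's):

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an n x k matrix of item scores on a single factor."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()  # sum of the k item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the scale total
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)

# Perfectly correlated items yield the maximum, alpha = 1.0
print(cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]))  # 1.0
```

Values around 0.56–0.59, as found for the two MC-C factors, indicate that much of the total-score variance is item-specific rather than shared, consistent with the small factor loadings reported.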

Table 2 Confirmatory factor analysis of the short form Marlowe-Crowne social desirability scale MC-C (n = 318)

Social desirability in heiQ data

The first step of the 5-step procedure suggested that ‘defensiveness’ correlated significantly with pre-test data across all heiQ scales. Correlations ranged from 0.24 to 0.39, equivalent to a small to medium effect [59, 66]. In contrast, none of the heiQ scales indicated an association between ‘self-presentation’ and pre-test data (Table  3). Thus, only ‘defensiveness’ was explored as a potential partial mediating variable in heiQ data, while ‘self-presentation’ could be ruled out as a mediator.

Table 3 Covariance between ‘defensiveness’, ‘self-presentation’, and heiQ pre-test data

In Step 2 it was found that all direct paths from pre-test to post-test were significant. While subscale Social integration and support showed the strongest association between the two scores, all heiQ subscales showed substantial paths from predictor to outcome (Table  4).

Table 4 Regression of heiQ post-test on heiQ pre-test data

Finally, Table  5 presents the associations between pre-test and post-test once SD1 was included in the models. Again, paths between pre-test and ‘defensiveness’ were significant. Once pre-test data were controlled for, Emotional distress was the only subscale that showed a significant association between ‘defensiveness’ and heiQ post-test data.

Table 5 Regression of ‘defensiveness’ and heiQ post-test data on heiQ pre-test data, and regression of heiQ post-test data on ‘defensiveness’

As ‘defensiveness’ was found to be associated with Emotional distress, steps 4 and 5 were performed on this heiQ scale only. Once ‘defensiveness’ was included in the model, the path between pre-test and post-test decreased by 0.062, a significant effect, as it was more than twice its standard error [52], i.e. SE = √(0.191² × 0.099² + 0.322² × 0.056²) = 0.026. The magnitude of the effect, however, was small, as it contributed only 8.2% of the total variation in change scores.
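The significance test above uses a first-order (delta-method) standard error for the product of two path coefficients; the reported arithmetic can be checked directly (the assignment of each estimate/SE pair to a specific path is inferred from the expression, not restated in the text):

```python
import math

# Path estimates and standard errors as reported for the Emotional distress model
se_mediated = math.sqrt(0.191**2 * 0.099**2 + 0.322**2 * 0.056**2)
print(round(se_mediated, 3))    # 0.026
print(0.062 > 2 * se_mediated)  # True: the mediated effect exceeds twice its SE
```

Since 2 × 0.026 ≈ 0.052 and the mediated effect was 0.062, the effect clears the two-standard-error criterion, though not by a wide margin.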


This study explored the potential mediating effect of social desirability in the measurement of outcomes of chronic disease self-management courses. For this, we used rigorous statistical techniques – including exploratory and confirmatory factor analysis of the MC-C as well as a comprehensive 5-step model – to explore both direct and mediating effects on key outcomes. Surprisingly, while we had expected clear evidence of bias in estimates of change through socially desirable responding, virtually no social desirability bias was found. When analyzing social desirability bias as a potential mediating variable between heiQ pre-test and post-test data, only the ‘defensiveness’ factor but not the ‘self-presentation’ factor of the MC-C showed an association with pre-test data, a prerequisite for investigating mediation. The notion of ‘defense’ and ‘self-protection’ was introduced as one critical aspect of the approval motive [6]. Subsequent research, however, suggested that subjects’ motivations to present themselves in a socially desirable way were linked more strongly to ‘defensiveness’ than to ‘self-presentation’ [7, 67], which may explain our findings, i.e. the lack of association of pre-test data with ‘self-presentation’.

Despite the significant association of ‘defensiveness’ with all pre-tests, it exerted little influence on heiQ post-test data once pre-test data were controlled for. Only one heiQ scale (Emotional distress) showed that ‘defensiveness’ operated as a true, albeit minor, mediator. Therefore, the influence of social desirability bias in heiQ data can largely be ruled out. This finding is contrary to our expectations. First, the specific context of group-based chronic disease self-management interventions, potential rapport among participants and between participants and course leader(s), and provision of data in the presence of course leaders are factors that may be conducive to exhibiting a socially desirable response style. Second, social desirability has been found to be related to a range of demographic variables. Among others, older women [9, 10], women of lower socio-economic status [10, 11], and older respondents [12, 13] have been found to be most prone to socially desirable responding. While we did not have socio-economic data, the remaining characteristics largely fit our sample, providing an additional argument for expecting social desirability in our study.

There are several possible explanations for our findings, i.e. the lack of social desirability bias in heiQ data. First, all heiQ items have been written in a way that discourages response styles [41]. That is, even people who are usually prone to socially desirable responding may have been discouraged from doing so by the content and structure of heiQ items. The heiQ was developed using grounded approaches, including the use of concepts and wording that were directly derived from patients. Second, the short form MC-C scale was used to explore a potential effect of social desirability bias. Although there is sufficient support in the literature that the MC-C is a valid alternative to the full MC scale, and our re-validation supported a two-factor solution with excellent fit statistics, it is possible that the analyses were hampered by a suboptimal performance of this shortened measure. Despite excellent fit indices in LISREL, low reliability and some small factor loadings may have limited the power of the analyses to detect mediational effects of social desirability.

In this study we applied a novel approach to testing the influence of social desirability bias in the context of chronic disease self-management programs. Apart from providing a detailed re-validation of the MC-C [30], with both exploratory and confirmatory analyses, a sophisticated model of partial mediation was developed that should have detected an association of social desirability if there had been any. However, it cannot be ruled out that the MC-C scale did not perform sufficiently well, while a potential co-existence of equivalent models also needs to be acknowledged [68, 69]. For example, it would have been plausible to define ‘defensiveness’ as a predictor of both pre-test and post-test, or to define a model of moderated mediation [57, 59], with variables such as age, gender, or education operating as moderating variables. It is possible that there was a mediating effect of socially desirable responding in older participants but not in their younger counterparts. The sample size of the dataset, however, did not allow for such modeling. Further, it is possible that social desirability moderated – rather than mediated – the effect between pre-test and post-test. However, as ‘social desirability’ was defined as a response style that was hypothesized to improve the prediction of post-test levels – i.e. the variable ‘social desirability’ was defined as part of the causal chain [56, 57] – the current model definition was assumed to be most appropriate to test for socially desirable responding. In view of our specific research questions, the present model is a logical and theoretically sound approach [61]. That is, the path between pre-test and post-test was understood as the primary path in the model, and social desirability was defined as a response style that potentially partially mediated the relationship between heiQ pre-test and post-test data.


The analyses of this study also provided support for the measurement qualities of the heiQ. That is, data derived from this questionnaire appear robust against bias through socially desirable responding. Based on the present research, the use of the heiQ within the traditional method of assessing change (post-test minus pre-test) appears immune to potential confounding effects through social desirability. However, further research is necessary to ascertain whether this bias is present at the subject level. To advance the field, a combination of qualitative and quantitative approaches at group level and individual level is needed, and questionnaires other than Reynolds’ short form should be used to further explore whether social desirability bias exists in the evaluation of chronic disease self-management programs. With the assumption that Reynolds’ short form of the Marlowe-Crowne Social Desirability scale captures the propensity for individuals to provide socially desirable answers, change scores in patient education program evaluation are not biased by social desirability.

Ethical adherence

The study was approved by the Human Research Ethics Committee of the University of Melbourne.



Abbreviations

heiQ: Health Education Impact Questionnaire

MC: Marlowe-Crowne Social Desirability scale

MC-C: 13-item short form Marlowe-Crowne Social Desirability scale

RML: Robust maximum likelihood

SEM: Structural equation modeling


References

1. DeMaio TJ: Social desirability and survey measurement: a review. In Surveying subjective phenomena. Volume 2. Edited by: Turner C, Martin E. New York: Russell Sage; 1984:257–282.

2. Holbrook AL, Green MC, Krosnick JA: Telephone versus face-to-face interviewing of national probability samples with long questionnaires: comparisons of respondent satisficing and social desirability response bias. Public Opin Quart 2003, 67: 79–125. 10.1086/346010

3. Schwarz N, Oyserman D: Asking questions about behavior: cognition, communication and questionnaire construction. Am J Eval 2001, 22: 127–160.

4. Nunnally JC, Bernstein IH: Psychometric theory. 3rd edition. New York: McGraw-Hill; 1994.

5. Paulhus DL: Two-component models of socially desirable responding. J Pers Soc Psychol 1984, 46: 598–609.

6. Crowne DP, Marlowe D: The approval motive: studies in evaluative dependence. New York, London, Sydney: John Wiley & Sons; 1964.

7. Millham J: Two components of need for approval score and their relationship to cheating following success and failure. J Res Pers 1974, 8: 378–392. 10.1016/0092-6566(74)90028-2

8. Moorman RH, Podsakoff PM: A meta-analytic review and empirical test of the potential confounding effects of social desirability response sets in organizational behavior research. J Occup Organ Psych 1992, 65: 131–149. 10.1111/j.2044-8325.1992.tb00490.x

9. Ray JJ: Lie scales and the elderly. Pers Indiv Differ 1988, 9: 417–418. 10.1016/0191-8869(88)90106-7

10. Visser AP, Breemhaar B, Kleijnen JGVM: Social desirability and program evaluation in health care. Impact Assessment Bulletin 1989, 7: 99–112. 10.1080/07349165.1989.9726015

11. Kalliopuska M: Social desirability related to social class among adults. Psychol Rep 1992, 70: 808–810. 10.2466/pr0.1992.70.3.808

12. Deshields T, Tait R, Gfeller J, Chibnall J: Relationship between social desirability and self-report in chronic pain patients. Clin J Pain 1995, 11: 189–193. 10.1097/00002508-199509000-00005

13. Komarahadi FL, Maurischat C, Harter M, Bengel J: Zusammenhänge von Depressivität und Ängstlichkeit mit sozialer Erwünschtheit bei chronischen Schmerzpatienten [Associations of depression and anxiety with social desirability in chronic pain patients]. Der Schmerz 2004, 18: 38–44. 10.1007/s00482-003-0282-2

14. Pedregon CA, Farley RL, Davis A, Wood JM, Clark RD: Social desirability, personality questionnaires, and the “better than average” effect. Pers Indiv Differ 2012, 52: 213–217. 10.1016/j.paid.2011.10.022

15. Edwards AL: The social desirability variable in personality assessment and research. New York: Holt, Rinehart and Winston; 1957.

16. Crowne DP, Marlowe D: A new scale of social desirability independent of psychopathology. J Consult Psychol 1960, 24: 349–354.

17. Edwards AL, Walsh JA: Response sets in standard and experimental personality scales. Am Educ Res J 1964, 1: 52–61. 10.3102/00028312001001052

18. Hays RD, Hayashi T, Stewart AL: A five-item measure of socially desirable response set. Educ Psychol Meas 1989, 49: 629–636. 10.1177/001316448904900315

19. Barger S: The Marlowe-Crowne affair: short forms, psychometric structure, and social desirability. J Pers Assess 2002, 79: 286–305. 10.1207/S15327752JPA7902_11

20. Paulhus DL: Measurement and control of response bias. In Measures of personality and social psychological attitudes. Edited by: Robinson J, Shaver P, Wrightsman L. New York: Academic Press; 1991:17–59.

21. Leite WL, Beretvas SN: Validation of scores on the Marlowe-Crowne Social Desirability Scale and the Balanced Inventory of Desirable Responding. Educ Psychol Meas 2005, 65: 140–154. 10.1177/0013164404267285

22. Ramanaiah NV, Schill T, Leung LS: A test of the hypothesis about the two-dimensional nature of the Marlowe-Crowne social desirability scale. J Res Pers 1977, 11: 251–259. 10.1016/0092-6566(77)90022-8

23. Crino MD, Svoboda M, Rubenfeld S, White MC: Data on the Marlowe-Crowne and Edwards Social Desirability scales. Psychol Rep 1983, 53: 963–968. 10.2466/pr0.1983.53.3.963

24. Ballard R: Short forms of the Marlowe-Crowne Social Desirability Scale. Psychol Rep 1992, 71: 1155–1160.

25. Loo R, Thorpe K: Confirmatory factor analyses of the full and short versions of the Marlowe-Crowne Social Desirability Scale. J Soc Psychol 2000, 140: 628–635. 10.1080/00224540009600503

26. Loo R, Loewen P: Confirmatory factor analyses of scores from full and short versions of the Marlowe-Crowne Social Desirability scale. J Appl Soc Psychol 2004, 34: 2343–2352. 10.1111/j.1559-1816.2004.tb01980.x

27. Ballard R, Crino M, Rubenfeld S: Social desirability response bias and the Marlowe-Crowne Social Desirability Scale. Psychol Rep 1988, 63: 227–237. 10.2466/pr0.1988.63.1.227

28. Fischer DG, Fick C: Measuring social desirability: short forms of the Marlowe-Crowne Social Desirability Scale. Educ Psychol Meas 1993, 53: 417–424. 10.1177/0013164493053002011

29. Fraboni M, Cooper D: Further validation of three short forms of the Marlowe-Crowne scale of Social Desirability. Psychol Rep 1989, 65: 595–600. 10.2466/pr0.1989.65.2.595

30. Reynolds WM: Development of reliable and valid short forms of the Marlowe-Crowne Social Desirability Scale. J Clin Psychol 1982, 38: 119–125. 10.1002/1097-4679(198201)38:1<119::AID-JCLP2270380118>3.0.CO;2-I

31. Zook A, Sipps GJ: Cross-validation of a short form of the Marlowe-Crowne Social Desirability Scale. J Clin Psychol 1985, 41: 236–238. 10.1002/1097-4679(198503)41:2<236::AID-JCLP2270410217>3.0.CO;2-H

32. Frasure-Smith N, Lespérance F, Juneau M, Talajic M, Bourassa MG: Gender, depression, and one-year prognosis after myocardial infarction. Psychosom Med 1999, 61: 26–37.

33. Leake R, Friend R, Wadhwa N: Improving adjustment to chronic illness through strategic self-presentation: an experimental study on a renal dialysis unit. Health Psychol 1999, 18: 54–62.

34. Andrews P, Meyer R: Marlowe-Crowne Social Desirability Scale and Short Form C: forensic norms. J Clin Psychol 2003, 59: 483–492. 10.1002/jclp.10136

35. Robinette RL: The relationship between the Marlowe-Crowne Form C and the validity scales of the MMPI. J Clin Psychol 1991, 47: 396–399. 10.1002/1097-4679(199105)47:3<396::AID-JCLP2270470311>3.0.CO;2-K

36. Loevinger J: Theory and techniques of assessment. Annu Rev Psychol 1959, 10: 287–316.

37. Nolte S: Approaches to the measurement of outcomes of chronic disease self-management interventions using a self-report inventory. Global Studies, Social Science & Planning: RMIT University; 2008.

38. Lorig KR, González VM, Laurent DD: The chronic disease self-management program: leaders manual. Palo Alto: Stanford University; 1999.

39. Osborne RH, Hawkins M, Sprangers MAG: Change of perspective: a measurable and desired outcome of chronic disease self-management intervention programs that violates the premise of preintervention/postintervention assessment. Arthrit Care Res 2006, 55: 458–465. 10.1002/art.21982

40. Nolte S, Elsworth GR, Sinclair AJ, Osborne RH: The extent and breadth of benefits from participating in chronic disease self-management courses: a national patient-reported outcomes survey. Patient Educ Couns 2007, 65: 351–360. 10.1016/j.pec.2006.08.016

41. Osborne RH, Elsworth GR, Whitfield K: The health education impact questionnaire (heiQ): an outcomes and evaluation measure for patient education and self-management interventions for people with chronic conditions. Patient Educ Couns 2007, 66: 192–201. 10.1016/j.pec.2006.12.002

42. Osborne RH, Batterham R, Livingston J: The evaluation of chronic disease self-management support across settings: the international experience of the health education impact questionnaire monitoring system. Nurs Clin N Am 2011, 46: 255–270. 10.1016/j.cnur.2011.05.010

    Article  Google Scholar 

  43. 43.

    Browne MW, Cudeck R, Tateneni K, Mels G: CEFA: Comprehensive Exploratory Factor Analysis, Version 3.04 [Computer software and manual]. 2010. Retrieved from

    Google Scholar 

  44. 44.

    McDonald RP: Semiconfirmatory factor analysis: the example of anxiety and depression. Struct Equ Modeling 2005, 12: 163–172. 10.1207/s15328007sem1201_9

    Article  Google Scholar 

  45. 45.

    Browne MW: An overview of analytic rotation in exploratory factor analysis. Multivar Behav Res 2001, 36: 111–150. 10.1207/S15327906MBR3601_05

    Article  Google Scholar 

  46. 46.

    Jöreskog KG, Sörbom D: LISREL 8: User’s reference guide. 2nd edn. Lincolnwood: IL: Scientific Software International; 1996–2001.

    Google Scholar 

  47. 47.

    Jöreskog KG: Structural equation modeling with ordinal variables using LISREL. Scientific Software International; 2002–2005. Retrieved from

    Google Scholar 

  48. 48.

    Bollen K, Long J: Introduction. In Testing structural equation models. Edited by: Bollen K, Long J. Newbury Park, London, New Delhi: Sage Publications; 1993.

    Google Scholar 

  49. 49.

    Tanaka J: Multifaceted conceptions of fit in structural equation models. In Testing structural equation models. Edited by: Bollen K, Long J. Newbury Park, London, New Delhi: Sage Publications; 1993.

    Google Scholar 

  50. 50.

    Marsh HW, Balla JR, Hau K: An evaluation of incremental fit indices: a clarification of mathematical and empirical properties. In Advanced structural equation modeling: issues and techniques. Edited by: Marcoulides GA, Schumacker RE. Mahwah NJ: Lawrence Erlbaum Associates; 1996:315–353.

    Google Scholar 

  51. 51.

    Gerbing D, Anderson J: Monte Carlo evaluations of goodness-of-fit indices for structural equation models. In Testing structural equation models. Edited by: Bollen K, Long J. Newbury Park, London, New Delhi: Sage Publications; 1993.

    Google Scholar 

  52. 52.

    Bollen KA: Structural equations with latent variables. New York: John Wiley & Sons; 1989.

    Book  Google Scholar 

  53. 53.

    Browne M, Cudeck R: Alternative ways of assessing model fit. In Testing structural equation models. Edited by: Bollen K, Long J. Newbury Park, London, New Delhi: Sage Publications; 1993.

    Google Scholar 

  54. 54.

    Hair JF, Black WC, Rabin BJ, Anderson RE, Tatham RL: Multivariate data analysis. 6th edition. Upper Saddle River, NJ: Pearson Education, Inc.; 2006.

    Google Scholar 

  55. 55.

    Hu L, Bentler P: Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Modeling 1999, 6: 1–55. 10.1080/10705519909540118

    Article  Google Scholar 

  56. 56.

    Judd CM, Kenny DA: Process analysis: estimating mediation in treatment evaluations. Evaluation Rev 1981, 5: 602–619. 10.1177/0193841X8100500502

    Article  Google Scholar 

  57. 57.

    Baron R, Kenny D: The moderator-mediator variable distinction in social psychological research: conceptual, strategic, and statistical considerations. J Pers Soc Psychol 1986, 51: 1173–1182.

    CAS  PubMed  Article  Google Scholar 

  58. 58.

    Kenny DA, Kashy DA, Bolger N: Data analysis in social psychology. In The handbook of social psychology. Volume 1. 4th edition. Edited by: Gilbert D, Fiske S, Lindzey G. New York: McGraw-Hill; 1998:233–265.

    Google Scholar 

  59. 59.

    Shrout PE, Bolger N: Mediation in experimental and nonexperimental studies: new procedures and recommendations. Psychol Methods 2002, 7: 422–445.

    PubMed  Article  Google Scholar 

  60. 60.

    Sobel ME: Asymptotic confidence intervals for indirect effects in structural equations models. In Sociological methodology. Edited by: Leinhart S. San Francisco: Jossey-Bass; 1982:290–312.

    Google Scholar 

  61. 61.

    Little TD, Card NA, Bovaird JA, Preacher KJ, Crandall CS: Structural equation modeling of mediation and moderation with contextual factors. In Modeling contextual effects in longitudinal studies. Edited by: Little TD, Bovaird JA, Card NA. Mahwah NJ: Lawrence Erlbaum Associates; 2007:207–230.

    Google Scholar 

  62. 62.

    Cronbach LJ: Response sets and test validity. Educ Psychol Meas 1946, 6: 475–494.

    Google Scholar 

  63. 63.

    Dempster AP, Laird NM, Rubin DB: Maximum likelihood from incomplete data via the EM algorithm. J R Stat Soc Series B (Methodological) 1977, 39: 1–38.

    Google Scholar 

  64. 64.

    Satorra A, Bentler PM: Scaling corrections for chi-square statistics in covariance structure analysis. Proceedings of the Business and Economic Statistics Section of the American Statistical Association 1988, 1988: 308–313.

    Google Scholar 

  65. 65.

    Satorra A, Bentler PM: Corrections to test statistics and standard errors in covariance structure analysis. In Latent variable analysis: applications for developmental research. Edited by: von Eye A, Clogg C. CA: Sage Publications: Thousand Oaks; 1994:399–419.

    Google Scholar 

  66. 66.

    Cohen J: Statistical power analysis for the behavioural sciences. 2nd edition. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988.

    Google Scholar 

  67. 67.

    Millham J, Kellogg RW: Need for social approval: impression management or self-deception? J Res Pers 1980, 14: 445–457. 10.1016/0092-6566(80)90003-3

    Article  Google Scholar 

  68. 68.

    MacCallum RC, Wegener DT, Uchino BN, Fabrigar LR: The problem of equivalent models in applications of covariance structure analysis. Psychol Bull 1993, 114: 185–199.

    CAS  PubMed  Article  Google Scholar 

  69. 69.

    Frazier PA, Tix AP, Barron KE: Testing moderator and mediator effects in counseling psychology research. J Couns Psychol 2004, 51: 115–134.

    Article  Google Scholar 



Acknowledgements

The authors would like to thank Amanda Springer, Dianne Ferguson and Luke Tellefson for their support with data collection and management, as well as all participants who kindly took part in the study and the coordinators and course leaders who administered the questionnaires.

Author information



Corresponding author

Correspondence to Sandra Nolte.

Additional information

Competing interests

The authors state that there are no conflicts of interest.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Nolte, S., Elsworth, G.R. & Osborne, R.H. Absence of social desirability bias in the evaluation of chronic disease self-management interventions. Health Qual Life Outcomes 11, 114 (2013).



Keywords
  • Chronic disease
  • Patient education
  • Chronic disease self-management
  • Program evaluation
  • Structural equation modeling
  • Social desirability
  • Statistical bias