
Migraine screen questionnaire: further psychometric evidence from categorical data methods

Abstract

Background

Psychometric investigation of migraine screening tools, including the migraine screen questionnaire (MS-Q), using statistical methods appropriate to their data is needed. We assessed the psychometric properties of the MS-Q using categorical data methods.

Material and methods

A total of 343 students at Mizan-Tepi University, Ethiopia, age range = 18–35 years were selected by a simple random sampling method to participate in a cross-sectional study. The respondents completed the MS-Q, a semi-structured socio-demographic questionnaire, and a visual analog scale for attention (VAS-A).

Results

The cumulative variance rule (> 40%), Kaiser's criterion (eigenvalue > 1), the scree test, and parallel analysis (minimum rank) identified a 1-factor model for the MS-Q, with factor loadings in the range of 0.78 to 0.84. Fit indices favored a 1-factor model of the MS-Q, as indicated by the comparative fit index (0.993), weighted root mean square residual (0.048), root mean square error of approximation (0.067), goodness of fit index (1.00), and non-normed fit index (0.987). The values of the factor determinacy index (0.953), marginal reliability (0.909), H-latent (0.909), H-observed (0.727), explained common variance (0.906), and the mean item residual absolute loadings (0.225) further complemented the finding of a 1-factor model. McDonald's omega (0.903) suggested adequate internal consistency. Discriminative validity was supported by significantly higher scores on the total and on all MS-Q items except one among those with attention complaints.

Conclusion

The categorical methods support the psychometric validity of the MS-Q in the study population.

Introduction

Migraine is one of the most common types of primary headaches [1]. Migraine is being recognized as a significant health problem affecting the quality of life [2]. During university life, students often report increased levels of stress, depression, anxiety [3], and irregular sleep, all of which are associated with migraines. Based on available data, migraine is on the rise in both general populations [4] as well as university students [5]. A recent systematic review of the prevalence of migraines in university students has reported a pooled prevalence of 16.1% among males and 21.7% among female students [6]. Therefore, it is pragmatic to have a brief, easy, and self-administered screening tool with psychometric validity for screening migraines in the student population.

Despite the significant disability caused, migraine continues to be an under-diagnosed condition [7]. Previous authors have suggested the use of standardized questionnaires for diagnostic screening [8] that would aid in the proper diagnosis and management of migraine. Several such instruments were developed in the past to assist primary care physicians in the screening of migraine [9,10,11,12]. A brief, reliable, and valid questionnaire will be helpful to the primary care physicians in the screening of migraine and decrease its under-diagnosis.

The migraine screen questionnaire (MS-Q) is a brief migraine screening measure with favorable diagnostic validity, test-retest reliability, and internal consistency, as determined by Cronbach's alpha [13]. The MS-Q is based on the International Headache Society (IHS) criteria for the diagnosis of migraine and can be easily administered. The clinical usefulness of the MS-Q and its ability to detect hidden migraine was also confirmed in a recent study [14].

The psychometric characterization of migraine and headache tools has been inclined towards examination of test-retest reliability, concurrent validity, and internal consistency [7, 9,10,11,12,13]. Given the value of the MS-Q as a clinically useful screening tool, further investigation of its measurement properties, including internal consistency, factorial validity, and discriminative validity, is needed, especially one that accounts for the categorical nature of the MS-Q item scores. Previous works did not consider categorical data assumptions when assessing internal consistency [7, 13]. Factorial validity assessment is essential to establish the relationship between item scores and to examine the validity of the theoretical construct. Factor analysis determines the proper interpretation of item scores and addresses issues of multicollinearity, singularity, and redundancy of items [15]. Therefore, the present study examined the factorial validity, internal consistency, and discriminative validity of the MS-Q in university students according to categorical data assumptions.

Material and methods

Participants and study design

Participants in this cross-sectional study were university students recruited from the Mizan campus of Mizan-Tepi University, Mizan-Aman, Bench Maji Zone, Southern Nations, Nationalities, and Peoples' Region, Ethiopia, using a simple random sampling method. Three hundred and forty-three students with an age range of 18–35 years completed the study. Students with memory problems were excluded, as this could compromise data quality. The Institutional Ethics Committee, College of Medicine and Health Sciences, Mizan-Tepi University, approved the research. The guidelines of Good Clinical Practice and the norms of the World Medical Association (WMA) Declaration of Helsinki (DoH) on ethical principles for medical research involving human subjects were followed. The objectives and procedures of the study were explained to all participants, and written informed consent was obtained.

Procedures and measurements

An interviewer-administered study questionnaire package was provided to all participants. The package included the migraine screen questionnaire (MS-Q), a semi-structured socio-demographic questionnaire, and a visual analog scale for attention (VAS-A). The questionnaires were administered in English because participants' proficiency in reading Amharic, the official language of Ethiopia, was inconsistent; moreover, the medium of instruction in Ethiopian universities is English.

Migraine screen questionnaire

The migraine screen questionnaire (MS-Q) is a five-item migraine screening questionnaire developed for use in clinical practice and research settings, both in the general population and in occupational medicine [13]. The questionnaire is based on the International Headache Society (IHS) criteria for migraine diagnosis [16]. Each of the five items in this structured questionnaire has a dichotomous yes/no response option. A score of 0 is assigned to each "no" response and 1 to each "yes" response, giving a maximum total score of 5; a cut-off of ≥ 4 was used to indicate a case of migraine [13].
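The scoring rule above can be sketched in code. This is a minimal illustration; the function name is hypothetical and the item wording is omitted, since only the yes/no scoring and the ≥ 4 cut-off come from the source.

```python
def score_msq(responses):
    """Score five yes/no MS-Q responses: 'yes' -> 1, 'no' -> 0.

    Returns the total score (0-5) and whether the respondent screens
    positive for migraine at the published cut-off of >= 4.
    """
    if len(responses) != 5:
        raise ValueError("the MS-Q has exactly five items")
    total = sum(1 for r in responses if r.strip().lower() == "yes")
    return total, total >= 4

total, positive = score_msq(["yes", "yes", "no", "yes", "yes"])  # 4, screen-positive
```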

Visual analog scale for attention

The visual analog scale for attention (VAS-A) was used to evaluate the self-reported level of difficulty in maintaining attention. A 100 mm horizontal line, anchored with '0' (never) at the left, '5' (hardly ever) at the midpoint, and '10' (yes, definitely) at the right, was placed next to the question, 'Do you have difficulties in paying attention?'. Participants were instructed to mark the point on the line that best represented their perceived difficulty in maintaining attention. Respondents scoring 0–5 on the VAS-A were categorized as having normal attention, and those scoring 6–10 were categorized as having attention complaints.
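The categorization rule can be expressed as a short sketch; the function name is hypothetical, while the 0–5 / 6–10 split and the category labels follow the text.

```python
def categorize_vas_a(score):
    """Map a 0-10 VAS-A rating to the two attention categories in the text."""
    if not 0 <= score <= 10:
        raise ValueError("VAS-A scores range from 0 to 10")
    return "normal attention" if score <= 5 else "attention complaints"
```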

Socio-demographic questionnaire

This semi-structured socio-demographic questionnaire consisted of five items: one open-ended and four close-ended. These items collected information on age, class attendance, grade at the last examination, gender, and religion. Height in meters and weight in kilograms were measured separately to compute body mass index.

Statistical analysis

SPSS software (version 23; Chicago, IL, USA) and Factor 10.8.04 were used for data analysis. Participant characteristics were described using means, standard deviations, frequencies, and percentages. Univariate distributions were examined using skewness and kurtosis. Spearman correlations between the MS-Q items and the total score indicated homogeneity and item discrimination. Mardia's skewness and Mardia's kurtosis were used to assess the multivariate distribution. Sample size adequacy and the suitability of the MS-Q scores for factor analysis were determined by Bartlett's test of sphericity, the determinant, the Kaiser-Meyer-Olkin (KMO) test of sampling adequacy (95% confidence interval), communalities, and inter-item tetrachoric correlations.
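As a minimal, self-contained illustration of the univariate and item-total checks described above (the study used SPSS; this sketch uses simulated 0/1 data, not the study data, and assumes scipy is available):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated dichotomous responses for 343 participants on 5 items;
# the endorsement probabilities are arbitrary.
items = rng.binomial(1, [0.2, 0.3, 0.5, 0.4, 0.25], size=(343, 5))
total = items.sum(axis=1)

for j in range(items.shape[1]):
    skew = stats.skew(items[:, j])                # univariate skewness
    kurt = stats.kurtosis(items[:, j])            # excess kurtosis
    rho, p = stats.spearmanr(items[:, j], total)  # item-total homogeneity
```

In the study, skewness or kurtosis above 1.0 on item scores was taken as a cue to use categorical data methods downstream.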

Tetrachoric correlations (estimated using bootstrap sampling) were used for factor analysis of the inter-item MS-Q scores because the items are dichotomous variables. Exploratory factor analysis (EFA) was performed using robust diagonally weighted least squares (RDWLS) with Promin rotation. Kaiser's criterion (eigenvalue ≥ 1), the cumulative-variance-explained rule (> 40%), the scree test, and robust parallel analysis based on minimum rank were employed as measures of factor retention. Multiple fit indices from different categories were employed according to recommended norms [17,18,19]: discrepancy functions, such as the robust mean- and variance-adjusted χ2, χ2/df, and the weighted root mean square residual (WRMR); an absolute fit index, the goodness of fit index (GFI); tests comparing the target model with the null model, such as the comparative fit index (CFI) and the non-normed fit index (NNFI); and a non-centrality index, the root mean square error of approximation (RMSEA) [17]. RMSEA (≤ 0.08), WRMR (≤ 0.05), and χ2/df (≤ 3) indicated acceptable and/or excellent fit [20, 21]. For CFI, NNFI, and GFI, a value greater than 0.95 implied excellent fit [20, 21].
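Dedicated software such as the Factor program estimates tetrachoric correlations by maximum likelihood; as a rough, self-contained sketch of the idea, the classical cosine-pi approximation from a 2x2 item cross-table can be written as follows (an approximation for illustration only, not the estimator used in the study):

```python
import math

def tetrachoric_approx(a, b, c, d):
    """Cosine-pi approximation to the tetrachoric correlation.

    a and d are the concordant cell counts (both 'no' / both 'yes');
    b and c are the discordant cell counts of the 2x2 table.
    """
    if b * c == 0:
        return 1.0  # no discordant pairs: the approximation saturates
    return math.cos(math.pi / (1.0 + math.sqrt((a * d) / (b * c))))

r = tetrachoric_approx(40, 10, 10, 40)  # two strongly associated items
```

Independence (equal cell counts) yields a correlation near 0, while a concentration of concordant responses pushes the value toward 1.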

The quality and effectiveness of the explored factor structure of the MS-Q were assessed using the factor determinacy index (FDI) and marginal reliability. The FDI is the correlation between the factor score estimates and the factor, and is employed to assess the closeness between estimated and true individual differences in the factor score [22]. An FDI value of at least 0.9 is required for individual assessment [22]. Marginal reliability is the square of the FDI; Brown and Croudace (2015) emphasized its application as a measure of the reliability of the corresponding factor score [22]. The construct replicability measure, the H-index [23], including H-latent and H-observed, was employed; a value of 0.7 indicated a reasonable level of construct replicability [24]. H-latent measures the correlation between the factor and the continuous latent response variables assumed to underlie the observed categorical MS-Q item scores. H-observed measures the correlation between the factor and the observed item scores and is necessarily lower than H-latent [23]. Explained common variance (ECV) was used to explore closeness to unidimensionality. ECV is the fraction of common variance attributed to the general factor, with a value of 0.70–0.85 implying acceptable unidimensionality [25]. Item explained common variance (I-ECV) is the proportion of an item's common variance that can be attributed to a factor; items with an I-ECV of 0.8 and above can be selected for a factor or a unidimensional construct [23]. Item residual absolute loadings (I-REAL < 0.3) were used to explore departure from unidimensionality; I-REAL is the absolute loading of an MS-Q item score on the second factor of minimum rank factor analysis. MIREAL is the mean of these absolute loadings; a value above 0.3 indicates a departure from unidimensionality [23].
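Several of these closeness-to-unidimensionality indices are simple functions of the factor loadings. A sketch with illustrative loadings (not the study's estimates):

```python
import numpy as np

# Illustrative loadings on the general factor (l1) and on the second
# factor of a minimum rank solution (l2); values are made up.
l1 = np.array([0.78, 0.80, 0.82, 0.84, 0.79])
l2 = np.array([0.05, 0.10, -0.08, 0.35, 0.03])

ecv = (l1**2).sum() / ((l1**2).sum() + (l2**2).sum())  # explained common variance
i_ecv = l1**2 / (l1**2 + l2**2)                        # per-item ECV
mireal = np.abs(l2).mean()                             # mean item residual absolute loading
# Marginal reliability is simply the square of the FDI once the FDI is
# estimated, e.g. 0.953**2 is about 0.909.
```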

Internal consistency was assessed by the greatest lower bound to reliability and McDonald's omega. Discriminative validity was assessed by the Mann-Whitney U test.
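Under a one-factor model, McDonald's omega follows directly from the standardized loadings: the squared sum of loadings divided by itself plus the summed error variances. A sketch with loadings in the range the study reports (illustrative values, not the study's estimates):

```python
import numpy as np

def mcdonald_omega(loadings):
    """McDonald's omega for a unidimensional factor model, computed from
    standardized loadings; error variances are taken as 1 - loading**2."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1.0 - lam**2).sum())

omega = mcdonald_omega([0.78, 0.80, 0.82, 0.84, 0.79])  # about 0.90
```

With five loadings in the 0.78–0.84 range, omega lands near the 0.90 reported in Table 3.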

Results

Participants’ characteristics and preliminary item analysis

Table 1 details the characteristics of the enrolled university students. The majority of participants (77.8%) were aged between 20 and 24 years, and 67% had a normal body mass index (BMI), with an average BMI of 21.2 ± 3.4 kg/m2 (Table 1). More than half of the participants (56.5%) were in good academic standing, with grades ranging from good to excellent (Table 1). About one-fifth (19.5%) of the students had migraine (Table 1). Univariate descriptive statistics, homogeneity, and item discrimination results are reported in Table 2. Two item scores had skewness above 1.0 and three had a kurtosis index above 1.0, supporting the application of categorical data methods for the factor analysis and the use of McDonald's omega for internal consistency [26, 27]. As shown, all individual MS-Q item scores were significantly associated with the total MS-Q score (r = 0.68 to 0.76, p < 0.01).

Table 1 Participant characteristics
Table 2 Univariate descriptive statistics, closeness to dimensionality measures, communality and factor loadings of the Migraine Screen-Questionnaire (MS-Q) scores in Ethiopian university students

Factorial validity

Sample adequacy and sample suitability for factor analysis

The MS-Q scores in the studied university students fulfilled the conditions for factor analysis. There were adequate linear combinations between the MS-Q item scores, as indicated by Bartlett's test of sphericity (p < 0.001) [28]. No problems of multicollinearity or singularity were present in the MS-Q item scores, as suggested by the determinant (0.282) [28]. There was a meritorious level of shared variance between the MS-Q item scores, as implied by the Kaiser-Meyer-Olkin test of sampling adequacy (0.80) (Table 3) [28]. The inter-item tetrachoric correlations were in the range of 0.571 to 0.742, indicating moderate to strong correlations between MS-Q items (Table 4). This further supported the factorability of the MS-Q scores by substantiating the absence of both multicollinearity and singularity problems [29]. All item scores showed adequate communality, i.e., all were above the 0.4 threshold for retention (Table 2) in the factor analysis [30].

Table 3 Multivariate descriptive, sample size adequacy, quality and effectiveness of factor score, construct replicability and reliability measures of the Migraine Screen-Questionnaire (MS-Q) scores in Ethiopian university students
Table 4 Inter-item tetrachoric correlation matrix of the Migraine Screen-Questionnaire (MS-Q) scores in Ethiopian university students

Exploratory factor analysis

Exploratory factor analysis results are presented in Table 5. Four tests were used to identify the number of factors in the EFA: the cumulative variance rule (> 40%), Kaiser's criterion (eigenvalue > 1), the scree test, and parallel analysis based on minimum rank, one of the more robust measures of factor retention. As shown in Table 5, all of these measures identified a 1-factor model for the MS-Q. The factor loadings of the MS-Q items ranged from 0.78 to 0.84 (Table 2).

Table 5 Summary of the factor extraction measures used in exploratory factor analysis of the Migraine Screen-Questionnaire (MS-Q) scores in Ethiopian university students

Fit indices

Except for the robust mean- and variance-adjusted χ2 (χ2(5) = 12.692, p = 0.027), all fit statistics indicated adequate model fit to the data; the 1-factor model met the specified guidelines for CFI (0.993), GFI (1.00), WRMR (0.048), RMSEA (0.067; 0.00–0.111), and NNFI (0.987). Furthermore, χ2/df was in the ideal range, i.e., 2.538 [19,20,21].

Quality and effectiveness of factor score estimates, construct replicability and measures of closeness to unidimensionality

The values of the FDI and marginal reliability were 0.953 and 0.909, respectively, for the one-factor structure of the MS-Q in the study population. H-latent and H-observed for the one-factor structure were 0.909 and 0.727, respectively. ECV was 0.906, while I-ECV ranged between 0.816 and 0.999 across the 5 items of the MS-Q (Table 2). MIREAL was 0.225, while I-REAL ranged between 0.030 and 0.428 across the 5 items (Table 2).

Internal consistency and item discrimination

As shown in Table 3, adequate internal consistency was demonstrated by the greatest lower bound to reliability (0.932) [31] and McDonald's omega (0.903) [32].

Discriminative validity

Students with self-reported attention complaints had significantly higher scores on all MS-Q items (except item 3) (p < 0.01) and on the MS-Q total score (p < 0.01) than those without attention complaints (Table 6). A ROC curve analysis with the dichotomous attention variable (no problem/attention problem) as the state variable and the MS-Q total score as the test variable revealed an area under the curve of 0.66 (95% confidence interval 0.58 to 0.74). At a cut-off of 2.5 on the MS-Q total score, a sensitivity of 58.5% and a specificity of 69.7% were found for differentiating those with attention problems from those without.
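The reported AUC connects directly to the Mann-Whitney U statistic used for discriminative validity: AUC = U / (n1 × n2), the probability that a randomly chosen case outscores a randomly chosen control (ties counted as half). A sketch with toy scores, assuming scipy is available; the function name and data are illustrative:

```python
from scipy import stats

def auc_from_scores(case_scores, control_scores):
    """Compute the ROC area under the curve via the Mann-Whitney relation:
    AUC = U1 / (n1 * n2), where U1 is the U statistic for the cases."""
    u1, _ = stats.mannwhitneyu(case_scores, control_scores,
                               alternative="two-sided")
    return u1 / (len(case_scores) * len(control_scores))

auc = auc_from_scores([3, 4, 5], [1, 2, 3])  # toy MS-Q totals
```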

Table 6 Discriminative validity: comparison of the Migraine Screen Questionnaire (MS-Q) scores between Ethiopian university students with and without attention complaints

Discussion

This is the first paper to investigate the factorial validity, internal consistency, and discriminative validity of the original MS-Q using an appropriate analytical framework employing categorical data methods. A novel approach involving complementary measures was used to examine the quality and effectiveness of factor score estimates, construct replicability, and closeness to unidimensionality. It is also worth mentioning that this is the first psychometric examination of a migraine assessment tool in this previously uninvestigated population. The evidence showed that the unidimensional model of the MS-Q had adequate factorial validity, excellent internal consistency, strong internal homogeneity, and sufficient discriminative validity and item discrimination in university students.

Sample adequacy and sample suitability for factor analysis

The decision to conduct an EFA followed once all indices of sample size adequacy (KMO, Bartlett's test of sphericity, the determinant, and moderate to strong inter-item tetrachoric correlations) indicated that the MS-Q scores were suitable for factor analysis. All five items were relevant for the construct validity of the MS-Q in the study population, as implied by the communality criteria [30].

Exploratory factor analysis

In the EFA, all measures of factor extraction, including the robust parallel analysis based on minimum rank [33], unanimously found a one-factor structure for the MS-Q. Furthermore, all five MS-Q items loaded on a single factor with loadings ranging from 0.78 to 0.84, well above the minimum recommended loading of 0.32 [34]. This range of loadings suggests an excellent level of correlation between the MS-Q items and the factor score estimate [34].

Model fit

The model fit analyses performed in the current study favored the one-factor structure of the MS-Q: the majority of fit indices suggested that the one-factor model adequately fits the data from our sample [19]. This is the first study examining the factorial validity and model fit of the MS-Q, so a direct comparison with previously studied populations cannot be made. Therefore, complementary measures, namely those assessing the quality and effectiveness of factor score estimates, construct replicability, and closeness to unidimensionality, were employed to further establish the findings of factorial validity. Factorial validity examinations have generally been under-utilized in studies investigating the psychometric validation of migraine and headache screening tools, although factor analysis was employed to establish the multidimensional structure of the Headache Symptom Questionnaire-Revised in a pediatric population three decades ago [35], and Wang et al. (2017) recently used factor analysis to determine the multidimensionality of a 27-item self-report of headache [36].

Quality and effectiveness of factor score estimates, construct replicability and measures of closeness to unidimensionality

The values of the FDI for the MS-Q in the study population implied an excellent level of comparability between estimated and true individual differences [25]. The MS-Q therefore met the condition for individual assessment of patients in migraine screening [25]. The reliability of the 1-factor structure of the MS-Q was excellent, as determined by the marginal reliability [22]. ECV further reinforced the unidimensionality evidence found by the EFA and model fit indices, as it was well above the minimum level for acceptance of unidimensionality [25]. All 5 items of the MS-Q fulfilled the criteria to load on the same factor, as indicated by I-ECV values above 0.8 [23]. There was a minor concern about departure from unidimensionality for MS-Q item 4, because its I-REAL was above 0.3. However, MIREAL, the average of all I-REAL values, was well within the required limit, indicating no overall departure from unidimensionality [23].

Internal consistency and item discrimination

The present study reports a strong internal consistency of the MS-Q, as evidenced by a McDonald's omega value of 0.90. This indicates a strong relationship among the five MS-Q items. The original MS-Q development and evaluation study [13] reported similar internal consistency; however, the measure used in that study was Cronbach's alpha. The present study used McDonald's omega because it is a better alternative to Cronbach's alpha when assessing the internal consistency of scales with dichotomous responses [27].

Moreover, the choice of this estimate fulfilled the condition that the MS-Q items measure a single latent construct, with a one-factor model adequately representing the data [37]. All item-total correlations were above 0.3; in fact, the lowest value was 0.65 (between MS-Q item 3 and the total score). This supports the conclusion that all MS-Q items measured the same construct while showing sufficient item discrimination [38].

Discriminative validity

Poor attention is associated with migraine and with both tension-type and other primary headaches [39]. Inattention is 2.6 times more common in children and adolescents with headache [40]. Neurotransmitters such as dopamine and noradrenaline are perhaps the pathophysiological link between attention deficit and migraine [41], and there is inferential evidence of overlap between the cerebral circuits of headache and attention [42]. Therefore, the significantly higher scores on the total and on all MS-Q items except one support the known-group (discriminative) validity of the MS-Q in the study population.

A brief account of the strengths and weaknesses of the present study is worth discussing here. The strengths include assessment of factorial validity, internal consistency, internal homogeneity, known-group discriminative validity, and item discrimination using categorical data methods. Notably, scale development and evaluation research has previously been criticized for inaccurate reporting stemming from limited investigation of the psychometric properties and validities of scales [43]. One key aspect of this limited reporting is the factorial validation of tools. This criticism applies to most migraine screening tools, such as the MS-Q and ID Migraine [9], as these tools have not undergone factorial validation using sound measures. The present study addresses this gap by reporting the results of factorial validation of the MS-Q. Another merit is the use of McDonald's omega for assessing internal consistency in line with the requirements of the univariate distributions [27]. However, this study was limited by the narrow age group of the sample (a university-level student population), so the generalizability of the results may be limited to this age group within the socio-demographic group studied. Future multicentric studies with a longitudinal design are recommended; such studies may help investigate the temporal and socio-demographic invariance of the factor structure of the MS-Q.

Conclusions

Overall, the study findings provide further psychometric validation of the MS-Q by providing evidence of adequate factorial validity, excellent internal consistency, strong internal homogeneity, and adequate discriminative validity and item discrimination in the study population. These findings, along with those of previously published diagnostic accuracy studies in clinical populations, provide strong evidence for the use of the MS-Q in screening for migraine in both clinical and research settings.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

MS-Q:

Migraine screen questionnaire

EFA:

Exploratory factor analysis

DWLS:

Diagonally weighted least squares

KMO:

Kaiser-Meyer-Olkin Test of Sampling Adequacy

CFI:

Comparative Fit Index

WRMR:

Weighted root mean square residual

RMSEA:

Root mean square error of approximation

GFI:

Goodness of fit index

NNFI:

Non-normed fit index

References

1. Mier RW, Dhadwal S. Primary headaches. Dent Clin N Am. 2018;62(4):611–28. https://doi.org/10.1016/j.cden.2018.06.006.
2. Jacobsena BA, Dyb G, Hagen K, Stovner LJ, Holmen TL, Zwart JA. The Nord-Trondelag health study shows increased prevalence of primary recurrent headaches among adolescents over a four-year period. Scand J Pain. 2018;2(3):148–52. https://doi.org/10.1016/j.sjpain.2011.03.002.
3. Chen CY, Yu NW, Huang TH, Wang WS, Fang JT. Harm avoidance and depression, anxiety, insomnia, and migraine in fifth-year medical students in Taiwan. Neuropsychiatr Dis Treat. 2018;14:1273–80. https://doi.org/10.2147/NDT.S163021.
4. Benamer HT, Deleu D, Grosset D. Epidemiology of headache in Arab countries. J Headache Pain. 2010;11(1):1–3. https://doi.org/10.1007/s10194-009-0173-8.
5. Al-Hashel JY, Ahmed SF, Alroughani R, Goadsby PJ. Migraine among medical students in Kuwait University. J Headache Pain. 2014;15:26. https://doi.org/10.1186/1129-2377-15-26.
6. Wang X, Zhou HB, Sun JM, Xing YH, Zhu YL, Zhao YS. The prevalence of migraine in university students: a systematic review and meta-analysis. Eur J Neurol. 2016;23(3):464–75. https://doi.org/10.1111/ene.12784.
7. De Diego EV, Lanteri-Minet M. Recognition and management of migraine in primary care: influence of functional impact measured by the headache impact test (HIT). Cephalalgia. 2005;25(3):184–90. https://doi.org/10.1111/j.1468-2982.2004.00820.x.
8. Dowson AJ, Sender J, Lipscombe S, Cady RK, Tepper SJ, Smith R, et al. Establishing principles for migraine management in primary care. Int J Clin Pract. 2003;57(6):493–507.
9. Lipton RB, Dodick D, Sadovsky R, Kolodner K, Endicott J, Hettiarachchi J, et al. A self-administered screener for migraine in primary care: the ID migraine validation study. Neurology. 2003;61(3):375–82.
10. Maizels M, Burchette R. Rapid and sensitive paradigm for screening patients with headache in primary care settings. Headache. 2003;43(5):441–50.
11. Russell MB, Rasmussen BK, Brennum J, Iversen HK, Jensen RA, Olesen J. Presentation of a new instrument: the diagnostic headache diary. Cephalalgia. 1992;12(6):369–74. https://doi.org/10.1111/j.1468-2982.1992.00369.x.
12. Tom T, Brody M, Valabhji A, Turner L, Molgaard C, Rothrock J. Validation of a new instrument for determining migraine prevalence: the UCSD migraine questionnaire. Neurology. 1994;44(5):925–8.
13. Lainez MJ, Dominguez M, Rejas J, Palacios G, Arriaza E, Garcia-Garcia M, et al. Development and validation of the migraine screen questionnaire (MS-Q). Headache. 2005;45(10):1328–38. https://doi.org/10.1111/j.1526-4610.2005.00265.x.
14. Lainez MJ, Castillo J, Dominguez M, Palacios G, Diaz S, Rejas J. New uses of the migraine screen questionnaire (MS-Q): validation in the primary care setting and ability to detect hidden migraine. MS-Q in primary care. BMC Neurol. 2010;10:39. https://doi.org/10.1186/1471-2377-10-39.
15. Williams B, Onsman A, Brown T. Exploratory factor analysis: a five-step guide for novices. Austr J Paramed. 2010;8:3.
16. Olesen J, Lipton RB. Migraine classification and diagnosis. International headache society criteria. Neurology. 1994;44(6 Suppl 4):S6–10.
17. Jaccard J, Wan CK, Jaccard J. LISREL approaches to interaction effects in multiple regression. Thousand Oaks: Sage; 1996.
18. Manzar MD, Zannat W, Hussain ME, Pandi-Perumal SR, Bahammam AS, Barakat D, et al. Dimensionality of the Pittsburgh sleep quality index in the young collegiate adults. Springerplus. 2016;5(1):1550.
19. Manzar MD, Zannat W, Moiz JA, Spence DW, Pandi-Perumal SR, Bahammam AS, et al. Factor scoring models of the Pittsburgh sleep quality index: a comparative confirmatory factor analysis. Biol Rhythm Res. 2016;47(6):851–64.
20. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Model Multidiscip J. 1999;6(1):1–55.
21. Manzar MD, BaHammam AS, Hameed UA, Spence DW, Pandi-Perumal SR, Moscovitch A, et al. Dimensionality of the Pittsburgh sleep quality index: a systematic review. Health Qual Life Outcomes. 2018;16(1):89.
22. Brown A, Croudace T. Scoring and estimating score precision using multidimensional IRT. In: Reise SP, Revicki DA, editors. Handbook of item response theory modeling: applications to typical performance assessment (a volume in the multivariate applications series). New York: Routledge/Taylor & Francis Group; 2015. p. 307–33.
23. Ferrando PJ, Lorenzo-Seva U. Assessing the quality and appropriateness of factor solutions and factor score estimates in exploratory item factor analysis. Educ Psychol Meas. 2017;78(5):762–80.
24. Hancock GR. Rethinking construct reliability within latent variable systems. In: Structural equation modeling: present and future; 2001. p. 195–216.
25. Rodriguez A, Reise SP, Haviland MG. Evaluating bifactor models: calculating and interpreting statistical indices. Psychol Methods. 2016;21(2):137–50. https://doi.org/10.1037/met0000045.
26. Muthen B, Kaplan D. A comparison of some methodologies for the factor analysis of non-normal Likert variables: a note on the size of the model. Br J Math Stat Psychol. 1992;45(1):19–30.
27. Trizano-Hermosilla I, Alvarado JM. Best alternatives to Cronbach's alpha reliability in realistic conditions: congeneric and asymmetrical measurements. Front Psychol. 2016;7:769. https://doi.org/10.3389/fpsyg.2016.00769.
28. Field A. Discovering statistics using IBM SPSS statistics. London: Sage; 2013.
29. Tabachnick BG, Fidell LS. Using multivariate statistics. 5th ed. Needham Heights: Allyn & Bacon; 2007.
30. Costello AB, Osborne JW. Best practices in exploratory factor analysis: four recommendations for getting the most from your analysis. Pract Assess Res Eval. 2005;10(7):1–9.
31. Woodhouse B, Jackson PH. Lower bounds for the reliability of the total score on a test composed of non-homogeneous items: II: a search procedure to locate the greatest lower bound. Psychometrika. 1977;42(4):579–91.
32. McDonald RP. Test theory: a unified treatment. New York: Psychology Press; 2013.
33. Timmerman ME, Lorenzo-Seva U. Dimensionality assessment of ordered polytomous items with parallel analysis. Psychol Methods. 2011;16(2):209–20. https://doi.org/10.1037/a0023353.
34. Comrey AL, Lee HB. A first course in factor analysis. New York: Psychology Press; 2013.
35. Mindell JA, Andrasik F. Headache classification and factor analysis with a pediatric population. Headache. 1987;27(2):96–101.
36. Wang J, Zhang B, Shen C, Zhang J, Wang W. Headache symptoms from migraine patients with and without aura through structure-validated self-reports. BMC Neurol. 2017;17(1):193. https://doi.org/10.1186/s12883-017-0973-4.
37. The SAGE encyclopedia of educational research, measurement, and evaluation. Thousand Oaks: SAGE Publications; 2018. http://sk.sagepub.com/reference/sage-encyclopedia-of-educational-research-measurement-evaluation.

    Google Scholar 

  38. 38.

    Wang M, Batt K, Kessler C, Neff A, Iyer NN, Cooper DL, et al. Internal consistency and item-total correlation of patient-reported outcome instruments and hemophilia joint health score v2. 1 in US adult people with hemophilia: results from the pain, functional impairment, and quality of life (P-FiQ) study. Patient Prefer Adherence. 2017;11:1831.

    Article  Google Scholar 

  39. 39.

    Genizi J, Gordon S, Kerem NC, Srugo I, Shahar E, Ravid S. Primary headaches, attention deficit disorder and learning disabilities in children and adolescents. J Headache Pain. 2013;14:54. https://doi.org/10.1186/1129-2377-14-54.

    Article  PubMed  PubMed Central  Google Scholar 

  40. 40.

    Hooker WD, Raskin NH. Neuropsychologic alterations in classic and common migraine. Arch Neurol. 1986;43(7):709–12.

    CAS  Article  Google Scholar 

  41. 41.

    Villa TR, Correa Moutran AR, Sobirai Diaz LA, Pereira Pinto MM, Carvalho FA, Gabbai AA, et al. Visual attention in children with migraine: a controlled comparative study. Cephalalgia. 2009;29(6):631–4. https://doi.org/10.1111/j.1468-2982.2008.01767.x.

    CAS  Article  PubMed  Google Scholar 

  42. 42.

    Young WB, Peres MF, Rozen TD. Modular headache theory. Cephalalgia. 2001;21(8):842–9. https://doi.org/10.1046/j.1468-2982.2001.218254.x.

    CAS  Article  PubMed  Google Scholar 

  43. 43.

    Howard MC, Jayne BS. An analysis of more than 1,400 articles, 900 scales, and 17 years of research: the state of scales in cyberpsychology, behavior, and social networking. Cyberpsychol Behav Soc Netw. 2015;18(3):181–7. https://doi.org/10.1089/cyber.2014.0418.

    Article  PubMed  Google Scholar 

Acknowledgments

We are grateful to the participants of the study. The authors extend their appreciation to the Deanship of Scientific Research at Majmaah University for funding this work under Project No. RGP-2019-40.

Funding

There was no formal research funding for this study. Md Dilshad Manzar was later supported through a publication support program: the Deanship of Scientific Research at Majmaah University funded this work under Project No. RGP-2019-40.

Author information

Contributions

MDM: Concept development; Study design; Analysis and interpretation; Manuscript preparation; Critical revision of the manuscript. MS: Concept development; Analysis and interpretation. DN: Concept development; Analysis and interpretation. WW: Concept development; Analysis and interpretation. MYAK: Concept development; Analysis and interpretation. MA: Concept development; Analysis and interpretation. AA: Concept development; Analysis and interpretation. ASB: Concept development; Manuscript preparation; Critical revision of the manuscript. SRP: Concept development; Manuscript preparation; Critical revision of the manuscript. All authors (MDM, MS, DN, WW, MYAK, MA, AA, ASB, and SRP) approved the final manuscript.

Corresponding author

Correspondence to Mohammed Salahuddin.

Ethics declarations

Ethics approval and consent to participate

The study was approved by the Human Institutional Ethics Committee of Mizan-Tepi University, and written informed consent was obtained from all participants.

All authors have approved the final draft.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Manzar, M.D., Hameed, U.A., Salahuddin, M. et al. Migraine screen questionnaire: further psychometric evidence from categorical data methods. Health Qual Life Outcomes 18, 113 (2020). https://doi.org/10.1186/s12955-020-01361-9

Keywords

  • Headache
  • Student
  • Africa
  • Factor analysis
  • McDonald’s omega