
Development and testing of an instrument to measure contextual factors influencing self-care decisions among adults with chronic illness

Abstract

Background

Decisions about how to manage bothersome symptoms of chronic illness are complex and influenced by factors related to the patient, their illness, and their environment. Naturalistic decision-making describes decision-making when conditions are dynamically evolving and the decision maker may be uncertain because the situation is ambiguous or information is missing. Contextual factors, including time stress, the perception of high stakes, and input from others may facilitate or complicate decisions about the self-care of symptoms. There is no valid instrument to measure these contextual factors. The purpose of this study was to develop and test a self-report instrument measuring the contextual factors that influence self-care decisions about symptoms.

Methods

Items were drafted from the literature and refined with patient input. Content validity of the instrument was evaluated using a Delphi survey of expert clinicians and researchers, and cognitive interviews with adults with chronic illness. Psychometric testing included exploratory factor analysis to test dimensionality, item response theory-based approaches for item recalibration, confirmatory factor analysis to generate factor determinacy scores, and evaluation of construct validity.

Results

Ten contextual factors influencing decision-making were identified and multiple items per factor were generated. Items were refined based on cognitive interviews with five adults with chronic illness. After a two-round Delphi survey of expert clinicians (n = 12), all items had a content validity index > 0.78. Five additional adults with chronic illness endorsed the relevance, comprehensiveness, and comprehensibility of the inventory during cognitive interviews. Initial psychometric testing (n = 431) revealed a 6-factor multidimensional structure that was further refined for precision, and high multidimensional reliability (0.864). In construct validity testing, there were modest associations with some scales of the Melbourne Decision Making Questionnaire and the Self-Care of Chronic Illness Inventory.

Conclusion

The Self-Care Decisions Inventory is a 27-item self-report instrument that measures the extent to which contextual factors influence decisions about symptoms of chronic illness. The six scales (external, urgency, uncertainty, cognitive/affective, waiting/cue competition, and concealment) reflect naturalistic decision making, have excellent content validity, and demonstrate high multidimensional reliability. Additional testing of the instrument is needed to evaluate clinical utility.

Background

Adults with chronic illness often experience symptoms that interfere with daily life. For example, shortness of breath may limit the distance someone with asthma can walk without taking a break. Self-care of chronic illness includes evaluating changes in physical and emotional signs and symptoms, determining if action is needed, and deciding which action to take [1]. Self-care management involves the implementation and evaluation of the effectiveness of the chosen action (e.g., use inhaler for shortness of breath).

How adults with chronic illness make decisions about what to do when experiencing symptoms is poorly understood. The naturalistic decision making framework may help to explain how such decisions are made. Naturalistic decision making focuses on how people use experience to make decisions and how contextual factors influence this process [2]. The decision maker may experience uncertainty when the situation is ambiguous, the environment is changing, or necessary information is missing. For example, a symptom may be new, or an individual may be unsure what caused the symptom. Decisions may also be influenced by time stress (e.g., symptom changes quickly), the perception that there is much at stake (e.g., symptom is severe), and conflicting input from multiple individuals [2].

Previous work suggests that self-care decisions made by adults fit within the naturalistic decision making framework. In a qualitative analysis, Riegel et al. [3] found that the decisions made by adults with chronic heart failure were influenced by experience, decision characteristics (e.g., uncertainty, ambiguity, high stakes, urgency, illness characteristics, and involvement of others in the decision making process), and personal goals. Further, situation awareness (i.e., recognition and interpretation of the symptom) and mental simulation (i.e., mentally thinking through options for “what to do”) were integral to the decision-making process.

In spite of evidence that patients engage in naturalistic decision-making in response to symptoms and that contextual factors influence self-care decisions, there are no valid instruments to measure these factors. Instruments are available to assess decision-making style (e.g., spontaneous, intuitive, rational) [4,5,6] or management of the decision-making process (e.g., coping with decisional conflict) [7, 8]. These instruments are helpful for understanding the patient’s decision-making in general, but they do not assess how contextual factors affect the decision-making process nor are they specific to self-care decisions about symptoms. Measurement of contextual factors influencing self-care decisions about symptoms is important for advancing research in self-care and improving the clinical care of adults with chronic illness. If investigators can identify factors that influence self-care decisions, they can design tailored interventions to address specific barriers. The aims of this study were to (i) Develop a theoretically based and clinically relevant self-report instrument that measures contextual factors influencing self-care decisions about symptoms, and (ii) Test its psychometric properties, including dimensionality, construct validity, precision, and reliability.

Methods

This study was conducted in two phases: (i) Instrument development and (ii) Formal psychometric testing (Fig. 1).

Fig. 1

Instrument development and formal psychometric testing process

The COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN; www.cosmin.nl) guided the instrument development and content validity testing, and item response theory guided the initial psychometric testing. Institutional Review Board approval for this study was obtained from the University of Pennsylvania.

Instrument development

Step 1: item generation

First, contextual factors were identified from the literature that are thought to influence self-care decisions. Next, a preliminary list of items was generated. The items described how these contextual factors influence the response to bothersome symptoms based on the foundational work on naturalistic decision making [2] as well as the application of naturalistic decision making to self-care decisions in adults with heart failure [3]. The authors discussed and revised the items as well as the instrument instructions and scoring format until consensus was reached on an initial instrument draft.

Step 2: item refinement with patient input

We then conducted cognitive interviews with adults with chronic illness. The purpose of these interviews was three-fold: (1) To assess the relevance of the proposed items to the experience of having a chronic illness, (2) To ensure that patients understood the items, and (3) To improve the comprehensiveness of the instrument by asking if any items were missing. Adults with at least one of five chronic illnesses (arthritis, asthma, chronic obstructive pulmonary disease, diabetes mellitus, and/or heart failure) were recruited through Researchmatch.org, a website supported by the National Institutes of Health on which people from the United States can volunteer to participate in research. These conditions were selected because they are common and often symptomatic. Eligibility criteria included age > 18 years and currently experiencing at least one symptom of a chronic illness. There were no exclusion criteria. Interviews were completed by the first author either by phone or video conference. SP, BR, TJ, AS, HW, and EV discussed the results of the cognitive interviews and reached consensus on changes to items.

Step 3: content validity testing

Content validity is the degree to which the content of the instrument reflects the construct (i.e., naturalistic decision making) that the instrument was designed to measure [9]. The COSMIN methodology for evaluating content validity defines three properties of content validity (relevance, comprehensiveness, and comprehensibility) and further recommends that both patients and professionals are involved in the validation process [10]. Thus, we evaluated the content validity of the instrument in two ways: (i) A Delphi survey of clinicians and researchers and (ii) Cognitive interviews with adults with chronic illness.

Step 3a: Delphi survey

The Delphi technique uses structured questionnaires that are distributed in iterative rounds to a group of experts who remain anonymous to each other throughout the process [11]. For the Delphi survey, we defined experts as (i) Clinicians who routinely help adults make decisions about their chronic illnesses and (ii) Researchers who have published on decision making related to chronic illness in the scientific literature. Experts were identified through a Facebook discussion on the topic of decision-making in self-care, a literature search on decision-making in chronic illness, and the professional networks of the study authors. The Delphi survey was completed electronically using Qualtrics (Provo, UT). Respondents rated the relevance of items to the construct of naturalistic decision making on a 4-point scale (not relevant, somewhat relevant, quite relevant, highly relevant). The comprehensibility of items was rated dichotomously (clear, not clear). Respondents had the opportunity to suggest new items to support comprehensiveness of the instrument and ensure that no facets of the construct were omitted. Finally, respondents provided feedback on the clarity of the proposed instrument instructions and the scoring format.

After each round, the Content Validity Index (CVI) of each item (I-CVI) was calculated by dividing the number of respondents reporting that an item was “quite relevant” or “highly relevant” by the total number of respondents [12]. An I-CVI greater than 0.78 is considered evidence of good content validity [12]. Thus, to be retained without revision, the I-CVI had to be 0.78 or higher. Consensus on clarity was defined as at least 75% of the respondents agreeing that the item was clear. SP, BR, TJ, AS, HW, and EV met to discuss responses following each round of the Delphi survey. Items were retained, revised, or deleted following discussion of the I-CVI and clarity data as well as the respondents’ open-ended suggestions.

The scale-level content validity index (S-CVI) was calculated at the conclusion of the Delphi survey. We report the average of the I-CVIs for all items on the scale (i.e., S-CVI/Ave). According to Polit and Beck [12], a S-CVI/Ave greater than 0.90 indicates excellent content validity.
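The I-CVI and S-CVI/Ave computations described above are straightforward to reproduce. The sketch below uses made-up expert ratings, not the study's data, to illustrate both formulas:

```python
# Illustrative CVI computations; ratings are hypothetical, not study data.

def i_cvi(ratings, relevant=(3, 4)):
    """Item-level CVI: proportion of experts rating the item 3 ('quite
    relevant') or 4 ('highly relevant') on the 4-point relevance scale."""
    return sum(r in relevant for r in ratings) / len(ratings)

def s_cvi_ave(item_ratings):
    """Scale-level CVI, averaging method: mean of the item-level I-CVIs."""
    cvis = [i_cvi(r) for r in item_ratings]
    return sum(cvis) / len(cvis)

# Hypothetical ratings from 12 experts for three items
items = [
    [4, 4, 3, 4, 4, 3, 4, 4, 4, 3, 4, 4],  # I-CVI = 12/12 = 1.00
    [4, 3, 4, 2, 4, 4, 3, 4, 4, 4, 4, 3],  # I-CVI = 11/12 ~ 0.92
    [3, 4, 4, 4, 2, 4, 4, 4, 3, 4, 4, 2],  # I-CVI = 10/12 ~ 0.83
]
print([round(i_cvi(r), 2) for r in items])  # item-level indices
print(round(s_cvi_ave(items), 2))           # scale-level average
```

Under the study's criteria, all three hypothetical items would be retained without revision (I-CVI > 0.78), and the scale average exceeds the 0.90 threshold for excellent content validity.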

Step 3b: cognitive interviews

Following the Delphi survey, cognitive interviews with a second set of adults with chronic illness were completed to ensure that the revised items remained relevant to their experience and to assess comprehensiveness and comprehensibility of the instrument. Participants were again recruited through Researchmatch.org using the same inclusion criteria previously described. Participants were read the instrument instructions followed by each item. Per the instrument instruction, they rated how much the item influenced their decision on a 5-point Likert scale from “not at all” to “a great deal”. Participants were encouraged to “think aloud” and describe how they arrived at each answer. They also provided feedback on the clarity of the instrument instructions and Likert scale. To elicit more information, three types of verbal probing techniques were used: (1) comprehension/interpretation probes (e.g., why do you think…?), (2) paraphrasing (e.g., please repeat that statement in your own words), and (3) general probes (e.g., how did you arrive at that answer?) [13].

Formal psychometric testing

Sample

Participants were recruited through Researchmatch.org for psychometric testing of the newly developed Self-Care Decisions Inventory. Invitations to participate were sent to adults (age > 18y) with at least one chronic condition. Chronic condition was defined as any of the symptomatic physical or mental health conditions that are included on the list of chronic conditions published by the Office of the Assistant Secretary for Health in the Department of Health and Human Services of the United States [14]. Additional eligibility criteria included currently experiencing at least one symptom of the chronic illness. Surveys were completed electronically using Qualtrics (Provo, UT).

Step 4: dimensionality & recalibration

Descriptive statistics of central tendency and dispersion were used to describe the sample. Exploratory factor analysis was used to test dimensionality; response options were treated as ordered categorical data, using weighted least squares mean- and variance-adjusted (WLSMV) estimation and geomin oblique rotation (primary loading cutoff > 0.40, with significant loading (p < 0.05) on alternative factors) [15]. Models ranging from 1 to 8 factors were compared using cutoff values of model fit (i.e., root mean square error of approximation (RMSEA) < 0.05, comparative fit index (CFI) and Tucker-Lewis index (TLI) of ≥ 0.95, and standardized root mean square residual (SRMR) < 0.08) [16, 17]. Velicer’s minimum average partial correlation was calculated post-estimation along with Horn’s parallel analysis to confirm the number of factors [18,19,20], assuming that a correctly identified multidimensional model also can result in local independence [21].
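Horn's parallel analysis, used here to confirm the factor count, retains factors whose observed eigenvalues exceed the mean eigenvalues obtained from random data of the same dimensions. A minimal sketch, using simulated toy data rather than the study data:

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Horn's parallel analysis: count factors whose observed correlation-
    matrix eigenvalues exceed the mean eigenvalues of random normal data
    with the same number of rows and columns."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.zeros(p)
    for _ in range(n_sims):
        sim = rng.standard_normal((n, p))
        rand += np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    rand /= n_sims
    return int(np.sum(obs > rand))

# Toy data: two correlated item clusters, so two factors are expected
rng = np.random.default_rng(1)
f = rng.standard_normal((500, 2))
data = np.hstack([f[:, [0]] + 0.5 * rng.standard_normal((500, 3)),
                  f[:, [1]] + 0.5 * rng.standard_normal((500, 3))])
print(parallel_analysis(data))
```

This simplified version draws from a standard normal distribution; implementations for ordinal items (as in the study) typically use polychoric correlations instead of Pearson correlations.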

Graded response item response theory (IRT)-based approaches were used within each factor for recalibration using information on a) item discrimination (slope and significance), b) item difficulty (graded response model slopes and standard errors as well as boundary and category characteristic curves), as well as c) item and test information (item and test information curves) [22].
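Under the graded response model, each item has one discrimination parameter and a set of ordered boundary difficulties; the category characteristic curves are differences between adjacent boundary characteristic curves. A sketch with hypothetical parameters for a single 5-category item (the values are illustrative, not estimates from the study):

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Graded response model: boundary curves P(X >= k | theta) follow a
    logistic in a * (theta - b_k); category probabilities are differences
    between adjacent boundary curves."""
    boundary = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b, dtype=float))))
    star = np.concatenate(([1.0], boundary, [0.0]))  # P(X >= 0) = 1, P(X >= K+1) = 0
    return star[:-1] - star[1:]

# Hypothetical item: discrimination a = 1.8 and four boundary difficulties
# for the 5-point response scale
probs = grm_category_probs(theta=0.0, a=1.8, b=[-1.5, -0.5, 0.5, 1.5])
print(np.round(probs, 3))           # probabilities for the 5 categories
print(np.round(probs.sum(), 6))     # probabilities sum to 1
```

A respondent at the trait mean (theta = 0) is most likely to endorse the middle category with these symmetric difficulties; a weakly discriminating item produces flat curves and little test information, which is the pattern that motivated dropping items in Step 4.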

Step 5: construct validity

No measure of the contextual factors influencing decision making as described in the naturalistic decision making framework exists, so we chose to assess convergent validity, the degree to which the new measure is related to other measures of decision-making. We compared each recalibrated Self-Care Decisions Inventory scale with the Melbourne Decision Making Questionnaire (Melbourne DMQ) domains. The Melbourne DMQ measures four patterns for coping with decisional conflict: vigilance, hypervigilance, buck passing, and procrastination [7]. The coping pattern of vigilance involves clarifying objectives, canvassing an array of alternatives, searching for relevant information, assimilating that information, and evaluating alternatives before making a choice. The pattern of hypervigilance involves frantic searching, time pressure, and impulsive choice of a contrived solution. Buck passing is described as an avoidance style associated with defensiveness and dependency. Finally, procrastination is another form of defensive avoidance that involves delaying decision making. Higher scores indicate a preference for that coping pattern, and vigilance is negatively correlated with the other patterns. The scale alpha coefficient reliabilities ranged from 0.74 to 0.87 in a sample of 2018 participants from six countries [7]. We hypothesized that each recalibrated scale on the Self-Care Decisions Inventory would be significantly associated with Melbourne DMQ domains. Linear correlations with Bonferroni correction were computed to test these hypotheses.
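The Bonferroni correction simply divides the significance threshold by the number of tests performed; with 6 inventory scales and 4 Melbourne DMQ domains, 24 correlations are tested. A minimal illustration with made-up p-values:

```python
# Bonferroni correction sketch; p-values are hypothetical, for illustration.

def bonferroni_significant(p_values, alpha=0.05):
    """Flag tests that remain significant after dividing the family-wise
    alpha by the number of tests."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# With m = 6 tests, the adjusted threshold is 0.05 / 6, about 0.0083
p_values = [0.001, 0.004, 0.030, 0.0001, 0.200, 0.002]
print(bonferroni_significant(p_values))
```

A correlation that would pass an uncorrected 0.05 threshold (e.g., p = 0.030 above) can fail after correction, which guards against false positives across the 24 comparisons.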

Criterion validity is the extent to which one measure predicts scores on another measure. To evaluate criterion validity, we assessed the degree to which scores on the Self-Care Decisions Inventory predict adequate self-care, using the Self-Care of Chronic Illness Inventory (SC-CII), a 20-item self-report generic measure of self-care based on the Theory of Self-Care of Chronic Illness [23]. The SC-CII includes three scales: Self-Care Maintenance, Self-Care Monitoring, and Self-Care Management. Scores range from 0 to 100 and higher scores indicate better self-care. A cut-point of ≥ 70 is used to indicate self-care adequacy on each scale [24]. The Self-Care Management scale is multidimensional, thus reliability is calculated using the global reliability index [25]. Reliability of this scale was 0.71 in a sample of 400 adults with chronic illness [23]. We hypothesized that adequate self-care management would be positively associated with the ‘Urgency’ scale in the Self-Care Decisions Inventory and negatively associated with the Self-Care Decisions Inventory ‘Uncertainty’ scale, discussed further below. Scores on the Self-Care Decisions Inventory were standardized to range from 0 to 100. Two-sample t-tests were used to compare Self-Care Decisions Inventory scores between groups of individuals with adequate and inadequate self-care management. Hedges’ g is reported for effect size.
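Hedges' g is Cohen's d (the mean difference divided by the pooled standard deviation) multiplied by a small-sample correction factor. A sketch with illustrative group scores, not the study's data:

```python
import numpy as np

def hedges_g(x, y):
    """Hedges' g: standardized mean difference with the small-sample
    bias correction 1 - 3 / (4*(nx + ny) - 9)."""
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                         (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    d = (np.mean(x) - np.mean(y)) / pooled_sd
    correction = 1 - 3 / (4 * (nx + ny) - 9)
    return d * correction

# Hypothetical standardized 'urgency' scores for adequate vs. inadequate
# self-care management groups
adequate = np.array([70.0, 65.0, 80.0, 75.0, 60.0, 85.0])
inadequate = np.array([55.0, 50.0, 60.0, 45.0, 65.0, 40.0])
print(round(hedges_g(adequate, inadequate), 2))
```

By convention g around 0.2 is considered small, around 0.5 medium, and around 0.8 large, which is how the "small to medium" effect sizes in the Results are interpreted.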

Step 6: precision & reliability

IRT test information function curves were generated to display the range of each construct where recalibrated scales of the Self-Care Decisions Inventory are most accurate. Multidimensional reliability was quantified using factor determinacy scores for the recalibrated Self-Care Decisions Inventory in confirmatory factor analysis.

Step 7: differential item functioning

Ordinal logistic regression approaches were combined with IRT-based ability estimates to detect differential item functioning related to self-identified gender [26].

Factor analyses were performed in Mplus v8 (Los Angeles, CA), and IRT models and validity testing were performed in Stata v16 (College Station, TX). Full information maximum likelihood estimation (FIML) was used to impute the 0.3% of data that were missing at random.

Results

Instrument development

Step 1: item generation

The instrument instructions directed survey respondents to think about the last time that they had a bothersome symptom of their chronic illness and then rate how much each item influenced their decision about what to do in response to that symptom. Ten contextual factors were derived from the literature: prior experience, competing personal goals, uncertainty and ambiguity, urgency, situation awareness, involvement of multiple individuals, interpretation of symptom meaning, illness characteristics, dynamically evolving conditions, and high stakes [2, 3]. Several items were generated for each contextual factor, resulting in an initial draft of 42 items. From August to October 2020, the investigators discussed and revised items. Consensus discussions centered on ensuring that all contextual factors were adequately represented and that items were clearly worded. For example, for prior experience, we decided to include items that captured both having experience (e.g., I thought about similar past decisions) and lack of experience (e.g., the symptom was new to me). Each item was rated on a 5-point Likert scale with response options of not at all (1), a little (2), some (3), a lot (4), and a great deal (5). Figure 2 displays the process of item selection and revision.

Fig. 2

Flow chart of item selection and revision for the Self-Care Decisions Inventory. This flowchart displays the process of item development. Initially 42 items were generated. Items were subsequently retained, revised, added, or deleted based on patient input, a two round Delphi survey, and cognitive interviews with adults with chronic illness

Step 2: item refinement with patient input

Five women, ages 43–71, completed the cognitive interviews. Each had multiple chronic conditions and had been living with at least one symptomatic chronic illness for more than 10 years. One participant reported having both physical and mental illnesses.

Based on the responses of these adults, 23 items were retained as initially written, 7 items were revised, and 11 items were added. Item revisions were made to improve clarity. For example, “I recognized this from last time” was changed to “I recognized this symptom from last time”. Items were added when participants identified that a factor that influences their decision was not captured by existing items. For example, a participant identified that her decision making is affected by depressive symptoms, so the item “I felt too down, so I put off making a decision” was added. Finally, 12 items were deleted as irrelevant (8 items) or redundant (4 items). The refined draft of the instrument included 41 items.

Step 3a: Delphi survey

Twenty-six experts were invited via email to complete the Delphi survey. There were 12 respondents (9 female, 3 male) in round 1 and all 12 respondents also completed round 2. Experts were from the United States (n = 7), Italy (n = 4), and Germany (n = 1). All experts reported that their primary role was as a professor/lecturer at a university and ten also reported clinical experience. The average number of years of experience, specifically in the clinical care of adults with chronic conditions, was 16 years (range: 4–44). Eleven of the 12 experts had a PhD and one had a master’s degree.

I-CVI and clarity data for each Delphi round are summarized in Table 1. In round 1, I-CVIs ranged from 0.5 to 1.0. Two items, “I didn’t want to look weak” (I-CVI = 0.5) and “I knew I was in trouble” (I-CVI = 0.75) were rated as irrelevant and also had less than 75% agreement on clarity, thus both items were deleted. Twelve items did not reach consensus on clarity (i.e., rated as clear by < 75% experts). Six of these items were deleted because there were other items that evaluated the same contextual factor and scored better in terms of clarity. Six of these items were revised and were subsequently rated as clear by ≥ 75% of experts in the second round. Five items were added and one was deleted based on the open-ended feedback in this round.

Table 1 I-CVI and clarity data by Delphi round

In round 2, the I-CVIs were 1.0 for 26 items, 0.92 for 9 items, and 0.83 for 2 items. All items were rated as clear by ≥ 75% of experts. Based on the open-ended feedback provided by experts, minor revisions to the wording were made to 6 items and 2 items were added.

The Delphi survey was closed after the second round as consensus on item relevance and clarity was achieved. The S-CVI/Ave of this 39-item instrument was excellent at 0.92.

Step 3b: cognitive interviews

Five adults (3 female, 2 male), ages 44–70, completed the second round of cognitive interviews. Four adults had multiple chronic conditions, including one who reported both physical and mental health conditions. Two adults had been diagnosed in the last 3 years, while three adults reported having at least one symptomatic chronic condition for more than 10 years. Despite having chronic conditions for multiple years, one adult was experiencing a new symptom and spoke about decision-making for this new symptom during the cognitive interview.

In these cognitive interviews, respondents reported that items were relevant to their experience and the instrument was comprehensive. No new items were suggested. For three items, participants reported confusion about wording and endorsed multiple interpretations of the item. These three items were deleted because there were other items that captured the same contextual factor and were clearer to participants. One item, “I worried about the cost of treatment”, was deleted based on participant feedback. Participants discussed that worries about cost were directly tied to whether they had adequate insurance coverage. Thus, the item reflected access to insurance coverage rather than a factor that influenced decision making. We aimed to develop an instrument that could be used internationally and since insurance coverage and treatment costs differ across countries, we chose to delete this item. The instrument instructions were also shortened and simplified based on participant feedback. The anchors of the 5-point scale were changed to “No Influence” (1) and “A Lot of Influence” (5). Following content validity testing, the instrument contained 35 items.

Psychometric testing

Invitations to participate were sent to 1,127 individuals who expressed interest in the study on Researchmatch.org. A total of 431 individuals completed the survey for a response rate of 38.2%. The typical participant was female, White, non-Hispanic, with at least some college education (Table 2). The sample was diverse in terms of the types of chronic conditions, including 22.5% who self-reported having a mental health condition. The Self-Care Decisions Inventory instructs participants to think about the last time that they had a worrisome symptom and participants provided a free-text response to the question “what symptom are you thinking about?” Most participants (n = 356, 82.6%) reported a single symptom, while 44 (10.2%) reported multiple symptoms. The most frequently reported symptoms were pain (28.1%), respiratory symptoms (11.5%), mental health symptoms (7.4%), fatigue (7.1%), and gastrointestinal symptoms (7.1%).

Table 2 Participant characteristics (n = 431)

Step 4: dimensionality & recalibration

The 35 Self-Care Decisions Inventory items fit best into a 6-factor multidimensional structure in exploratory factor analysis (RMSEA = 0.05, CFI = 0.96, TLI = 0.94, and SRMR = 0.04). Velicer’s minimum average partial correlation and Horn’s parallel analysis confirmed the 6-factor structure (Additional File 1). Based on primary item loadings (Table 3) we identified six types of contextual factors that influence self-care decisions about symptoms – all significant factor loadings are presented.

Table 3 Self-Care Decisions Inventory item significant (p < 0.05) Geomin loadings and multidimensional structure

Each factor represents a distinct, separately scored scale on the Self-Care Decisions Inventory. Scales were labeled ‘external,’ ‘urgency,’ ‘uncertainty,’ ‘cognitive/affective,’ ‘waiting/cue competition,’ and ‘concealment’ based on the initial literature review and the content of the items that significantly loaded onto that scale (Table 4). Correlations between scales ranged from 0.13 (urgency and concealment) to 0.35 (urgency and uncertainty).

Table 4 Interpretations of the six scales of the self-care decisions inventory

Four items were associated with the scale we labeled ‘external.’ Although all items were significant discriminators between low and high levels of external factors driving decision-making (Table 5), item 20 “someone else recognized the symptom before I did,” had the lowest value for discrimination, and provided the least information about the influence of external factors (Fig. 3). Further, based on category characteristic curves (Additional File 2), there had to be extremely high levels of the external influence (i.e. outside of the 95% confidence interval) for respondents to choose any response option above 1 (i.e. no influence). Therefore, item 20 was dropped from the ‘external’ scale.

Table 5 Scale-specific item discrimination and difficulty
Fig. 3

Self-care decisions inventory item information functions. Each pre-calibration item is shown within the six scales of the Self-Care Decisions Inventory. On the x-axis, theta represents the mean observed trait and the scale is standard errors around theta. On the y-axis, items providing more information about the trait with respect to greater discrimination have higher curves; items providing less information about the trait have lower curves, particularly those with a peak less than one

Six items were associated with the scale we labeled ‘urgency.’ All items were significant discriminators between low and high levels of urgency (Table 5); but item 1, “I thought about decisions I made in the past when I had a similar symptom,” had the lowest value for discrimination and not all response options discriminated significantly. In addition, item 1 provided almost no information about the influence of urgency (Fig. 3), and there was a very low threshold for higher probability of respondents choosing higher response options. Therefore, item 1 was dropped from the ‘urgency’ scale.

Nine items loaded on the scale we labeled ‘uncertainty.’ All items were significant discriminators between low and high levels of uncertainty (Table 5); however, there were redundancies with respect to item information, especially involving these items: item 3 “The symptom was different than what I expected,” and item 35 “The symptom was different than the last time I had it” (Fig. 3). Additionally, item 33 “I recognized this symptom from the last time I had it” was the weakest discriminator and provided the least information about uncertainty. Items 3, 33 and 35 were omitted from the ‘uncertainty’ scale.

Six items were associated with the scale we labeled ‘cognitive/affective.’ All six items discriminated significantly (Table 5). However, for item 32 “I felt uncertain about what to do”, not all response options were significant discriminators (Additional File 2) and item 32 also provided the least information about the influence of the individual’s cognitive/affective state (Fig. 3). Accordingly, item 32 was dropped from the ‘cognitive/affective’ scale.

Seven items loaded on the scale we labeled ‘waiting/cue competition.’ All items discriminated significantly between low and high levels of waiting/cue competition (Table 5). However, items 25 “The symptom changed slowly” and 29 “Someone else needed my attention” had the lowest values for discrimination and provided the least information about the ‘waiting/cue competition’ scale (Fig. 3). Hence, items 25 and 29 were omitted from the ‘waiting/cue competition’ scale.

Finally, three items loaded on the scale we labeled ‘concealment.’ All three items were significant discriminators between low and high levels of concealment (Table 5) and all items provided sufficient information about concealment (Fig. 3). Accordingly, all three items were retained in the ‘concealment’ scale.

Step 5: construct validity

Correlations between the six new Self-Care Decisions Inventory scales and the four domains of the Melbourne DMQ were tested (Table 6).

Table 6 Convergent Validity Testing with Melbourne Decision-Making Questionnaire Domains

The Self-Care Decisions Inventory external scale was modestly associated with buck passing and hypervigilance. The Self-Care Decisions Inventory uncertainty scale was modestly associated with procrastination and hypervigilance. The Self-Care Decisions Inventory cognitive/affective scale was associated with buck passing, procrastination, and hypervigilance. The Self-Care Decisions Inventory waiting/cue competition scale was associated modestly with buck passing and procrastination. The Self-Care Decisions Inventory concealment scale was associated with buck passing, procrastination and hypervigilance. No scale on the Self-Care Decisions Inventory was significantly associated with the Melbourne DMQ vigilance domain, and the Self-Care Decisions Inventory urgency scale was not associated with any Melbourne DMQ domain.

We also evaluated differences in scale scores of the Self-Care Decisions Inventory between individuals with adequate and inadequate self-care management (Table 7). Adequate self-care management is defined as a score ≥ 70 on the SC-CII Management Scale [24].

Table 7 Criterion validity testing comparing the six scales of the self-care decisions inventory with adequate versus inadequate self-care management

There were statistically significant differences in scores on the external, urgency, and uncertainty scales, with small to medium effect sizes. This partially supported our hypothesis that the scales of the Self-Care Decisions Inventory would be associated with adequate self-care. As hypothesized, individuals with higher urgency had significantly higher self-care management scores. However, those with higher uncertainty also had higher self-care management scores.
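A group comparison of this kind can be sketched with Welch's t-test and Cohen's d. The data below are simulated; the group sizes, means, and the specific test are assumptions for illustration, not a reproduction of the study's reported analysis:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

# Hypothetical urgency scores (0-100) split by the SC-CII cutoff; the
# cutoff itself (management score >= 70) follows the paper, the data do not.
adequate = rng.normal(62, 25, 200).clip(0, 100)
inadequate = rng.normal(50, 25, 231).clip(0, 100)

t, p = ttest_ind(adequate, inadequate, equal_var=False)  # Welch's t-test

# Cohen's d using the pooled standard deviation
n1, n2 = len(adequate), len(inadequate)
pooled_sd = np.sqrt(((n1 - 1) * adequate.var(ddof=1) +
                     (n2 - 1) * inadequate.var(ddof=1)) / (n1 + n2 - 2))
d = (adequate.mean() - inadequate.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.4g}, d = {d:.2f}")
```

By convention, d around 0.2 is a small effect and d around 0.5 a medium one, the range reported above.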

Step 6: precision and reliability

In IRT, test information function graphs with plotted standard errors indicate the range of the underlying contextual factor over which each scale is most precise; these data are provided in Fig. 4. Using confirmatory factor analysis with the recalibrated domains, multidimensional reliability (i.e., the factor determinacy score) was high at 0.864.

Fig. 4
Recalibrated test information functions for each scale of the Self-Care Decisions Inventory. For each post-calibration scale, the curves show how much information the factor items collectively provide about the underlying trait (left y-axis: information) across the trait range (x-axis: theta, scaled in standard units around the mean observed trait), and where the scale is most precise (right y-axis: standard error)
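The relationship between the two y-axes in Fig. 4 follows from a standard IRT identity: the conditional standard error of measurement is the reciprocal square root of test information, SE(θ) = 1/√I(θ). Below is a minimal sketch for a two-parameter logistic (2PL) test information function; the item parameters are hypothetical, and the study's actual models were fit to polytomous items:

```python
import numpy as np

def item_info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

# Hypothetical discrimination (a) and difficulty (b) parameters
items = [(1.8, -0.5), (1.2, 0.0), (2.1, 0.7)]

theta = np.linspace(-3, 3, 121)
test_info = sum(item_info_2pl(theta, a, b) for a, b in items)
se = 1.0 / np.sqrt(test_info)  # conditional standard error of measurement

# The scale is most precise where information peaks and SE is lowest
print(f"peak information at theta = {theta[test_info.argmax()]:.2f}")
```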

Step 7: differential item functioning

No significant uniform or non-uniform differential item functioning was detected by self-identified gender.
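Uniform and non-uniform DIF are commonly evaluated by comparing nested logistic regression models with and without group and group-by-trait terms [26]. The sketch below uses a single binary item and simulated data for brevity; the coefficients, sample, and likelihood-ratio approach are illustrative assumptions, not a reproduction of the study's analysis:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(2)
n = 431
theta = rng.normal(size=n)      # trait estimate (e.g., scale score)
group = rng.integers(0, 2, n)   # self-identified gender, coded 0/1

# Simulate an item with no DIF: response depends on theta only
y = (rng.random(n) < 1 / (1 + np.exp(-(1.5 * theta - 0.2)))).astype(float)

def nll(beta, X):
    """Negative log-likelihood of a logistic model (numerically stable)."""
    eta = X @ beta
    return np.sum(np.logaddexp(0.0, eta) - y * eta)

def fit(X):
    return minimize(nll, np.zeros(X.shape[1]), args=(X,), method="BFGS").fun

ones = np.ones(n)
nll_base = fit(np.column_stack([ones, theta]))                        # no DIF
nll_uni  = fit(np.column_stack([ones, theta, group]))                 # + uniform
nll_nonu = fit(np.column_stack([ones, theta, group, theta * group]))  # + non-uniform

# Likelihood-ratio tests, 1 df each: group term, then interaction term
p_uniform = chi2.sf(2 * (nll_base - nll_uni), df=1)
p_nonuniform = chi2.sf(2 * (nll_uni - nll_nonu), df=1)
print(f"uniform DIF p = {p_uniform:.3f}, non-uniform DIF p = {p_nonuniform:.3f}")
```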

Scoring and reference ranges

Separate standardized scoring (fixed score range from 0 to 100) is recommended for the six scales of the Self-Care Decisions Inventory; there is no total score. In this derivation sample, means ± standard deviations of the standardized scores were: external (26.30 ± 24.28), urgency (58.57 ± 25.38), uncertainty (33.03 ± 25.04), cognitive/affective (26.00 ± 26.04), waiting/cue competition (40.04 ± 23.13), and concealment (34.89 ± 29.48) (Fig. 5).

Fig. 5
Standardized scores on the Self-Care Decisions Inventory. The mean and standard deviation of the standardized scores for each scale of the Self-Care Decisions Inventory in the current sample are displayed
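The exact standardization formula is not shown here, so the sketch below assumes the common min–max rescaling of a summed Likert scale to 0–100; the item count and 0–4 response coding are assumptions for illustration:

```python
def standardize_scale(item_responses, min_item=0, max_item=4):
    """Rescale a summed Likert scale to 0-100.

    Assumes items are scored min_item..max_item; the actual response
    coding of the Self-Care Decisions Inventory may differ.
    """
    k = len(item_responses)
    raw = sum(item_responses)
    lo, hi = k * min_item, k * max_item
    return 100.0 * (raw - lo) / (hi - lo)

# Example: a hypothetical 3-item scale scored 0-4 per item
print(standardize_scale([2, 3, 1]))  # -> 50.0
```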

Discussion

The Self-Care Decisions Inventory is a 27-item self-report instrument measuring contextual factors influencing self-care decisions about symptoms with six scales: ‘external,’ ‘urgency,’ ‘uncertainty,’ ‘cognitive/affective,’ ‘waiting/cue competition,’ and ‘concealment.’ To our knowledge, this is the first instrument to operationalize naturalistic decision making to measure the contextual factors that influence self-care decisions.

A core premise of naturalistic decision making is that decisions take place in real-world environments that are dynamically evolving [2]. As such, decisions are often made with incomplete information. The ‘uncertainty’ scale assesses uncertainty that arises from ambiguity about the cause or meaning of a symptom. Situational factors also influence decision making and the ‘urgency’ scale measures the influence of feeling that the response to a symptom is time sensitive. The ‘waiting/cue competition’ scale assesses the influence of competing priorities. Together, these three scales (uncertainty, urgency, and waiting/cue competition) provide insight into how patients use information about their symptoms to make decisions. For example, a patient with a high uncertainty score may need support in learning how to assess the severity of their symptoms.

The involvement of multiple individuals (e.g., family, clinicians) may enhance or complicate decision-making. Individuals who score high on the ‘external’ scale are influenced strongly by the input of others. In the Theory of Dyadic Illness Management, the relationship between patients and their care partners is transactional and interdependent as they navigate the patient’s illness together [27]. Decision-making is a dyadic management behavior and there is variability in how patients and their care partners collaborate to make decisions. Prior studies have shown that self-care is indeed a dyadic phenomenon in chronic illness [28, 29], but the dyadic nature of decision-making in response to symptoms is unknown. Further research on caregiver contributions to self-care and dyadic decision-making about symptoms is needed to better understand how patients and their care partners collaborate to manage symptoms of chronic illness. Some adults with chronic illness may instead wish to hide their symptoms from others. The ‘concealment’ scale measures this concept by assessing the extent to which a desire to hide symptoms influences decision making.

The initial draft of the instrument included several items related to prior experience, thought theoretically to inform the assessment of the situation and decision choices. Interestingly, the prior experience items discriminated well between respondents at the extremes (i.e., prior experience having no influence or much influence), but intermediate response options did not discriminate well, and the items were eliminated during recalibration. Respondents in our cognitive interviews universally endorsed the influence of prior experience. This is similar to our previous findings in adults with heart failure, who reported that prior experience was valuable in improving their ability to recognize and interpret symptoms [3]. A lack of prior experience is reflected in the ‘uncertainty’ scale.

One’s cognitive or affective state at the time when a symptom occurs also influences decision making, a concept measured by the ‘cognitive/affective’ scale. In this study, individuals who were highly influenced by thoughts or feelings (i.e., higher score on the cognitive/affective scale) had decisional coping styles that were more maladaptive. Indeed, the contextual factors measured by the Self-Care Decisions Inventory can complement assessment of coping with decisional conflict. The Melbourne DMQ [7] pattern of hypervigilance was modestly associated with the external and uncertainty scales, which could suggest that, for some, the input of others and incomplete information leads to a chaotic coping pattern. The concealment scale, which correlated with the Melbourne DMQ patterns of hypervigilance, buck passing, and procrastination, could also be seen as a coping response. The urgency scale was not associated with any coping patterns on the Melbourne DMQ. Perhaps urgency caused by a symptom that is severe or worsening leads to a swift decision rather than decisional conflict. Investigators who are interested in the contextual factors derived from the naturalistic decision making framework and also want to understand how people cope with decisional conflict may want to use both instruments in future research.

Several of the contextual factors measured by this new instrument appear to be amenable to interventions to improve decision-making about symptoms, which may improve self-care. In this study, the influence of external factors, urgency, and uncertainty differed significantly between those with (SC-CII management scale score ≥ 70) and without adequate self-care management. These results confirm findings from other studies that the perception of urgency and importance prompts engagement in self-care [30]. Surprisingly, there was more uncertainty in those with adequate self-care management compared with those with inadequate self-care management. This difference may be explained by considering that the self-care management scale measures responses to symptoms that include calling the provider for guidance. People may be more tempted to call the provider if they feel uncertain about what to do when they have a symptom. Finally, those with adequate self-care were more influenced by the input of others (external scale). This suggests that those with adequate self-care management are more likely to consult with others (e.g., family, clinicians) when making decisions about what to do about symptoms. Patients may be differentially influenced by contextual factors based on the severity of the condition, whether the condition is life-limiting, and social stigma surrounding it. Future research might compare decision-making between groups of individuals with different chronic conditions to gain insights that could inform tailored self-care decision-making interventions.

Limitations include a convenience sample that was predominantly female, White, and residing in the United States. The first five interviewees were women, but content validity was later assessed by a more representative group of two men and three women. All data were cross-sectional. Our response rate was low (38.2%), which is common in online surveys [31]. Since the invited participants were anonymous, we are unable to assess whether there were significant differences between those who completed the survey and those who did not, which might have biased our sample. Further testing in more diverse populations is needed to ensure generalizability to all adults with symptomatic chronic illness. Based on simulation studies for IRT models [32, 33], for this 27-item instrument we recommend enrolling a minimum of 500 participants in future studies. We did not evaluate test–retest reliability, so the stability of the decision-making pattern(s) is unknown. Although some aspects of decision-making are likely trait-like and stable across contexts [34], naturalistic decision-making is situation specific and variable. Short-term stability should be tested in future research. Finally, responses to many of the questions indicate that five response options may not be ideal or even necessary; the lack of significant differential item functioning by gender will also need to be confirmed in future studies. After additional validation, future refinements of the instrument may include limiting response options or even dichotomizing responses.

Conclusion

The 27-item Self-Care Decisions Inventory is a new instrument developed with input from patients, clinicians, and researchers. It measures six contextual factors that influence everyday decision-making about symptoms of chronic illness. Content validity is excellent and the instrument has high multidimensional reliability. While additional testing is indicated, initial psychometric analysis indicates that the Self-Care Decisions Inventory may be useful in research to better understand the processes that persons use to make decisions about their symptoms.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

CFI: Comparative fit index

COSMIN: COnsensus-based standards for the selection of health measurement instruments

CVI: Content validity index

I-CVI: Item content validity index

IRT: Item response theory

FIML: Full information maximum likelihood estimation

Melbourne DMQ: Melbourne decision making questionnaire

RMSEA: Root mean square error of approximation

SC-CII: Self-care of chronic illness inventory

S-CVI: Scale content validity index

SRMR: Standardized root mean square residual

TLI: Tucker-Lewis index

References

  1. Riegel B, Jaarsma T, Strömberg A. A middle-range theory of self-care of chronic illness. ANS Adv Nurs Sci. 2012;35(3):194–204.

  2. Zsambok CE, Klein G. Naturalistic Decision Making. Mahwah, New Jersey: Lawrence Erlbaum Associates; 1997.

  3. Riegel B, Dickson VV, Topaz M. Qualitative analysis of naturalistic decision making in adults with chronic heart failure. Nurs Res. 2013;62(2):91–8.

  4. Leykin Y, DeRubeis RJ. Decision-making styles and depressive symptomatology: development of the decision styles questionnaire. Judgm Decis Mak. 2010;5(7):506.

  5. Scott SG, Bruce RA. Decision-making style: the development and assessment of a new measure. Educ Psychol Measur. 1995;55(5):818–31.

  6. Hamilton K, Shih SI, Mohammed S. The development and validation of the rational and intuitive decision styles scale. J Pers Assess. 2016;98(5):523–35.

  7. Mann L, Burnett P, Radford M, Ford S. The melbourne decision making questionnaire: an instrument for measuring patterns for coping with decisional conflict. J Behav Decis Mak. 1997;10(1):1–19.

  8. Miller DC, Byrnes JP. Adolescents’ decision making in social situations: a self-regulation perspective. J Appl Dev Psychol. 2001;22(3):237–56.

  9. Mokkink LB, Terwee CB, Patrick DL, Alonso J, Stratford PW, Knol DL, et al. The COSMIN study reached international consensus on taxonomy, terminology, and definitions of measurement properties for health-related patient-reported outcomes. J Clin Epidemiol. 2010;63(7):737–45.

  10. Terwee CB, Prinsen CAC, Chiarotto A, Westerman MJ, Patrick DL, Alonso J, et al. COSMIN methodology for evaluating the content validity of patient-reported outcome measures: a Delphi study. Qual Life Res. 2018;27(5):1159–70.

  11. Hasson F, Keeney S, McKenna H. Research guidelines for the Delphi survey technique. J Adv Nurs. 2000;32(4):1008–15.

  12. Polit DF, Beck CT, Owen SV. Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Res Nurs Health. 2007;30(4):459–67.

  13. Willis GB. Analysis of the Cognitive Interview in Questionnaire Design: Oxford University Press; 2015.

  14. Goodman RA, Posner SF, Huang ES, Parekh AK, Koh HK. Defining and measuring chronic conditions: imperatives for research, policy, program, and practice. Prev Chronic Dis. 2013;10:E66.

  15. Howard MC. A Review of Exploratory Factor Analysis Decisions and Overview of Current Practices: What We Are Doing and How Can We Improve? Int J Hum-Comput Interact. 2016;32(1):51–62.

  16. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Model Multidiscip J. 1999;6(1):1–55.

  17. Schermelleh-Engel K, Moosbrugger H, Müller H. Evaluating the fit of structural equation models: tests of significance and descriptive goodness-of-fit measures. Methods Psychol Res. 2003;8(2):23–74.

  18. Velicer WF. Determining the number of components from the matrix of partial correlations. Psychometrika. 1976;41(3):321–7.

  19. Ye ZJ, Liang MZ, Li PF, Sun Z, Chen P, Hu GY, et al. New resilience instrument for patients with cancer. Qual Life Res. 2018;27(2):355–65.

  20. Horn JL. A rationale and test for the number of factors in factor analysis. Psychometrika. 1965;30(2):179–85.

  21. Edwards MC, Houts CR, Cai L. A diagnostic procedure to detect departures from local independence in item response theory models. Psychol Methods. 2018;23(1):138–49.

  22. Nguyen TH, Han HR, Kim MT, Chan KS. An introduction to item response theory for patient-reported outcome measurement. Patient. 2014;7(1):23–35.

  23. Riegel B, Barbaranelli C, Sethares KA, Daus M, Moser DK, Miller JL, et al. Development and initial testing of the self-care of chronic illness inventory. J Adv Nurs. 2018;74(10):2465–76.

  24. Riegel B, Lee CS, Dickson VV, Carlson B. An update on the self-care of heart failure index. J Cardiovasc Nurs. 2009;24(6):485–97.

  25. Barbaranelli C, Lee CS, Vellone E, Riegel B. The problem with Cronbach’s Alpha: comment on Sijtsma and van der Ark (2015). Nurs Res. 2015;64(2):140–5.

  26. Crane PK, Gibbons LE, Jolley L, van Belle G. Differential item functioning analysis with ordinal logistic regression techniques. DIFdetect and difwithpar. Med Care. 2006;44(11 Suppl 3):S115–23.

  27. Lyons KS, Lee CS. The theory of dyadic illness management. J Fam Nurs. 2018;24(1):8–28.

  28. Bidwell JT, Vellone E, Lyons KS, D’Agostino F, Riegel B, Juarez-Vela R, et al. Determinants of heart failure self-care maintenance and management in patients and caregivers: a dyadic analysis. Res Nurs Health. 2015;38(5):392–402.

  29. Lee CS, Vellone E, Lyons KS, Cocchieri A, Bidwell JT, D’Agostino F, et al. Patterns and predictors of patient and caregiver engagement in heart failure care: a multi-level dyadic study. Int J Nurs Stud. 2015;52(2):588–97.

  30. Xu J, Gallo JJ, Wenzel J, Nolan MT, Budhathoki C, Abshire M, et al. Heart Failure rehospitalization and delayed decision making: the impact of self-care and depression. J Cardiovasc Nurs. 2018;33(1):30–9.

  31. Arafa AE, Anzengruber F, Mostafa AM, Navarini AA. Perspectives of online surveys in dermatology. J Eur Acad Dermatol Venereol. 2019;33(3):511–20.

  32. Jiang S, Wang C, Weiss DJ. Sample size requirements for estimation of item parameters in the multidimensional graded response model. Front Psychol. 2016. https://doi.org/10.3389/fpsyg.2016.00109.

  33. Kose IA, Demirtasli NC. Comparison of unidimensional and multidimensional models based on item response theory in terms of both variables of test length and sample size. Procedia Soc Behav Sci. 2012;46:135–40.

  34. Ye ZJ, Zhang Z, Zhang XY, Tang Y, Chen P, Liang MZ, et al. State or trait? Measuring resilience by generalisability theory in breast cancer. Eur J Oncol Nurs. 2020;46: 101727.

Acknowledgements

We thank Victoria Vaughan Dickson, PhD, RN, FAHA, FHFSA, FAAN for her early contributions to item development.

Funding

Australian Catholic University (Grant #: 060.0602.4.581955.xxxx.2406.0077).

Author information

Contributions

SP: Methodology, Formal Analysis, Investigation, Writing- Original Draft, Visualization. CL: Methodology, Formal Analysis, Data Curation, Writing – Original Draft, Visualization. SA: Methodology, Formal Analysis, Data Curation, Writing – Review & Editing. KF: Methodology, Formal Analysis, Writing- Review & Editing. AS: Methodology, Formal Analysis, Writing – Review & Editing. EV: Methodology, Formal Analysis, Writing – Review & Editing. HW: Methodology, Formal Analysis, Writing – Review & Editing. DW: Methodology, Writing – Review & Editing. TJ: Conceptualization, Methodology, Formal Analysis, Writing – Review & Editing, Funding Acquisition. BR: Conceptualization, Methodology, Formal Analysis, Writing- Original Draft, Supervision, Funding Acquisition. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Shayleigh Dickson Page.

Ethics declarations

Ethics approval and consent to participate

Ethical approval was obtained from the University of Pennsylvania Institutional Review Board (Protocol #: 844892). Ethical approval included a waiver of written documentation of informed consent.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Supplementary Information

Additional file 1:

 Velicer’s minimum average partial (MAP) correlation, Horn's output, and Horn’s parallel analysis graph.

Additional file 2:

Category characteristic curves for each item.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Page, S.D., Lee, C., Aryal, S. et al. Development and testing of an instrument to measure contextual factors influencing self-care decisions among adults with chronic illness. Health Qual Life Outcomes 20, 83 (2022). https://doi.org/10.1186/s12955-022-01990-2
