Discrete choice experiments (DCEs) quantify respondents’ preferences over alternative treatments based on their attributes. DCEs assume individuals are rational and do not pick dominated options (i.e., options for which some other option is at least as good on every attribute and strictly better on at least one). However, Johnson et al. (2019) finds that about 10-20% of people select dominated options in DCEs.
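To make the dominance definition concrete, here is a minimal sketch (in Python, with made-up attribute names) of how one could flag a dominated alternative, assuming every attribute is coded so that higher values are better (risk or cost would need to be sign-flipped first):

```python
# Illustrative dominance check; attribute names and values are hypothetical,
# not taken from the paper.

def dominates(a: dict, b: dict) -> bool:
    """True if alternative `a` weakly dominates `b`: at least as good on
    every attribute and strictly better on at least one."""
    at_least_as_good = all(a[k] >= b[k] for k in a)
    strictly_better = any(a[k] > b[k] for k in a)
    return at_least_as_good and strictly_better

# Example: a device with no benefit but added risk vs. "no device"
# (risk is coded here as "safety", so higher means safer).
no_device = {"benefit": 0, "safety": 1.0}
device    = {"benefit": 0, "safety": 0.8}

print(dominates(no_device, device))  # True -> choosing the device is a dominated choice
```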
Why would respondents choose a dominated option? Some reasons include failure to pay attention to the question, not comprehending the question, use of heuristics rather than the actual attribute values presented, labeling effects, or latent preferences for action over inaction (even when inaction is the dominant strategy).
If a lack of comprehension is the reason, would providing feedback clarifying that one option is dominant over the other change respondents’ answers? Could this also help improve estimate precision? That is the question a paper by Genie et al. (2026) aims to answer. The authors give respondents a choice between treatment with a device and no action. The device treatment has no benefits but does have side effects; thus, no action is the dominant strategy. If people selected a specific device, the feedback provided was “The device offers no improvement in your ability to do daily activities. But it imposes a risk of dying or having complications.” The authors hypothesize that providing this feedback will (i) make people’s choices on subsequent tasks more consistent and (ii) make it more likely that people choose the ‘no device’ option.
First, the number of people who changed their minds after feedback was very small.
In the dominance-structured training task, 54% of respondents selected a device with associated risks and no benefits. Among those randomized to receive feedback (n = 170), 71% (n = 121) continued to choose a device even after being explicitly informed that the device provided no benefit and only added risk.
Still, despite the small absolute impact of providing feedback, respondents who received it were roughly twice as likely to choose the dominant “No device” option as those who received no feedback:
The mean predicted probability of choosing “No device” increased from 0.096 without feedback to 0.180 with feedback…an absolute rise of 0.0846 (8.46% points…)…feedback attenuated residual label utility and shifted choices toward the opt-out (No Device). Together, these findings confirm the presence of label effects and demonstrate that the feedback prompt attenuated, but did not eliminate, label-driven utility.
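A quick back-of-the-envelope check of the quoted figures (the small gap between 0.084 and the reported 0.0846 presumably reflects rounding of the underlying estimates):

```python
# Rounded predicted probabilities of choosing "No device", as quoted above.
p_no_feedback = 0.096
p_feedback = 0.180

absolute_change = p_feedback - p_no_feedback  # ~0.084; the paper's 0.0846 likely uses unrounded values
relative_change = p_feedback / p_no_feedback  # ~1.9x, i.e., roughly double

print(f"absolute: {absolute_change:.3f}, ratio: {relative_change:.2f}")
```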
The authors did a latent class analysis and found that the impact of feedback varied by latent class:
Feedback had no measurable impact on consistency for the majority “Pro-device” class, reduced consistency sharply for the “Pro-device, risk-sensitive” class, and increased consistency for the smaller “Anti-device” class.
In short, I would guess the “pro-device” group had a prior belief that the devices were effective; reminding them that a device offered no benefit may have convinced them that it was ineffective on the attributes measured, while they continued to believe it was effective on other attributes not presented.
Healthcare Economist Researcher Tips
What is a researcher to do? First, this paper shows that labelling matters. If the treatments had been labelled Device A, B, and C (or more generally Treatment A, B, and C) instead of Device A, Device B, and “No treatment”, labelling effects would likely have been smaller and fewer dominated options would have been chosen. Second, comparing choices that consist of active vs. passive approaches may cause respondents to infer unmeasured benefits or harms based on their priors rather than the attributes presented. The impact of this potential bias on the research question of interest must be carefully considered. Third, many DCEs remove responses when people choose dominated options. This may be a reasonable option if the choice is due to lack of comprehension or attention. However, if the choice is due to very strong priors, you could be removing the individuals with the strongest preferences in favor of a specific type of treatment. The authors note that dominance failures “…can arise because respondents simplify complex tasks using heuristics, adopt lexicographic preferences, or interpret labels and attributes through their own frames, rather than because they lack well-defined preferences.” Thus, careful rationale is needed to justify removing any respondents from the analytic sample. Fourth, the feedback mechanism the authors use is likely not useful for general DCEs. I agree with the authors’ statement that “adequate training before the DCE begins might be preferable to mid-stream feedback that could be misconstrued as a ‘correct-answer’ cue.”
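On the third point, one practical alternative to dropping dominance failures outright is to flag them and estimate the main model both with and without them as a sensitivity analysis. A rough sketch of the flagging step, using a hypothetical long-format pandas DataFrame with illustrative column names (not the paper’s actual data or analysis):

```python
import pandas as pd

# Illustrative long-format choice data; column names and values are hypothetical.
df = pd.DataFrame({
    "respondent_id":   [1, 1, 2, 2, 3, 3],
    "task":            ["training", "dce1", "training", "dce1", "training", "dce1"],
    "chose_dominated": [True, False, False, False, True, True],
})

# Flag respondents who chose a dominated option in the training task
# instead of silently dropping them from the analytic sample.
failed = (
    df[df["task"] == "training"]
    .groupby("respondent_id")["chose_dominated"]
    .any()
)
df["failed_dominance"] = df["respondent_id"].map(failed)

# Estimate the main choice model on both samples (full vs. excluding flagged
# respondents) and report both as a sensitivity analysis.
full_sample = df
restricted = df[~df["failed_dominance"]]
print(len(full_sample), len(restricted))
```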
You can read the full paper here.