{"id":11893,"date":"2026-03-10T01:06:34","date_gmt":"2026-03-10T01:06:34","guid":{"rendered":"https:\/\/medical-article.com\/?p=11893"},"modified":"2026-03-10T01:06:34","modified_gmt":"2026-03-10T01:06:34","slug":"respondent-selection-of-dominated-choices-in-discrete-choice-experiments","status":"publish","type":"post","link":"https:\/\/medical-article.com\/?p=11893","title":{"rendered":"Respondent selection of dominated choices in discrete choice experiments"},"content":{"rendered":"<p>Discrete choice experiments (DCEs) quantify respondent preferences over alternative treatments based on their attributes.  DCEs assume individuals are rational and do not pick dominated options (i.e., options for which another option is weakly better across all attributes and strictly better across at least one attribute).  However, <a href=\"https:\/\/www.valueinhealthjournal.com\/article\/S1098-3015(18)33233-9\/fulltext\">Johnson et al. (2019)<\/a> find that about 10-20% of respondents select dominated options in DCEs.  <\/p>\n<p>Why would respondents choose a dominated option?  Some reasons include failure to pay attention to the question, not comprehending the question, use of heuristics rather than the actual values presented, labeling effects, or latent preferences for action over inaction (even when inaction is the dominant strategy).  <\/p>\n<p>If a lack of comprehension is the reason, would providing feedback clarifying that one option is dominant over the other change respondents\u2019 answers?  Could this also help improve estimate precision?  That is the question a paper by <a href=\"https:\/\/onlinelibrary.wiley.com\/doi\/full\/10.1002\/hec.70093?campaign=wolearlyview\">Genie et al. 2026<\/a> aims to answer.  The authors give respondents a choice between treatment with a device and no action.  The device treatment has no benefits but does have side effects; thus, no action is the dominant strategy.   
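The dominance criterion above (one option weakly better on every attribute and strictly better on at least one) can be sketched in a few lines of Python; the attribute coding below is hypothetical and not from the paper:

```python
# Hypothetical sketch (not from Genie et al.): testing whether option `a`
# dominates option `b`, assuming every attribute is coded so higher = better.
def dominates(a, b):
    """True if `a` is weakly better on every attribute and strictly
    better on at least one -- i.e., picking `b` over `a` would be
    choosing a dominated option."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Toy coding of the training task: the device offers the same (zero)
# benefit but adds risk, so "no device" dominates it.
no_device = (0.0, 1.0)   # (benefit, safety) -- illustrative values only
device = (0.0, 0.7)      # same benefit, added risk of complications
print(dominates(no_device, device))  # True: choosing the device is a dominated choice
print(dominates(device, no_device))  # False
```

Note that two identical options do not dominate each other, since the strict-improvement condition fails.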
If people selected a specific device, the feedback provided was \u201c<em>The device offers no improvement in your ability to do daily activities. But it imposes a risk of dying or having complications<\/em>\u201d.   The authors hypothesize that providing this feedback will (i) make people\u2019s choices on subsequent tasks more consistent and (ii) make it more likely that people choose the \u2018no device\u2019 option.  <\/p>\n<p>First, the actual number of people who changed their minds overall was very small.  <\/p>\n<p>In the dominance-structured training task, 54% of respondents selected a device with associated risks and no benefits. Among those randomized to receive feedback (<em>n<\/em>\u00a0=\u00a0170), 71% (<em>n<\/em>\u00a0=\u00a0121) continued to choose a device after the feedback explicitly stated that the device had no benefit and added risk. In other words, despite being explicitly informed that the device provided no benefit and only added risk, a large proportion persisted with the device choice.\u00a0 <\/p>\n<p>Despite the small overall impact of providing feedback, respondents who received feedback changed their choices at twice the rate of those who did not. <\/p>\n<p>The mean predicted probability of choosing \u201cNo device\u201d increased from 0.096 without feedback to 0.180 with feedback\u2026an absolute rise of 0.0846 (8.46% points\u2026)\u2026feedback attenuated residual label utility and shifted choices toward the opt-out (No Device). Together, these findings confirm the presence of label effects and demonstrate that the feedback prompt attenuated, but did not eliminate, label-driven utility. 
<\/p>\n<p>The authors did a latent class analysis and found that the impact of feedback varied by latent class: <\/p>\n<p>Feedback had no measurable impact on consistency for the majority \u201cPro-device\u201d class, reduced consistency sharply for the \u201cPro-device, risk-sensitive\u201d class, and increased consistency for the smaller \u201cAnti-device\u201d class.\u00a0 <\/p>\n<p>In short, I would guess the \u201cpro-device\u201d group had a prior that the devices were effective; the feedback may have led them to believe the device was not effective on the attributes measured, while still believing it was effective on other attributes not presented.  <\/p>\n<p><strong>Healthcare Economist Researcher Tips<\/strong><\/p>\n<p>What is a researcher to do?  First, this paper shows that labelling matters.  If the treatments had been labelled Device A, B, and C (or more generally treatment A, B, and C) instead of Device A, B, and \u201cNo treatment\u201d, it is likely that labelling effects would have been smaller and fewer dominated options would have been chosen.  Second, comparing choices that consist of active vs. passive approaches may cause respondents to infer unmeasured benefits or harms based on their priors rather than the attributes presented.  The impact of this potential bias on the research question of interest must be carefully considered.  Third, many DCEs remove responses when people choose dominated options.  This may be a reasonable option if the choice is due to lack of comprehension or attention.  However, if the choice is due to very strong priors, you could be removing the individuals with the strongest preferences in favor of a specific type of treatment. 
The authors note that dominance failures \u201c\u2026can arise because respondents simplify complex tasks using heuristics, adopt lexicographic preferences, or interpret labels and attributes through their own frames, rather than because they lack well-defined preferences.\u201d   Thus, careful rationale is needed to justify removing any respondents from the analytic sample. Fourth, the feedback mechanism the authors use is likely not useful for general DCEs.  I agree with the authors\u2019 statement that \u201cadequate training before the DCE begins might be preferable to mid-stream feedback that could be misconstrued as a \u2018correct-answer\u2019 cue.\u201d<\/p>\n<p>You can read the full paper <strong><a href=\"https:\/\/onlinelibrary.wiley.com\/doi\/full\/10.1002\/hec.70093?campaign=wolearlyview\">here<\/a><\/strong>.<\/p>","protected":false},"excerpt":{"rendered":"<p>Discrete choice experiments quantify respondent preferences over different alternative treatments due to their attributes. DCEs assume individuals are rational and do not pick dominated options (i.e., where one option is weakly better than another option across all attributes and strictly better than the option across at least one attribute). However, Johnson et al. 
(2019) finds&#8230;<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[],"class_list":["post-11893","post","type-post","status-publish","format-standard","hentry","category-articles"],"_links":{"self":[{"href":"https:\/\/medical-article.com\/index.php?rest_route=\/wp\/v2\/posts\/11893"}],"collection":[{"href":"https:\/\/medical-article.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/medical-article.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/medical-article.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=11893"}],"version-history":[{"count":0,"href":"https:\/\/medical-article.com\/index.php?rest_route=\/wp\/v2\/posts\/11893\/revisions"}],"wp:attachment":[{"href":"https:\/\/medical-article.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=11893"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/medical-article.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=11893"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/medical-article.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=11893"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}