Segmentation With Choice Data: Latent Class MNL or HB-Then-Cluster?

Last updated: 17 Oct 2023

Our clients often want to run segmentation using data from choice experiments such as choice-based conjoint (CBC) or MaxDiff. We have advised them to use latent class MNL for these analyses for several reasons, and in this paper we add another.

First, latent class multinomial logit (LC-MNL) is a one-step procedure. Following hierarchical Bayesian (HB) MNL utility estimation with cluster analysis (or latent class analysis) on the HB utilities is a sequential, two-step procedure. The two-step approach treats estimation error carried over from the first step as if it were real information when forming segments in the second step, which is clearly a bad idea.

Eagle (2013) and Eagle and Magidson (2019) present several critiques of the sequential HB-then-cluster approach, most importantly that how one scales the HB utilities can influence the resulting cluster solution. Lyon (2019) expands on this critique by showing that, under the sequential HB-then-cluster approach, the (arbitrary) decision of where to locate the origin (the zero level) of the utility scale can strongly affect the segmentation solution, so that different placements of the origin lead to different solutions. In pointing out this lack of robustness, Eagle and Lyon dig a deep grave for the HB-then-cluster approach to segmentation. Importantly, LC-MNL results do not depend on arbitrary decisions about the origin.
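To make this origin dependence concrete, here is a small illustrative sketch (ours, not Eagle's or Lyon's, run on simulated utilities): it clusters the same part-worths under two equally legitimate zero points, zero-centered (effects coding) versus first-level-as-zero (dummy coding), and reports the adjusted Rand index between the two k-means solutions. An ARI below 1 means the arbitrary choice of origin alone changed who is grouped with whom.

```python
# Illustrative only: a demonstration of how the arbitrary placement of the
# utility origin can change a cluster solution. All data here are simulated.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)

# 300 simulated respondents, one attribute with 5 levels: three loosely
# separated segments plus noise, then zero-centered within respondent
# (effects coding).
segment_means = rng.normal(scale=1.0, size=(3, 5))
raw = segment_means[rng.integers(0, 3, size=300)] + rng.normal(scale=1.0, size=(300, 5))
effects_coded = raw - raw.mean(axis=1, keepdims=True)

# The same utilities re-anchored so that level 1 is the zero point (dummy
# coding): an equally legitimate, equally arbitrary choice of origin.
dummy_coded = effects_coded - effects_coded[:, [0]]

# k-means stands in for whatever distance-based clustering one might use.
k = 3
labels_effects = KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(effects_coded)
labels_dummy = KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(dummy_coded)

# An ARI of 1.0 would mean the two solutions agree perfectly; anything less
# shows that moving the origin alone altered the segmentation.
print("ARI between the two cluster solutions:",
      adjusted_rand_score(labels_effects, labels_dummy))
```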

But we’ve noticed that the sequential approach won’t stay buried. We still see it being used and advocated. For this reason, at the recent Turbo Choice Modeling event we decided to approach this topic from another direction.

Research Design

Drawing on four empirical studies, two CBC and two MaxDiff, we created "known" segments of respondents by cluster-analyzing the human respondents' utilities.

Study 1 – MaxDiff with 27 items
Study 2 – MaxDiff with 10 items
Study 3 – CBC experiment, 4^4 x 2^2 design
Study 4 – CBC experiment, 3^3 x 4^2 x 5^4 x 6 x 10 design

For each study we created 3-, 4-, and 5-segment solutions. We then assigned the mean utilities from each segment to different robotic respondents, and we created conditions in which segment sizes were either all equal or unequal (the kth segment being 1/k as large as the first segment). So, for each of the four empirical studies we had 3 (numbers of segments) x 2 (relative segment sizes) = 6 cells. For each cell, we know how many segments there really are and which respondents belong to which segments.
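As a simplified sketch of this data-generating setup, the following Python builds one cell: it draws robotic respondents in either equal or 1/k-proportional segment sizes and gives each its segment's mean utilities. The segment means, sample size, and noise level here are illustrative placeholders rather than the values used in these studies (the studies may have assigned the segment means exactly and let the choice tasks supply the randomness).

```python
import numpy as np

rng = np.random.default_rng(42)

def make_cell(segment_means, n_total=600, equal_sizes=True, noise_sd=0.5):
    """Build one simulation cell: robotic respondents whose utilities equal
    their segment's mean utilities (plus optional noise).

    segment_means : (K, P) array of mean utilities for the K known segments
    equal_sizes   : if False, segment k is 1/k as large as segment 1
    noise_sd      : illustrative only; set to 0.0 to assign the means exactly
    """
    n_segments, n_params = segment_means.shape

    # Relative segment sizes: all equal, or proportional to 1/k.
    weights = np.ones(n_segments) if equal_sizes else 1.0 / np.arange(1, n_segments + 1)
    sizes = np.round(n_total * weights / weights.sum()).astype(int)

    true_segment = np.repeat(np.arange(n_segments), sizes)
    utilities = segment_means[true_segment] + rng.normal(
        scale=noise_sd, size=(true_segment.size, n_params))
    return utilities, true_segment

# Example: one 3-segment cell with made-up mean utilities for 5 parameters.
means = rng.normal(size=(3, 5))
utils_equal, seg_equal = make_cell(means, equal_sizes=True)
utils_uneven, seg_uneven = make_cell(means, equal_sizes=False)
print(np.bincount(seg_equal), np.bincount(seg_uneven))
```

The robotic respondents' utilities then drive simulated answers to the CBC or MaxDiff tasks, producing the choice data that LC-MNL and HB-then-cluster are asked to segment.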

Finally, we analyzed these 24 data sets with both LC-MNL and the sequential HB-then-cluster approach. This allows us to compare the two approaches in terms of

  • Their ability to identify the correct (known) number of segments.
  • Their ability, given the correct number of segments, to put the right robotic respondents in the right segments.
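On the first of these criteria, a common way to choose the number of segments for an LC-MNL solution (not necessarily the rule used in these studies) is to fit a range of segment counts and keep the solution with the lowest information criterion such as BIC. The sketch below illustrates that selection rule; all fit statistics in the example are invented.

```python
import numpy as np

def pick_n_segments(log_likelihoods, n_params, n_obs):
    """Choose the number of segments by minimum BIC (BIC = -2*LL + p*ln(n)).

    log_likelihoods : dict {candidate segment count: model log-likelihood}
    n_params        : dict {candidate segment count: number of estimated parameters}
    n_obs           : number of observations used in estimation
    """
    bic = {k: -2.0 * ll + n_params[k] * np.log(n_obs)
           for k, ll in log_likelihoods.items()}
    best = min(bic, key=bic.get)
    return best, bic

# Invented fit statistics for 2- to 6-segment LC-MNL solutions, purely to
# illustrate the selection rule.
ll = {2: -5210.0, 3: -5035.0, 4: -4990.0, 5: -4978.0, 6: -4970.0}
params = {k: k * 20 + (k - 1) for k in ll}  # e.g., 20 utilities per class plus class shares
best_k, bic_by_k = pick_n_segments(ll, params, n_obs=3000)
print(best_k, bic_by_k)
```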

Results

The following table summarizes the 24 data sets, their conditions (in terms of the number of segments and relative segment sizes), and whether LC-MNL and HB-then-cluster identified the correct number of segments (an X marks a correct identification):

Table 1

Study Type Segments Relative Size LC-MNL Correct HB+Cluster Correct
1 MaxDiff 3 Even X X
1 MaxDiff 4 Even X
1 MaxDiff 5 Even X X
1 MaxDiff 3 Uneven X X
1 MaxDiff 4 Uneven X
1 MaxDiff 5 Uneven X
2 MaxDiff 3 Even X X
2 MaxDiff 4 Even X X
2 MaxDiff 5 Even X X
2 MaxDiff 3 Uneven X
2 MaxDiff 4 Uneven X
2 MaxDiff 5 Uneven X
3 CBC 3 Even X
3 CBC 4 Even
3 CBC 5 Even
3 CBC 3 Uneven X
3 CBC 4 Uneven X
3 CBC 5 Uneven
4 CBC 3 Even
4 CBC 4 Even
4 CBC 5 Even
4 CBC 3 Uneven
4 CBC 4 Uneven
4 CBC 5 Uneven X

The LC-MNL approach identifies the correct number of segments for 14 of the 24 data sets, versus only 8 of the 24 for HB-then-cluster. Interestingly, LC-MNL gets the number of segments right for all 12 of the MaxDiff data sets, while both LC-MNL and HB-then-cluster struggle with segmenting CBC data – each gets the number of segments right in only 2 of the 12 CBC data sets.

We also looked at how well each method assigned respondents to their true segments, given the correct number of segments. Table 2 shows that LC-MNL again performs better in terms of the Adjusted Rand Index (ARI).

Table 2

Study Type Segments Relative Size LC-MNL ARI HB+Cluster ARI
1 MaxDiff 3 Even 0.92 0.88
1 MaxDiff 4 Even 0.85 0.83
1 MaxDiff 5 Even 0.86 0.82
1 MaxDiff 3 Uneven 0.83 0.82
1 MaxDiff 4 Uneven 0.78 0.77
1 MaxDiff 5 Uneven 0.74 0.69
2 MaxDiff 3 Even 0.93 0.92
2 MaxDiff 4 Even 0.92 0.90
2 MaxDiff 5 Even 0.89 0.60
2 MaxDiff 3 Uneven 0.85 0.46
2 MaxDiff 4 Uneven 0.85 0.69
2 MaxDiff 5 Uneven 0.82 0.80
3 CBC 3 Even 0.83 0.74
3 CBC 4 Even 0.59 0.53
3 CBC 5 Even 0.47 0.38
3 CBC 3 Uneven 0.86 0.85
3 CBC 4 Uneven 0.74 0.67
3 CBC 5 Uneven 0.64 0.41
4 CBC 3 Even 0.67 0.58
4 CBC 4 Even 0.46 0.37
4 CBC 5 Even 0.35 0.32
4 CBC 3 Uneven 0.60 0.39
4 CBC 4 Uneven 0.34 0.32
4 CBC 5 Uneven 0.40 0.29

LC-MNL also correctly assigns a higher proportion of respondents to their true segments than HB-then-cluster does in all 24 data sets. On average across the 24 data sets, LC-MNL correctly assigns 72% of respondents versus only 63% for HB-then-cluster.
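For readers who want to compute these two recovery measures on their own simulations, here is a short illustrative sketch (the function below is ours, not the code used in these studies). The ARI comes directly from scikit-learn; the hit rate shown is one reasonable way to count correctly assigned respondents, matching estimated segment labels to the true ones with the Hungarian algorithm because segment numbering is arbitrary.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score, confusion_matrix

def segment_recovery(true_segments, estimated_segments):
    """Return (ARI, hit rate) for an estimated segmentation versus the truth.

    The hit rate relabels the estimated segments to best match the true ones
    (Hungarian algorithm on the confusion matrix) before counting agreements,
    because segment labels themselves carry no meaning.
    """
    ari = adjusted_rand_score(true_segments, estimated_segments)

    cm = confusion_matrix(true_segments, estimated_segments)
    row_ind, col_ind = linear_sum_assignment(-cm)  # maximize matched counts
    hit_rate = cm[row_ind, col_ind].sum() / cm.sum()
    return ari, hit_rate

# Toy example: labels are arbitrary, so estimated segment 2 can map to true 0, etc.
true_seg = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
est_seg = np.array([2, 2, 0, 0, 0, 0, 1, 1, 1])
print(segment_recovery(true_seg, est_seg))
```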

Conclusions

The formidable arguments advanced by Eagle and Lyon should suffice to convince analysts to create choice-based segments using Latent Class Multinomial Logit (rather than HB-then-cluster). To these we merely pile on a bit more evidence: LC-MNL simply works better than HB-then-cluster, both at identifying the right number of segments and at putting respondents into the right segments.

References

  • Eagle, T.C. (2013) “Segmenting choice and non-choice data simultaneously,” Sawtooth Software Conference Proceedings, 231-250.
  • Eagle, T.C. and J. Magidson (2019) “Segmenting choice and non-choice data simultaneously: Part deux,” Sawtooth Software Conference Proceedings, 247-280.
  • Lyon, D.W. (2019) “Comments on ‘Segmenting choice and non-choice data simultaneously,’” Sawtooth Software Conference Proceedings, 281-288.