Supernodes cause trouble for any graph database, and in Cassandra in particular they can produce wide partitions. A common mitigation for supernodes is "cutting vertices", i.e., storing a supernode's edges on partitions separate from the supernode itself, rather than the default behavior, which places edges in the same partition as their incoming incident vertices.
For example, this is the solution suggested here: https://www.experoinc.com/post/dse-graph-partitioning-part-2-taming-your-supernodes
However, while that approach was applicable in DSE 5.0 (as per the article) and in 6.8 (according to this answer), it is not supported in 6.7 (according to this answer; note especially the comments, which confirm this).
This being the case, and assuming we can't simply avoid creating supernodes in the first place, what are the best practices for handling supernodes in 6.7? Specific concerns include, but are not limited to:
- How to avoid creating wide partitions where supernodes occur
- How to mitigate the effects of wide partitions (i.e., how to improve performance and avoid other problems associated with wide partitions)
- How to avoid other problems related to supernodes (traffic skew, etc.)
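For context on the first concern, one generic Cassandra-level technique (independent of graph-engine support for cutting vertices) is edge bucketing: make the partition key a composite of the vertex ID and a hash-derived bucket number, so a supernode's edges spread across a fixed number of partitions instead of one wide one. The sketch below is a hedged illustration of that idea in plain Python; `NUM_BUCKETS`, `edge_partition_key`, and `all_bucket_keys` are hypothetical names for this example, not DSE APIs, and reading all of a vertex's edges then requires fanning out one query per bucket.

```python
import hashlib

# Assumption: bucket count tuned to the expected degree of the
# largest supernodes; more buckets = narrower partitions but
# more queries on full-edge-list reads.
NUM_BUCKETS = 16

def edge_partition_key(vertex_id: str, neighbor_id: str) -> tuple:
    """Compute a composite partition key (vertex_id, bucket) so that
    a supernode's edges land in NUM_BUCKETS partitions, not one."""
    # Deterministic hash of the neighbor ID picks the bucket, so the
    # same edge always maps to the same partition.
    digest = hashlib.md5(neighbor_id.encode("utf-8")).digest()
    bucket = digest[0] % NUM_BUCKETS
    return (vertex_id, bucket)

def all_bucket_keys(vertex_id: str) -> list:
    """Enumerate every partition key for a vertex, for the fan-out
    read needed to reassemble its full edge list."""
    return [(vertex_id, b) for b in range(NUM_BUCKETS)]
```

This trades a single wide partition for a bounded fan-out on reads; it mitigates partition size but not traffic skew, since all buckets of a hot vertex still receive the hot traffic.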