shehzadjahagirdar_185613 asked
Which JVM garbage collector should be used for Apache Cassandra 3.11.3 production cluster?

I have a 5-node cluster where each node has 16 cores and 120 GB of RAM. Which JVM garbage collector should be used for an Apache Cassandra 3.11.3 production cluster?


jvm

1 Answer

Erick Ramirez answered

By default, Cassandra 3.11 uses CMS (the ConcurrentMarkSweep garbage collector). We recommend it for heap sizes up to 24 GB. Even for very light production workloads, we recommend allocating at least 8 GB to the heap; for moderate workloads, allocate at least 16 GB. I would go as high as 20 GB, maybe 24 GB at most, on CMS.
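To make this concrete, here is a sketch of what the relevant lines in `conf/jvm.options` might look like for a moderate-workload node on CMS. The 16 GB value is illustrative, not a tuned recommendation for your hardware:

```
# conf/jvm.options (Cassandra 3.11) -- illustrative values only
# Set -Xms and -Xmx to the same value so the heap never resizes at runtime
-Xms16G
-Xmx16G

# CMS is the default collector in 3.11; these flags ship in jvm.options
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:+CMSParallelRemarkEnabled
```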

For heap sizes larger than 20 GB, we find that G1 (the Garbage-First collector) performs better. Of course, YMMV.

With G1, go as high as 31 GB but no higher. Heaps of 32 GB or more can actually address fewer objects on 64-bit machines than a 31 GB heap, because above that threshold the JVM can no longer use compressed ordinary object pointers (compressed oops). Fabian Lange has a nice explanation of this in his blog post Why 35GB Heap is Less Than 32GB – Java JVM Memory Oddities.
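Switching to G1 in Cassandra 3.11 means commenting out the CMS section of `conf/jvm.options` and uncommenting the G1 section. A sketch, with an illustrative 31 GB heap:

```
# conf/jvm.options -- G1 section (comment out the CMS flags first)
-Xms31G
-Xmx31G
-XX:+UseG1GC
-XX:G1RSetUpdatingPauseTimePercent=5
-XX:MaxGCPauseMillis=500
```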

In any case, I think there are limited use cases where very large heap sizes are suitable, at least in my experience. You get diminishing returns with large heaps for pure-Cassandra workloads, and in my opinion you are better off having more nodes in your cluster than fewer nodes with lots of RAM. Cheers!

2 comments

Can we change the garbage collector setting now on an existing production cluster that has been running for 3 years with a large amount of data on it?
Erick Ramirez ♦♦ replied to shehzadjahagirdar_185613:

Yes, you can change the garbage collector even for existing nodes/clusters.

As always, make sure you test, test and test. Cheers!
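The usual way to do this safely is a rolling change, one node at a time, so the cluster stays available throughout. A sketch of the steps per node (paths and service names assume a package install; adjust for your environment):

```
# Rolling GC change, one node at a time -- illustrative steps, not a script to paste
nodetool drain                        # flush memtables and stop accepting writes
sudo systemctl stop cassandra
sudo vi /etc/cassandra/jvm.options    # swap the CMS flags for the G1 flags
sudo systemctl start cassandra
nodetool status                       # wait until the node reports UN (Up/Normal)
# only then move on to the next node
```

Ideally, rehearse this on a test cluster first and watch GC logs on the first changed node before rolling it out further.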
