This post relates to a problem where nodes experience elevated memory utilisation after upgrading a cluster or when nodes reach a certain data density.
Users have reported performance issues not observed when clusters were running with Cassandra 2.1. Shortly after upgrading to Cassandra 2.2 or 3.x, nodes experience issues despite factors such as application load, cluster traffic, access patterns, time of day/week/month, and hardware/network resources being equal.
In rare cases, Cassandra fails to complete its startup sequence on a node because most of the memory is consumed very quickly and eventually exhausted, so the Linux oom-killer terminates the Cassandra process.
Apache Cassandra uses memory-mapped file I/O through the Unix system call mmap(). The mmap system call allows Cassandra to use the operating system's virtual memory to hold copies of data files, so reading SSTables is fast. A hidden cassandra.yaml property called disk_access_mode determines how data files are accessed. The valid options are:
- auto (default) - both SSTable data and index files are mapped on 64-bit systems; only index files are mapped on 32-bit systems
- mmap - both data and index files are mapped to memory
- mmap_index_only - only index files are mapped to memory
- standard - Cassandra uses standard IO and no files are mapped to memory
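To illustrate the general difference between standard IO and mmap-backed reads (a language-agnostic sketch, not Cassandra code), the following Python snippet reads the same bytes from a file both ways:

```python
import mmap
import os
import tempfile

# Write a small sample file to stand in for an SSTable data file.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"partition-key:value" * 1000)

# "standard" access: an explicit read() copies the bytes into a buffer.
with open(path, "rb") as f:
    standard_bytes = f.read(19)

# "mmap" access: the file is mapped into the process's virtual address
# space and the OS page cache serves reads on demand via page faults.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        mapped_bytes = mm[:19]

assert standard_bytes == mapped_bytes == b"partition-key:value"
os.unlink(path)
```

Both approaches return identical data; the difference is that the mapped file consumes virtual address space and is paged in by the kernel, which is exactly what drives up memory use when the hot set of mapped SSTables exceeds available RAM.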
In Cassandra 2.1 and earlier, reading compressed SSTables involved copying the data on-heap and decompressing it into an on-heap buffer, so Cassandra behaved as if the disk access mode were set to mmap_index_only despite the default mode being auto.
With the added support for direct buffer decompression in Cassandra 2.2 (CASSANDRA-8464), disk access mode behaves the way it was designed: the default auto mode on 64-bit systems now mmap()s both SSTable data and index files.
In cases where there are lots of random reads, and the set of SSTables being heavily read is larger than the available memory, the affected nodes will have a high number of page faults. In some cases, the affected servers run out of memory and the Linux oom-killer terminates Cassandra.
Given the direct buffer decompression added by CASSANDRA-8464, mapping the data files is no longer needed for fast compressed reads, so it is more efficient to map only the index files.
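As a sketch, mapping only the index files amounts to setting the hidden property explicitly in cassandra.yaml (the property name is from this article; the comment is illustrative):

```yaml
# cassandra.yaml - map only index files; read data files with standard IO
disk_access_mode: mmap_index_only
```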
With the default disk_access_mode: auto, Cassandra logs an entry similar to the following during startup:
INFO [main] 2019-05-02 12:33:21,572 DatabaseDescriptor.java:350 - DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
After setting disk_access_mode: mmap_index_only and restarting Cassandra, the log entry will be similar to:
INFO [main] 2019-05-02 17:53:50,437 DatabaseDescriptor.java:356 - DiskAccessMode is standard, indexAccessMode is mmap
This log entry indicates that SSTable data files are used with standard disk IO but index files will be mapped to memory.
JIRA - CASSANDRA-15531 Improve docs on disk_access_mode, specifically post CASSANDRA-8464
Republished from DataStax Support KB article Increased memory use on nodes after upgrading to DSE 5.0 or DSE 5.1.