Asked and edited by Erick Ramirez

Why do nodes have increased memory use after upgrading Cassandra?

This post relates to a problem where nodes experience elevated memory utilisation after upgrading a cluster or when nodes reach a certain data density.


1 Answer

Answered and edited by Erick Ramirez


Users have reported performance issues that were not observed while their clusters were running Cassandra 2.1. Shortly after upgrading to Cassandra 2.2 or 3.x, nodes experience issues even though factors such as application load, cluster traffic, access patterns, time of day/week/month, and hardware/network resources remain equal.

Symptoms include:

  • Significant increase in off-heap memory usage (for example, as reported by Linux utilities such as top or sar).
  • Increased P99 and/or maximum read latencies (for example, in nodetool tablehistograms output).
  • In extreme cases, intermittent read request timeouts.

In rare cases, Cassandra fails to complete its startup sequence on a node because most of the memory is consumed very quickly and eventually exhausted, at which point the Linux oom-killer terminates the Cassandra process.


Apache Cassandra uses memory-mapped file I/O through the Unix system call mmap() (or mmap for short). The mmap system call allows Cassandra to access data files through the operating system's virtual memory, so reading SSTables is fast. A hidden cassandra.yaml property called disk_access_mode determines how data files are accessed. The valid options are:

  • auto (default) - both SSTable data and index files are mapped on 64-bit systems; only index files are mapped for 32-bit systems
  • mmap - both data and index files are mapped to memory
  • mmap_index_only - only index files are mapped to memory
  • standard - Cassandra uses standard IO and no files are mapped to memory
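The difference between mapped and standard access can be illustrated with a minimal Python sketch (illustration only, not Cassandra code; the throwaway file stands in for an SSTable). A standard read() copies bytes into a userspace buffer, while mmap reads them from OS-managed pages that are faulted in on first access:

```python
import mmap, os, tempfile

# Create a throwaway "data file" standing in for an SSTable.
fd, path = tempfile.mkstemp()
os.write(fd, b"partition-key:row-data" * 100)
os.close(fd)

# Standard IO: read() copies bytes into a userspace buffer.
with open(path, "rb") as f:
    standard_bytes = f.read(22)

# mmap: the file is mapped into virtual memory; slicing reads
# directly from pages managed by the OS page cache.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        mapped_bytes = mm[:22]

assert standard_bytes == mapped_bytes  # same data, different IO path
os.remove(path)
```

Both paths return the same bytes; the trade-off is that mapped pages count toward the process's resident memory as they are touched, which is what shows up in tools like top.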

In Cassandra 2.1 and earlier, reading compressed SSTables involved copying the data on-heap and decompressing it in an on-heap buffer. As a result, those versions behaved as if disk_access_mode were set to mmap_index_only, despite the default mode being auto.

With the added support for direct buffer decompression in Cassandra 2.2 (CASSANDRA-8464), disk_access_mode now behaves as designed: the default auto mode on 64-bit systems memory-maps both SSTable data and index files.

In cases where there are lots of random reads and the set of heavily read SSTables is larger than the available memory, affected nodes experience a high number of page faults. In some cases, the servers run out of memory and the Linux oom-killer terminates Cassandra.
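Page-fault counts for a process can be observed directly. The following Python sketch (an illustration for Linux/Unix systems, not a Cassandra tool) reads the fault counters for the current process; heavily mapped reads that miss the page cache show up as major faults:

```python
import resource

# Fault counters accumulated by the current process since it started.
usage = resource.getrusage(resource.RUSAGE_SELF)
print("minor page faults:", usage.ru_minflt)  # satisfied without disk IO
print("major page faults:", usage.ru_majflt)  # required reading from disk
```

For an external process such as Cassandra, the same counters are reported per-PID by tools like pidstat -r or sar -B.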


Since CASSANDRA-8464 allows compressed data to be decompressed directly from mapped buffers, the workaround on affected clusters is to map only the index files.

With the default disk_access_mode: auto, Cassandra logs an entry similar to the following during startup:

INFO  [main] 2019-05-02 12:33:21,572 - \
  DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap

Set disk_access_mode to mmap_index_only in cassandra.yaml:

disk_access_mode: mmap_index_only

After restarting Cassandra, the corresponding log entry will be similar to:

INFO [main] 2019-05-02 17:53:50,437 - \
  DiskAccessMode is standard, indexAccessMode is mmap

This log entry indicates that SSTable data files are accessed with standard disk I/O while index files are mapped to memory.
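On Linux, one way to confirm the effect of the setting is to inspect which SSTable components a running process has memory-mapped. The following Python sketch (an illustration, not a Cassandra tool; the /proc path and the -Data.db/-Index.db suffixes are assumptions about typical SSTable filenames) counts data versus index entries in a process's memory map:

```python
from collections import Counter

def mapped_sstable_components(map_lines):
    """Count -Data.db vs -Index.db entries in /proc/<pid>/maps output."""
    counts = Counter({"data": 0, "index": 0})
    for line in map_lines:
        parts = line.split()
        # A maps line with a backing file has 6 fields; the last is the path.
        path = parts[5] if len(parts) >= 6 else ""
        if path.endswith("-Data.db"):
            counts["data"] += 1
        elif path.endswith("-Index.db"):
            counts["index"] += 1
    return counts

# Hypothetical usage against a running Cassandra process:
#   with open(f"/proc/{cassandra_pid}/maps") as f:
#       print(mapped_sstable_components(f))
```

With disk_access_mode: mmap_index_only in effect, the data count should be zero while index files remain mapped.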


  • Branimir Lambov
  • Jake Luciani
  • Mark Curtis
  • Michael Keeney
  • Stefania Alborghetti
  • Thanh Tranh
  • Wei Deng

See also

JIRA - CASSANDRA-15531 Improve docs on disk_access_mode, specifically post CASSANDRA-8464

Republished from DataStax Support KB article Increased memory use on nodes after upgrading to DSE 5.0 or DSE 5.1.
