judhviraj_177959 asked:

How do I limit the number of connections to a Cassandra node?

We are using a Cassandra cluster of 10 nodes. Each node is allocated a heap size of 64 GB. The number of connections made to a node sometimes increases drastically, which pushes heap usage above 45 GB and causes all incoming requests to fail. Is there any way to limit the connections? We are using Cassandra v2.0.10.


1 Answer

Erick Ramirez answered:


This is a really tough question to answer because C* 2.0.10 was released six years ago and has reached end-of-service-life, meaning it hasn't been supported for a number of years now. Not many people are still familiar with C* 2.0, so you're running a huge risk if this is a production cluster.

Max connections

To answer your question, limiting the number of connections depends on whether the nodes are configured with the sync (default) or hsha RPC server (rpc_server_type in cassandra.yaml).

The synchronous RPC server (sync) uses one thread per Thrift connection. Leaving rpc_max_threads unlimited (the default) means that the node will accept as many concurrent connections as memory allows. Note that each RPC thread uses a minimum stack size of 180 KB, although your cluster is probably running with the default 256 KB (-Xss256k) set in cassandra-env.sh.
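As a rough sanity check, you can estimate how much memory connection thread stacks consume under the sync server. The connection count below is purely illustrative; the 256 KB figure matches the default -Xss256k. (Thread stacks are allocated outside the Java heap, so this is pressure on top of your heap allocation, not within it.)

```python
# Rough estimate of memory consumed by Thrift connection thread stacks
# under the sync RPC server: one thread (and one stack) per connection.
stack_kb = 256          # per-thread stack size (default -Xss256k)
connections = 2000      # hypothetical connection spike; adjust to your metrics
total_mb = connections * stack_kb / 1024
print(f"~{total_mb:.0f} MB of thread stacks for {connections} connections")
```

At a few thousand connections this is already hundreds of megabytes, which is one reason unbounded RPC threads are dangerous.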

With hsha (half synchronous, half asynchronous), there is a known issue where unlimited RPC threads can crash Cassandra with an OutOfMemoryError (CASSANDRA-8116). In C* 2.0.15, a configuration exception will get thrown on startup if the max RPC threads is not set.

The workaround is to set rpc_max_threads to a value which makes sense for your use case. You can try 256 or 512 but ultimately, you need to run multiple tests to determine the optimal value for your cluster.
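For example, the relevant settings in cassandra.yaml might look like this (512 is only a starting point for testing, not a recommendation):

```yaml
# cassandra.yaml -- cap the number of concurrent Thrift RPC threads.
# With the sync server, this effectively caps concurrent client connections.
rpc_server_type: sync   # or hsha
rpc_max_threads: 512    # starting point; tune with load testing
```

A rolling restart is required for the change to take effect on each node.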


Heap size

The ConcurrentMarkSweep garbage collector (CMS) performs well for Cassandra workloads when the heap size is in the 16-20 GB range, maybe up to 24 GB as the upper limit.

In my experience, you get diminishing returns once you go beyond 24 GB because the amount of heap to clean up (GC) is so large that the stop-the-world pauses get longer.
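If you do bring the heap back down into that range, the settings live in cassandra-env.sh. The values here are illustrative, not a recommendation for your specific workload:

```shell
# conf/cassandra-env.sh -- explicit heap sizing for CMS
MAX_HEAP_SIZE="20G"   # keep CMS heaps in the ~16-24 GB range
HEAP_NEWSIZE="2G"     # young generation; roughly 100 MB per CPU core is a common rule of thumb
```

If both variables are left unset, Cassandra calculates them automatically from the machine's memory and core count.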

I imagine you keep increasing the heap size because your nodes are running out of memory, but this is counter-productive. From what I've seen at lots of other companies, increasing the heap allocation is the wrong approach. You should really consider resizing your cluster and adding more nodes to cope with the load. Cheers!
