backerman_127179 asked · Erick Ramirez edited

Unable to achieve local quorum, getting WriteTimeoutException

I currently have a 6-node Cassandra cluster with 2 datacenters and 3 nodes in each datacenter.

I have a keyspace with 3 replicas in each datacenter. When I attempt to issue a write at LOCAL_QUORUM, it fails, saying it can't reach enough local replicas (error below).

When I run the query locally with consistency ONE, it works with no problem. Is there anything I should be looking at to troubleshoot this? I don't see anything glaring in system.log or debug.log apart from tombstone warnings, but as far as I know those would only impact reads.

DSE 5.1.11

Cassandra.WriteTimeoutException: Server timeout during write query at consistency LOCALQUORUM (0 peer(s) acknowledged the write over 2 required)


1 Answer

Erick Ramirez answered

@backerman_127179 the symptoms you describe indicate that the WriteTimeoutException is coming from the driver, which suggests the application query is possibly writing a lot of data, leading to the timeout. Alternatively, there may be high network latency between your application servers and the cluster.

If you know the query that is failing, you can try running it via cqlsh with TRACING ON and CONSISTENCY LOCAL_QUORUM. The trace output will give you an idea of which nodes are failing to respond.
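As a sketch, the cqlsh session might look like the following; the keyspace, table, and values are placeholders, so substitute your actual failing write:

```sql
CONSISTENCY LOCAL_QUORUM;
TRACING ON;
-- placeholder statement: replace with the write that is timing out
INSERT INTO my_ks.my_table (id, value) VALUES (1, 'test');
```

With tracing enabled, cqlsh prints each internal step along with the source node and elapsed time, so a replica that is slow to apply the mutation (or never responds) stands out in the output.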

If you are seeing dropped mutations on the nodes and hints being stored on coordinators, that is indicative of the commitlog disk not being able to keep up with the write IO. In that case you will need to either (a) throttle down the amount of writes coming from the application, (b) place the commitlog on a separate disk from the data directory, or (c) increase the capacity of your cluster by adding more nodes. Cheers!
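One way to check for those symptoms (a sketch, assuming shell access to each node) is with standard nodetool and OS tools:

```shell
# Dropped message counts appear at the end of tpstats output;
# a non-zero MUTATION count means the node is shedding writes.
nodetool tpstats

# Commitlog disk pressure usually shows up as high utilisation or
# long await times on the device hosting the commitlog directory.
iostat -x 5
```

If MUTATION drops climb while iostat shows the commitlog device saturated, that points to option (b) or (c) above rather than a network problem.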
