ranjeet_ranjee asked · ranjeet_ranjee edited

Cassandra not returning consistent results when READ consistency level is LOCAL_ONE

I have a two-node Cassandra cluster. The consistency level for reads is LOCAL_ONE. However, when one of the nodes goes down, my read query returns no results. If both nodes are up, the results come back fine, as expected.

Both nodes are in the same DC.

[root@cassandra-7 ~]# nodetool status
Datacenter: singaporedo
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address         Load       Tokens       Owns    Host ID                               Rack
UN  Node-1-IP   406.66 GiB  1            ?       9159fc01-f08a-4334-bb29-5b3cf7d5727f  rack-1
UN  Node-2-IP  380.15 GiB  1            ?       2aca8bd7-f211-4913-888f-1749d7e000a4  rack-1


Erick Ramirez answered

This is expected behaviour: most likely you're writing with a consistency level of LOCAL_ONE as well. The issue is that the replicas are out of sync, and you need to run repairs regularly.

I'm aware that you're on the startup program, but 2 nodes in production isn't ideal. We recommend at least 3 replicas in each of the data centres so your application can tolerate an outage of 1 node and can use the stronger consistency level LOCAL_QUORUM (the recommended consistency).
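The arithmetic behind that recommendation can be sketched quickly. This is a minimal illustration of the standard Cassandra quorum formula (floor(RF / 2) + 1), not code from any driver; the function names are my own.

```python
# Quorum arithmetic for a single data centre.
# quorum(rf) is the standard Cassandra formula: floor(rf / 2) + 1.

def quorum(rf: int) -> int:
    return rf // 2 + 1

def tolerable_failures(rf: int) -> int:
    # Replicas that can be down while a LOCAL_QUORUM read/write still succeeds.
    return rf - quorum(rf)

# With the 2-node, RF=2 setup from the question:
#   quorum(2) == 2, so LOCAL_QUORUM tolerates 0 down nodes.
# With the recommended 3 replicas:
#   quorum(3) == 2, so LOCAL_QUORUM tolerates 1 down node.
```

This is why 3 replicas is the usual minimum: it is the smallest RF where a quorum still exists with one node down.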

You've posted several questions now about things not working when you bring a node down, but your tests are invalid in your current setup. You need to address the underlying issues or your efforts are going to be futile.

At the very least, you need to verify that the DSE configuration is identical on both nodes and run repairs regularly, at least once a week. Cheers!


saravanan.chinnachamy_185977 answered · saravanan.chinnachamy_185977 commented


Cassandra stores data replicas on multiple nodes to ensure reliability and fault tolerance. The replication strategy for each keyspace determines the nodes where replicas are placed.

The total number of replicas for a keyspace across a Cassandra cluster is referred to as the keyspace's replication factor.

A replication factor of one means that there is only one copy of each row in the Cassandra cluster. A replication factor of two means there are two copies of each row, where each copy is on a different node.
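The effect of the replication factor on availability can be sketched with a toy model. This is an assumption-laden illustration, not Cassandra's real partitioner: rows are placed on `rf` consecutive nodes of the ring, and a read at consistency ONE succeeds if at least one replica is up.

```python
# Toy model of replica placement (NOT Cassandra's real token-based partitioner):
# each row is placed on `rf` consecutive nodes of the ring.

def replicas(row_key: int, nodes: list, rf: int) -> list:
    start = row_key % len(nodes)  # stand-in for token-based placement
    return [nodes[(start + i) % len(nodes)] for i in range(rf)]

def readable(row_key: int, nodes: list, rf: int, down: set) -> bool:
    # A read at consistency ONE succeeds if any replica node is up.
    return any(node not in down for node in replicas(row_key, nodes, rf))

nodes = ["node1", "node2"]
# RF=1: one copy per row, so rows whose only replica is the downed node vanish.
# RF=2: every row is on both nodes, so reads survive a single-node outage.
```

With `rf=1` and `node1` down, any row placed on `node1` becomes unreadable; with `rf=2` on a two-node cluster, every row has a copy on the surviving node.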

You can inspect which nodes have the data you inserted into the table using the following command:

nodetool <options> getendpoints -- <keyspace> <table> <key>

For example:

$ nodetool getendpoints killervideo emp_by_id "1003"

So please check the following and try again:

  1. What is your replication factor for the schema in question?
  2. Where does the data live (nodes)?

In your case, please set RF=2 and then test again.


ranjeet_ranjee commented:

If I check, the RF is 2.

adsz@cqlsh> describe adserv;

CREATE KEYSPACE adserv WITH replication = {'class': 'NetworkTopologyStrategy', 'singaporedo': '2'}  AND durable_writes = true;

saravanan.chinnachamy_185977 commented:

Can you also run the following command and post the output?

 nodetool getendpoints adserv <table> <partition_key>

Also, please post your table details.

ranjeet_ranjee commented:

Here are the details.

[root@cassandra-8 ~]# nodetool  getendpoints adserv conversion_by_uid 3b5d83cf-13eb-41c1-93ce-7ccc6db5bdba



Please check my question for the table description.
