Bringing together the Apache Cassandra experts from the community and DataStax.

question

rakshit.sourabh21_99595 asked ·

Would it be preferable for the driver to contact replicas directly instead of the coordinator?

When we talk about driver reads, we say the driver hashes the partition key to get a token and chooses a coordinator, and the coordinator then reaches out to the node that contains the data as well as the replica nodes.

So wouldn't it be simpler if the driver got the IP of the node that contains the data directly from the coordinator, cached that IP and the replica IPs in memory, and then contacted the owning node directly to read the data? If the read fails, the driver could retry directly against another replica IP from its cache, and once the read completes the cache entry is removed. That way it wouldn't have to go through the coordinator again and again.

driver

1 Answer

joao.reis answered ·

All DataStax drivers perform token-aware routing by default, which seems to be what you are looking for. This means the driver will always attempt to select the nodes that "own" the data as coordinators for each request, provided the driver has the routing information. This routing information is supplied automatically if you use PreparedStatements, for example (assuming that all partition key components are bound as variables).

For a more detailed explanation, you can check out the Load Balancing section of the Java driver's manual.
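The idea behind token-aware routing can be sketched with a toy token ring. This is purely illustrative: the node IPs are hypothetical, the ring has a simplified replication strategy, and a truncated SHA-256 stands in for the Murmur3 partitioner the driver actually uses.

```python
import bisect
import hashlib

def token(partition_key: str) -> int:
    """Map a partition key to a position on the ring.
    Stand-in hash for illustration only (Cassandra uses Murmur3)."""
    return int(hashlib.sha256(partition_key.encode()).hexdigest(), 16) % (2 ** 64)

# Hypothetical 3-node cluster: each node owns the range ending at its token.
ring = [(0, "10.0.0.1"), (2 ** 63, "10.0.0.2"), (2 ** 64 - 1, "10.0.0.3")]
tokens = [t for t, _ in ring]

def replicas(partition_key: str, rf: int = 2) -> list[str]:
    """Token-aware selection: find the first node at or after the key's
    token, then walk the ring clockwise taking `rf` nodes as replicas."""
    i = bisect.bisect_left(tokens, token(partition_key)) % len(ring)
    return [ring[(i + k) % len(ring)][1] for k in range(rf)]

# A token-aware driver would try the first replica as coordinator and
# fall back to the next one on failure -- no extra coordinator hop.
print(replicas("user:42"))
```

This is essentially what the driver's token-aware load balancing policy does for you: it keeps the cluster's token map up to date from gossip metadata, so you get the direct-to-replica behavior described in the question without managing any cache yourself.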

1 comment

This helps. And further, I can use round-robin and retry policies to distribute the load.

