Bringing together the Apache Cassandra experts from the community and DataStax.

Want to learn? Have a question? Want to share your expertise? You are in the right place!




dba.pramod882_50132 asked · Erick Ramirez answered

How do I set the consistency level for reads and writes in the spark-cassandra-connector?


We have a table (history) from which we want to delete records by searching on a column that is not part of the partition key.

Our nodes are Spark-enabled, so we can use Spark here if required. But since Spark does not support deletes directly, we need to know if there is any other way to do it.

Basically: search the records by a non-primary-key column, find their partition keys, and issue deletes for those keys.

For now, I am just trying to count the matching records before using the same deleteFromCassandra() function.

Question: how do I use tunable consistency for reads and writes?
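To make the "find partition keys, then delete" workflow concrete, here is a minimal sketch of the delete side. The keyspace (`my_ks`), table, and key column names are hypothetical; the Spark read/filter step that produces the keys is omitted, and actually executing the statements would use the Python cassandra-driver against a live cluster:

```python
def build_delete_stmt(keyspace, table, pk_cols):
    """Build a parameterized CQL DELETE keyed on the partition-key columns."""
    where = " AND ".join(f"{col} = %s" for col in pk_cols)
    return f"DELETE FROM {keyspace}.{table} WHERE {where}"

# Example: delete history rows by the partition key values Spark found.
stmt = build_delete_stmt("my_ks", "history", ["id"])
# stmt == "DELETE FROM my_ks.history WHERE id = %s"

# Each statement would then be executed per key with the Python cassandra-driver,
# e.g. session.execute(stmt, (some_id,)) -- not run here.
```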

Related question on Stack Overflow:

1) Using pyspark-cassandra-0.9.0.jar, I am trying to read at LOCAL_QUORUM (the GitHub link is in the Stack Overflow question).

2) In the official DataStax repository for the connector, I don't see any option for tunable consistency. Please point it out if I am missing it in the documentation.

I tried searching here a bit but can't find it.

Please help, Thanks.


1 Answer

Erick Ramirez answered

To tune the consistency level for reads, set the following property:

spark.cassandra.input.consistency.level

For writes, configure the following property:

spark.cassandra.output.consistency.level

For details and other configuration properties, see the Spark Cassandra connector Configuration Reference page. Cheers!
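As a sketch of how those two properties might be wired up from PySpark (the application name and consistency values here are just examples; the session-building lines are shown as comments because they need pyspark and a reachable cluster):

```python
def consistency_conf(read_cl="LOCAL_QUORUM", write_cl="LOCAL_QUORUM"):
    """Spark conf entries that tune the connector's read/write consistency."""
    return {
        "spark.cassandra.input.consistency.level": read_cl,    # reads
        "spark.cassandra.output.consistency.level": write_cl,  # writes
    }

conf = consistency_conf()

# Applying it to a SparkSession (requires pyspark and a live cluster):
# from pyspark.sql import SparkSession
# builder = SparkSession.builder.appName("cl-demo")
# for key, value in conf.items():
#     builder = builder.config(key, value)
# spark = builder.getOrCreate()
```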
