Scenario:
We have a table (history) from which we want to delete records by searching on a column that is not part of the partition key.
Our nodes are Spark-enabled, so we can use Spark here if required. But since Spark does not allow DELETE statements directly, we need to know whether there is some other way to do this.
Basically: search the records by the non-primary-key column, find their partition keys, and issue a delete for each.
For now, I am just trying to count the matching records before using the same deleteFromCassandra() function.
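The workflow described above (filter on a non-key column, collect the partition keys, count, then delete) can be sketched with the Scala connector API. This is only a sketch under assumptions: the keyspace `my_ks`, partition key column `pk`, non-key column `some_column`, and the contact host are all placeholders to adjust to your schema.

```scala
import com.datastax.spark.connector._
import org.apache.spark.{SparkConf, SparkContext}

object DeleteHistoryByNonKeyColumn {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("delete-history-rows")
      .set("spark.cassandra.connection.host", "127.0.0.1") // assumed contact point

    val sc = new SparkContext(conf)

    // Full table scan, filtering on the non-key column on the Spark side.
    // (.where() pushdown only works for clustering or indexed columns, so a
    // plain non-key column has to be filtered after the read.)
    val keysToDelete = sc.cassandraTable("my_ks", "history")   // assumed keyspace/table
      .filter(_.getString("some_column") == "value-to-purge")  // assumed non-key column
      .map(row => Tuple1(row.getString("pk")))                 // assumed partition key column
      .cache()

    // Count first as a sanity check, exactly as described above.
    println(s"Rows matched for deletion: ${keysToDelete.count()}")

    // Delete the whole partitions identified by "pk".
    keysToDelete.deleteFromCassandra("my_ks", "history", keyColumns = SomeColumns("pk"))

    sc.stop()
  }
}
```

Since the same RDD feeds both count() and deleteFromCassandra(), caching it avoids scanning the table twice.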
Question: how do I use tunable consistency for reads and writes with the connector?
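As a starting point on tunable consistency: the connector reads its consistency levels from two configuration properties, one for reads and one for writes. A minimal sketch (app name and host are placeholders):

```scala
import org.apache.spark.SparkConf

// Tunable consistency via connector configuration properties:
// "spark.cassandra.input.consistency.level" applies to reads,
// "spark.cassandra.output.consistency.level" to writes (including deletes).
val conf = new SparkConf()
  .setAppName("consistency-demo")                       // placeholder app name
  .set("spark.cassandra.connection.host", "127.0.0.1")  // placeholder contact point
  .set("spark.cassandra.input.consistency.level", "LOCAL_QUORUM")
  .set("spark.cassandra.output.consistency.level", "LOCAL_QUORUM")
```

The same properties can also be passed on the command line via `spark-submit --conf`.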
Related questions on Stack Overflow:
1) https://stackoverflow.com/questions/57896906/pyspark-cassandra-connector-cassandratable-error
This one uses pyspark-cassandra-0.9.0.jar, where I am trying to read at LOCAL_QUORUM. The GitHub link is given in the Stack Overflow question.
2) I guess this is the official DataStax repository for the connector. I don't see any option for tunable consistency here; please point me to it if I am missing it in the documentation.
https://github.com/datastax/spark-cassandra-connector
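In case it helps locate this in the repository: the Scala API also appears to allow setting consistency per operation through ReadConf and WriteConf, rather than a dedicated method on the table scan. A sketch, assuming a keyspace `my_ks`, table `history`, and partition key column `pk` (all placeholders); I have only looked at the Scala connector, not the pyspark-cassandra jar:

```scala
import com.datastax.driver.core.ConsistencyLevel
import com.datastax.spark.connector._
import com.datastax.spark.connector.rdd.ReadConf
import com.datastax.spark.connector.writer.WriteConf
import org.apache.spark.{SparkConf, SparkContext}

object PerOperationConsistency {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf()
        .setAppName("per-op-consistency")
        .set("spark.cassandra.connection.host", "127.0.0.1")) // assumed contact point

    // Read at LOCAL_QUORUM for this scan only, overriding the global default.
    val rows = sc.cassandraTable("my_ks", "history")           // assumed keyspace/table
      .withReadConf(ReadConf(consistencyLevel = ConsistencyLevel.LOCAL_QUORUM))

    println(s"Row count at LOCAL_QUORUM: ${rows.count()}")

    // Deletes (being writes) can carry their own WriteConf the same way.
    rows.map(r => Tuple1(r.getString("pk")))                   // assumed partition key column
      .deleteFromCassandra("my_ks", "history",
        keyColumns = SomeColumns("pk"),
        writeConf = WriteConf(consistencyLevel = ConsistencyLevel.LOCAL_QUORUM))

    sc.stop()
  }
}
```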
API: https://docs.datastax.com/en/dse-spark-connector-api/6.7/#com.datastax.spark.connector.GettableData
I tried searching the API docs a bit but couldn't find it.
Please help. Thanks.