question

dba.pramod882_50132 asked · Erick Ramirez answered

How do I set the consistency level for reads and writes in the spark-cassandra-connector?

Scenario:

We have a table (history) from which we want to delete records by searching on columns that are not part of the partition key.

We can use Spark here if required, since our nodes are Spark-enabled. But since Spark doesn't allow deletes, we need to know if it is possible to do this some other way.

Basically: search the records by a non-primary-key column, find their partition keys, and issue a delete.

For now, I am just trying to count the matching rows before using the deleteFromCassandra() function on them, as in the sketch below.
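For context, here is a rough Scala sketch of the pattern I have in mind, assuming a connector version that provides deleteFromCassandra(). The keyspace name (my_ks), partition key column (id) and filter column (status) are made-up placeholders; only the history table name is real:

    import org.apache.spark.{SparkConf, SparkContext}
    import com.datastax.spark.connector._

    val sc = new SparkContext(new SparkConf().setAppName("history-cleanup"))

    // Full-table read, then filter on a non-key column on the Spark side
    // (no secondary index or ALLOW FILTERING needed in Cassandra).
    val matching = sc.cassandraTable("my_ks", "history")
      .filter(row => row.getString("status") == "expired")

    // Count first, as a sanity check, before deleting anything.
    println(s"Rows to delete: ${matching.count()}")

    // Project down to the partition key and delete the matching rows.
    matching
      .map(row => Tuple1(row.getString("id")))
      .deleteFromCassandra("my_ks", "history", keyColumns = SomeColumns("id"))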

Question: How do I use tunable consistency for reads and writes?

Related links:

1) https://stackoverflow.com/questions/57896906/pyspark-cassandra-connector-cassandratable-error

This one uses pyspark-cassandra-0.9.0.jar, where I am trying to read at LOCAL_QUORUM. The GitHub link is referenced in the SOF post.

2) The official DataStax repository for the connector, I believe. I don't see any option for tunable consistency here; please point it out if I am missing it in the documentation.

https://github.com/datastax/spark-cassandra-connector

API: https://docs.datastax.com/en/dse-spark-connector-api/6.7/#com.datastax.spark.connector.GettableData

I tried searching there a bit, but can't find it.

Please help. Thanks.

spark-cassandra-connector

1 Answer

Erick Ramirez answered

To tune the consistency level for reads, set the following property:

spark.cassandra.input.consistency.level
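
For example, here is a minimal Scala sketch that reads at LOCAL_QUORUM. The host, keyspace and table names are placeholders:

    import org.apache.spark.{SparkConf, SparkContext}
    import com.datastax.spark.connector._

    val conf = new SparkConf()
      .setAppName("read-at-local-quorum")
      .set("spark.cassandra.connection.host", "127.0.0.1")
      .set("spark.cassandra.input.consistency.level", "LOCAL_QUORUM")

    val sc = new SparkContext(conf)

    // All reads through the connector now use LOCAL_QUORUM.
    println(sc.cassandraTable("my_ks", "history").count())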

For writes, configure the following property:

spark.cassandra.output.consistency.level
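
And the equivalent sketch for writes, which also applies to deletes issued with deleteFromCassandra() since both go through the connector's write path (again, names are placeholders):

    import org.apache.spark.{SparkConf, SparkContext}
    import com.datastax.spark.connector._

    val conf = new SparkConf()
      .setAppName("write-at-local-quorum")
      .set("spark.cassandra.connection.host", "127.0.0.1")
      .set("spark.cassandra.output.consistency.level", "LOCAL_QUORUM")

    val sc = new SparkContext(conf)

    // Writes (saveToCassandra) and deletes (deleteFromCassandra)
    // now go out at LOCAL_QUORUM.
    sc.parallelize(Seq(Tuple1("some-id")))
      .deleteFromCassandra("my_ks", "history", keyColumns = SomeColumns("id"))

Both properties can also be passed on the command line with spark-submit --conf (for example, --conf spark.cassandra.output.consistency.level=LOCAL_QUORUM), without touching the code.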

For details and other configuration properties, see the Spark Cassandra Connector Configuration Reference page. Cheers!
