Hi,
I am using spark-cassandra-connector_2.11-2.0.8, and I have been getting exceptions like this while writing to Cassandra:
Cassandra timeout during write query at consistency LOCAL_ONE (1 replica were required but only 0 acknowledged the write)
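For reference, here is a minimal sketch of how a write job like mine is set up. The keyspace, table, and values are placeholders, and my understanding (which may be wrong) is that spark.cassandra.query.retry.count and spark.cassandra.output.consistency.level are the connector properties governing driver-side retries and write consistency:

    import org.apache.spark.{SparkConf, SparkContext}
    import com.datastax.spark.connector._

    val conf = new SparkConf()
      .setAppName("cassandra-writer")
      .set("spark.cassandra.connection.host", "127.0.0.1")
      // Driver-side retries for retryable failures such as write timeouts
      // (value here is illustrative, not a recommendation).
      .set("spark.cassandra.query.retry.count", "10")
      // LOCAL_ONE matches the consistency level in the error above.
      .set("spark.cassandra.output.consistency.level", "LOCAL_ONE")

    val sc = new SparkContext(conf)
    // my_keyspace / my_table are hypothetical names for illustration.
    sc.parallelize(Seq((1, "a"), (2, "b")))
      .saveToCassandra("my_keyspace", "my_table", SomeColumns("id", "value"))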
What is not clear to me is what happens to my job when such an exception occurs. Clearly this is an exception that could be retried, but looking at the TableWriter class, starting at line 241:
    queryExecutor.getLatestException().map { case exception =>
      throw new IOException(
        s"""Failed to write statements to $keyspaceName.$tableName. The
           |latest exception was
           |  ${exception.getMessage}
           |
           |Please check the executor logs for more exceptions and information
         """.stripMargin)
    }
I see an exception being raised, but it appears that at this point the current task is aborted, which fails the job. Is this the correct understanding?
If so, why doesn't the driver retry instead of failing the job?
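One thing I am assuming, and would like confirmed, is that Spark's own task-level retry is the intended recovery path here: since the connector throws an IOException, the task fails, and Spark should re-attempt it up to spark.task.maxFailures times before aborting the whole job. A sketch of raising that limit (the value is illustrative):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      // Spark re-runs a failed task this many times before failing the job;
      // the default is 4. Raising it gives transient write timeouts more
      // chances to succeed on a task re-attempt.
      .set("spark.task.maxFailures", "8")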
Thanks