I have 3 Spark streams writing data to Cassandra using spark-connector 3.0.0-beta. After 108324 completed tasks, the streams got stuck forever:
This is pretty dangerous behavior: after a huge number of successful tasks, the streams just stopped writing data without any errors. The Cassandra DB is in a healthy state, with all nodes UP.
My Cassandra version is 3.11.5.
Driver thread dump: thread_dump_driver.txt
Executor thread dump: thread_dump_cassandra.pdf
All worker threads are stuck in:
CassandraConnector.closeResourceAfterUsed -> Session.refreshSchema
Is it possible to specify a timeout for operations like refreshSchema?
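As a possible workaround I'm considering tuning the underlying Java driver's timeouts. Since connector 3.0.0-beta is based on the DataStax Java driver 4.x, something like the following in `application.conf` might bound schema/metadata requests. This is only a sketch based on the driver 4.x configuration reference; I'm not sure whether the connector honors these settings for the refreshSchema call, and the exact setting names may differ by driver version:

```
datastax-java-driver {
  basic.request.timeout = 10 seconds

  advanced {
    # Timeout for control-connection queries (driver 4.x setting)
    control-connection.timeout = 10 seconds

    # Timeout for the individual queries used to refresh schema metadata
    metadata.schema.request-timeout = 10 seconds
  }
}
```

Even if these apply, a timeout would only turn the silent hang into an error, which is still preferable to a stream that stalls with no indication of failure.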
I recently reported an issue with streams getting stuck on stream start, but this one happened after a lot of completed tasks.