Hi,

I am extracting data from a source Cassandra cluster and copying it to a destination cluster with the same topology using Spark. This works most of the time, but occasionally a run fails with the error below. If everything were configured correctly it should work consistently, so this intermittent failure is confusing.

Could you please suggest whether any configuration change on the cluster or in Spark is needed to fix this issue?

At the time I got this error I was still able to connect to node 192.168.100.51:9042 from the same system where the Spark job was running, so I do not think this is a network issue.
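To be concrete about what I mean by "able to connect": I verified that the node's native transport port was reachable from the Spark host while the job was failing. A minimal sketch of that kind of check (the helper name is mine, any plain TCP connect would do):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run from the machine hosting the Spark executors, against the failing node:
# port_open("192.168.100.51", 9042)  -- this succeeded for me while the task failed
```

This only proves TCP-level reachability; it does not rule out the node dropping or refusing native-protocol connections under load.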
Is there anything else I should check here?
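For what it's worth, I am currently running with the connector's default connection settings. If tuning is the recommended fix, I assume it would be properties along these lines (names taken from the spark-cassandra-connector 2.x reference; please correct me if they differ for my version — none of these are applied yet):

```
# spark-defaults.conf — hypothetical tuning, NOT currently applied
spark.cassandra.connection.timeoutMS                30000   # native connection timeout (default 5000)
spark.cassandra.connection.keepAliveMS              30000   # keep idle connections open between tasks
spark.cassandra.connection.reconnectionDelayMS.min  1000
spark.cassandra.connection.reconnectionDelayMS.max  60000
spark.cassandra.query.retry.count                   60      # retry attempts for failed queries
```

The error: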
18:43:10 [Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned accumulator 0
18:43:10 [Executor task launch worker for task 24] INFO com.datastax.driver.core.ClockFactory - Using native clock to generate timestamps.
18:43:17 [pool-17-thread-1] INFO com.datastax.spark.connector.cql.CassandraConnector - Disconnected from Cassandra cluster: devcass3
18:43:17 [Executor task launch worker for task 11] INFO com.datastax.driver.core.ClockFactory - Using native clock to generate timestamps.
18:43:17 [Executor task launch worker for task 24] ERROR org.apache.spark.executor.Executor - Exception in task 24.0 in stage 0.0 (TID 24)
18:43:17 java.io.IOException: Failed to open native connection to Cassandra at {192.168.100.51}:9042
18:43:17     at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:168)
18:43:17     at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$8.apply(CassandraConnector.scala:154)
18:43:17     at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$8.apply(CassandraConnector.scala:154)
18:43:17     at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:32)
18:43:17     at com.datastax.spark.connector.cql.RefCountedCache.syncAcquire(RefCountedCache.scala:69)
18:43:17     at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:57)
18:43:17     at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:79)
18:43:17     at com.datastax.spark.connector.cql.DefaultScanner.<init>(Scanner.scala:27)
18:43:17     at com.datastax.spark.connector.cql.CassandraConnectionFactory$class.getScanner(CassandraConnectionFactory.scala:30)
18:43:17     at com.datastax.spark.connector.cql.DefaultConnectionFactory$.getScanner(CassandraConnectionFactory.scala:35)
18:43:17     at com.datastax.spark.connector.rdd.CassandraTableScanRDD.compute(CassandraTableScanRDD.scala:361)
18:43:17     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
18:43:17     at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
18:43:17     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
18:43:17     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
18:43:17     at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
18:43:17     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
18:43:17     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
18:43:17     at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
18:43:17     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
18:43:17     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
18:43:17     at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
18:43:17     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
18:43:17     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
18:43:17     at org.apache.spark.scheduler.Task.run(Task.scala:123)
18:43:17     at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
18:43:17     at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
18:43:17     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
18:43:17     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
18:43:17     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
18:43:17     at java.lang.Thread.run(Thread.java:748)
18:43:17 Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: 192.168.100.51/192.168.100.51:9042 (com.datastax.driver.core.exceptions.TransportException: [192.168.100.51/192.168.100.51:9042] Cannot connect))
18:43:17     at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:233)
18:43:17     at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
18:43:17     at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1483)
18:43:17     at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:399)