Hello All,
Can anyone help with the issue below? spark-sql connects fine from some of the nodes, but from other nodes we get the following error.
bash-4.2$ dse -u cassandra -p cassandra spark-sql --driver-memory 20G --num-executors 20 --executor-cores 5 --executor-memory 20G
The log file is at /var/lib/datastax-agent/.spark-sql-shell.log
2020-05-01 08:35:51 [main] ERROR o.a.s.d.DseSparkSubmitBootstrapper - Spark application failed
java.lang.RuntimeException: com.datastax.bdp.fs.model.InternalServerException: com.datastax.driver.core.exceptions.UnavailableException: Not enough replicas available for query at consistency LOCAL_QUORUM (1 required but only 0 alive)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:525)
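Based on the stack trace, the error seems to come from DSEFS (com.datastax.bdp.fs), so my guess is that the replicas backing the DSEFS keyspace are down or under-replicated as seen from the failing nodes. This is roughly what I have been running to check; I am assuming the default keyspace name dsefs here, adjust if your setup uses a different one:

bash-4.2$ nodetool status          # which nodes are Up/Normal in each DC?
bash-4.2$ cqlsh -u cassandra -p cassandra
cqlsh> SELECT keyspace_name, replication
   ... FROM system_schema.keyspaces
   ... WHERE keyspace_name = 'dsefs';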
This is our POC environment, which we are testing; if the testing goes well, we want to proceed with DSE. But we are getting some weird results: AlwaysOn SQL keeps restarting (see the status check below the log), and spark-sql does not connect from some of the nodes. Here is the content of the log file:
2020-05-01 18:01:56 [main] INFO o.a.h.hive.metastore.HiveMetaStore - 0: get_all_databases
2020-05-01 18:01:56 [main] INFO o.a.h.h.m.HiveMetaStore.audit - ugi=cassandra ip=unknown-ip-addr cmd=get_all_databases
2020-05-01 18:01:56 [main] INFO c.d.b.h.h.m.SchemaManagerService - Updating Cassandra Keyspace to Metastore Database Mapping
2020-05-01 18:01:56 [main] INFO c.d.b.h.h.m.SchemaManagerService - Refresh cluster meta data
2020-05-01 18:01:56 [main] INFO c.d.b.h.h.m.SchemaManagerService - adding dse_graph keyspace if needed
2020-05-01 18:01:56 [main] INFO o.a.h.hive.metastore.HiveMetaStore - 0: get_functions: db=datamanager pat=*
2020-05-01 18:01:56 [main] INFO o.a.h.h.m.HiveMetaStore.audit - ugi=cassandra ip=unknown-ip-addr cmd=get_functions: db=datamanager pat=*
2020-05-01 18:01:56 [main] INFO c.d.b.h.h.m.CassandraHiveMetaStore - in getFunctions with dbName: datamanager and functionNamePattern: *
2020-05-01 18:01:56 [main] INFO o.a.h.hive.metastore.HiveMetaStore - 0: get_functions: db=default pat=*
2020-05-01 18:01:56 [main] INFO o.a.h.h.m.HiveMetaStore.audit - ugi=cassandra ip=unknown-ip-addr cmd=get_functions: db=default pat=*
2020-05-01 18:01:56 [main] INFO c.d.b.h.h.m.CassandraHiveMetaStore - in getFunctions with dbName: default and functionNamePattern: *
2020-05-01 18:01:56 [main] INFO o.a.h.hive.metastore.HiveMetaStore - 0: get_functions: db=killrvideo pat=*
2020-05-01 18:01:56 [main] INFO o.a.h.h.m.HiveMetaStore.audit - ugi=cassandra ip=unknown-ip-addr cmd=get_functions: db=killrvideo pat=*
2020-05-01 18:01:56 [main] INFO c.d.b.h.h.m.CassandraHiveMetaStore - in getFunctions with dbName: killrvideo and functionNamePattern: *
2020-05-01 18:01:56 [main] INFO o.a.h.hive.metastore.HiveMetaStore - 0: get_functions: db=system_backups pat=*
2020-05-01 18:01:56 [main] INFO o.a.h.h.m.HiveMetaStore.audit - ugi=cassandra ip=unknown-ip-addr cmd=get_functions: db=system_backups pat=*
2020-05-01 18:01:56 [main] INFO c.d.b.h.h.m.CassandraHiveMetaStore - in getFunctions with dbName: system_backups and functionNamePattern: *
2020-05-01 18:01:57 [main] INFO o.a.h.hive.ql.session.SessionState - Created local directory: /tmp/d37a91e1-c519-41ed-bec5-79e6ac379094_resources
2020-05-01 18:01:57 [main] ERROR o.a.s.d.DseSparkSubmitBootstrapper - Spark application failed
java.lang.RuntimeException: com.datastax.bdp.fs.model.InternalServerException: com.datastax.driver.core.exceptions.UnavailableException: Not enough replicas available for query at consistency LOCAL_QUORUM (1 required but only 0 alive)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:525)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:133)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.DseSparkSubmit.org$apache$spark$deploy$DseSparkSubmit$$runMain(DseSparkSubmit.scala:626)
    at org.apache.spark.deploy.DseSparkSubmit.doRunMain$1(DseSparkSubmit.scala:145)
    at org.apache.spark.deploy.DseSparkSubmit.submit(DseSparkSubmit.scala:173)
    at org.apache.spark.deploy.DseSparkSubmit.doSubmit(DseSparkSubmit.scala:64)
    at org.apache.spark.deploy.DseSparkSubmit$$anon$2.doSubmit(DseSparkSubmit.scala:680)
    at org.apache.spark.deploy.DseSparkSubmit$.main(DseSparkSubmit.scala:689)
    at org.apache.spark.deploy.DseSparkSubmitBootstrapper$.main(DseSparkSubmitBootstrapper.scala:114)
    at org.apache.spark.deploy.DseSparkSubmitBootstrapper.main(DseSparkSubmitBootstrapper.scala)
Caused by: com.datastax.bdp.fs.model.InternalServerException: com.datastax.driver.core.exceptions.UnavailableException: Not enough replicas available for query at consistency LOCAL_QUORUM (1 required but only 0 alive)
    at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$.read(DseFsJsonProtocol.scala:283)
    at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$.read(DseFsJsonProtocol.scala:251)
    at spray.json.JsValue.convertTo(JsValue.scala:33)
    at com.datastax.bdp.fs.rest.RestResponse$stateMachine$macro$331$1.apply(RestResponse.scala:48)
    at com.datastax.bdp.fs.rest.RestResponse$stateMachine$macro$331$1.apply(RestResponse.scala:44)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:465)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
    at com.datastax.bdp.fs.util.DaemonThreadFactory$$anon$1.run(DaemonThreadFactory.scala:37)
    at java.lang.Thread.run(Thread.java:748)
Caused by: com.datastax.bdp.fs.model.InternalServerException: com.datastax.driver.core.exceptions.UnavailableException: Not enough replicas available for query at consistency LOCAL_QUORUM (1 required but only 0 alive)
    at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$.read(DseFsJsonProtocol.scala:283)
    at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$$anonfun$cause$lzycompute$1$1.apply(DseFsJsonProtocol.scala:262)
    at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$$anonfun$cause$lzycompute$1$1.apply(DseFsJsonProtocol.scala:262)
    at scala.Option.map(Option.scala:146)
    at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$.cause$lzycompute$1(DseFsJsonProtocol.scala:262)
    at com.datastax.bdp.fs.model.DseFsJsonProtocol$ThrowableReader$.cause$1(DseFsJsonProtocol.scala:262)
    ... 12 common frames omitted
2020-05-01 18:01:57 [Thread-1] INFO o.a.spark.util.ShutdownHookManager - Shutdown hook called
2020-05-01 18:01:57 [Thread-1] INFO o.a.spark.util.ShutdownHookManager - Deleting directory /tmp/spark-a6645efc-7e3b-4c2c-b47a-16689aa8f2fe
2020-05-01 18:01:57 [Serial shutdown hooks thread] INFO c.d.s.c.cql.CassandraConnector - Disconnected from Cassandra cluster: pdc_dm
2020-05-01 18:01:57 [Serial shutdown hooks thread] INFO c.d.s.c.cql.CassandraConnector - Disconnected from Cassandra cluster: pdc_dm
2020-05-01 18:01:57 [Serial shutdown hooks thread] INFO c.d.s.c.util.SerialShutdownHooks - Successfully executed shutdown hook: Clearing session cache for C* connector
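For the AlwaysOn SQL restarts, this is the status check I have been using (I believe dse client-tool ships with DSE 6.x, please correct me if there is a better way to inspect it):

bash-4.2$ dse client-tool alwayson-sql status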
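And if the root cause really is the replication settings on the DSEFS keyspace, would something like the following be the right fix? This is only my sketch; dsefs is the default keyspace name and dc1 is a placeholder, so both would need to match the actual names from nodetool status:

cqlsh> ALTER KEYSPACE dsefs
   ... WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};
bash-4.2$ nodetool repair dsefs    # stream data to the new replicas

Any pointers would be appreciated.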