question

sridhar.addanki_188870 asked ·

Why does the AlwaysOn SQL Service (AOSS) keep restarting?

We had an issue with one of our Cassandra nodes due to a hardware failure. We removed it from the ring and restarted DSE on the other nodes. Since then, the AlwaysOn SQL service keeps restarting. We found the error below in service.log under /var/log/spark/always_on.

Because of this, we are unable to use spark-sql from DataStax Studio for JOIN queries. Please help.

This used to work previously.

========================================

ERROR 2020-05-01 20:31:17,230 org.apache.spark.deploy.DseSparkSubmitBootstrapper: Spark application failed
org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: com.datastax.bdp.fs.model.InternalServerException: com.datastax.driver.core.exceptions.UnavailableException: Not enough replicas available for query at consistency LOCAL_QUORUM (1 required but only 0 alive);
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:214)
at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:53)
at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:96)
at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
dseaoss

1 Answer

Erick Ramirez answered ·

@sridhar.addanki_188870 Based on the stack trace you provided, it looks like the Thrift server, which serves Hive to AOSS, is failing during initialisation.

Check for the existence of /tmp/hive/cassandra on the node. Is it owned by root or cassandra?

You can either (a) grant global (all users) write access to the directory or (b) delete the directory itself. Then check the AOSS service.log again to see if the service is attempting to restart. If it isn't, you can do a clean stop then restart by running these commands on the node:
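From a shell on the node, the check and both fix options look roughly like this (a sketch; it assumes the scratch path /tmp/hive/cassandra mentioned above and that you have sufficient privileges, so prefix with sudo where needed):

```shell
# Inspect the Hive scratch directory that the AOSS Thrift server uses.
# The path below is the one from this node; adjust if yours differs.
HIVE_SCRATCH=/tmp/hive/cassandra

# Show owner and permissions; AOSS runs as the DSE service user (typically
# "cassandra"), so a root-owned, non-writable directory blocks initialisation.
ls -ld "$HIVE_SCRATCH" 2>/dev/null || echo "scratch dir does not exist yet"

# Option (a): grant all users write access to the directory
[ -d "$HIVE_SCRATCH" ] && chmod -R 777 "$HIVE_SCRATCH" || true

# Option (b): delete it outright; it is recreated on the next AOSS start
# rm -rf "$HIVE_SCRATCH"
```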

$ dse client-tool alwayson-sql stop
$ dse client-tool alwayson-sql start

Let me know how you go. Cheers!
