tallavi asked Erick Ramirez edited

Unable to perform authorization of super-user permission: Operation timed out - received only 0 responses

Hi, I'm getting this error when running a SELECT query. The table is quite large and is perhaps overwhelming my machine, but scaling the machine up (24 cores) did not help. If I extend the query timeout, I eventually get the error below. I found *no trace* of this error anywhere on the internet.

Any leads would be much appreciated. Thanks, Tal

java.io.IOException: Exception during execution of <my query> ALLOW FILTERING: Unable to perform authorization of permissions: Unable to perform authorization of super-user permission: Operation timed out - received only 0 responses.
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.com$datastax$spark$connector$rdd$CassandraTableScanRDD$$fetchTokenRange(CassandraTableScanRDD.scala:352)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$17.apply(CassandraTableScanRDD.scala:368)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$17.apply(CassandraTableScanRDD.scala:368)
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
    at com.datastax.spark.connector.util.CountingIterator.hasNext(CountingIterator.scala:12)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.util.random.SamplingUtils$.reservoirSampleAndCount(SamplingUtils.scala:41)
    at org.apache.spark.RangePartitioner$$anonfun$13.apply(Partitioner.scala:306)
    at org.apache.spark.RangePartitioner$$anonfun$13.apply(Partitioner.scala:304)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$25.apply(RDD.scala:853)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$25.apply(RDD.scala:853)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: com.datastax.oss.driver.api.core.servererrors.UnauthorizedException: Unable to perform authorization of permissions: Unable to perform authorization of super-user permission: Operation timed out - received only 0 responses.
    at com.datastax.oss.driver.api.core.servererrors.UnauthorizedException.copy(UnauthorizedException.java:49)
    at com.datastax.oss.driver.internal.core.util.concurrent.CompletableFutures.getUninterruptibly(CompletableFutures.java:149)
    at com.datastax.oss.driver.internal.core.cql.CqlRequestSyncProcessor.process(CqlRequestSyncProcessor.java:53)
    at com.datastax.oss.driver.internal.core.cql.CqlRequestSyncProcessor.process(CqlRequestSyncProcessor.java:30)
    at com.datastax.oss.driver.internal.core.session.DefaultSession.execute(DefaultSession.java:210)
    at com.datastax.oss.driver.api.core.cql.SyncCqlSession.execute(SyncCqlSession.java:53)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.datastax.spark.connector.cql.SessionProxy.invoke(SessionProxy.scala:43)
    at com.sun.proxy.$Proxy53.execute(Unknown Source)
    at com.datastax.spark.connector.cql.DefaultScanner.scan(Scanner.scala:38)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.com$datastax$spark$connector$rdd$CassandraTableScanRDD$$fetchTokenRange(CassandraTableScanRDD.scala:345)
    ... 27 more
spark-cassandra-connector

1 Answer

Erick Ramirez answered Erick Ramirez edited

The exception stack you posted doesn't point to the issue being related to the number of cores.

What it indicates is that the authorisation step of authentication is failing. The two causes which come to mind are:

  1. Incorrect use of superuser role.
  2. Incorrect replication for the authentication keyspace.

Role

If you're using the default cassandra superuser, we recommend you provision a service account for your Spark application.

The default superuser is expensive to use since it authenticates with a consistency level of QUORUM, which requires responses from a majority of replicas of the system_auth keyspace. In contrast, any other role authenticates with a consistency level of LOCAL_ONE.
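As a sketch (the role name, password, and application keyspace below are hypothetical placeholders — substitute your own), provisioning a dedicated service account might look like:

-- a plain login role: its auth lookups run at LOCAL_ONE instead of QUORUM
CREATE ROLE spark_app WITH PASSWORD = 'change_me' AND LOGIN = true;
-- grant only what the Spark job needs
GRANT SELECT ON KEYSPACE my_keyspace TO spark_app;

Your Spark application would then connect as spark_app instead of cassandra.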

Replication

We recommend you reconfigure replication on the authentication keyspace to:

  • use NetworkTopologyStrategy even for single-DC clusters, and
  • have 3 replicas in each DC.

If you've left the default replication setting:

CREATE KEYSPACE system_auth WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'};

then it means that authentication attempts can fail if the sole replica is unavailable.
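For example, assuming a single data centre named DC1 (substitute the DC name(s) reported by nodetool status), you could alter the keyspace like this:

-- replicate the auth data to 3 nodes in each data centre
ALTER KEYSPACE system_auth
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};

After the ALTER, run nodetool repair system_auth on each node so the data is streamed to the new replicas. Cheers!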

1 comment

tallavi commented

Thanks for the answer @Erick Ramirez,

Actually, we resolved it after noticing that it's not a general error but one that occurs only on a specific job.

This line was the culprit:

.filter(col("field").equalTo("VALUE"))

Somehow this filter, on a column that isn't even part of the primary key, got pushed down and got the entire system stuck! I couldn't find a way to stop it being pushed down other than:

.filter((FilterFunction<Row>) value -> value.getString(value.fieldIndex("field")).equals("VALUE"))

But that did the trick.

Thanks!
