rakeshgouda1992_96602 asked ·
Error indexing partition com.datastax.bdp.search.SearchReadBeforeWriteException: Unable to complete Search read-before-write caused by AsyncReadTimeoutException

Hi,

We are observing an error while indexing a partition, caused by a read timeout exception during an SSTable file read. The complete error is below:

ERROR [CoreThread-14] 2020-07-05 00:19:25,593 AbstractSolrSecondaryIndex.java:1534 - Error indexing partition '91d99352-4334-4901-8d09-32acda8e80b3' on 'keyspace_name.table_name':
com.datastax.bdp.search.SearchReadBeforeWriteException: Unable to complete Search read-before-write.
    at com.datastax.bdp.search.solr.AbstractSolrSecondaryIndex.readBeforeWriteError(AbstractSolrSecondaryIndex.java:2124)
    at io.reactivex.internal.operators.single.SingleResumeNext$ResumeMainSingleObserver.onError(SingleResumeNext.java:73)
    at org.apache.cassandra.utils.flow.Flow$1SingleFromFlow$1ReduceToSingle.signalError(Flow.java:1539)
    at org.apache.cassandra.utils.flow.Flow$DisposableReduceSubscriber.onErrorInternal(Flow.java:1483)
    at org.apache.cassandra.utils.flow.Flow$ReduceSubscriber.onError(Flow.java:1221)
    at org.apache.cassandra.utils.flow.FlatMap.onError(FlatMap.java:133)
    at org.apache.cassandra.utils.flow.FlatMap.onError(FlatMap.java:133)
    at org.apache.cassandra.utils.flow.FlowTransformBase.onError(FlowTransformBase.java:38)
    at org.apache.cassandra.utils.flow.FlowTransformBase.onError(FlowTransformBase.java:38)
    at org.apache.cassandra.utils.flow.FlowTransformBase.onError(FlowTransformBase.java:38)
    at org.apache.cassandra.utils.flow.FlowTransformBase.onError(FlowTransformBase.java:38)
    at org.apache.cassandra.utils.flow.FlowTransformBase.onError(FlowTransformBase.java:38)
    at org.apache.cassandra.utils.flow.FlatMap$FlatMapChild.onError(FlatMap.java:185)
    at org.apache.cassandra.utils.flow.FlowTransformBase.onError(FlowTransformBase.java:38)
    at org.apache.cassandra.utils.flow.FlatMap$FlatMapChild.onError(FlatMap.java:185)
    at org.apache.cassandra.io.sstable.format.AsyncPartitionReader$PartitionReader.onError(AsyncPartitionReader.java:383)
    at org.apache.cassandra.io.sstable.format.AsyncPartitionReader.lambda$readWithRetry$1(AsyncPartitionReader.java:243)
    at org.apache.cassandra.io.util.Rebufferer$NotInCacheException.lambda$accept$2(Rebufferer.java:209)
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
    at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
    at org.apache.cassandra.io.util.Rebufferer$NotInCacheException.lambda$accept$1(Rebufferer.java:175)
    at org.apache.cassandra.concurrent.TPCTimeoutTask.run(TPCTimeoutTask.java:43)
    at org.apache.cassandra.concurrent.TPCHashedWheelTimer.lambda$onTimeout$0(TPCHashedWheelTimer.java:56)
    at org.apache.cassandra.utils.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:498)
    at org.apache.cassandra.utils.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:573)
    at org.apache.cassandra.utils.HashedWheelTimer$Worker.run(HashedWheelTimer.java:329)
    at org.apache.cassandra.concurrent.TPCRunnable.run(TPCRunnable.java:68)
    at org.apache.cassandra.concurrent.EpollTPCEventLoopGroup$SingleCoreEventLoop.process(EpollTPCEventLoopGroup.java:920)
    at org.apache.cassandra.concurrent.EpollTPCEventLoopGroup$SingleCoreEventLoop.processTasks(EpollTPCEventLoopGroup.java:892)
    at org.apache.cassandra.concurrent.EpollTPCEventLoopGroup$SingleCoreEventLoop.runScheduledTasks(EpollTPCEventLoopGroup.java:980)
    at org.apache.cassandra.concurrent.EpollTPCEventLoopGroup$SingleCoreEventLoop.processEvents(EpollTPCEventLoopGroup.java:774)
    at org.apache.cassandra.concurrent.EpollTPCEventLoopGroup$SingleCoreEventLoop.run(EpollTPCEventLoopGroup.java:441)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.cassandra.io.util.AsyncReadTimeoutException: Timed out async read from org.apache.cassandra.io.sstable.format.AsyncPartitionReader for file /opt/dse/cassandra/data/keyspace_name/table_name-26cb81e1810d11e9b21f2bc5ae755f22/ac-12345-bti-Data.db


Timeout settings from cassandra.yaml:

read_request_timeout_in_ms: 5000
range_request_timeout_in_ms: 10000
aggregated_request_timeout_in_ms: 120000
write_request_timeout_in_ms: 2000
counter_write_request_timeout_in_ms: 5000
cas_contention_timeout_in_ms: 1000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 10000
slow_query_log_timeout_in_ms: 500
cross_node_timeout: false
user_function_timeout_policy: die
client_timeout_sec: 600
cancel_timeout_sec: 5

Timeout settings from dse.yaml:

netty_client_request_timeout: 180000

Please let me know if any further information is required.

Thanks,

Rakesh

dsesearch

1 Answer

Erick Ramirez answered ·

The failure occurred as a result of the AsyncReadTimeoutException, which indicates that the data disk did not respond before the read timed out. This almost always turns out to be caused by overloaded nodes whose disks cannot keep up with the I/O load.
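Since saturated disks are the usual culprit, a quick first check is device utilisation on the affected node. A minimal sketch, with the caveat that the `iostat` sample below is hypothetical: on the node itself the data would come from `iostat -dx 5 3` (sysstat package), and the `awk` filter simply flags devices running near saturation.

```shell
# Hypothetical iostat output -- on the node itself you would capture this with:
#   iostat -dx 5 3
sample='Device r/s w/s %util
sda 1200.0 50.0 98.40
sdb 10.0 5.0 3.10'

# Flag any device whose %util (last field) exceeds 90 -- a sustained value
# near 100 means the disk cannot keep up with the read load.
printf '%s\n' "$sample" | awk 'NR > 1 && $NF + 0 > 90 { print $1, $NF }'
# prints: sda 98.40
```

Pairing this with `nodetool tpstats` (to look for a backed-up ReadStage and dropped READ messages) helps confirm whether the timeouts line up with I/O saturation.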

If you require assistance with this problem, I suggest you log a ticket with DataStax Support so that one of our engineers can work with you directly, since support issues like this cannot be easily handled in a Q&A format. Cheers!


Erick Ramirez replied to rakeshgouda1992_96602 ·

Not a problem. Cheers!