How can we reduce the nodetool drain command run time? It is taking around 3 hours on some of the nodes and I'm not sure what is going on. I don't have much data; it is around 500 GB per node. How can we reduce it?
Did you check your /var/log/cassandra/system.log to see if there were any errors/exceptions thrown during the drain process? Are you able to see an entry like "StorageService.java:nnnn - DRAINED" at the end of the 2-3 hours?
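For example (assuming the default log location), a quick grep like this should surface both errors and the final DRAINED entry:

$ grep -iE 'ERROR|Exception|DRAINED' /var/log/cassandra/system.log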
Normally it shouldn't take this long to drain, as it just flushes all memtables and shuts down gossip, native transport and (possibly) RPC services. Even if you have 500 GB of data on disk, the amount of data in memory (i.e. in memtables) is much smaller than that, so flushing should finish much sooner than 2-3 hours.
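If you want to rule out slow flushing, a quick sketch (the exact flush thread-pool name varies by Cassandra version, so treat this as a starting point) is to check the thread-pool stats and time a manual flush before draining:

$ nodetool tpstats | grep -i flush    # look for pending/blocked memtable flush tasks
$ time nodetool flush                 # if this alone takes hours, flushing is the bottleneck, not drain itself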
@kranthij29_188881 If these are the same nodes as the ones you reported in this post, it's likely the draining process got stuck trying to shut down gossip because it wasn't running.
You should have tried to bring gossip back online with:
$ nodetool enablegossip
In any case, it's strange for gossip to not be running, and it indicates that there were other problems with the node; it wasn't exclusively a "drain" issue. Cheers!
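As an extra sanity check (my suggestion, not part of the original answer), you can confirm whether gossip is up before issuing the drain; on a healthy node this should print "running":

$ nodetool statusgossip
running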