Shannu asked Shannu commented

DSE Search 5.1.20 nodetool cleanup on table with Solr index throws NullPointerException

I bootstrapped new nodes into the running cluster. After the added node came up with status UN, I ran nodetool cleanup, which throws a NullPointerException.

Here are the logs:

INFO [CompactionExecutor:11] 2020-10-14 15:57:26,157 - Cleaning up BigTableReader(path='/cassandra/data/produc/user_role_map-ba0ae1510de111ebb73467f6454e3a64/md-3-big-Data.db')
ERROR [CompactionExecutor:11] 2020-10-14 15:57:26,167 - Exception in thread Thread[CompactionExecutor:11,5,main]
java.lang.NullPointerException: null
    at org.apache.cassandra.index.SecondaryIndexManager$CleanupGCTransaction.commit(
    at org.apache.cassandra.index.SecondaryIndexManager.deletePartition(
    at org.apache.cassandra.db.compaction.CompactionManager$CleanupStrategy$Full.cleanup(
    at org.apache.cassandra.db.compaction.CompactionManager.doCleanupOne(
    at org.apache.cassandra.db.compaction.CompactionManager.access$500(
    at org.apache.cassandra.db.compaction.CompactionManager$6.execute(
    at org.apache.cassandra.db.compaction.CompactionManager$
    at java.util.concurrent.ThreadPoolExecutor.runWorker(
    at java.util.concurrent.ThreadPoolExecutor$
    at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(

Is there any workaround for this?


Erick Ramirez answered Shannu commented

As part of the cleanup tasks, the SecondaryIndexManager class needs to delete all the data for a given partition from all indexes including Search (Solr) indexes. The Cql3SolrSecondaryIndex.cleanupPartition() method tries to delete the data from the index.

The NullPointerException is thrown when SolrQueries.queryFromKeys() tries to retrieve the Solr schema but fails.

In my experience, the most likely reason for the failure is that the Solr core for produc.user_role_map failed to load. If the Solr core is not available, the cleanup operation will not be able to delete indexes for partitions no longer owned by the node.

You will need to investigate why the Solr core for produc.user_role_map is unavailable, resolve the issue, then run the cleanup command again. Cheers!
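For reference, here is a sketch of how you might check the core's state and reload it with DSE's dsetool utility. The exact options vary by DSE version, and the log path shown is only the default location, so treat this as a starting point rather than a prescription:

```shell
# Check whether the Search core for the table is loaded and indexing
dsetool core_indexing_status produc.user_role_map

# Look for core-load failures in the DSE system log (default location)
grep -i "user_role_map" /var/log/cassandra/system.log | grep -i solr

# If the core failed to load, reloading it (optionally with a reindex)
# may recover it
dsetool reload_core produc.user_role_map reindex=true
```

Once the core loads cleanly, rerun nodetool cleanup on the affected node.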


Shannu commented ·

Thanks for the workaround. You were spot on about the Solr core: I was able to clear the error and run the cleanup command. I can see that the data that doesn't belong to the corresponding nodes has been cleaned, but the load percentages still don't look right:

Datacenter: dc1
===============
|/ State=Normal/Leaving/Joining/Moving/Stopped
--  Address  Load       Tokens  Owns (effective)  Host ID                               Rack
UN           49.04 GiB  8       58.1%             5445f18e-74d5-49e9-9442-a5eb85de643a  rack2
UN           50.21 GiB  8       73.0%             aa05d297-3e31-431b-ac78-4b56bbfc4248  rack3
UN           51.29 GiB  8       68.9%             caeeec56-f665-41ed-ba23-52f09a268a22  rack1

As you can see, the load percentages don't look good. Is this normal?

Solr enabled, RF=2, DSE 5.1.20
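For what it's worth, with a replication factor of 2 every token range is owned by two nodes, so the "Owns (effective)" column is expected to sum to RF × 100% across the datacenter, and individual values above 50% are normal. A quick sanity check on the numbers above (a sketch, using the percentages from the status output):

```python
# Effective ownership percentages from the nodetool status output above
owns_effective = [58.1, 73.0, 68.9]
rf = 2  # replication factor

# With RF replicas, each token range is counted once per replica, so the
# effective-ownership column sums to RF * 100% across the datacenter.
total = sum(owns_effective)
print(round(total, 1))  # → 200.0, i.e. rf * 100
```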

smadhavan answered Shannu commented

@Shannu, could you please elaborate on the activities you've performed on the cluster, the DSE version, etc.? Also, you will likely have lots of follow-up questions, which would be challenging to handle in the Q&A format of this forum, so please log a ticket with DataStax Support so our engineers can assist you directly. Cheers!


Shannu commented ·

Dropped the core and recreated it, which helped me further clean up the disowned data.
