Bringing together the Apache Cassandra experts from the community and DataStax.

acucciarre_144605 asked:

DSE instance does not recover after rebooting Kubernetes node

I have successfully installed DSE in my Kubernetes environment using the Kubernetes Operator and the instructions in

With nodetool I checked that all pods successfully joined the ring. The problem is that when I reboot one of the Kubernetes nodes, the Cassandra pod that was running on that node never recovers:

[root@node1 ~]# kubectl exec -it -n cassandra cluster1-dc1-r2-sts-0 -c cassandra nodetool status
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving/Stopped
--  Address  Load        Tokens  Owns (effective)  Host ID                               Rack
UN           153.82 KiB  1       77.9%             053cc18e-397c-4abe-bb1b-d48a3fef3c93  r3
DS           136.09 KiB  1       26.9%             8ae31e1c-856e-44a8-b081-c5c040b535b9  r1
UN           202.8 KiB   1       95.2%             06200794-298c-4122-b8ff-4239bc7a8ded  r2

[root@node1 ~]# kubectl get pods -n cassandra
NAME                            READY   STATUS    RESTARTS   AGE
cass-operator-56f5f8c7c-w6l2c   1/1     Running   0          17h
cluster1-dc1-r1-sts-0           1/2     Running   2          17h
cluster1-dc1-r2-sts-0           2/2     Running   0          17h
cluster1-dc1-r3-sts-0           2/2     Running   0          17h


@acucciarre_144605 Have you checked the logs to see why it can't be started? That should be your first port of call. The errors in the logs will give you an idea of what's going on.

Without any information on why the node won't start, we're limited in our ability to help you resolve it. Cheers!
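A few commands that usually help pinpoint why a pod won't come up (pod and container names taken from your output; adjust as needed):

```shell
# Logs from the currently failing Cassandra container
kubectl logs -n cassandra cluster1-dc1-r1-sts-0 -c cassandra

# Logs from the previous container instance, if it crashed and was restarted
kubectl logs -n cassandra cluster1-dc1-r1-sts-0 -c cassandra --previous

# Pod events often show probe failures and restart reasons
kubectl describe pod -n cassandra cluster1-dc1-r1-sts-0
```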


@Erick Ramirez I have looked into the logs but I can't figure out what is the problem.

The kubectl logs command returns the logs attached.

The null error also appears when Cassandra starts successfully.

So what remains is the error:

"address=/ url=/api/v0/probes/readiness status=500 Internal Server Error"

which doesn't say much to me.

The kubectl describe shows the following

Type     Reason     Age                     From            Message
----     ------     ----                    ----            -------
Warning  Unhealthy  4m41s (x6535 over 18h)  kubelet, node2  Readiness probe failed: HTTP probe failed with statuscode: 500
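Since the kubelet is hitting the management API's readiness endpoint, you can also query it directly from inside the container to watch the failure in real time. This is a sketch; it assumes the management API listens on its default port 8080 and that curl is available in the image:

```shell
# Print just the HTTP status code of the readiness probe endpoint
kubectl exec -n cassandra cluster1-dc1-r1-sts-0 -c cassandra -- \
  curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/api/v0/probes/readiness

# The liveness probe lives alongside it and can be checked the same way
kubectl exec -n cassandra cluster1-dc1-r1-sts-0 -c cassandra -- \
  curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/api/v0/probes/liveness
```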

In the cassandra container only this process is running:

 java -Xms128m -Xmx128m -jar /opt/dse/resources/management-api/management-api- --dse-socket /tmp/dse.sock --host tcp://

and in /var/log/cassandra/system.log I can't find any errors.

error-logs.txt (3.7 KiB)

It looks like this question was also asked on Stack Overflow and successfully answered there. Please see the Stack Overflow answer for more information.


Thanks for the heads-up @bradfordcp! I've posted the workaround here on @weideng_84207's behalf. Cheers!

wdeng answered:

The null error is a harmless message caused by a transient error while the Cassandra pod is starting up and being health-checked.

I was able to reproduce the issue you ran into. If you run kubectl get pods, you should see the affected pod showing 1/2 under the "READY" column, which means the Cassandra container was not brought up in the auto-restarted pod; only the management API container is running. I suspect this is a bug in the operator and I'll work with the developers to sort it out.

As a workaround you can run kubectl delete pod/<pod_name> to recover your Cassandra cluster back to a normal state (in your case kubectl delete pod/cluster1-dc1-r1-sts-0). This will redeploy the pod and remount the data volume automatically, without losing anything.
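Put together, the workaround looks like this (pod names from the question above; assumes the cassandra namespace used there):

```shell
# Delete the stuck pod; the StatefulSet controller recreates it
# and remounts the existing data volume, so no data is lost
kubectl delete pod -n cassandra cluster1-dc1-r1-sts-0

# Watch the replacement pod come up; it should reach 2/2 READY
kubectl get pods -n cassandra -w

# Confirm the node rejoined the ring, checked from a healthy pod
kubectl exec -n cassandra cluster1-dc1-r2-sts-0 -c cassandra -- nodetool status
```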


acucciarre_144605 answered:

I believe I have figured out when the workaround works and when it doesn't.

It works only when the rebooted Kubernetes node is not the one where the seed node was running.

It seems that there is always exactly one seed node, and when the workaround doesn't work, "kubectl get ep" returns no endpoints for cluster1-seed-service.

kubectl get pods -n dse -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP      NODE    NOMINATED NODE   READINESS GATES
cass-operator-56f5f8c7c-7l8b7   1/1     Running   1          17h             node3   <none>           <none>
cluster1-dc1-r1-sts-0           1/2     Running   0          9m14s           node2   <none>           <none>
cluster1-dc1-r2-sts-0           2/2     Running   0          22m             node3   <none>           <none>
cluster1-dc1-r3-sts-0           2/2     Running   0          3h38m           node1   <none>           <none>

kubectl get ep -n dse
NAME                            ENDPOINTS        AGE
cass-operator-metrics           ,                17h
cluster1-dc1-all-pods-service   ,,               3h41m
cluster1-dc1-service            ,, + 1 more...   3h41m
cluster1-seed-service           <none>           3h41m
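One way to confirm this theory: the seed service selects pods by a seed label that cass-operator manages, so if no pod currently carries that label, the service has no endpoints. A sketch, assuming the cass-operator label cassandra.datastax.com/seed-node=true (check your operator version's conventions):

```shell
# List pods currently labeled as seed nodes; an empty result
# would explain the <none> endpoints on cluster1-seed-service
kubectl get pods -n dse -l cassandra.datastax.com/seed-node=true

# Inspect the seed service's selector to see which label it matches on
kubectl describe svc -n dse cluster1-seed-service
```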
