question

mishra.anurag643_153409 asked · Erick Ramirez answered

Getting error "Connection closed due to transport exception" in Spark job

I am reading a Cassandra table with a PySpark job, but it is throwing this error:

Caused by: com.datastax.driver.core.exceptions.TransportException: <ip> Connection has been closed
    at com.datastax.driver.core.Connection$ConnectionCloseFuture.force(Connection.java:1210)
    at com.datastax.driver.core.Connection$ConnectionCloseFuture.force(Connection.java:1195)
    at com.datastax.driver.core.Connection.defunct(Connection.java:445)
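
For context, the job reads the table roughly like this (a sketch only; the keyspace, table, contact point and connector version are placeholders, not the real values):

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("cassandra-read")
        .config("spark.jars.packages",
                "com.datastax.spark:spark-cassandra-connector_2.12:3.4.1")
        .config("spark.cassandra.connection.host", "10.0.0.1")
        .getOrCreate()
    )

    df = (
        spark.read
        .format("org.apache.spark.sql.cassandra")
        .options(keyspace="my_keyspace", table="my_table")
        .load()
    )

    # a full-table action like this is what fails with the transport exception
    df.count()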

nodetool cfstats output for the table:

    Space used (live): 21.32 GiB
    Space used (total): 21.32 GiB
    Space used by snapshots (total): 0 bytes
    Off heap memory used (total): 14.96 MiB

I have a three-node Cassandra cluster.

Cluster configuration:

CPU:

CPU(s):              4
On-line CPU(s) list: 0-3
Thread(s) per core:  2
Socket(s):           1
NUMA node(s):        1

Memory:

              total        used        free      shared  buff/cache   available
Mem:             15           2           0           0          12          11
Swap:             0           0           0
grep MemTotal /proc/meminfo
MemTotal:       15923288 kB

When I read the data from PySpark it throws the error above. My question is:

Are these configurations enough for a Cassandra node handling a table of the size reported by nodetool, so that the data can be consumed without Cassandra becoming overloaded or throwing this error?

There is no other job running on the Cassandra cluster.

spark-cassandra-connector

1 Answer

Erick Ramirez answered

You didn't provide the full error message and stack trace, so we're limited in our analysis, but in my experience the driver reports this transport exception because the nodes are overloaded and have become unresponsive.

Servers with 4 vCPUs and 15 GB of RAM are only suitable for development environments where you are running minimal loads. If you're just developing your Spark app in a non-production environment, you should throttle the throughput of your app so it doesn't overload the cluster.
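
For example, with the Spark Cassandra connector you can cap the read rate and shrink the fetch/split sizes when you build the Spark session. This is only a rough sketch; the numbers are illustrative and should be tuned for your environment, and the host, keyspace and table are placeholders:

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("throttled-cassandra-read")
        .config("spark.cassandra.connection.host", "10.0.0.1")    # placeholder contact point
        .config("spark.cassandra.input.readsPerSec", "50")        # throttle read requests per second, per core
        .config("spark.cassandra.input.fetch.sizeInRows", "500")  # fetch smaller pages per request
        .config("spark.cassandra.input.split.sizeInMB", "64")     # smaller Spark partitions per task
        .getOrCreate()
    )

    df = (
        spark.read
        .format("org.apache.spark.sql.cassandra")
        .options(keyspace="my_keyspace", table="my_table")        # placeholders
        .load()
    )

Reducing executor parallelism (for example, running with fewer executor cores) also lowers the number of concurrent requests hitting each Cassandra node.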

For production environments, we recommend machines with at least 8 vCPUs and 30 GB of RAM (48 GB of RAM or more is preferable) so you can allocate at least 16-20 GB to the heap. Cheers!
