Bringing together the Apache Cassandra experts from the community and DataStax.

Want to learn? Have a question? Want to share your expertise? You are in the right place!

Not sure where to begin? Getting Started

 

question

vijay.ramcse_71736 asked ·

Why do the Cassandra-connector metrics all report zero?

I have followed the instructions to emit metrics from the cassandra-connector. I am able to see the metrics, but they are all zeros all the time. I tried versions 2.3.1 and 2.4.0. Other Spark driver metrics are emitted properly.

I made sure the proper values are set in metrics.properties:

executor.source.cassandra-connector.class=org.apache.spark.metrics.CassandraConnectorSource
driver.source.cassandra-connector.class=org.apache.spark.metrics.CassandraConnectorSource

I enabled these in the Spark context:

spark.cassandra.input.metrics and spark.cassandra.output.metrics
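A launch command along these lines sets both flags and points Spark at the metrics config (the host and file path here are placeholders, not the actual values used):

```shell
# Sketch, with placeholder host/path: enable the connector meters and
# tell Spark where to find the metrics configuration file
spark-shell \
  --conf spark.cassandra.connection.host=127.0.0.1 \
  --conf spark.cassandra.input.metrics=true \
  --conf spark.cassandra.output.metrics=true \
  --conf spark.metrics.conf=/path/to/metrics.properties
```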

Sample console sink output:

-- Meters ----------------------------------------------------------------------
allspark.v1.driver.cassandra-connector.read-byte-meter
             count = 0
         mean rate = 0.00 events/second
     1-minute rate = 0.00 events/second
     5-minute rate = 0.00 events/second
    15-minute rate = 0.00 events/second
allspark.v1.driver.cassandra-connector.read-row-meter
             count = 0
         mean rate = 0.00 events/second
     1-minute rate = 0.00 events/second
     5-minute rate = 0.00 events/second
    15-minute rate = 0.00 events/second
allspark.v1.driver.cassandra-connector.write-byte-meter
             count = 0
         mean rate = 0.00 events/second
     1-minute rate = 0.00 events/second
     5-minute rate = 0.00 events/second
    15-minute rate = 0.00 events/second
allspark.v1.driver.cassandra-connector.write-row-meter
             count = 0
         mean rate = 0.00 events/second
     1-minute rate = 0.00 events/second
     5-minute rate = 0.00 events/second
    15-minute rate = 0.00 events/second
spark-cassandra-connector monitoring
1 comment


@vijay.ramcse_71736 Sorry for the delay in the response. We're working through an example for you and will post a response soon. Cheers!


1 Answer

bettina.swynnerton answered ·

Hi,

the metrics that you refer to are driver metrics. For most Spark jobs, it is not the driver that reads from or writes to Cassandra, but the executors. If metrics.properties is configured correctly, you should see these meters populated in the executor logs.

I set up a test and configured metrics.properties as follows:

# org.apache.spark.metrics.source.JvmSource
#   Note: Currently, JvmSource is the only available common source.
#         It can be added to an instance by setting the "class" option to its
#         fully qualified class name (see examples below).

## List of available sinks and their properties.

# org.apache.spark.metrics.sink.ConsoleSink
#   Name:   Default:   Description:
#   period  10         Poll period
#   unit    seconds    Unit of the poll period
  
# Enable ConsoleSink for all instances by class name
*.sink.console.class=org.apache.spark.metrics.sink.ConsoleSink

# Polling period for the ConsoleSink
*.sink.console.period=10
# Unit of the polling period for the ConsoleSink
*.sink.console.unit=seconds

# Polling period for the ConsoleSink specific for the master instance
master.sink.console.period=15
# Unit of the polling period for the ConsoleSink specific for the master
# instance
master.sink.console.unit=seconds
#
driver.source.jvm.class=org.apache.spark.metrics.source.JvmSource
executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource
#
executor.source.cassandra-connector.class=org.apache.spark.metrics.CassandraConnectorSource
driver.source.cassandra-connector.class=org.apache.spark.metrics.CassandraConnectorSource

I then ran a couple of read jobs through the Spark shell.
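For illustration, the read jobs were along these lines (the connector package coordinate, keyspace, and table names here are placeholders for whatever your cluster holds):

```shell
# Sketch of a read job pasted into spark-shell; requires a running
# Spark + Cassandra setup, and the keyspace/table names are placeholders
spark-shell --packages com.datastax.spark:spark-cassandra-connector_2.11:2.4.0 <<'EOF'
val df = spark.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "test_ks", "table" -> "test_tbl"))
  .load()
println(df.count())
EOF
```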

As in your case, my driver meters did not report any data.

The meters for the executors, however, do have data. Here is an excerpt from the executor logs:

-- Meters ----------------------------------------------------------------------
app-20200610161618-0001.0.cassandra-connector.read-byte-meter
             count = 936528
         mean rate = 239.17 events/second
     1-minute rate = 0.00 events/second
     5-minute rate = 0.08 events/second
    15-minute rate = 29.70 events/second
app-20200610161618-0001.0.cassandra-connector.read-row-meter
             count = 3928
         mean rate = 1.00 events/second
     1-minute rate = 0.00 events/second
     5-minute rate = 0.00 events/second
    15-minute rate = 0.12 events/second
app-20200610161618-0001.0.cassandra-connector.write-byte-meter
             count = 0
         mean rate = 0.00 events/second
     1-minute rate = 0.00 events/second
     5-minute rate = 0.00 events/second
    15-minute rate = 0.00 events/second
app-20200610161618-0001.0.cassandra-connector.write-row-meter
             count = 0
         mean rate = 0.00 events/second
     1-minute rate = 0.00 events/second
     5-minute rate = 0.00 events/second
    15-minute rate = 0.00 events/second

Check the executor logs for your Spark jobs to see whether the metrics have meaningful data there.
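Note that each executor's meters are prefixed with the application and executor ID (e.g. `app-20200610161618-0001.0`) rather than a fixed namespace, so grep for the metric name itself. A small illustration using fabricated log lines in the same format as above:

```shell
# The executor meter names start with an app-id/executor-id prefix,
# so match on the "cassandra-connector" metric suffix instead of the
# driver namespace. The sample file below mimics the log format.
cat > /tmp/executor_metrics.log <<'EOF'
app-20200610161618-0001.0.cassandra-connector.read-byte-meter
             count = 936528
app-20200610161618-0001.0.cassandra-connector.read-row-meter
             count = 3928
EOF
grep 'cassandra-connector\.read' /tmp/executor_metrics.log
# prints the two read-meter name lines
```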

Hope this helps!

1 comment


Sure, right. I think I got confused by the way executor metrics are prefixed. I was grepping only for the namespace, whereas the executor metrics are emitted with different names. Thanks for the post. I need to figure out how to emit executor metrics without the executor ID prefix.
