
ortizfabio_185816 asked:

IllegalArgumentException: Mutation of 112.881MiB is too large for the maximum size of 64.000MiB

I am getting warning messages in the server logs, as shown in the gist below. Is this something I should be concerned about, given that it is just a warning?

https://gist.github.com/ortizfabio/06b97dab6ba377ce898d37f549a3543b

EDIT: Yes, the mutations are large because I increased commit_log_segment_size to 128MB so that I can write mutations of up to 64MB. I am trying to write a large number of rows from Spark as fast as possible, which seems a daunting task in Cassandra; I should probably open a new thread and ask that question there. I am sending 100 rows of up to 2KB each, with 3 concurrent writes, at a speed of 0.01MB/sec, using 100 partitions.

If I reduce the speed it does not error out, but then it takes hours. My cluster has 8 nodes, mirrored across two datacenters. The table is very simple; its schema is:

-- keyspace/table names below are placeholders; only the columns were given
CREATE TABLE my_keyspace.my_table (
    id bigint,
    type_code text,
    ver_nb bigint,
    detail_json text,
    cre_ts timestamp,
    cre_user text,
    last_upd timestamp,
    PRIMARY KEY (id, type_code, ver_nb)
);

UPDATE (Feb 24 19:00 UTC):

I finally got the job to finish without overrunning the buffer and triggering the mutation error, and without taking forever. Here are the stats:

I am inserting 13 million rows with a total size of 13GB on a cluster with 8 nodes. Inserting this using the cassandra-spark-connector with 100 partitions took about 2 hours.

My Spark connector configuration is:

spark.cassandra.output.consistency.level = "LOCAL_ONE"
spark.cassandra.output.concurrent.writes = "5"
spark.cassandra.output.batch.grouping.buffer.size = "10"
spark.cassandra.output.batch.size.rows = "1"
spark.cassandra.output.batch.grouping.key = "partition"
spark.cassandra.output.throughput_mb_per_sec = "0.01"
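
In code, wiring these settings into the job looks roughly like the following. This is a minimal Scala sketch, not my actual job: the contact point, keyspace/table names, and the sample rows are placeholders.

import com.datastax.spark.connector._
import org.apache.spark.{SparkConf, SparkContext}

// Placeholder contact point and app name; the output.* settings mirror the list above.
val conf = new SparkConf()
  .setAppName("bulk-insert")
  .set("spark.cassandra.connection.host", "127.0.0.1")
  .set("spark.cassandra.output.consistency.level", "LOCAL_ONE")
  .set("spark.cassandra.output.concurrent.writes", "5")
  .set("spark.cassandra.output.batch.grouping.buffer.size", "10")
  .set("spark.cassandra.output.batch.size.rows", "1")
  .set("spark.cassandra.output.batch.grouping.key", "partition")
  .set("spark.cassandra.output.throughput_mb_per_sec", "0.01")
val sc = new SparkContext(conf)

// One tuple per row, in the same order as the table's columns.
val rows = sc.parallelize(Seq(
  (1L, "A", 1L, "{}", new java.util.Date(), "loader", new java.util.Date())))

rows.saveToCassandra("my_keyspace", "my_table",
  SomeColumns("id", "type_code", "ver_nb", "detail_json", "cre_ts", "cre_user", "last_upd"))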

I wish I could write a lot faster, but when I increase spark.cassandra.output.throughput_mb_per_sec I get the mutation error again. If I had not increased commit_log_segment_size, the speed would have to be even lower. A picture of the system can be seen below: there is an initial bump to 55K transactions per minute at the start, followed by a steady 25K/min.

[Image attachment: 561-writespeed.png (22.3 KiB) — write speed over time]

Erick Ramirez answered:

@ortizfabio_185816 It's something you should definitely be concerned about. Those messages mean that the very large writes from your application failed. It doesn't matter if your application retries the write; it will always fail because the mutation is too large.

I've noted that the maximum mutation size reported is 64MB, where normally it is just 16MB (the default). This implies that you've bumped up the commitlog segment size to 128MB, compared to the default of 32MB, since the maximum mutation size is half the segment size. You need to understand why the application mutations are really this large, since it might be symptomatic of a problem with your access patterns or a bad data model. Increasing the commitlog segment size shouldn't be your first response when you get warnings about large mutations. Cheers!
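
For reference, both values come from cassandra.yaml; with the defaults it looks like this (the mutation limit, when not set explicitly, is derived as half of the segment size):

commitlog_segment_size_in_mb: 32
# max_mutation_size_in_kb: 16384   # optional override; defaults to half the segment size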

2 comments:

Cedrick Lunven commented:

That really is a massive mutation. The usual recommendations are to keep partitions under 100MB, avoid more than 100K rows per partition, and avoid records over 5MB. Depending on the data model, you may consider splitting records. Maybe a DESCRIBE TABLE output could help here, so we can see whether we can help further.



Agreed. I would really want to know more about the mutation being attempted here and about the data model, as @Cedrick Lunven requested. This is definitely something that needs to be addressed.

ortizfabio_185816 answered:

Basically, the problem here is that in Spark you define n partitions, and each partition has its own writer to Cassandra. Say the table has a replication factor of 3: for each row sent to a node, there will be two more replica writes. Say there are 100 Spark partitions and I set the speed of each one at 1MB/sec. At a certain moment, the rows being sent from all 100 partitions may all correspond to the same node; that node can then be overwhelmed with writes, and the mutation error occurs. There is no way to control how fast the cluster as a whole is written to from a Spark process; the only control is how fast each partition writes to the table.

However, having said that, I figured out that my table's partition key is not distributed evenly across all the servers, so one server is being hit almost 10 times more than the others. I will post another question about setting a custom partitioning strategy for that table. A rough way to spot such skew from Spark is sketched below.
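
This is a minimal Scala sketch, with stand-in data, for counting rows per partition key before writing; the heaviest keys show up first:

import org.apache.spark.{SparkConf, SparkContext}

// Stand-in (id, payload) pairs; replace with the real rows.
val sc = new SparkContext(new SparkConf().setAppName("key-skew-check").setMaster("local[*]"))
val rows = sc.parallelize(Seq((1L, "a"), (1L, "b"), (1L, "c"), (2L, "d")))

// Count rows per partition key and order the heaviest keys first.
val countsPerKey = rows
  .map { case (id, _) => (id, 1L) }
  .reduceByKey(_ + _)
  .sortBy(_._2, ascending = false)

countsPerKey.take(10).foreach { case (id, n) => println(s"id=$id rows=$n") }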

1 comment:

Thank you for the update; it really makes sense. Usually an incorrect data model is the key to low performance, exactly as in this example. Could you accept an answer, or should I close the question as resolved?
