Bringing together the Apache Cassandra experts from the community and DataStax.


Ckn asked Erick Ramirez answered

Using blob as primary key returns "InvalidQueryException: Key length of XXX is longer than maximum of 65535"

CREATE TABLE t (
    key1 blob,
    key2 blob,
    PRIMARY KEY (key1, key2)
);

Using blob as the partition key and clustering column throws an error on insert:

InvalidQueryException: Key length of XXX is longer than maximum of 65535

Is there any way to alter the maximum key length, or a better way to work around this?

cassandra

smadhavan answered Erick Ramirez edited

@Ckn, could you elaborate on what exactly you were trying to do when you got this error? Also, which version of Cassandra are you using (DataStax Enterprise / Astra DB / open-source Apache Cassandra)?

In any case, I was able to work with similar types without any issues in this purposely simple demo.

Scenario 1: (blob column just as the primary key itself)

Table Schema:

CREATE TABLE ks.bloby (
    a blob PRIMARY KEY,
    b int
);

DML looks like below:

INSERT INTO ks.bloby (a, b) VALUES (intAsBlob(1), 1);

Output looks like below:

SELECT * FROM ks.bloby;

 a          | b
------------+---
 0x00000001 | 1

Scenario 2: (Both partition and clustering columns are of type blob)

Table Schema:

CREATE TABLE ks.bloby1 (
    a blob,
    b blob,
    PRIMARY KEY (a, b)
) WITH CLUSTERING ORDER BY (b ASC);

DML goes like this:

INSERT INTO ks.bloby1 (a, b) VALUES (intAsBlob(1), intAsBlob(1));

Output looks like below:

SELECT * FROM ks.bloby1;

 a          | b
------------+------------
 0x00000001 | 0x00000001

Erick Ramirez answered

The issue doesn't have anything to do with the use of the CQL blob type. The problem is that the resulting length of the key from the blob is too long.

In the Cassandra storage engine, the length of the partition key is stored in the partition header, and that length is encoded as an unsigned short integer, which has a maximum value of 2^16 − 1, or 65535.

Since the partition key length can only have a maximum value of 65535, it is not possible to use extremely long partition keys: they cannot be written to or read from SSTables.

It isn't possible to change the maximum key length since it is hardcoded in the storage engine. We recommend using natural keys for partition keys to minimise the chance of the problem occurring. Cheers!
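To make the limit concrete, here is a minimal Python sketch of the idea described above. It is not Cassandra's actual serialisation code; the function name and error message are illustrative. The point is simply that a length field packed as an unsigned 16-bit integer cannot represent a key longer than 65535 bytes.

```python
import struct

# 65535: the maximum value representable by an unsigned 16-bit integer,
# which is how the partition key length is encoded in the partition header.
MAX_KEY_LENGTH = 0xFFFF

def encode_partition_key_length(key: bytes) -> bytes:
    """Illustrative sketch (not Cassandra source): serialise a key as a
    2-byte big-endian length prefix followed by the key bytes."""
    if len(key) > MAX_KEY_LENGTH:
        raise ValueError(
            f"Key length of {len(key)} is longer than maximum of {MAX_KEY_LENGTH}"
        )
    # ">H" = big-endian unsigned short: this 2-byte field is the hard limit.
    return struct.pack(">H", len(key)) + key

encode_partition_key_length(b"x" * 65535)   # fits: exactly at the limit
# encode_partition_key_length(b"x" * 65536) # would raise ValueError
```

A 65536-byte key has no valid encoding in the 2-byte length field, which is why the limit cannot be raised by configuration.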
