Hi! When I run
spark.sql("""CREATE TABLE catalog.keyspace.table1 (userid String,something1 String, something2 String) USING cassandra PARTITIONED BY (userid, something1)""")
I get a table whose partition key is the composite (userid, something1). Is there a way to specify that something1 should not be part of the partition key, but a clustering column instead?
In CQL terms, instead of:
CREATE TABLE keyspace.table1 ( userid text, something1 text, something2 text, PRIMARY KEY (userid, something1) )
I am getting:
CREATE TABLE keyspace.table1 ( userid text, something1 text, something2 text, PRIMARY KEY ( (userid, something1) ) )
The CLUSTERED BY clause requires an INTO num BUCKETS part, which I'm not sure even relates to Cassandra.
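For what it's worth, I've been experimenting along these lines: a sketch assuming the Spark Cassandra Connector accepts a clustering_key entry in TBLPROPERTIES (I haven't been able to confirm whether this is the supported way):

```sql
-- Keep only userid in PARTITIONED BY, and try to declare something1
-- as a clustering column via a table property (assumed, unverified):
CREATE TABLE catalog.keyspace.table1 (
  userid text,
  something1 text,
  something2 text
) USING cassandra
PARTITIONED BY (userid)
TBLPROPERTIES (clustering_key = 'something1.asc')
```

If something like this works, the intended CQL result would be PRIMARY KEY (userid, something1) with something1 as a clustering column, but I'd appreciate confirmation of the correct property name and syntax.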
Thank you!