How do I aggregate a Cassandra (3.11) table that has too many partitions?
I need a table to store events from devices. To keep partition sizes below 100 MB, I used a combination of device_id and hour as the partition key, so a partition holds at most 3600 events (one event per second). But if I need to aggregate all the events that arrived in the last hour, I have to hit many partitions, depending on how many devices were active in that hour.
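For reference, my table looks roughly like this (event_time and value are placeholder columns; the important part is the composite partition key):

```sql
-- Partition key = (device_id, hour) to cap partition size at ~3600 rows;
-- event_time is the clustering column, one row per event.
CREATE TABLE events (
    device_id  text,
    hour       text,          -- hour bucket, e.g. '2021-06-01-13'
    event_time timestamp,
    value      double,        -- placeholder payload column
    PRIMARY KEY ((device_id, hour), event_time)
);
```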
Is Cassandra a good option for a case where the data cannot fit in one partition, i.e. where I distribute the data across many partitions and later have to read all of those partitions back to answer a single query?
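To make the read pattern concrete, here is a minimal plain-Python sketch of what I mean. All names are hypothetical, and fetch_partition stands in for a per-partition CQL SELECT: one read per (device_id, hour) partition, merged client-side.

```python
def partitions_for_hour(active_device_ids, hour_bucket):
    # One partition per (device_id, hour) pair, matching the
    # composite partition key described above.
    return [(device_id, hour_bucket) for device_id in active_device_ids]

def aggregate_hour(fetch_partition, active_device_ids, hour_bucket):
    # fetch_partition(key) stands in for a per-partition CQL SELECT;
    # a real client would issue these queries (ideally asynchronously)
    # and merge the partial results, as done here.
    count, total = 0, 0.0
    for key in partitions_for_hour(active_device_ids, hour_bucket):
        for value in fetch_partition(key):
            count += 1
            total += value
    return {"count": count, "sum": total}

# With N active devices this is N partition reads for one hourly rollup.
```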
Can I design the table some other way? How can I improve the design?
If I use Spark to read multiple partitions, what will the performance be like?
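What I have in mind is something like the following sketch, using the DataStax spark-cassandra-connector. Keyspace, table, and column names are placeholders, and this needs a running Cassandra cluster plus the connector package, so treat it as illustrative only, not a working job:

```python
# Sketch only: assumes the spark-cassandra-connector is on the classpath
# and a Cassandra node is reachable. Names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("hourly-rollup")
         .config("spark.cassandra.connection.host", "127.0.0.1")
         .getOrCreate())

events = (spark.read
          .format("org.apache.spark.sql.cassandra")
          .options(keyspace="my_ks", table="events")
          .load())

# With (device_id, hour) as a composite partition key, a filter on
# hour alone probably cannot be pushed down to Cassandra, so Spark
# may end up scanning all partitions and filtering on its side.
last_hour = events.filter(F.col("hour") == "2021-06-01-13")

rollup = last_hour.groupBy("hour").agg(
    F.count("*").alias("event_count"),
    F.sum("value").alias("value_sum"))

rollup.show()
```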