We are using Cassandra (DataStax Community Edition) 3.9.0 in our demo environment. It is currently running with 3 nodes in GCP (4 vCPUs, 7.5 GB memory, and a 500 GB data disk per node, on Ubuntu Linux). We have seen the following exception in system.log a couple of times, and it appears to happen during the compaction process.
This error typically occurs when the process has an insufficient max-open-files limit. I have checked that the Cassandra process has the recommended setting in its /proc/<pid>/limits file:
Max open files 100000 100000 files
Does this exception mean that we need to increase this limit beyond 100000?
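For reference, this is roughly how we compared the live file-descriptor count against the limit (a minimal sketch, assuming the standard /proc layout, that the string "CassandraDaemon" appears in the process command line, and that it is run as the cassandra user or root; the script name and helpers are ours, not part of Cassandra):

    #!/usr/bin/env python3
    """Compare a process's open file-descriptor count against its soft limit."""
    import os
    import sys

    def find_pid(pattern):
        """Return the first PID whose /proc/<pid>/cmdline contains the pattern."""
        for entry in os.listdir("/proc"):
            if not entry.isdigit():
                continue
            try:
                with open(f"/proc/{entry}/cmdline", "rb") as f:
                    cmdline = f.read().decode(errors="replace")
            except OSError:
                continue
            if pattern in cmdline:
                return int(entry)
        return None

    def open_fd_count(pid):
        """Count entries under /proc/<pid>/fd (needs same-user or root access)."""
        return len(os.listdir(f"/proc/{pid}/fd"))

    def max_open_files(pid):
        """Parse the 'Max open files' soft limit from /proc/<pid>/limits."""
        with open(f"/proc/{pid}/limits") as f:
            for line in f:
                if line.startswith("Max open files"):
                    # Line format: Max open files   <soft>   <hard>   files
                    return int(line.split()[3])
        return None

    if __name__ == "__main__":
        pid = find_pid("CassandraDaemon")
        if pid is None:
            sys.exit("Cassandra process not found")
        used = open_fd_count(pid)
        limit = max_open_files(pid)
        if limit:
            print(f"pid={pid} open_fds={used} soft_limit={limit} usage={used / limit:.1%}")
        else:
            print(f"pid={pid} open_fds={used} (could not parse soft limit)")

We read /proc directly rather than counting lsof output, since lsof also lists memory-mapped files and can overstate how much of the RLIMIT_NOFILE budget is actually in use.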
WARN  [HintsWriteExecutor:1] 2020-04-22 15:28:02,233 CLibrary.java:280 - open(/var/lib/cassandra/hints, O_RDONLY) failed, errno (24).
WARN  [HintsWriteExecutor:1] 2020-04-22 15:28:22,248 CLibrary.java:280 - open(/var/lib/cassandra/hints, O_RDONLY) failed, errno (24).
WARN  [HintsWriteExecutor:1] 2020-04-22 15:28:42,251 CLibrary.java:280 - open(/var/lib/cassandra/hints, O_RDONLY) failed, errno (24).
ERROR [CompactionExecutor:23417] 2020-04-22 15:28:53,143 CassandraDaemon.java:226 - Exception in thread Thread[CompactionExecutor:23417,1,main]
java.lang.RuntimeException: java.nio.file.FileSystemException: /var/lib/cassandra/data/us_where/tagpositionhistory-da427a405fb111e882a03fc4ddb7bb64/mc-143294-big-Data.db: Too many open files
    at org.apache.cassandra.io.util.ChannelProxy.openChannel(ChannelProxy.java:55) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.io.util.ChannelProxy.<init>(ChannelProxy.java:66) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.io.util.ChannelProxy.<init>(ChannelProxy.java:61) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.io.util.SegmentedFile$Builder.getChannel(SegmentedFile.java:307) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:178) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.io.util.SegmentedFile$Builder.buildData(SegmentedFile.java:192) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.io.sstable.format.big.BigTableWriter.openEarly(BigTableWriter.java:281) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.io.sstable.SSTableRewriter.maybeReopenEarly(SSTableRewriter.java:182) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:134) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:65) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:141) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:189) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:82) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:264) ~[apache-cassandra-3.9.0.jar:3.9.0]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_72]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_72]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_72]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_72]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_72]
Caused by: java.nio.file.FileSystemException: /var/lib/cassandra/data/us_where/tagpositionhistory-da427a405fb111e882a03fc4ddb7bb64/mc-143294-big-Data.db: Too many open files
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91) ~[na:1.8.0_72]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[na:1.8.0_72]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[na:1.8.0_72]
    at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177) ~[na:1.8.0_72]
    at java.nio.channels.FileChannel.open(FileChannel.java:287) ~[na:1.8.0_72]
    at java.nio.channels.FileChannel.open(FileChannel.java:335) ~[na:1.8.0_72]
    at org.apache.cassandra.io.util.ChannelProxy.openChannel(ChannelProxy.java:51) ~[apache-cassandra-3.9.0.jar:3.9.0]
    ... 20 common frames omitted
ERROR [Reference-Reaper:1] 2020-04-22 15:29:04,918 Ref.java:224 - LEAK DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State@64cb542e) to class org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$Tidy@1779214023:[Memory@[0..3e4), Memory@[0..26e8)] was not released before the reference was garbage collected
ERROR [Reference-Reaper:1] 2020-04-22 15:29:04,920 Ref.java:224 - LEAK DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State@6e333f5e) to class org.apache.cassandra.io.util.SegmentedFile$Cleanup@724675836:/var/lib/cassandra/data/us_where/tagpositionhistory-da427a405fb111e882a03fc4ddb7bb64/mc-143294-big-Index.db was not released before the reference was garbage collected