anson asked · Erick Ramirez answered

What can I do about an ArithmeticException in DSBulk: Cannot convert from BigDecimal to Float?

Hi all,

I am trying to migrate data from sqlite3 to Cassandra. The flow is that I export the sqlite3 data to a CSV and then upload that CSV to Cassandra with the DSBulk loader. I have made sure the sqlite3 schema is compatible with Cassandra.

The scenario is that the sqlite3 table has a column with the FLOAT datatype, and I convert the sqlite3 data, including that FLOAT column, to CSV. The resulting CSV has one value in that column of 1599.6000000000001. When I load this CSV with DSBulk, I get this exception:

java.lang.ArithmeticException: Cannot convert 1599.6000000000001 from BigDecimal to Float
    at com.datastax.oss.dsbulk.codecs.api.util.CodecUtils.conversionFailed(
    at com.datastax.oss.dsbulk.codecs.api.util.CodecUtils.toFloatValueExact(
    at com.datastax.oss.dsbulk.codecs.api.util.CodecUtils.convertNumber(
    at com.datastax.oss.dsbulk.codecs.api.util.CodecUtils.narrowNumber(
    at com.datastax.oss.dsbulk.codecs.text.string.StringToNumberCodec.narrowNumber(
    at com.datastax.oss.dsbulk.codecs.text.string.StringToFloatCodec.externalToInternal(
    at com.datastax.oss.dsbulk.codecs.text.string.StringToFloatCodec.externalToInternal(
    at com.datastax.oss.dsbulk.codecs.api.ConvertingCodec.encode(
    at com.datastax.oss.dsbulk.workflow.commons.schema.DefaultRecordMapper.bindColumn(

The sqlite3 FLOAT column is mapped to FLOAT in the Cassandra schema as well.

Why is this happening? Will Cassandra automatically convert the values accordingly? How can I avoid this without changing the datatype?

Any help would be appreciated.



1 Answer

Erick Ramirez answered

The issue occurs because the value 1599.6000000000001 doesn't fit into a float without losing precision. A Java float is only 32 bits wide and has less precision than a double, so DSBulk's exact conversion check rejects the value.
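To make the precision loss concrete, here is a minimal sketch of the narrowing conversion, using BigDecimal the same way DSBulk's codec does for its exactness check (the class name is illustrative, not part of DSBulk):

```java
import java.math.BigDecimal;

public class FloatNarrowingDemo {
    public static void main(String[] args) {
        // The CSV value, parsed as an arbitrary-precision decimal:
        BigDecimal value = new BigDecimal("1599.6000000000001");

        // Narrowing to float rounds to the nearest 32-bit value:
        float narrowed = value.floatValue();
        System.out.println(narrowed);                          // 1599.6

        // The float's exact value reveals the lost digits:
        BigDecimal roundTripped = new BigDecimal((double) narrowed);
        System.out.println(roundTripped);                      // 1599.5999755859375

        // The round trip is not exact, which is why DSBulk's
        // toFloatValueExact throws ArithmeticException:
        System.out.println(value.compareTo(roundTripped) != 0); // true
    }
}
```

Because the round-tripped value differs from the original, DSBulk refuses the lossy conversion by default rather than silently truncating.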

You can specify an overflow strategy in DSBulk with the --codec.overflowStrategy flag to truncate the data to fit into the target CQL type:

$ dsbulk [options] --codec.overflowStrategy TRUNCATE

Note that you will lose precision when you use this strategy.
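For example, a full load command with the flag might look like this (the keyspace, table, and CSV path are placeholders for your own values):

```shell
dsbulk load -url /path/to/data.csv -k my_keyspace -t my_table \
  --codec.overflowStrategy TRUNCATE
```

With TRUNCATE, 1599.6000000000001 would be stored as the nearest representable float rather than rejected.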

For details, see the DSBulk Codec options. Cheers!
