Bringing together the Apache Cassandra experts from the community and DataStax.

igor.rmarinho_185445 asked · Erick Ramirez commented

Loading data with COPY FROM command returning "unhashable type: 'bytearray'"

Hello again,

After fixing almost all my issues with COPY, I have one last one that I couldn't fix.

[cqlsh 5.0.1 | Cassandra | DSE 5.1.17 | CQL spec 3.4.4 | Native protocol v4]

CREATE TABLE document_template_by_company_id (
    company_id timeuuid,
    id text,
    default_language_tag text,
    i18n map<text, blob>,
    name text,
    status text,
    PRIMARY KEY (company_id, id)
);

cqlsh "with header=true AND CHUNKSIZE=100 AND NULL='null'" --connect-timeout 30
           Reading options from the command line: {'header': 'true', 'null': 'null', 'chunksize': '100'}
           Reading options from the command line: {'header': 'true', 'null': 'null', 'chunksize': '100'}
           Using 3 child processes
 unhashable type: 'bytearray',  given up without retries
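
For reference, the full COPY FROM statement with those options has this general shape (the CSV path below is hypothetical, standing in for the actual input file):

```
COPY document_template_by_company_id (company_id, id, default_language_tag, i18n, name, status)
FROM '/path/to/document_template.csv'
WITH HEADER=true AND CHUNKSIZE=100 AND NULL='null';
```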

Any thoughts?


I'm working through your questions and I'll be posting an answer as soon as I get through a few things. Cheers!


Thanks a lot Erick, I already tried everything I could to fix it, without success.


@igor.rmarinho_185445 It's a bit difficult to work out what the problem is but it's most likely a result of the format of the input data not matching the data type of the corresponding CQL columns.

To help us diagnose the problem, please provide an example row from each CSV input that triggers the errors so we can replicate it ourselves. Cheers!


Hi Erick,

I created the second DB using the export of schema.cql from the source, so they are identical.

shift_package_by_id_v2 row

company_id bid_id id assignments created_via_integration display_name external_id original_ids quantity shifts status
29f5c2c0-f88d-11e9-8080-808080808080 ad066113-23c4-4ac4-b498-b68bfa057d88 052f8129-e2f7-4714-8096-913a931e87fb [] FALSE 1
['e1a3d0f5-83a7-44c0-bebc-31764aba6752'] 1 [{day_of_week: 1, start_time_local: 08:30:00.000000000, duration: 'PT4H'}, {day_of_week: 2, start_time_local: 08:30:00.000000000, duration: 'PT4H'}, {day_of_week: 3, start_time_local: 08:30:00.000000000, duration: 'PT4H'}, {day_of_week: 4, start_time_local: 08:30:00.000000000, duration: 'PT4H'}, {day_of_week: 5, start_time_local: 08:30:00.000000000, duration: 'PT4H'}] ACTIVE


@igor.rmarinho_185445 I've logged another question on your behalf to deal with the problem with the second table separately. This is so we're dealing with just one problem in this ticket and things don't get confusing. Cheers!


1 Answer

Erick Ramirez answered · Erick Ramirez commented

@igor.rmarinho_185445 The problem is possibly an issue with Python but I haven't been successful in tracking down the cause so I don't have a solution. I can replicate it with a simpler table definition like this:

CREATE TABLE community.blobmaptable (
    id text PRIMARY KEY,
    blobmapcol map<int, blob>
);

With this CSV file:

$ cat blobmap.csv 
c3,{3: 0x74776f}

I get the same error you're reporting when I try to load it:

cqlsh:community> COPY blobmaptable (id, blobmapcol) FROM '~/blobmap.csv' ;
Using 1 child processes
Starting copy of community.blobmaptable with columns [id, blobmapcol].
Failed to import 1 rows: ParseError - Failed to parse {3: 0x74776f} : unhashable type: 'bytearray',  given up without retries
Failed to process 1 rows; failed rows written to import_community_blobmaptable.err
Processed: 1 rows; Rate:       2 rows/s; Avg. rate:       3 rows/s
1 rows imported from 1 files in 0.389 seconds (0 skipped).
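
For background (my own illustration, not part of cqlsh), the message is a plain Python error: the mutable bytearray type has no hash, so it can't be used anywhere a hashable value is required, while the immutable bytes type can. cqlsh's COPY parser evidently ends up with a bytearray for the blob at some point:

```python
# Demonstrates the Python-level error cqlsh reports: bytearray is
# mutable and therefore unhashable, while bytes is hashable.
blob = bytes.fromhex("74776f")   # the 0x74776f blob from the CSV (b'two')

print({blob: 1})                 # bytes as a dict key works: {b'two': 1}

try:
    {bytearray(blob): 1}         # same content, but a mutable container
except TypeError as e:
    print(e)                     # unhashable type: 'bytearray'
```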

I've logged ticket PYTHON-1234 with Engineering to determine if it's a bug in the underlying Python driver that cqlsh uses. Cheers!

2 comments

Thanks a lot Erick! I'll follow up on the ticket.


Noting here that I've also had to log CASSANDRA-15679 to have cqlsh investigated in parallel. Cheers!
