Hi -
When a failure occurs during an export to a DB, is it possible to return additional error messaging, or perhaps automagically iterate in smaller chunks to identify the truly bad data, since rows are written in batches?
As an example:
I'm going to attempt to export to a MySQL DB at 10,000 rows per batch.
The data is largely a mess, but I wanted to ensure I capture the "raw" data and make it queryable if needed at a later time.
I expect I will get at least 1 failure:
Of the 42,546 rows I attempted to export, 22,546 failed ("#Failed"). I assume this is just down to where the bad data landed: a bad row fails the whole batch it sits in (22,546 is exactly two full 10,000-row batches plus the trailing 2,546-row batch).
To minimize this I could set the batch size smaller and run more batches: a slower export, but possibly a better result (if more good data exists than bad, only the batches that actually contain bad rows are lost).
Sure enough, repeating the exercise with the batch size reduced from 10,000 to 1,000, I went from 22,546 failed rows down to 4,546 (again, four full 1,000-row batches plus a trailing 546).
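For illustration only, here's a rough Python sketch of what I mean by automagically iterating in smaller chunks: try the whole batch, and on failure split it in half until the truly bad rows are isolated. The `raw_capture` table, connection details, and use of mysql-connector-python are all my own assumptions, not anything the tool does today.

```python
import mysql.connector

# Hypothetical staging table; Column1 is the column named in the 1406 error.
INSERT_SQL = "INSERT INTO raw_capture (Column1) VALUES (%s)"

def insert_bisect(cursor, rows):
    """Insert what it can; return (row, error_message) for rows that cannot go in."""
    if not rows:
        return []
    try:
        cursor.executemany(INSERT_SQL, rows)
        return []  # the whole chunk went in cleanly
    except mysql.connector.Error as err:
        if len(rows) == 1:
            return [(rows[0], err.msg)]  # isolated one truly bad row
        mid = len(rows) // 2  # split the chunk and retry each half
        return insert_bisect(cursor, rows[:mid]) + insert_bisect(cursor, rows[mid:])

conn = mysql.connector.connect(host="localhost", user="me",
                               password="secret", database="staging")
cursor = conn.cursor()
bad = insert_bisect(cursor, [("fits fine",), ("x" * 100000,)])
conn.commit()
for row, msg in bad:
    print("#Failed:", msg)  # e.g. 1406: Data too long for column 'Column1'
```

Every good row still lands in the table; only the genuinely bad rows come back, each with the DB's own message.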
I know from past experience that if the DB is accepting some of the rows, then it's likely down to the format of some of the data and the DB's constraints.
If we profile the data, I can see a very large value in at least one of the records, and we know from looking at the DB schema that it won't be accepted.
For example's sake:
SQL Error [1406] [22001]: Data truncation: Data too long for column 'Column1' at row 1
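Since those limits live in the DB itself, the export could in principle profile the data against them up front. A sketch, assuming the same hypothetical table and reading MySQL's standard information_schema:

```python
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="me",
                               password="secret", database="staging")
cursor = conn.cursor()

# Pull the declared max lengths straight out of MySQL's information_schema.
cursor.execute(
    """SELECT COLUMN_NAME, CHARACTER_MAXIMUM_LENGTH
       FROM information_schema.COLUMNS
       WHERE TABLE_SCHEMA = %s AND TABLE_NAME = %s""",
    ("staging", "raw_capture"),
)
limits = {col: n for col, n in cursor.fetchall() if n is not None}

def oversized(record):
    """Name the columns whose values exceed the declared length."""
    return [col for col, val in record.items()
            if col in limits and val is not None and len(str(val)) > limits[col]]

print(oversized({"Column1": "x" * 100000}))  # flags Column1 before the export runs
```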
That was a long walkthrough, but it would be very nice if we had a per-row status ("OK" vs "Failure") and, when a row fails, an extended column carrying the message the DB returned.
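Roughly what I have in mind, again only a sketch against the same hypothetical names: attempt each batch, and when a batch fails, replay it row by row so every row gets a verdict and the failed ones carry the returned message.

```python
import mysql.connector

INSERT_SQL = "INSERT INTO raw_capture (Column1) VALUES (%s)"  # hypothetical names

def export_with_status(cursor, rows, batch_size=1000):
    """Yield (row, status, message) for every input row."""
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        try:
            cursor.executemany(INSERT_SQL, batch)
            for row in batch:
                yield row, "OK", ""
        except mysql.connector.Error:
            # The batch failed as a unit: replay it row by row so each row
            # gets its own status and, on failure, the DB's own message.
            for row in batch:
                try:
                    cursor.execute(INSERT_SQL, row)
                    yield row, "OK", ""
                except mysql.connector.Error as err:
                    yield row, "Failure", err.msg

conn = mysql.connector.connect(host="localhost", user="me",
                               password="secret", database="staging")
results = list(export_with_status(conn.cursor(), [("fits fine",), ("x" * 100000,)]))
conn.commit()
```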
Not sure if that sort of thing is possible.