Unexpected EOF or 0 bytes from the transport stream

I am facing this error when trying to query a table from Snowflake.
Table rows: 10,722,841
Total physical memory: 31.73 GB

Is there any way to resolve this?

Hello @SarthakMistry,

What type of Snowflake connector are you using? Native or ODBC?

Is this error a permanent or an intermittent one?

Can you please reproduce the error and then send the debug log to our support email? You can find the debug log here: “About” menu => “Diagnostic information” => “Debug log” tab.

Hey @andrew.rybka,

I am using the native connector. The error is permanent in the sense that I am not able to fetch this data. I got another error today:

Error: Error: SQL execution canceled SqlState: 57014, VendorCode: 604, QueryId:
Source: action “Import from database”, table “ODS_PENNANT.ODS_FIN_PFT_DETAILS_SNAPSHOT 1”
This is what I got in the debug log.

Also, it takes about 35-40 minutes before I get this error. Heavy tables take a very long time to load. Is there a solution to this, such as a setting in the connector or anything I can change from the warehouse’s end? Any solution would be very helpful.

The second error is most likely caused by the query timeout. It’s possible that the first error has the same cause. You can increase the query timeout in the connector configuration dialog at the Advanced tab.
You should increase the “SQL SELECT timeout” option.

@andrew.rybka I set the timeout to 6000. I did not encounter the error, but the workflow has been running for almost an hour now. I understand that since the data is huge, it takes up memory. How are workflows with huge data typically developed in EasyMorph? To schedule it on a server with more RAM, a workflow has to be tested in Desktop first, right?

If the source database table is too big, the right approach is to process it chunk by chunk using loops (iterations) in EasyMorph. For instance, if the source table contains data for the whole year, make a loop and import one month at a time. The algorithm is usually as follows:

  1. Create a module to process one chunk of data (e.g. one month). The module should have a parameter that defines the chunk (e.g. YearMonth) and import the subset of data from the database that corresponds to that YearMonth.
  2. In another module, make a SELECT to import a distinct list of all chunks from the database.
  3. To that list, add the “Iterate” action to create a loop that goes through all the months and, for each month, runs the module created in step 1 above.
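The steps above can be sketched in code. This is only an illustration of the chunking pattern, using Python's built-in sqlite3 as a stand-in for a Snowflake connection; the `sales` table, `yearmonth` column, and `import_in_chunks` function are all hypothetical names, and in EasyMorph itself you would build this with actions and the "Iterate" loop rather than Python.

```python
# Sketch of the chunk-by-chunk import pattern.
# sqlite3 stands in for the real database; names are hypothetical.
import sqlite3

def import_in_chunks(conn, table, month_col):
    """Import a large table one month at a time instead of in one query."""
    cur = conn.cursor()
    # Step 2: get the distinct list of chunks (months) from the database
    cur.execute(f"SELECT DISTINCT {month_col} FROM {table} ORDER BY 1")
    months = [row[0] for row in cur.fetchall()]

    all_rows = []
    # Step 3: loop over the chunks; each pass plays the role of the
    # parameterized module from step 1
    for month in months:
        # Step 1: import only the subset matching the current YearMonth
        cur.execute(f"SELECT * FROM {table} WHERE {month_col} = ?", (month,))
        all_rows.extend(cur.fetchall())
    return all_rows

# Tiny demo with an in-memory table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (yearmonth TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("2024-01", 10), ("2024-01", 20), ("2024-02", 30), ("2024-03", 40)],
)
rows = import_in_chunks(conn, "sales", "yearmonth")
```

Each per-month query is small enough to finish well within the timeout, which is the point of the pattern.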

Tutorial topics: