Processing tons of data on my antiquated PC

I have to process a lot of data on my old 8 GB computer.
The data are split across 95 text files.
Each file contains three record types: a first row, a last row, and the rows in between.
The total size of the 95 files is more than 6 GB.
The total number of rows is more than 21 million.
The processing aims to produce a global table containing the rows from each file that meet a specific condition (the condition is that the date in the records below the first row differs from the date in the first record).
I did a lot of testing to get the best out of my old computer.
My best result is the project BASTRA_EXP01F.morph, which takes about 9 minutes to create a result table of more than 10.8 million rows.
Being an EasyMorph newbie, I would welcome any suggestions for improving it.
In addition, I am submitting another version of my test (BASTRA_EXP01E.morph) with a different implementation of Module1, which I expected to perform better than the one in BASTRA_EXP01F.morph. Practice disproved that assumption: on my old PC, this second workflow never finishes. Could you please explain what is wrong with this implementation, or why it requires more resources than the other version?
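
To make the condition concrete, here is the same filtering logic restated as a plain Python sketch rather than as EasyMorph actions (the folder, file pattern, delimiter, and date position are only assumptions for illustration):

```python
import csv
from pathlib import Path

# Hypothetical layout: each file starts with a header row carrying a reference
# date, followed by data rows and a trailer row; the date is assumed to be the
# first field of every row. Adjust paths and positions to the real file format.
SOURCE_DIR = Path("data")          # assumed folder holding the 95 files
OUTPUT_FILE = Path("result.csv")   # global table of matching rows

def matching_rows(path: Path):
    """Yield the rows whose date differs from the date in the first row."""
    with path.open(newline="", encoding="utf-8") as f:
        reader = csv.reader(f, delimiter="\t")   # assumed tab-delimited
        first_row = next(reader, None)
        if not first_row:
            return
        reference_date = first_row[0]
        for row in reader:
            if row and row[0] != reference_date:
                yield row

with OUTPUT_FILE.open("w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out, delimiter="\t")
    for source in sorted(SOURCE_DIR.glob("*.txt")):
        writer.writerows(matching_rows(source))
```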

BASTRA_EXP01F.morph looks OK; I can’t see any obvious improvements to make.

BASTRA_EXP01E.morph looks very similar to BASTRA_EXP01F.morph. There are subtle differences that can affect performance, but by looking at the actions alone it’s hard to say what went wrong; you have to see how it processes the actual data. If the workflow never finished on a computer with limited RAM, it probably ran out of memory. Try running the workflow manually on the biggest file and compare the result with BASTRA_EXP01F. Chances are BASTRA_EXP01E produces a table too big to fit in RAM.
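
As a rough, purely illustrative sanity check (the row count comes from your post, but the per-row memory overhead below is just an assumed figure):

```python
# Back-of-envelope estimate: once parsed, in-memory tables typically take
# several times the raw text size per row, so materializing all rows of an
# intermediate result can exhaust 8 GB of RAM on its own.
rows = 21_000_000                 # total rows across the 95 files
bytes_per_row_in_memory = 300     # assumed in-memory overhead per row
print(f"~{rows * bytes_per_row_in_memory / 2**30:.1f} GiB")  # ~5.9 GiB
```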

Thanks so much for your answer