Thanks, this works. The only problem is that I have 1.5 million records. When I split them, I get 204 million records. After that my machine doesn't respond anymore (EasyMorph uses 27 GB of RAM at that point).
Do you have another option?
Otherwise I will split the data in 2 datasets.
Could it help to process the table (with 1.5 million records) iteratively?
The attached example passes each line of the input data to a submodule (holding Dmitry's flow), catches its results, and appends them as a new column.
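Outside of EasyMorph, the same idea can be sketched in plain Python: stream the rows one at a time, apply the per-row logic (a hypothetical stand-in for the submodule here), and write results out in batches so only one batch is ever held in memory. The field names, the `process_row` function, and the batch size are all illustrative assumptions, not part of the original flow.

```python
import csv
import io

def process_row(row):
    # Hypothetical stand-in for the submodule's per-row logic:
    # here it just splits a delimited field and counts the parts.
    return len(row["values"].split(";"))

def process_iteratively(reader, writer, batch_size=100_000):
    """Stream rows one at a time, append a result column, and flush
    in batches so memory use stays bounded by one batch."""
    batch = []
    for row in reader:
        row["result"] = process_row(row)
        batch.append(row)
        if len(batch) >= batch_size:
            writer.writerows(batch)
            batch.clear()
    if batch:  # flush the final partial batch
        writer.writerows(batch)

# Usage: stream from one CSV source to another without loading all rows.
src = io.StringIO("id,values\n1,a;b;c\n2,d;e\n")
dst = io.StringIO()
reader = csv.DictReader(src)
writer = csv.DictWriter(dst, fieldnames=["id", "values", "result"])
writer.writeheader()
process_iteratively(reader, writer, batch_size=1000)
```

The point is that peak memory depends on the batch size, not the total row count, which is why the same approach on 1.5 million input rows avoids materializing all 204 million split rows at once.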