Data pipeline automation

Hi @Team

We have a scenario. Example: in an employee database containing 42 tables, we need to pull all records for employees whose age is 49 from every table into temp tables created in TestDB. Right now we import and export each table manually. We are looking for a scheduled (automated) task that, in a single click, runs the whole process whenever new records land in those tables in the data warehouse. Below is the manual process we are following.

Step 1: We create temp tables in TestDB.

Step 2: We pull the data from the data warehouse using EasyMorph, alter the sensitive records, and send those records to TestDB.

Step 3: The same process is repeated for each of the 42 tables.

The above steps are done manually. In the future we want to schedule a single task for all 42 tables, so that any new records matching the scenario are replicated automatically.
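
For illustration, here is the whole loop expressed as a script (a sketch only: the connection strings, table names, the shared employee_id key, and the masking rule are placeholders, and the tmp_ tables from Step 1 are assumed to already exist in TestDB):

```python
import pyodbc

# Hypothetical connection strings -- replace with the real ones.
WAREHOUSE = "DSN=Datawarehouse"
TESTDB = "DSN=TestDB"

# The source tables (42 in total); each is assumed to share an employee_id key.
TABLES = ["employees", "salaries", "titles"]  # ...and so on, up to 42
SENSITIVE = {"employees": {"ssn"}, "salaries": {"amount"}}  # columns to alter

def mask_value(val):
    """Placeholder masking rule: zero out numbers, star out text."""
    if isinstance(val, (int, float)):
        return 0
    return "***"

def mask_row(table, columns, row):
    hidden = SENSITIVE.get(table, set())
    return [mask_value(v) if c in hidden else v for c, v in zip(columns, row)]

def replicate(table, src, dst):
    # Step 2: pull only the rows matching the scenario (employees aged 49).
    cur = src.cursor()
    cur.execute(
        f"SELECT t.* FROM {table} t "
        f"JOIN employees e ON e.employee_id = t.employee_id "
        f"WHERE e.age = 49"
    )
    columns = [d[0] for d in cur.description]
    insert = (
        f"INSERT INTO tmp_{table} ({', '.join(columns)}) "
        f"VALUES ({', '.join('?' for _ in columns)})"
    )
    out = dst.cursor()
    for row in cur.fetchall():
        out.execute(insert, mask_row(table, columns, row))
    dst.commit()

# Step 3: run the same logic for every table; scheduling this script
# (Windows Task Scheduler, cron, etc.) gives the single-click automation.
src = pyodbc.connect(WAREHOUSE)
dst = pyodbc.connect(TESTDB)
try:
    for table in TABLES:
        replicate(table, src, dst)
finally:
    src.close()
    dst.close()
```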

Hello Maheshwar,

Maybe I didn’t understand the problem correctly, but I’ll give it a try:

  • You create a project (Main Project) where you put all the conditions to filter your employees;
  • In the Main Project you need to have, as a result, a table with the list of Employee Unique IDs;
  • Once you’ve got the Employee UIDs, use the Iteration action on the modules* that will move a single table to TestDB;

*The single module will filter the table’s data based on a parameter, the employee UID, which will be set by the Iteration action.

You’ll need to take your time to write the single module for each table (maybe you can manage to have only one module, but then you need to dynamically read the table list, read the columns for each table, filter the data, etc.); a rough sketch of that dynamic approach follows.
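
Translated into plain script terms (a sketch only, not EasyMorph itself; the DSN, the employee_id key column, and the catalog queries assume a SQL Server-style INFORMATION_SCHEMA), the dynamic single-module idea would look roughly like this:

```python
import pyodbc

# Sketch of the "single generic module" idea as a script: discover the table
# list and each table's columns from the catalog instead of hand-writing 42
# modules. The DSN, the employee_id key column, and the age-49 filter are
# assumptions carried over from the original scenario.
conn = pyodbc.connect("DSN=Datawarehouse")
cur = conn.cursor()

# Dynamically read the table list.
cur.execute(
    "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES "
    "WHERE TABLE_TYPE = 'BASE TABLE'"
)
tables = [row.TABLE_NAME for row in cur.fetchall()]

for table in tables:
    # For each table, read its columns.
    cur.execute(
        "SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS "
        "WHERE TABLE_NAME = ?",
        table,
    )
    columns = [row.COLUMN_NAME for row in cur.fetchall()]
    # Only tables carrying the employee key can be filtered per UID --
    # this is where the per-iteration parameter (the UID) would plug in.
    if "employee_id" in columns:
        print(f"{table}: would filter by employee_id and copy to TestDB")
```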

I hope this helps.