I have a dset with 500k rows that I'm using as a master table, to which I want to append new rows, and the performance seems very slow: the action takes over a minute to complete even if there are no new rows to add. Is export to dset the correct approach for adding/merging new records into a master table? Is the entire table being read and rewritten on the backend each time this action is called? Or should I be using a SQL database and an export command for better performance?
The .dset format stores compressed data in columns, so appending rows to a large dataset is slow: the whole dataset has to be decompressed and re-compressed on every append.
If performance is critical, use a database.
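For illustration, here is a minimal sketch of the incremental approach a database gives you, using SQLite and a hypothetical `master` table keyed on an `id` column (all of these names are placeholders, not anything built into the .dset workflow). Each run inserts only rows whose key isn't already present, so the cost scales with the number of new records rather than the size of the whole table.

```python
# Sketch: merge new records into a SQL master table instead of rewriting a
# .dset file each run. The database file, table, and column names below are
# hypothetical placeholders.
import sqlite3

# Example records; in practice these come from whatever feeds the master table.
new_rows = [
    (1001, "2024-05-01", 42.0),
    (1002, "2024-05-01", 17.5),
]

conn = sqlite3.connect("master.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS master (
           id    INTEGER PRIMARY KEY,  -- unique key prevents duplicate inserts
           date  TEXT,
           value REAL
       )"""
)

# INSERT OR IGNORE skips rows whose key already exists, so only genuinely new
# records are written; existing rows are left untouched rather than rewritten.
conn.executemany(
    "INSERT OR IGNORE INTO master (id, date, value) VALUES (?, ?, ?)",
    new_rows,
)
conn.commit()
conn.close()
```

A run with zero new rows then costs almost nothing, since nothing is decompressed or rewritten; the database only checks keys and inserts the missing records.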
