Import large dataset

Hi,

I have 3 semicolon-separated txt files, each with 8 columns.

The number of records varies per file: roughly 100K, 250K and 500K.

Importing these is slow, and the stability is unpredictable, which makes it hard for me to orchestrate all the processes around it.

I also had to create indexes for the queries used in other processes.

We are now thinking of building a custom microservice that compares two versions of the same file and only processes the actual deltas.
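
For illustration, the delta comparison we have in mind would look roughly like this (just a sketch in Python; the file names and the "id" key column are placeholders, not our real data):

    import csv

    # Load a semicolon-separated file into a dict keyed on the "id" column
    # (placeholder key), so two exports can be compared row by row.
    def load_rows(path, key="id"):
        with open(path, newline="", encoding="utf-8") as f:
            reader = csv.DictReader(f, delimiter=";")
            return {row[key]: row for row in reader}

    # Return the rows that were added, changed or deleted between two exports.
    def delta(old_path, new_path, key="id"):
        old_rows = load_rows(old_path, key)
        new_rows = load_rows(new_path, key)
        added   = [r for k, r in new_rows.items() if k not in old_rows]
        changed = [r for k, r in new_rows.items() if k in old_rows and r != old_rows[k]]
        deleted = [r for k, r in old_rows.items() if k not in new_rows]
        return added, changed, deleted

    added, changed, deleted = delta("export_yesterday.txt", "export_today.txt")
    print(len(added), len(changed), len(deleted))

So only the added/changed/deleted rows would be pushed into Betty instead of the full file every time.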

But I do wonder: is there another way in Betty to optimize this import? Can we split the files somehow for batch processing, as in the sketch below?
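
Something like this is what I mean by splitting (again only a sketch; the chunk size of 10,000 rows and the file naming are assumptions):

    import csv

    # Split one big semicolon-separated file into numbered chunk files,
    # each with the original header, so every chunk can be imported as
    # its own batch.
    def split_file(path, rows_per_chunk=10000):
        with open(path, newline="", encoding="utf-8") as f:
            reader = csv.reader(f, delimiter=";")
            header = next(reader)
            chunk, chunk_no = [], 0
            for row in reader:
                chunk.append(row)
                if len(chunk) >= rows_per_chunk:
                    write_chunk(path, header, chunk, chunk_no)
                    chunk, chunk_no = [], chunk_no + 1
            if chunk:
                write_chunk(path, header, chunk, chunk_no)

    def write_chunk(path, header, rows, chunk_no):
        out_path = f"{path}.part{chunk_no:03d}"
        with open(out_path, "w", newline="", encoding="utf-8") as out:
            writer = csv.writer(out, delimiter=";")
            writer.writerow(header)
            writer.writerows(rows)

    split_file("export_today.txt")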

Thanks in advance.


