Appending large datasets: Magic ETL vs Redshift
I need to append large datasets together in Domo (Google Analytics data stored in BigQuery).
According to the documentation, when you have inputs larger than 100M rows you should use Redshift to transform the data.
I compared large dataset appends in Magic ETL against Redshift, and both took a similar amount of time to complete. What is the rationale behind the recommendation to use Redshift when there doesn't seem to be a performance improvement?
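For context, the appends I'm comparing are plain row concatenations. In a Redshift SQL dataflow that is roughly a UNION ALL; the table and column names below are hypothetical stand-ins for the GA exports:

```sql
-- Hypothetical staged GA exports; an append simply stacks the rows
-- of both inputs into one output dataset (no dedup, no join).
SELECT session_date, channel, sessions, pageviews
FROM ga_sessions_2023
UNION ALL
SELECT session_date, channel, sessions, pageviews
FROM ga_sessions_2024;
```

In Magic ETL the equivalent is the Append Rows tile with the same two datasets as inputs, which is the version I benchmarked against.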