We have completed our 2nd full year on Domo and we are starting to see performance issues with our dataflows. We have been integrating data sources left and right, and our instance holds around 500M rows. Not sure if that is typical or not, but it should give you an idea of the data we are processing.
Some of our problems can be alleviated by creating true historical datasets that do not need to be re-processed every day, and we will be working on that process this year.
I am looking for recommendations/best practices for staging and reporting on historical data. Typically we are doing YoY, YTD, MTD, DTD, trailing 52 weeks, etc. I have found the Domo PoP charts to be too inflexible for our needs. My current approach is maintaining separate "today" and historical datasets. The historical dataflows are beginning to take hours to run and cards are starting to lag, which is not making people happy. Our business day ends at 6am, so if a historical comparison dataset takes 2-3 hours our users are waiting half the morning for Domo to update. The real problem is when a dataflow mysteriously takes 5-8 hours to complete. I have tilted at this windmill since year 1 with Domo engineering, and the answer has always been that the dataflows are running at acceptable times on average, regardless of spikes in processing time, and that Domo is constantly working on improving their processing and prioritizing of jobs.
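For context on what I mean by a historical dataset that doesn't get re-processed: the idea is append-only staging, where each run only adds the newly closed business day instead of rebuilding all of history. A minimal sketch in pandas (standing in for the dataflow; the `order_date` and `revenue` column names are just placeholders):

```python
# Sketch of append-only historical staging: only rows newer than what is
# already staged get appended, so run time stays flat as history grows.
# Column names (order_date, revenue) are hypothetical placeholders.
from datetime import date

import pandas as pd


def append_new_days(history: pd.DataFrame, daily: pd.DataFrame) -> pd.DataFrame:
    """Append only rows dated after the last date already in history."""
    cutoff = history["order_date"].max()
    new_rows = daily[daily["order_date"] > cutoff]
    return pd.concat([history, new_rows], ignore_index=True)


history = pd.DataFrame({
    "order_date": [date(2024, 1, 1), date(2024, 1, 2)],
    "revenue": [100.0, 120.0],
})
today_feed = pd.DataFrame({
    # Jan 2 appears again in the feed but is already staged, so it is skipped.
    "order_date": [date(2024, 1, 2), date(2024, 1, 3)],
    "revenue": [120.0, 90.0],
})

history = append_new_days(history, today_feed)
```

The same pattern maps onto a Domo recursive/append dataflow: the expensive full-history rebuild is replaced by a filter-and-append over one day of data.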
Should I create the comparison totals in beast modes? How will cards perform against tens of millions of rows? Do I need to consider aggregating the data in a data warehouse instead of in Domo?
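To make the pre-aggregation question concrete: one option is rolling the raw fact rows up to one row per day before the card ever sees them, then computing YoY with a 364-day (52 full weeks) shift so weekdays align. A rough pandas sketch of that idea, with hypothetical column names:

```python
# Sketch: pre-aggregate to one row per day, then self-join on a 364-day
# shift for a weekday-aligned YoY comparison. Cards would read the small
# summary table, not the raw rows. Column names are hypothetical.
import pandas as pd

raw = pd.DataFrame({
    "order_date": pd.to_datetime(
        ["2023-03-03", "2023-03-03", "2024-03-01", "2024-03-01"]),
    "revenue": [50.0, 50.0, 70.0, 40.0],
})

# Collapse millions of transaction rows into one row per day.
daily = raw.groupby("order_date", as_index=False)["revenue"].sum()

# Shift last year's totals forward 364 days so the same weekday lines up.
prior = daily.assign(order_date=daily["order_date"] + pd.Timedelta(days=364))

# Left-join current totals to the shifted prior-year totals.
yoy = daily.merge(prior, on="order_date", suffixes=("", "_ly"), how="left")
```

Whether this rollup lives in a Domo dataflow or an upstream warehouse, the point is the same: the comparison math happens once at aggregation time instead of in a beast mode on every card render.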
Thank you in advance for your recommendations.