Conditionally start a dataflow
I am aware of the three ways to start a dataflow: manually, on a schedule, and when input datasets are updated.
I have a use case (which I suspect others have as well) where I only want a dataflow to run conditionally.
Example: We are pulling multiple datasets in from an application that occasionally experiences data quality issues (like duplicate rows). I have no control over changes pushed to the app by our devops team, which can directly affect the data being pulled from that app into Domo. So I have built audit dataflows and alerts on these datasets that notify me immediately if any duplicates are encountered, so I am aware when this occurs.

However, there are subsequent dataflows that run as soon as these datasets are updated. By the time I see the notification, the bad data is already being processed into Domo for use by our cards. At that point I have to ask devops to correct the data (or revert to a previous known-good Domo dataset), which then reprocesses into Domo. These datasets are updated 6 times a day.
Is there any technique I can use to NOT run the subsequent dataflows if data quality issues are encountered? I would rather have stale data than bad data.
I can tell there are dups if the output dataset of the audit dataflows has any records. It would be great if the subsequent dataflow(s) could check that audit dataset first to determine whether to proceed.
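Since dataflows don't branch natively, the check described above would have to live outside the dataflow, e.g. in an external script that a scheduler runs and that triggers the downstream dataflows through the Domo API. This is only a minimal sketch of that guard logic: the names `should_run_downstream` and `process_update` are hypothetical, and in a real setup the audit row count would come from an API call rather than a plain integer.

```python
# Guard pattern: only trigger downstream dataflows when the audit
# dataset is empty (zero records = no duplicates detected).
# All function names here are hypothetical illustrations; the actual
# row count and dataflow trigger would go through the Domo API.

def should_run_downstream(audit_row_count: int) -> bool:
    """Proceed only when the audit dataset reported zero duplicate rows."""
    return audit_row_count == 0

def process_update(audit_row_count: int) -> str:
    """Return the action the external scheduler would take for one update."""
    if should_run_downstream(audit_row_count):
        return "run downstream dataflows"   # data is clean
    return "skip run, keep stale data"      # duplicates found, prefer stale data

# Example: six daily updates, one of which contained duplicate rows.
daily_audit_counts = [0, 0, 12, 0, 0, 0]
actions = [process_update(n) for n in daily_audit_counts]
```

The key design choice is that a skipped run leaves the previous (stale but clean) output in place, which matches the "stale over bad" preference above.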