As with a physical database structure, adding indexes and primary keys helps increase the efficiency of SQL execution. Is there a way in Domo to enforce a primary key or unique constraint? I ask because I have a simple SQL dataflow that combines 4 existing datasets, and it is taking quite a long time to run.
If you're using MySQL dataflows, you can add indexes on transform tables; it's definitely recommended. This article in the knowledge base talks about it.
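The pattern from that article is a transform step that contains nothing but the index statement, run before the transform that does the join. A minimal sketch (the table and column names here are just placeholders):

```sql
-- One transform step per table, indexing the column(s) you join or filter on.
-- Run this transform before the join transform so the join can use the index.
ALTER TABLE `my_transform_table` ADD INDEX (`join_key`);
```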
That works on the specific dataflow, but what if I want to put all those indexes directly on the dataset? If I have 15 datasets coming from those 4 tables, I don't want to create 4 index transforms in each flow; I'd rather apply them once to the data table itself so they're always there.
Is that possible?
ALTER TABLE wo_job_hv ADD INDEX('flctr_bu_id'),add index('work_order_number'), add index('tech_id') , add index('src_db_id')
I tried this, following the article, but after running for 45 minutes it errored with ...
The database reported a syntax error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''flctr_bu_id'),add index('work_order_number'), add index('tech_id') , add index(' at line 1
Is there something wrong with that syntax?
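As an aside, the quoting is what MySQL is objecting to: `'flctr_bu_id'` in single quotes is a string literal, not a column name. Identifiers in an index definition need backticks (or no quotes at all), so a version of that statement that should parse is:

```sql
-- Same statement with backticked identifiers instead of single quotes.
ALTER TABLE wo_job_hv
    ADD INDEX (`flctr_bu_id`),
    ADD INDEX (`work_order_number`),
    ADD INDEX (`tech_id`),
    ADD INDEX (`src_db_id`);
```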
No, you can't manage your data at that level, i.e., where it sits in permanent storage ("Vault"). You'd only want to do that if cards weren't rendering fast enough.
If dataflows are too slow, you can, however, write a procedure in each dataflow to create all indexes in one transform, then call the procedure in the next transform. That's two steps instead of four, at least.
That would look like:
CREATE PROCEDURE IndexA()
BEGIN
    ALTER TABLE expenses ADD INDEX (`Date`, `ID`);
    ALTER TABLE reimbursements ADD INDEX (`Date`, `ID`);
END
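The next transform then just invokes the procedure, something like:

```sql
-- Second transform: run the index procedure defined in the previous step.
CALL IndexA();
```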
Alternatively, you could write your dataflows in Redshift SQL instead. See my recent comment on the post here. Many heavy users of SQL dataflows prefer Redshift, especially for larger datasets.