Is there any way to do an UPSERT with data coming from Postgres?

DataSquirrel
edited February 22 in Dataflows

Our main SQL database is Postgres, and we're populating DataSets with calls similar to 

select * from view_for_domo

We're using full replacement, rather than incremental pulls.

We're interested in using incremental updates rather than full refreshes of the DataSet to speed things up. After checking the options I can find in Domo, it looks like UPSERT (INSERT ON CONFLICT in Postgres) and MERGE are not supported by the Postgres connector.

The replace method is fine, until it's too slow for the amount of data we're accumulating.

The incremental method is likely fine, for an append-only table, but we don't have many of those.

There's no way to reliably fake an UPSERT or MERGE, and we do regularly delete and revise rows that have already been imported into Domo.

Is there some feature, behavior, or beta feature that might help us out here?

Thanks a lot.

Answers

  • MarkSnodgrass Portland, Oregon 🟤

    @DataSquirrel you are going to want to look into recursive dataflows. Here are a few KB articles to start with, depending on how you would like to implement it:

    Implement with Magic ETL 2.0 (will likely run the fastest)

    Implement with SQL

    Implement with Magic ETL 1.0

    You can also find a lot of chatter about it in the Dojo by searching for recursive.

    Hope this helps!

  • GrantSmith Indiana 🔴

    Hi @DataSquirrel


    If you're using Workbench 5 there is the ability to define an UPSERT key to replace existing records based on the defined key. It's outlined in the knowledgebase here: https://knowledge.domo.com/Connect/Connecting_to_Data_Using_Workbench_5/Using_Upsert_in_Workbench_5

  • @MarkSnodgrass 

    Thanks for the suggestion and links, I'll check this out. It's been a couple of years since I did a review of Domo's features, looks like there are a lot more options now.


    @GrantSmith 

    Thanks to you too, the Workbench 5 strategy looks pretty good. In our case, we're in the process of decommissioning Workbench, but I'm glad to have this option in the back of my mind.

  • jaeW_at_Onyx Budapest / Portland, OR 🟤

    How much data volume are we talking here?

    How frequently are you updating your raw data in Domo?

    This video is a little less polished than my usual content, but here's how you can enable upsert using the CLI. It doesn't matter if you use Workbench or CLI, the considerations outlined in the video are the same.



    https://knowledge.domo.com/Administer/Other_Administrative_Tools/Command_Line_Interface_(CLI)_Tool

  • Thanks for the answer, and the questions. I was just looking at your Complete Guide to Recursive DataFlows in MagicETL piece, thanks!

    It's been 2-3 years since I last had a good look at Domo's features, and it seems that a ton of great stuff has been added. I'm going to check out the CLI toolset too. In our case, we're working on DataSets populated from Postgres running on RDS. In that environment, you don't have a file system or command line, so I think I'm limited to Workbench and connectors for automation. Since the data is in Postgres, a connector seems the better bet. (We discovered that Excel has a row limit of 1,048,576 rows, which isn't enough for many of our DataSets.)

    > How much data volume are we talking here?

    Just checked, and we've got around 30 DataSets populated with data from Postgres now. Most are in the few tens of thousands of rows, and 8 have row counts over 1M. The largest is currently over 16M rows and powers over 300 cards. We're adding facilities (more data) and DataSets (more pulls) all of the time. On average, that DataSet takes around 30 minutes to refresh, but the history screen shows times more than 3x that. In fact, looking at the larger DataSets, the load times vary by about 3.5x, even with identical queries (I'm always pulling select * from view_name_for_domo.)

    The rate of change within the DataSets is pretty variable. Some are close to append-only logs, some are replacement snapshots, and some take all manner of revisions. Even our "static" data sometimes needs adjustment. For example, a colleague found a discrepancy in some of the outputs on that 16.3M row DataSet. He tracked it down to 63 records that had an inconsistent default (0 vs. 1). So, we needed to revise those 63 rows, even though they were, in theory, historical.

    Currently, we're updating daily, but some of these DataSets would be better if they were updated at least hourly.

    What I'm trying to accomplish is something like this:

    * Do a full pull of the initial data.

    * After that, pull only the changes and apply them.

    * Support new, modified, and deleted rows.

    That's close to what Postgres provides with INSERT ON CONFLICT (its version of UPSERT), but a bit more: because of the DELETE case, it's closer to what SQL Server implements as MERGE. I don't have the MagicETL v2 beta, but took your advice and am seeing about getting access to it.
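    For illustration, INSERT ... ON CONFLICT handles the insert and update cases in a single statement, but it has no delete arm, which is the gap I'm describing. SQLite (3.24+) happens to share the syntax, so here's a runnable sketch using Python's stdlib; the table and data are made up for this thread's toy example:

    ```python
    import sqlite3

    # Toy table seeded with two bad rows, mirroring the example below.
    con = sqlite3.connect(":memory:")
    con.execute("""
        CREATE TABLE months (
            month_number INTEGER PRIMARY KEY,
            full_name    TEXT NOT NULL
        )
    """)
    con.executemany("INSERT INTO months VALUES (?, ?)",
                    [(12, "Declember"), (13, "Smarch")])

    # One statement covers both INSERT (new key) and UPDATE (existing key).
    # Note there is no DELETE arm: row 13 is upserted, not removed.
    con.executemany("""
        INSERT INTO months (month_number, full_name) VALUES (?, ?)
        ON CONFLICT (month_number) DO UPDATE SET full_name = excluded.full_name
    """, [(11, "November"), (12, "December"), (13, "Smarch")])

    print(con.execute("SELECT * FROM months ORDER BY month_number").fetchall())
    # [(11, 'November'), (12, 'December'), (13, 'Smarch')]
    ```
    
    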

    It looks like a Recursive Data Flow could do the trick...I think. I could not get the "hello world" example in the KB article to work, even with some silly test data. I like to start from a sample that I can spot-check visually, so here's my initial data:

    month_number  full_name  last_updated_dts            marked_for_deletion
     1            January    2021-02-24 08:26:04.365915  false
     2            February   2021-02-24 08:26:04.365915  false
     3            March      2021-02-24 08:26:04.365915  false
     4            April      2021-02-24 08:26:04.365915  false
     5            May        2021-02-24 08:26:04.365915  false
     6            June       2021-02-24 08:26:04.365915  false
     7            July       2021-02-24 08:26:04.365915  false
     8            August     2021-02-24 08:26:04.365915  false
     9            September  2021-02-24 08:26:04.365915  false
    10            October    2021-02-24 08:26:04.365915  false
    12            Declember  2021-02-24 08:26:04.365915  false
    13            Smarch     2021-02-24 08:26:04.365915  false

    Take month_number as a unique primary key. (Like I said, it's a stripped-down example.) There are three errors above:

    * December is misspelled, so we need an UPDATE.

    * November is missing, so we need an INSERT.

    * "Smarch" isn't a month, so we need a DELETE, implemented here as an UPDATE as we usually have a soft-delete flag field for reasons unrelated to Domo.

    Given all of that, here's the delta data:

    month_number  full_name  last_updated_dts            marked_for_deletion
    11            November   2021-02-24 08:26:04.381418  false
    12            December   2021-02-24 08:26:04.381418  false
    13            Smarch     2021-02-24 08:26:04.381418  true


    What I'm after is

    month_number  full_name  last_updated_dts            marked_for_deletion
     1            January    2021-02-24 08:26:04.365915  false
     2            February   2021-02-24 08:26:04.365915  false
     3            March      2021-02-24 08:26:04.365915  false
     4            April      2021-02-24 08:26:04.365915  false
     5            May        2021-02-24 08:26:04.365915  false
     6            June       2021-02-24 08:26:04.365915  false
     7            July       2021-02-24 08:26:04.365915  false
     8            August     2021-02-24 08:26:04.365915  false
     9            September  2021-02-24 08:26:04.365915  false
    10            October    2021-02-24 08:26:04.365915  false
    11            November   2021-02-24 08:26:04.381418  false
    12            December   2021-02-24 08:26:04.381418  false
    13            Smarch     2021-02-24 08:26:04.381418  true


    Even better would be to pass deleted records as a list of IDs for Domo to clear, and end up with

    month_number  full_name  last_updated_dts            marked_for_deletion
     1            January    2021-02-24 08:26:04.365915  false
     2            February   2021-02-24 08:26:04.365915  false
     3            March      2021-02-24 08:26:04.365915  false
     4            April      2021-02-24 08:26:04.365915  false
     5            May        2021-02-24 08:26:04.365915  false
     6            June       2021-02-24 08:26:04.365915  false
     7            July       2021-02-24 08:26:04.365915  false
     8            August     2021-02-24 08:26:04.365915  false
     9            September  2021-02-24 08:26:04.365915  false
    10            October    2021-02-24 08:26:04.365915  false
    11            November   2021-02-24 08:26:04.381418  false
    12            December   2021-02-24 08:26:04.381418  false


    Does a Recursive Data Flow sound like the right tool for these goals?
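    (For concreteness, here's the merge-then-filter behavior I'm after, sketched in plain Python against the toy rows above. It's an illustration only, not a Domo dataflow: keep every historical row whose key is absent from the delta, append the delta, then optionally drop rows flagged for deletion.)

    ```python
    # Rows are (month_number, full_name, marked_for_deletion); timestamps
    # omitted for brevity. Data matches the toy example in this thread.
    history = [
        (1, "January", False), (2, "February", False), (3, "March", False),
        (4, "April", False), (5, "May", False), (6, "June", False),
        (7, "July", False), (8, "August", False), (9, "September", False),
        (10, "October", False), (12, "Declember", False), (13, "Smarch", False),
    ]
    delta = [
        (11, "November", False),  # INSERT: November was missing
        (12, "December", False),  # UPDATE: fixes the "Declember" typo
        (13, "Smarch", True),     # soft DELETE: flag Smarch for deletion
    ]

    def merge(history, delta, key=lambda row: row[0]):
        """Keep historical rows whose key is absent from the delta, then append the delta."""
        delta_keys = {key(row) for row in delta}
        kept = [row for row in history if key(row) not in delta_keys]
        return sorted(kept + delta, key=key)

    merged = merge(history, delta)     # the soft-delete target table
    final = [row for row in merged if not row[2]]  # the hard-delete variant

    print(merged[-3:])
    # [(11, 'November', False), (12, 'December', False), (13, 'Smarch', True)]
    print(final[-2:])
    # [(11, 'November', False), (12, 'December', False)]
    ```
    
    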

  • jaeW_at_Onyx Budapest / Portland, OR 🟤

    lol @DataSquirrel, the scale of your question is a consulting engagement with a Sr. technical consultant at Domo ... or a TSM maybe. Just sayin ;)


    1) i would evaluate if Magic 2.0 can meet the requirements

    2) you could try going the UPSERT route, but you would still have to run your data through logic to flag soft deletes. (i would accumulate ALL data into a dataset and then build a DSV on top of it that just has the valid rows. Or maybe put the rows flagged for delete in a second dataset.)

    3) if you're running UPSERT, as described, there is no DELETE function. The two options there are to do a periodic full load (on Saturday or something) OR, if you have a list of row_ids, you can run a script that deletes rows from an UPSERT dataset using the UPSERT_KEY (see the Java CLI)


    this video is rough and unfinished ... but it's got the information you need

    https://www.youtube.com/watch?v=nFSu6hpc8SE


    4) the biggest trick / problem people run into with Domo is figuring out how to architect their dataflows to minimize the volume of data that gets transformed every time you execute your pipeline. (the problem with recursive queries is linear growth) ... so questions like "how late in my pipe can i do my JOINs" become super important.

    As i said, your root question (performance at scale) is not just a technology question. the tools are there. it's a best practices and architecture planning question... hence why i said this is usually a paid consulting engagement.

    fortunately, i know a guy ;)

  • Thanks for the answer. I think that you're on UTC+0, and I'm on UTC+10, so it's getting later here...

    Hoping to check out Magic 2.0.

    I'm a bit surprised to hear that what I'm asking isn't a straightforward question, but, actually, maybe that makes sense. Architecture <> Mechanics.

    But, speaking of mechanics, I'd just like to try to get a basic example working; that seems like a good first step.

    And, for the soft delete, I can also do a hard delete, or publish a delete change set with only the deleted IDs to do an anti-join on. It just seems crazy to reload a 16M row DataSet from scratch to clear a few rows. (For the record, we don't use materialized views in Postgres, for similar reasons. Domo DataSets function like materialized views, in a good way...and also in the bad way. Just trying to make them incrementally updateable.)
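    The anti-join on a delete change set would amount to something like this (plain Python sketch using the toy rows from earlier, not a Domo API):

    ```python
    # Given the full dataset and a change set containing only the deleted
    # IDs, keep the rows whose ID does not appear in the change set.
    rows = [
        (11, "November"),
        (12, "December"),
        (13, "Smarch"),
    ]
    deleted_ids = {13}  # the delete change set: just the IDs to clear

    surviving = [row for row in rows if row[0] not in deleted_ids]
    print(surviving)  # [(11, 'November'), (12, 'December')]
    ```
    
    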

  • And thanks for the video link, looks interesting! I'll check it out in the AM.

  • jaeW_at_Onyx Budapest / Portland, OR 🟤

    @DataSquirrel

    domo datasets are CSVs stored in a data lake environment similar to S3.

    "I'm a bit surprised to hear that what I'm asking isn't a straightforward question, but, actually, maybe that makes sense. Architecture <> Mechanics."

    There are books on building performant data models in powerbi. I recommend having similar expectations for Domo. (Domo does automate a lot of problems away, and does trivialize a lot of optimization... but for performance on bigger sets of data ... we gotta start thinking about it.)

    To be fair... IMHO Domo's strength is how easy it is to get return on time investment. domo's weakness is that it encourages people to not think strategically about what this will look like in a year. :P

    Yeah, my role is more in PostgresLand, where architectural choices are...many. That, I understand. Domo's internals, not so much. My instinct for some time has been to use Postgres for pre-processing digest/rollup/summary data for Domo. That doesn't always suit, because you lose whatever level of granularity you summarize away, but you can get charts to render instantly and your data feeds become quite small. It's not hard to do rollups, incremental rollups, pivots and so on through Postgres, and logical replication makes this much easier. You're able to flow data through a secondary server in commit order, which makes 100% accurate live rollups achievable at low cost with no waste. (The logical subscriber runs its own triggers on incoming data, and can add columns onto the table that aren't found in the original publisher.) From there, we can have Domo pull. And, also helpful, you can use calculated columns to materialize calculations in-row, so the Domo pull is only grabbing raw values, not executing any runtime calculations in Postgres.

    That's the thinking for now, at least. Kind of only makes sense if you're in the weeds on Postgres.

  • I just finished your tutorial video, and it is fantastic. You explained more in 15 minutes than I've been able to figure out by hunting through the docs and such for hours. Thanks! Highly recommended.