Simple count of unique responses by location

This should be easy I think.

 

I have an app that produces survey response data in a pattern something like the simple example shown. Each row is a response to a question, and each response is tagged with a "ResponseGroupID" that ties the rows together as having been entered as part of a single "batch". I'm finding it easy to aggregate individual response values (not shown below), but I'm having trouble simply counting how many "batches" (as identified by their ResponseGroupID) each location has.

 

Raw Data

Location  MeasureID  ResponseGroupID  Date
6B        1          1246             1/1/2020
6B        2          1246             1/1/2020
6B        3          1246             1/1/2020
5LW       1          8954             1/1/2020
5LW       2          8954             1/1/2020
5LW       3          8954             1/1/2020
6B        1          3389             1/5/2020
6B        2          3389             1/5/2020
6B        3          3389             1/5/2020


Table I would like

Location  Distinct ResponseGroupIDs
6B        2
5LW       1

 

Table I get

Using a mega-table and adding location and my calculated field

Location  Distinct ResponseGroupIDs
6B        1
6B        1
5LW       1

 

The formula I'm using in 'Distinct ResponseGroupIDs' is

 

COUNT(DISTINCT `ResponseGroupID`) 

 

 

Not sure if I have to create separate grouped-by tables in the data flow, but I'm trying to avoid additional complexity when possible, and this seems as though it should be simple. Using SUM(COUNT(DISTINCT `ResponseGroupID`)) in the calculated field doesn't work either. Should this be done in the SQL flow that produces the model?
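For reference, the "Table I would like" corresponds to a single aggregate query in plain SQL; this is a minimal sketch against the sample data above (the table name `survey_responses` is illustrative, not from the actual app):

```sql
-- Count distinct batches (ResponseGroupIDs) per location.
SELECT
  `Location`,
  COUNT(DISTINCT `ResponseGroupID`) AS `Distinct ResponseGroupIDs`
FROM `survey_responses`
GROUP BY `Location`
```

Against the nine sample rows, this returns 6B → 2 (groups 1246 and 3389) and 5LW → 1 (group 8954).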

Comments

  • So I did achieve the results I desired in this case, but I'm not sure if this is the most efficient design pattern.

     

    In my Data Flow, I created a new table

    SELECT
      `LocationID`,
      `Location`,
      COUNT(DISTINCT `ResponseGroupID`) AS `Distinct ResponseGroupID`
    FROM `ajh_rounding_recursive_final_output`
    GROUP BY `LocationID`, `Location`

     

    Then joined this new table to the main output (shown in the second LEFT JOIN below)

    SELECT
      rm.`Measure` AS "RoundingMeasures.Measure",
      rm.`Section` AS "RoundingMeasures.Section",
      rm.`ID` AS "RoundingMeasures.ID",
      rd.`MeasureID`,
      rd.`ID`,
      rd.`ResponseGroupID`,
      rd.`Comments`,
      rd.`Created`,
      rd.`Date`,
      rd.`SubmittedBy`,
      rd.`Location`,
      rd.`LocationID`,
      rgi.`Distinct ResponseGroupID`
    FROM `tbl_rounding_data` rd
    LEFT JOIN `ajh_rounding_measures` rm
      ON rd.`MeasureID` = rm.`ID`
    LEFT JOIN `tbl_responsegroupids_by_location` rgi
      ON rd.`LocationID` = rgi.`LocationID`

     

    Then, in a data card, I can use "MAX" as an aggregation (each value is the same for every entry of the location) and it gives me what is shown below.

    Table 

    Location  Distinct ResponseGroupIDs
    6B        2
    5LW       1

     

    My question is: is this the best way to do this? If I wanted to count how many distinct ResponseGroupIDs there were by person, would I have to build that out as a separate table too? (I don't mind, I just want to make sure I'm not missing a much easier way to approach this.)

     

    Thanks for reading and for your recommendations!
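    For the per-person variant asked about above, the same aggregate pattern would apply with a different grouping key; a sketch only, reusing the `SubmittedBy` column from the main query (not a tested dataflow):

    ```sql
    -- One row per submitter, counting the distinct batches they entered.
    SELECT
      `SubmittedBy`,
      COUNT(DISTINCT `ResponseGroupID`) AS `Distinct ResponseGroupID`
    FROM `tbl_rounding_data`
    GROUP BY `SubmittedBy`
    ```

    This would then be joined back to the main output on `SubmittedBy`, in the same way the per-location table is joined on `LocationID`.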

  • While this seemed to give the results I desired, there is no way to filter by any of the many dimensions so this won't really work.

     

    Still looking for some direction on this, thanks.

  • GrantSmith

    Hi @PhilD ,

    Are the only two fields you have listed the location and your count distinct calculated field? If that's the case then your locations might appear the same but aren't actually the same (perhaps a trailing space).

     

    Can you try wrapping your location in a new beast mode to strip any trailing spaces?

     

    TRIM(`Location`)

     

  • Thanks @GrantSmith 

     

    The locations come from a table in the app so they are the same.

     

    Funny thing: I actually created a new card from scratch and my first attempt worked (I thought that it should). I had both cards open side by side and verified everything was absolutely, positively identical, and the first one kept giving me the results I shared. After I recreated it fresh, the simple formula worked... not sure what was going on here, but I wish I could have back the hours I wasted!

  • jaeW_at_Onyx

    @PhilD  instead of JOINING the aggregated values to your transactional data, make your dataset TALLER instead of wider.

     

    Create a section of data where ActivityType = Actual.  Then UNION your aggregation where ActivityType = GroupByResponseGroup (or whatever your appropriate group by clause is).  

     

    In your beast mode, use:

    -- metrics in numerator
    SUM(CASE WHEN `ActivityType` = 'Actual' THEN ... END)
    /
    -- group by in denominator
    SUM(CASE WHEN `ActivityType` = 'GroupByResponseGroup' THEN ... END)

     

    Try to avoid count distinct by reshaping your data to have a metric column with a 1 or a 0 to represent a summable count.
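    A sketch of the reshaping step described above, with illustrative names (`ActivityType` and `Metric` are labels chosen for this example, not columns from the actual dataset):

    ```sql
    -- Tall layout: transactional rows and pre-aggregated rows stacked
    -- into one dataset, distinguished by an ActivityType label.
    SELECT
      `Location`,
      'Actual' AS `ActivityType`,
      1 AS `Metric`                      -- one summable row per response
    FROM `tbl_rounding_data`

    UNION ALL

    SELECT
      `Location`,
      'GroupByResponseGroup' AS `ActivityType`,
      COUNT(DISTINCT `ResponseGroupID`) AS `Metric`  -- one row per location
    FROM `tbl_rounding_data`
    GROUP BY `Location`
    ```

    A card-level SUM(CASE WHEN `ActivityType` = 'GroupByResponseGroup' THEN `Metric` END) then yields the distinct-group count as a plain sum, with no COUNT(DISTINCT ...) needed in the beast mode.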

     

    If you provide a decent dataset, i can make a long-form tutorial on this b/c it is a recurring question in the Dojo.