Creating buckets for unique IDs to fall into, based on the count of donations from those unique IDs

Capturing donor frequency:

We want to count the number of donations a donor has given per procedure. The number of donations is the bucket I want donors to fall into. So, if 100 donors each gave 4 donations, those donors would fall into bucket '4', and we would then count the unique donor IDs in that bucket.

The part I am struggling to noodle out is that they want to be able to filter by any combination of procedures. There are many procedures, so if I were to try a rank-and-window approach, I would have to think of every possible combination of procedures and create a rank and window for each one (right?)

I thought maybe I could try a "sum of donations over partition by donor id" beast mode, but that doesn't group by the sum of donations; I just get separate rows (for example, 10 rows of 2 donations each), so that did not work for me.

Side note: the buckets would be 1-24, plus a final bucket for 25+.
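The bucketing described above can be sketched in plain Python (the donor IDs, procedure names, and column layout here are made-up assumptions, not the actual dataset):

```python
from collections import Counter

# Hypothetical donation rows: (donor_id, procedure) -- names are assumptions
donations = [
    ("d1", "knee"), ("d1", "knee"), ("d1", "hip"), ("d1", "hip"),
    ("d2", "knee"), ("d2", "knee"),
    ("d3", "hip"), ("d3", "hip"),
]

# Step 1: count donations per donor, restricted to the selected procedures
selected = {"knee", "hip"}  # any combination of procedures
per_donor = Counter(d for d, p in donations if p in selected)

# Step 2: bucket each donor by donation count, with 1-24 kept as-is
# and everything at 25 or above collapsed into a "25+" bucket
def bucket(n):
    return "25+" if n >= 25 else str(n)

# Step 3: count unique donors per bucket
donors_per_bucket = Counter(bucket(n) for n in per_donor.values())
print(donors_per_bucket)  # bucket label -> number of unique donors
```

With these sample rows, the donor who gave 4 donations lands in bucket '4' and the two donors who gave 2 donations land in bucket '2', which is the shape of output described above.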

I've attached an image of sample data (top section) along with a picture of the idea I am going for.

I appreciate any advice, thanks


  • GrantSmith

You won't be able to do this within a beast mode, as you need to aggregate (count the number of records in the bucket) on top of another aggregate (counting the number of donations).

I'd recommend using Magic ETL to pre-aggregate your data: group by donor id and procedure and count the number of donations. Then, using that dataset, you can use a beast mode with COUNT(DISTINCT `donor id`) while still allowing users to filter on the different procedures.

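The two-stage idea can be sketched in Python (column names and sample data are assumptions; in Domo the first stage would be the Magic ETL group-by, and the second stage is what the card does after filters are applied):

```python
from collections import Counter

# Assumed raw rows: (donor_id, procedure)
raw = [
    ("d1", "knee"), ("d1", "knee"), ("d1", "hip"),
    ("d2", "knee"),
    ("d3", "hip"), ("d3", "hip"),
]

# Stage 1 (Magic ETL equivalent): group by donor id + procedure and
# count donations, so procedure survives as a filterable column.
pre_aggregated = Counter(raw)  # (donor_id, procedure) -> donation count

# Stage 2 (card equivalent): after any procedure filter, sum each
# donor's counts, bucket them, and count distinct donors per bucket.
procedure_filter = {"knee"}
per_donor = Counter()
for (donor, proc), n in pre_aggregated.items():
    if proc in procedure_filter:
        per_donor[donor] += n

# min(n, 25) collapses 25-and-above into one "25+" bucket
buckets = Counter(min(n, 25) for n in per_donor.values())
```

Because procedure is still a column on the pre-aggregated dataset, any combination of procedures can be filtered without building a separate rank-and-window calculation per combination.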
  • Jbrorby

Thanks, I was definitely overthinking this. For some reason I thought that approach wouldn't work when I started, but it did. Chalk it up to an airhead moment.