I have a dataset structured as follows:
Columns: Date, Segment, # Units Sold, and Goal Units Sold.
Rows are by day and segment (6 segments, so 6 rows per day).
It is joined with a calendar dataset containing a column for the count of business days.
My goal is a table metric, like the attached, that shows # of Units Sold per business day by segment, with a total row showing the sum of units sold per business day across all segments. The attached beast mode calculation and table metric seem to give the right row counts, but the total row is an average rather than a sum of the segments combined. How can I get the total row to equal the sum of the rows?
There's a lot to unpack here!
1) Don't JOIN your date dimension to your transactions to get the day count. Instead, APPEND the two tables and conform the columns so that Transaction Date and CalendarDim_Date line up. Then assign an activity type, "Transaction" or "Calendar Count", to differentiate the two datasets.
Why? Because if you applied your design pattern to a non-aggregated dataset (where you could have multiple transactions in a day), you'd have to do crazy math to get "number of days per month". Whereas with my proposed model, you can use a CASE statement on Activity Type to differentiate between activity rows and calendar dimension rows.
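To make the stacking idea concrete, here's a minimal pandas sketch of it. The column names, dates, and values are all made up for illustration, and the final division stands in for the CASE-statement aggregation you'd write in a beast mode:

```python
import pandas as pd

# Hypothetical transaction rows: one row per day per segment.
transactions = pd.DataFrame({
    "Date": ["2024-01-02", "2024-01-02", "2024-01-03"],
    "Segment": ["A", "B", "A"],
    "Units Sold": [10, 20, 30],
})

# Hypothetical calendar dimension: one row per business day.
calendar = pd.DataFrame({
    "Date": ["2024-01-02", "2024-01-03"],
    "Business Day": [1, 1],
})

# Conform columns and APPEND (stack) the two tables instead of joining,
# tagging each set of rows with an Activity Type to tell them apart later.
transactions["Activity Type"] = "Transaction"
calendar["Activity Type"] = "Calendar Count"
stacked = pd.concat([transactions, calendar], ignore_index=True)

# The CASE-statement equivalent: sum units from transaction rows and count
# business days from calendar rows, so nothing is double counted by the join.
units = stacked.loc[stacked["Activity Type"] == "Transaction", "Units Sold"].sum()
business_days = stacked.loc[stacked["Activity Type"] == "Calendar Count", "Business Day"].sum()
print(units / business_days)  # units sold per business day
```

Because each business day exists exactly once as its own row, a grand-total row that sums units and sums day counts stays correct, which is the behavior the question is after.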
I call that design pattern 'stacking' datasets and have an example in this tutorial video:
2) To calculate a percent of total partitioned by day, you need to use a window function partitioned by day.
I have an example of that here:
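Here's a rough sketch of what that window function does, again in pandas with invented column names; the `transform("sum")` plays the role of `SUM(\`Units Sold\`) OVER (PARTITION BY \`Date\`)`:

```python
import pandas as pd

# Hypothetical daily units by segment (names and values are assumptions).
df = pd.DataFrame({
    "Date": ["2024-01-02"] * 3 + ["2024-01-03"] * 3,
    "Segment": ["A", "B", "C"] * 2,
    "Units Sold": [10, 20, 30, 5, 10, 15],
})

# Window-function equivalent: attach each day's total to every row in that
# day's partition, then divide to get each segment's percent of that day.
df["Day Total"] = df.groupby("Date")["Units Sold"].transform("sum")
df["Pct of Day"] = df["Units Sold"] / df["Day Total"]
```

Within each day the percentages sum to 1, so a total row built by summing them behaves sensibly instead of averaging.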
You may need to ask your CSM to enable window functions in beast modes.
Hope that helps!
Hit me up if you have further questions.