Comparing two variables and measuring variance
So, I am working with a dataset that contains questions from a form filled out by two different users. I want to compare the results of the two fills based on the 'answer value' field, i.e., what user 1 selected versus what user 2 selected. My goal is to measure the bias or variance between these two users across these questions and how they scored.
Here's an example. The interaction ID is my anchor point; it tells me the two evaluators looked at the same evaluation.
The output (the precision score) is what I am trying to get from my dataset.
Note: Orange is my expert evaluator and is evaluating if Apple is filling the forms correctly.
Using pivot tables I can construct a result table like the one I want, but how do I get the matched field?
My other thought is to calculate the precision score with a Beast Mode, roughly:
CASE WHEN
  the interaction ID is the same
  AND the question ID is the same
  AND the answer value is different, THEN count those distinct question IDs
and divide by
  the total number of distinct question IDs.
Now if I select an evaluator, can I see their precision score? For example, evaluator Apple's precision score is 60%.
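The logic described above can be sketched in pandas terms: pair each of Apple's answers with Orange's answer for the same interaction/question, then take matching answers over total questions compared. This is a minimal illustration, not Domo syntax; the column names (interaction_id, question_id, evaluator, answer_value) and the sample values are assumptions.

```python
# Hypothetical sketch of the precision calculation; column names and
# sample data are assumptions, not taken from the original dataset.
import pandas as pd

data = pd.DataFrame({
    "interaction_id": [1, 1, 1, 1, 2, 2],
    "question_id":    ["Q1", "Q1", "Q2", "Q2", "Q1", "Q1"],
    "evaluator":      ["Apple", "Orange"] * 3,
    "answer_value":   ["Yes", "Yes", "No", "Yes", "No", "No"],
})

# Pair each Apple answer with Orange's answer for the same
# interaction ID and question ID (the "anchor point").
merged = data[data["evaluator"] == "Apple"].merge(
    data[data["evaluator"] == "Orange"],
    on=["interaction_id", "question_id"],
    suffixes=("_apple", "_orange"),
)

# Precision = matching answers / total questions compared.
matches = (merged["answer_value_apple"] == merged["answer_value_orange"]).sum()
precision = matches / len(merged)
print(f"Apple's precision: {precision:.0%}")
```

With the sample data above, Apple agrees with Orange on 2 of 3 question pairs, so the script prints a precision of 67%.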
Best Answers

I'd recommend using a magic ETL to pivot your data into the table format you have above and then use a formula tile (Magic 2.0) to calculate if the answers are the same. You could do this calculation as a beast mode on the card as well but since you're already in the ETL it's better as the card doesn't have to process your static calculation every time it's loaded. That's likely the easiest option.
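For readers outside Domo, the pivot-then-flag approach described above can be sketched in pandas: pivot to one row per interaction/question with a column per evaluator, then add the matched flag (the equivalent of the formula-tile step). Column names and sample values are assumptions.

```python
# Sketch of the ETL approach: pivot long data wide, then flag matches.
# Column names and sample values are assumptions for illustration.
import pandas as pd

data = pd.DataFrame({
    "interaction_id": [1, 1, 1, 1],
    "question_id":    ["Q1", "Q1", "Q2", "Q2"],
    "evaluator":      ["Apple", "Orange", "Apple", "Orange"],
    "answer_value":   ["Yes", "Yes", "No", "Yes"],
})

# One row per interaction/question, one column per evaluator.
wide = data.pivot_table(
    index=["interaction_id", "question_id"],
    columns="evaluator",
    values="answer_value",
    aggfunc="first",
).reset_index()

# The "Matched" column a formula tile would compute: 1 if equal, else 0.
wide["Matched"] = (wide["Apple"] == wide["Orange"]).astype(int)
print(wide)
```

Doing this once in the ETL (rather than in a Beast Mode) means the comparison is stored with the dataset instead of recomputed on every card load.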
Exactly.
Answers

Hi @gospel
Are the two evaluators always the same two or will the number of evaluators be dynamic?
@GrantSmith it will always be two. I have a primary evaluator and then someone who randomly audits their work, but it's always two: one evaluator and one who audits the evaluator randomly.
Ah ok, let me look into that. So it will look something like this? @GrantSmith