United Vietussia

What’s In an Ovation?
Introducing the Ovation Index
EDITOR'S NOTE: While this article will remain in its original form, the discussion that follows regarding the model and potential fixes to it greatly enhances the analysis, and I highly suggest you keep reading after finishing the article. Kudos to @North East Somerset in particular for his work.

Who deserves an Ovation? What qualifies somebody to receive an Ovation? When should the next Ovation be awarded?

I set out to answer these questions (and maybe a few others along the way) in crafting what I like to call the Ovation Index. Admittedly, I didn’t happen upon the idea through some epiphany on my own; rather, I took some inspiration from a recent Ministry of Pod episode with PhDre and GraV (timestamp: 13:00ish, quote cut for conciseness and clarity, though I highly suggest you go give the full episode a listen):

PhDre: So I was chatting with some folks about who’s the next Ovation-worthy member of Euro…and it came to mind–are you familiar with BaseballReference.com? [...] One page they have is a probability of making the Hall of Fame [...] it’s a model [that measures] the probability that you’ll be in the Hall of Fame given the observable characteristics. [...] What do you think about trying to come up with a probability of [receiving an] Ovation or an Ovation metric?

Simply put, I gave it a go. Let’s talk about the construction of my Ovation probability model and some takeaways from it.

Methodology

As you may recall, I recently conducted a poll asking Europeians to rate 9 different factors on a scale from 0-10 in terms of their importance to the awarding of an Ovation. I received 23 responses, one of which was thrown out for marking 0 in every category except Justice, which was rated a 1; this response slightly skewed the means of all the factors, though the respondent's comments are preserved in the relevant spreadsheet. The remaining 22 responses produced the following averages (rounded to the nearest thousandth for this write-up, but used unrounded in later calculations):

Presidential (or equivalent) Terms: 7.727
Minister (or equivalent) Terms: 7.045
Speaker Terms: 6.091
Presidential Medals: 5.955
Senator Terms: 5.773
Vice Presidential (or equivalent) Terms: 5.000
Chief Justice Terms: 4.318
Sapphire Stars: 3.864
Justice Terms: 3.545

Before proceeding, what can we gather from this? Simply put, Europeians place a heavy emphasis on Executive service in determining who deserves an Ovation; conversely, Europeians believe Judiciary service, at least relatively speaking, does not matter all that much in terms of Ovations awarded (more on this later).

Back to the construction of the model: these means were rescaled to run from 1 (for the most important factor) down to 0.05 (for the least important factor; this floor is not 0 because all factors were deemed at least somewhat relevant, so even the least important factor still holds some weight). Converted, we get the following coefficients (again rounded to the nearest thousandth for the write-up), which will now allow us to calculate probabilities. (A sketch of this rescaling follows the list below.)

Presidential (or equivalent) Terms: 1
Minister (or equivalent) Terms: 0.837
Speaker Terms: 0.609
Presidential Medals: 0.576
Senator Terms: 0.533
Vice Presidential (or equivalent) Terms: 0.348
Chief Justice Terms: 0.185
Sapphire Stars: 0.076
Justice Terms: 0.05
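
For the numerically curious, this conversion reproduces as a simple min-max rescaling with a floor. A minimal sketch in Python (this is my reading of the description above, not the spreadsheet's exact formula):

```python
# Map each factor's mean survey score linearly onto [0, 1], then floor
# at 0.05 so the least important factor still carries some weight.
means = {
    "Presidential Terms": 7.727,
    "Minister Terms": 7.045,
    "Speaker Terms": 6.091,
    "Presidential Medals": 5.955,
    "Senator Terms": 5.773,
    "Vice Presidential Terms": 5.000,
    "Chief Justice Terms": 4.318,
    "Sapphire Stars": 3.864,
    "Justice Terms": 3.545,
}

lo, hi = min(means.values()), max(means.values())
coefficients = {factor: max(0.05, (m - lo) / (hi - lo))
                for factor, m in means.items()}
# coefficients["Minister Terms"] -> ~0.837; coefficients["Justice Terms"] -> 0.05
```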

A candidate's frequencies are multiplied by these coefficients and summed together. Afterwards, an adjustment is made relative to an average Ovation recipient: the coefficient output for an average recipient (found by taking the mean of each category among all Ovation recipients and multiplying by the coefficients above) came out to about 24.23, and half of this number is subtracted from a candidate's output. (Originally, subtracting the full 24.23 resulted in an overly punishing model; halving it produced more realistic results.)

Finally, the candidate's adjusted output is plugged into the following sigmoid function:
P = 1 / (1 + exp(-x)), where x is the candidate's adjusted coefficient output

This number is converted into a percentage, giving a candidate's probability of receiving an Ovation (the OP% referenced later).
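
Putting the steps together, the whole pipeline is only a few lines. A sketch assuming the coefficients mapping from the rescaling above (the function name and input format are mine, not the spreadsheet's):

```python
import math

AVG_RECIPIENT_OUTPUT = 24.23  # coefficient output of the average Ovation recipient

def ovation_probability(frequencies: dict[str, float]) -> float:
    # Weighted sum of the candidate's term/medal counts...
    raw = sum(coefficients[f] * n for f, n in frequencies.items())
    # ...less half the average recipient's output (the halved adjustment)...
    adjusted = raw - AVG_RECIPIENT_OUTPUT / 2
    # ...squashed through the sigmoid into a probability.
    return 1 / (1 + math.exp(-adjusted))
```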

Now, let’s get into what you really came here for: the probability numbers.

[Image: table of Ovation probabilities (OP%) for past recipients]

Most recipients received a 95-100% probability of receiving an Ovation from the model. This shouldn't come as a big surprise, but those who did not receive such high numbers should raise some questions about the process. Let's talk about some of them.

The OnderKelkia Problem

When crunching these numbers, I was surprised to see OnderKelkia return one of the lowest probabilities in the model. I see two main reasons why this may have occurred (and it may shed some light on other potential candidates):

1) This model is not a fan of “specialist” citizens.

More broadly, this model rewards civilian service in all three branches of government; OnderKelkia served exclusively in the judiciary, and with only some Presidential medals outside of that service, Onder hasn't contributed in the same ways as almost all other Ovation recipients. I would say this shows that Onder is a surprising Ovation recipient relative to the more well-rounded resumes of most other recipients, and that low probabilities do not necessarily indicate unworthiness (in fact, I'll touch on another metric illustrating this idea in a bit).

2) Europeians don’t particularly value judiciary service.

When polled, Europeians simply didn't consider judiciary experience all that important. Sapphire Stars were rated only slightly higher than Justice terms, and judiciary experience overall ranked near the bottom in importance. In short, the model simply reflects the views of Europeians.

Should The Opinions of Europeians Matter?

I wrestled with this question a bit when crafting the model. Should I base my findings on the accomplishments of past Ovation recipients, or on the opinions of current Europeians? I ultimately sided with the latter, and allow me to briefly explain why:

While it might seem more proper to compare Ovation candidates to those who have already received one, I found two main issues with this: 1) what Europeians value in an Ovation can and will shift over time, especially over the span of a decade plus, and 2) Ovations are awarded in a snapshot of time against the standards we have set in the present. What might have qualified for an Ovation a decade ago may not meet today's standard, so I opted for a present-day standard rather than looking to the past. Reasonable minds may disagree with me on this point (especially the more statistically minded among you, who are likely screaming "run a regression!" right now; hello, PhDre and GraV), and with that in mind, I will be making my model open-source at the end of this article. I absolutely wish for this to be refined over time, as I know the product presented here is not perfect, so please feel free to play around and tinker with it as much as you want.

What About the Admins?

So NES and Mousebumples, huh? I don't think anybody would question their Ovations, yet they score somewhat low in the probability department. Simply put, I don't presently have a quantifiable way to measure administrative work, and that may well be a shortcoming of this model. As with the previous point, I invite any updates or changes you would make to my model.

Any Other Shortcomings, Mr. Model Man?

The biggest glaring issue here is a lack of confirmation surrounding Justice and Senator terms. At best, my tally is a rough estimate based on Legislative Records and past records of Justices, in lieu of a definitive list of every Senator and Justice to have ever served. While I strove for accuracy in tallying each recipient's terms, there is an obvious margin for error, and until a definitive list emerges, this model will always be subject to some of it.

Now, with the questions about Ovation probability out of the way, let me introduce another metric that may alleviate some of the issues brought about by the probability’s calculation: Ovations Above Replacement.

OAR Methodology

Ovations Above Replacement (OAR) represents the number of Ovations a particular candidate would be expected to receive relative to the recipient who represents the 50th percentile.

To calculate Ovations Above Replacement, a candidate's frequencies are summed and divided by the sum of the category medians across all Ovation recipients. Afterwards, 1 is subtracted from the quotient, as a quotient of 1 represents the median Ovation recipient. An OAR of 0 indicates an average Ovation recipient (i.e. the 50th percentile); an OAR below 0 indicates a potentially underqualified recipient; an OAR of 1 or higher indicates a highly qualified recipient (who might even deserve a second Ovation if such a thing were allowed).
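
In code, that might look like the following sketch (assuming a straight, unweighted sum of the candidate's counts, which is how I read the description above):

```python
def ovations_above_replacement(frequencies: dict[str, float],
                               category_medians: dict[str, float]) -> float:
    # Candidate's summed counts over the summed category medians of past
    # recipients; subtracting 1 centers the median recipient at an OAR of 0.
    return sum(frequencies.values()) / sum(category_medians.values()) - 1
```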

When this was applied to the Ovation recipients, I received the following results:

[Image: table of OAR results for past recipients]

Malashaan and Drecq stand above the pack with OARs around 1.5. Aexnidaral, Writinglegend, Calvin Coolidge, and HEM also stand as recipients who might deserve a second Ovation (if such a thing were allowed), with OARs around 1. This metric also resolves our OnderKelkia question from earlier, as it captures his dominance of the Judiciary over such a long period of time. That said, our admin problem still isn't resolved (and may never be), which I hope stands as a reminder that this Index is not infallible and should be used as a tool rather than the be-all, end-all of who should and shouldn't receive an Ovation.

A Note on OAR and OP%

The OAR for a particular person may not necessarily track their OP%: OAR rewards outperforming other candidates across categories, while OP% takes a holistic view of a single candidate and calculates their likelihood of receiving an Ovation based on previous awardees. Put another way, OAR measures a candidate's performance relative to others, while OP% merely uses the performance of others to establish a baseline for who should receive an Ovation.

We’ve spent a lot of time outlining how this model was constructed using past recipients, but let’s look ahead at some potential prospective candidates (NOTE: some people on this list received a Triumph but not an Ovation).
[Images: OP% and OAR figures for prospective candidates]

The two leading candidates here (CSP and Kraketopia) have both already received a Triumph; should they still receive an Ovation on top of that for their civilian service? This model might suggest so. If you don’t believe they should, look elsewhere to citizens both present and past: Lloenflys appears the most likely candidate on my radar for an Ovation, and McEntire and Notolecta both seem like worthy candidates in the view of the model. Prim might be a borderline candidate at the moment, and given he is still serving, it’s quite likely his resume will only continue to strengthen over time.

Do you have a citizen whose chances you're curious about? I've linked all the sources I used on the spreadsheet, so feel free to pull their resume and crunch the numbers for yourself. I've only included a small selection of potentially noteworthy candidates for the purposes of this article, and that should in no way be construed to mean this group is somehow more in line for an Ovation than anybody not mentioned; I simply couldn't pore over every citizen in Europeian history and crunch their collective resumes.

Conclusion

Is this model perfect? No. Is it the final say on who should receive an Ovation? Also no. That said, I hope this model provides a starting point for some useful statistical insight into how we go about passing them out and serves as a tool for future leaders who may be considering awarding the next Ovation.

You can find the full model here. Feel free to make a copy and play with it to your heart’s content (and maybe even improve what I’ve started here).
 
There's definitely some interesting data here, thanks UV. A few things I wanted to note, or found interesting: Mousebumples didn't really get awarded their Ovation for their admin work, it was mostly their WA and Executive work that merited the Ovation. Also, did this track the resumes of everyone when they received their award, or in the modern day? Notolecta being a fringe candidate here is similar to what I found in my "How Many Medals Does It Take" article, so it's interesting to see that reinforced here. Lastly, I think having the numbers for what Europeians would value in an Ovation recipient is cool, but it's a shame you used the 1-10 scale, which is much more subject to variation than 1-5!
 
Mousebumples didn't really get awarded their Ovation for their admin work, it was mostly their WA and Executive work that merited the Ovation.
You are correct, and I actually forgot to include a mini-discussion on the WA in the article. Like admin work (which is likely why I forgot to include this initially), WA contribution is a bit tougher to track quantitatively, especially in Mouse's case, as she served as Delegate at a time when the position was essentially held for life (changed by the repeal and replace of the WA Act in 2017). In theory, perhaps there's a way to go back, count up how many days these pre-2017 Delegates served, and divide by 180 (the standard term length of today)? Excellent point nonetheless, and a slight oversight in the article's discussion.
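
The conversion itself would be trivial; a quick sketch (the 900-day tenure is purely hypothetical):

```python
# Convert a hypothetical pre-2017 Delegate tenure into modern
# term-equivalents at 180 days per term.
days_served = 900                     # hypothetical tenure
term_equivalents = days_served / 180  # -> 5.0 Delegate terms
```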

Also, did this track the resumes of everyone when they received their award, or in the modern day?
This tracks up to the modern day. In staying consistent with the theme of "comparing candidates to the present standard", I did decide to take the full resume into account rather than the one at the time of awarding. Again, reasonable minds may differ with me on this point and say we should instead compare at the time of awarding, and if so, I would certainly invite a recalculation using those adjusted numbers. That said, I've calculated this under the assumption that candidates will be compared to the full body of work of those who have already received an Ovation rather than poring back over old confirmations to determine what their resume used to be at the time of conferral.

Lastly, I think having the numbers for what Europeians would value in an Ovation recipient is cool, but it's a shame you used the 1-10 scale, which is much more subject to variation than 1-5!
Another excellent point, and perhaps it's worth a reassessment down the road using a slightly better scale.
 
This is really interesting! Almost tempted me to get back into active politics to push my OAR up higher :p
 
Interesting analysis and kudos for making your calculations public. But the model is fundamentally statistically flawed. Basically, what you are doing is counting the number of Presidential Medals and Ministerial terms; by that I mean those are by far the greatest inputs into the model, and they are weighted too heavily because no numerical adjustment is applied to them.

So, when we look at those two elements against the coefficient we can see this visually;

[Image: Presidential Medal and Ministerial term counts plotted against the coefficient]


Essentially, you might as well have just counted Presidential Medals and Ministerial terms and ranked people based on that. But why is this occurring? It's because those two things numerically occur at much higher rates than the other elements, combined with a decent score from the polling element. So in your coefficient, if someone has served the average number of Presidential terms for Ovation recipients, which is 2.15, they get 2.15 pts (adjustment is 1.0x); but if they have served the average number of Ministerial terms, which is 11.7, they get 9.8 pts from that (adjustment is 0.83x). So you are giving massively more weighting to Ministerial terms than to serving as President simply because they are more common, which needs to be compensated for in the model.

[Image: relative weighting given to each category in the coefficient calculation]


This is a visual representation of the relative weighting you are giving each category in your coefficient calculations. So again, like I said, Ministerial terms and Presidential medals are dominant, simply because they are more plentiful.

You could take a completely different approach to this and get a completely different outcome: assign each role points based on the number of terms/medals someone has compared to the average, multiplied by the weighting from public opinion on what is important. If you do that, then you get this;

[Image: Model 3 results for past recipients]

That is to say, if someone has 5 Presidential terms and the average is 2, they gain 2.5 pts (adjusted x1). If someone has 30 Ministerial terms and the average is 10, they gain 3 pts, adjusted by the importance of the role to 2.5 pts (x0.83). So those two things are effectively worth the same in Model 3 above.

Whereas your model would give the Ministerial roles 21 pts, and the Presidential terms only 5 pts, so the Ministerial roles are worth over 4x as much.

So yeah... erm, just shows how the same input data can be presented many different ways through statistics...
 
Interesting analysis and kudos for making your calculations public. But the model is fundamentally statistically flawed. [...]
Smh you deleted me
 
And this is why I'm glad to have made it public, because somebody far more versed in statistics was bound to come along and fix this work... :p

As I understand it, the adjustment to a scale from .05 to 1 created a vastly unequal weighting between Pres/Minister terms and everything else; instead, the model should not be based on public opinion but instead adjust relative to public opinion and use an average Ovation recipient as the baseline. (If I'm misinterpreting what was said, please let me know.)

When you're calculating "each role is assigned the points depending on the number of terms/medals they have compared to the average", are you simply dividing an individual's number of terms/medals by the mean, or something more? Would love to hear more about this aspect of it.
 
Interesting analysis and kudos for making your calculations public. But the model is fundamentally statistically flawed. [...]
Smh you deleted me
You are dead to me now.

jk, not sure what happened there;

[Image: screenshot]
 
And this is why I'm glad to have made it public, because somebody far more versed in statistics was bound to come along and fix this work... :p [...] are you simply dividing an individual's number of terms/medals by the mean, or something more?
I don't think there is a right or wrong answer to any of this... I was just pointing out the bias in the original method of analysis towards areas with larger numbers of items/terms.

Yeah I'm just dividing the individual number of terms/medals by the mean, and then multiplying that by the "public opinion" factor between 0.05x (for Justice terms) and 1x (for President). This compensates for the earlier bias towards items/terms with higher numbers such as Ministerial terms and Presidential medals.
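
As a quick sketch of that normalization (the names are mine; the weights are the 0.05x-1x public-opinion factors):

```python
def model3_score(frequencies: dict[str, float],
                 category_means: dict[str, float],
                 weights: dict[str, float]) -> float:
    # Each count relative to the recipient average for its category, scaled
    # by the public-opinion weight (0.05 for Justice up to 1 for President).
    return sum((n / category_means[f]) * weights[f]
               for f, n in frequencies.items())
```

For example, 5 Presidential terms against an average of 2 gives 5/2 x 1 = 2.5 points, matching the worked example above.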
 
Maybe it helps to visualise it like this. This is my model 3;

[Image: per-category breakdown of Model 3 scores]


This is your model as posted in the OP;

[Image: per-category breakdown of the original model's scores]


You can see how your model is dominated by just a few factors (Minister terms, Presidential medals, and Senator terms), whereas my model weights more heavily towards the areas the public opinion poll deemed more important, compensating for the numerical differences in instances of an event. You can also see that even with the changes in make-up, the overall result is not dissimilar in *most* cases.
 
I have lots of thoughts but am on mobile, so I will just flag that I think measuring current accomplishments (i.e. terms served and medals) is not right for most of the measures we want here.

Additionally, I like doing approval-weighted terms served (as NES did above). Raw terms served seems an inappropriate gauge of quality of service to the region.
 
Maybe it helps to visualise it like this. This is my model 3; [...] my model weights more heavily towards the areas the public opinion poll deemed more important, compensating for the numerical differences in instances of an event.
Agreed, your approach yields far more balanced results than my original attempt.

Recalculating OP% based on this new version yielded the following:

[Images: recalculated OP% results]


It does create a baseline where a citizen with zero accomplishments carries a 1.46% chance of receiving an Ovation (presumably an artifact of the sigmoid, which never outputs exactly zero; I'm not quite sure how to clear that out without adversely affecting the rest of the model, and perhaps that's a fundamental flaw in its construction), but overall, it does yield a more realistic result (at least in my view).
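
A quick check of that floor (the 8.42 average score here is inferred by working backwards from the 1.46% figure, not read from the spreadsheet):

```python
import math

inferred_avg_score = 8.42              # hypothetical; back-solved from 1.46%
adjusted = 0 - inferred_avg_score / 2  # a zero-accomplishment candidate
print(1 / (1 + math.exp(-adjusted)))   # -> ~0.0146, i.e. ~1.46%
```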
 
I really like this! Very fun model, and I think I understood it even more when reading this correction by NES.

One thing that's on my mind now: couldn't you poll people on how important they consider the number of days as WA Delegate? With the adjustment you just made, the scaling doesn't matter, so it'd be easy to include.

Other fun things could be recruitment telegrams (they're definitely somewhat important), the number of articles written (not that easy to count, but maybe?), and days as Vice/Supreme Chancellor (people probably wouldn't find those important, but it will still be a non-zero number).
 
and days as Vice/Supreme Chancellor (people probably wouldn't find those important, but it will still be a non-zero number)
I am, of course, biased. But I think the OSC, and service within it, is incredibly important. While it is one of the most thankless roles in the region, it is the institution chosen to safeguard our democracy by administering elections. Sure, that could always be done by someone else, but that's not what we've decided. We've decided to host that incredibly important duty within one of the most prestigious offices our region has.
 