What’s In an Ovation?
Introducing the Ovation Index
Who deserves an Ovation? What qualifies somebody to receive an Ovation? When should the next Ovation be awarded?
I set out to answer these questions (and maybe a few others along the way) in crafting what I like to call the Ovation Index. Admittedly, I didn’t happen upon the idea through some epiphany on my own; rather, I took some inspiration from a recent Ministry of Pod episode with PhDre and GraV (timestamp: 13:00ish, quote cut for conciseness and clarity, though I highly suggest you go give the full episode a listen):
PhDre: So I was chatting with some folks about who’s the next Ovation-worthy member of Euro…and it came to mind–are you familiar with BaseballReference.com? [...] One page they have is a probability of making the Hall of Fame [...] it’s a model [that measures] the probability that you’ll be in the Hall of Fame given the observable characteristics. [...] What do you think about trying to come up with a probability of [receiving an] Ovation or an Ovation metric?
Simply put, I gave it a go. Let’s talk about the construction of my Ovation probability model and some takeaways from it.
Methodology
As you may recall, I recently conducted a poll asking Europeians to rate 9 different factors on a scale from 0-10 in terms of their importance to the awarding of an Ovation. I received 23 responses, one of which was thrown out because it marked 0 for every category except Justice (rated a 1); that response slightly skewed the means of all the factors, though the respondent's comments are preserved in the relevant spreadsheet. Based on the remaining 22 responses, I calculated the following averages (rounded to the nearest thousandth for this write-up, but used unrounded in later calculations):
Presidential (or equivalent) Terms: 7.727
Minister (or equivalent) Terms: 7.045
Speaker Terms: 6.091
Presidential Medals: 5.955
Senator Terms: 5.773
Vice Presidential (or equivalent) Terms: 5.000
Chief Justice Terms: 4.318
Sapphire Stars: 3.864
Justice Terms: 3.545
Before proceeding, what can we gather from this? Simply put, Europeians place a heavy emphasis on Executive service in determining who deserves an Ovation; conversely, Europeians believe Judiciary service, at least relatively speaking, does not particularly matter all that much in terms of Ovations awarded (more on this later).
Back to the construction of the model: these means were rescaled so that the most important factor receives a coefficient of 1 and the least important receives 0.05 (not 0, since all factors were deemed at least somewhat relevant, so even the least important factor still holds some weight). Converted, we get the following coefficients (again rounded to the nearest thousandth for this write-up), which will now allow us to calculate probabilities.
Presidential (or equivalent) Terms: 1
Minister (or equivalent) Terms: 0.837
Speaker Terms: 0.609
Presidential Medals: 0.576
Senator Terms: 0.533
Vice Presidential (or equivalent) Terms: 0.348
Chief Justice Terms: 0.185
Sapphire Stars: 0.076
Justice Terms: 0.05
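For anyone who wants to reproduce the conversion, the published coefficients match a simple min-max rescaling of the poll means, floored at 0.05. The sketch below is my reconstruction of that step (the category names are shortened from the list above):

```python
# Reconstructing the coefficients: min-max rescale the poll means so the
# highest mean maps to 1 and the lowest to 0, then floor the result at 0.05
# so the least important factor still carries some weight.
POLL_MEANS = {
    "Presidential Terms": 7.727,
    "Minister Terms": 7.045,
    "Speaker Terms": 6.091,
    "Presidential Medals": 5.955,
    "Senator Terms": 5.773,
    "Vice Presidential Terms": 5.000,
    "Chief Justice Terms": 4.318,
    "Sapphire Stars": 3.864,
    "Justice Terms": 3.545,
}

def rescale(means: dict[str, float]) -> dict[str, float]:
    lo, hi = min(means.values()), max(means.values())
    return {k: max((v - lo) / (hi - lo), 0.05) for k, v in means.items()}

COEFFICIENTS = rescale(POLL_MEANS)
```

Running this yields, for example, about 0.837 for Minister terms and exactly 0.05 for Justice terms, matching the list above.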
A candidate's frequencies are multiplied by these coefficients and summed together. Afterwards, an adjustment is made relative to an average Ovation recipient. The coefficient output for an average Ovation recipient (found by taking the means of each category among all Ovation recipients and then multiplying by the above coefficients) was calculated at about 24.23; this number is halved and subtracted from the output of a candidate. (Originally, a full subtraction of 24.23 resulted in an overly punishing model; halving it produced more realistic results.)
Finally, the candidate's adjusted coefficient output is plugged into the following equation (a Sigmoid function):
Probability = 1 / (1 + exp(-(candidate's adjusted coefficient output)))
This number is converted into a percent for a candidate's probability of receiving an Ovation.
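Putting the pieces together, the whole calculation is a weighted sum, a half-baseline adjustment, and the sigmoid above. A minimal sketch follows; the coefficients and the 24.23 baseline come from the text, while the candidate in the usage example is entirely hypothetical:

```python
import math

# Coefficients from the rescaled poll means (see above); the baseline is the
# coefficient output of the average Ovation recipient reported in the text.
COEFFICIENTS = {
    "Presidential Terms": 1.0, "Minister Terms": 0.837, "Speaker Terms": 0.609,
    "Presidential Medals": 0.576, "Senator Terms": 0.533,
    "Vice Presidential Terms": 0.348, "Chief Justice Terms": 0.185,
    "Sapphire Stars": 0.076, "Justice Terms": 0.05,
}
AVG_RECIPIENT_OUTPUT = 24.23

def ovation_probability(counts: dict[str, int]) -> float:
    """Probability (0-1) that a candidate with these term/award
    counts receives an Ovation."""
    raw = sum(COEFFICIENTS[cat] * n for cat, n in counts.items())
    adjusted = raw - AVG_RECIPIENT_OUTPUT / 2   # halved adjustment
    return 1 / (1 + math.exp(-adjusted))        # sigmoid

# Hypothetical candidate: 3 presidential terms, 8 minister terms,
# 4 senator terms, and 2 presidential medals.
p = ovation_probability({"Presidential Terms": 3, "Minister Terms": 8,
                         "Senator Terms": 4, "Presidential Medals": 2})
# p is roughly 0.70, i.e. about a 70% Ovation probability
```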
Now, let’s get into what you really came here for: the probability numbers.
Most recipients received a 95-100% probability of receiving an Ovation from the model. This shouldn't come as a big surprise, but the recipients who did not score so highly raise some questions about the process. Let's talk about some of them.
The OnderKelkia Problem
When crunching these numbers, I was surprised to see OnderKelkia return one of the lowest probabilities in the model. I see two main reasons why this may have occurred (and they may shed some light on other potential candidates):
1) This model is not a fan of “specialist” citizens.
More broadly, this model rewards civilian service across all three branches of government; OnderKelkia served exclusively in the Judiciary, and with only some Presidential Medals outside of that service, Onder hasn't contributed in the same ways as almost all other Ovation recipients. I would say this makes Onder a surprising Ovation recipient relative to the more well-rounded resumes of most other recipients, and it shows that a low probability does not necessarily indicate unworthiness (in fact, I'll touch on another metric illustrating this idea in a bit).
2) Europeians don’t particularly value judiciary service.
In polling Europeians, judiciary experience simply wasn’t that important in their view. Sapphire Stars were noted as slightly more important than Justice terms, and judiciary experience ranked near the bottom in terms of importance. In short, the model is simply reflecting the views of Europeians.
Should The Opinions of Europeians Matter?
I wrestled with this question a bit when crafting the model. Should I base it on the accomplishments of past Ovation recipients, or on the opinions of current Europeians? I ultimately sided with the latter; allow me to briefly explain why.
While it might seem more proper to compare Ovation candidates to those who have already received one, I found two main issues with this: 1) what Europeians value in an Ovation can and will shift over time, especially over the span of a decade-plus, and 2) Ovations are awarded in a snapshot of time, against the Ovation standards we have set in the present. What might have qualified for an Ovation a decade ago may not meet today's standard, so I opted for a present-day standard rather than looking into the past. Reasonable minds may disagree with me on this point (especially the more statistically minded among you, who are likely screaming "run a regression!" right now, to which I say: hello, PhDre and GraV), and with this in mind, I will be making my model open-source at the end of this article. I absolutely wish for this model to be refined over time, as I know the product presented here is not perfect, so please feel free to play around and tinker with it as much as you want.
What About the Admins?
So NES and Mousebumples, huh? I don’t necessarily think anybody would question their Ovations, yet they come in scoring somewhat low in the probability department. Simply, I don’t have a quantifiable way in the present to measure administrative work, and that may very well be a shortcoming of this model. As with the previous point, I invite any updates or changes you would make to my model.
Any Other Shortcomings, Mr. Model Man?
The biggest glaring issue here is a lack of confirmation surrounding Justice and Senator terms. At best, my tally is a rough estimate based on Legislative Records and past records of Justices, in the absence of a definitive list of every Senator and Justice to have ever served. While I strove for accuracy in calculating each recipient's terms, there is an obvious margin for error here. Until such time as a definitive list emerges, this model will always be subject to some error.
Now, with the questions about Ovation probability out of the way, let me introduce another metric that may alleviate some of the issues brought about by the probability’s calculation: Ovations Above Replacement.
OAR Methodology
Ovations Above Replacement (OAR) represents the number of Ovations a particular candidate would be expected to receive above the recipient who represents the 50th percentile.
To calculate Ovations Above Replacement, a candidate's frequencies across the categories are summed and divided by the sum of the medians for all Ovation recipients; 1 is then subtracted from the quotient, since a quotient of 1 represents the median Ovation recipient. An OAR of 0 thus indicates an average Ovation recipient (i.e. the 50th percentile); an OAR below 0 indicates a potentially underqualified Ovation recipient; and an OAR of 1 or higher indicates a highly qualified Ovation recipient (who may even be deserving of a second Ovation, if such a thing were allowed).
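The calculation above can be sketched in a few lines. I'm reading the description as an unweighted sum of category frequencies, and the medians in the usage example are hypothetical placeholders rather than the real recipient medians:

```python
def ovations_above_replacement(counts: dict[str, float],
                               recipient_medians: dict[str, float]) -> float:
    """OAR: 0 matches the 50th-percentile recipient; 1 or higher suggests
    a resume strong enough for a hypothetical second Ovation."""
    total = sum(counts.values())                 # candidate's summed frequencies
    baseline = sum(recipient_medians.values())   # summed recipient medians
    return total / baseline - 1

# Hypothetical medians for illustration; a candidate who exactly matches
# the medians scores an OAR of 0, and one with double the medians scores 1.
medians = {"Presidential Terms": 2, "Minister Terms": 5, "Senator Terms": 3}
```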
When this was applied to the Ovation recipients, I received the following results:
Malashaan and Drecq stand above the pack with OARs around 1.5. Aexnidaral, Writinglegend, Calvin Coolidge, and HEM also all stand as recipients who might be deserving of a second Ovation (if such a thing were allowable), with OARs around 1. This metric also resolves our OnderKelkia question from earlier, as it emphasizes his dominance over the Judiciary for such a long period of time. That said, our admin problem still isn't resolved (and may never be), which I hope stands as an example that this Index is not infallible and should be used as a tool rather than the be-all, end-all of who should and shouldn't receive an Ovation.
A Note on OAR and OP%
The OAR for a particular person may not necessarily correlate to their OP%, as OAR rewards outperforming other candidates across categories, while OP% simply takes a holistic view of a single candidate and calculates their likelihood of receiving an Ovation based on previous awardees. Put another way, OAR measures a candidate's relative performance to others, while OP% merely uses the performance of others to establish a baseline for who should receive an Ovation.
We’ve spent a lot of time outlining how this model was constructed using past recipients, but let’s look ahead at some potential prospective candidates (NOTE: some people on this list received a Triumph but not an Ovation).
The two leading candidates here (CSP and Kraketopia) have both already received a Triumph; should they still receive an Ovation on top of that for their civilian service? This model might suggest so. If you don’t believe they should, look elsewhere to citizens both present and past: Lloenflys appears the most likely candidate on my radar for an Ovation, and McEntire and Notolecta both seem like worthy candidates in the view of the model. Prim might be a borderline candidate at the moment, and given he is still serving, it’s quite likely his resume will only continue to strengthen over time.
Do you have a citizen whose chances you're curious about? I've linked all the sources I used on the spreadsheet, so feel free to pull their particular resume and crunch the numbers for yourself. I've only included a small selection of potentially noteworthy candidates for the purposes of this article, and this should in no way be construed as suggesting this group is somehow more in line for an Ovation than anybody not mentioned; I simply couldn't pore over every citizen in Europeian history and crunch their collective resumes.
Conclusion
Is this model perfect? No. Is it the final say on who should receive an Ovation? Also no. That said, I hope this model provides a starting point for some useful statistical insight into how we go about passing them out and serves as a tool for future leaders who may be considering awarding the next Ovation.
You can find the full model here. Feel free to make a copy and play with it to your heart’s content (and maybe even improve what I’ve started here).