Later this month I'm going to come out with a new ranking of the Super Sophs that's a bit more systematic than my first one. For one thing, I've contacted the good people at Golfweek who run the Sagarin Performance Index, the main rival to the Rolex Rankings, for insights into the goals and assumptions of their ranking algorithm. For another, I'm going to look harder at 2007 stats beyond the money list and make some decisions about how to weigh the ups and downs all of the Super Sophs have gone through since January 2006. I won't be trying to come up with a rival formula of my own, of course; my days as a math-English double major in college are too far behind me to even think about thinking about that. But I want to have a better basis for ranking my top 20 Super Sophs than I did in early May.
Now that the 2007 season is in full swing, I can compare stats such as scoring average, greens in regulation, fairways hit, putts per green in regulation, and birdies per round across players, and I can track a given player's development from last year to this one. I can look at percentage of cuts made and top-10 (and better) finishes over the course of their careers. And I can figure out how to weight the Golfweek/Sagarin Performance Index against the Rolex Rankings, which are quite different animals. The Sagarin keeps a rolling record of how well each individual did against the fields in the tournaments she played over the previous 52 weeks (on the LPGA, JLPGA, European Tour, and Futures Tour). The Rolex keeps a rolling record of points earned in events played over the previous 104 weeks, with the most recent 13 weeks weighted more heavily than the rest; points are distributed based on strength of field and finish in each event (on the KLPGA as well as the tours the Sagarin covers), except in the majors, which have fixed point distributions for everyone who makes the cut. Both systems have their bugs--it seems to me the Rolex rates JLPGA players too highly (based on their generally poor performance in LPGA events), while the Sagarin ranks certain individuals too highly (Beth Daniel was in their top 50 until this week--now she's gone; was it something I wrote to them? Now I'm wondering whether Lindsey Wright and Candie Kung really belong in the top 50). The Rolex may give too much credit for winning majors (should Morgan Pressel be ranked quite that highly?), while the Sagarin may not give enough credit for recent performances (from what I can tell, each week is weighted equally in their formula). Overall, my own "gut" ranking system is closer to the Sagarin than the Rolex, but I'm going to use both as checks on my subjectivity.
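The real formulas aren't public in full detail, so here's a toy sketch of the structural difference between the two windows as I've described them above: the Sagarin-style system weights every week in its 52-week window equally, while the Rolex-style system keeps a 104-week window and upweights the most recent 13 weeks. All numbers, function names, and weights here are hypothetical illustrations, not the actual formulas.

```python
# Toy sketch of the two windowing schemes. "results" is a list of
# (week_number, points) pairs for one player; higher points = better week.

def sagarin_style(results, week):
    """Equal weight over the trailing 52 weeks."""
    window = [pts for (wk, pts) in results if week - 52 < wk <= week]
    return sum(window) / len(window) if window else 0.0

def rolex_style(results, week):
    """Trailing 104 weeks, with the most recent 13 weeks weighted double
    (the doubling factor is my invention, just to show the shape of it)."""
    total = weight_sum = 0.0
    for wk, pts in results:
        if week - 104 < wk <= week:
            w = 2.0 if wk > week - 13 else 1.0
            total += w * pts
            weight_sum += w
    return total / weight_sum if weight_sum else 0.0

# A player who was hot a year and a half ago but has cooled off lately:
results = [(wk, 10.0) for wk in range(1, 40)] + [(wk, 2.0) for wk in range(90, 105)]
print(sagarin_style(results, week=104))  # sees only the recent, weaker weeks
print(rolex_style(results, week=104))    # the old hot streak still counts
```

The same player comes out much higher under the longer window, which is one way a system like the Rolex can keep crediting a major win or a strong stretch long after a Sagarin-style window has forgotten it.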
The thing about almost any ranking system is that the initial calculation matters a great deal. In both of the above systems, whoever gets ranked #1 first only goes down if they've been consistently beaten by those behind them, and beating somebody ranked higher than you counts more than beating someone ranked lower. So the initial assumptions of their formulas are key. With the Super Sophs having fewer than two full seasons under their belts, I'm making my ranking a career-based one, but I'm going to weight this season's stats more heavily than last season's. My purpose is to look forward as much as to look back--based on past and recent performances, who can we expect to have the best season and career at the end of 2007?
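To make the "weight this season more heavily" idea concrete, here's a minimal sketch. The 60/40 split and the scoring-average numbers are purely hypothetical, just to show why the blend rewards a player trending upward; my actual weights are still to be decided.

```python
# Hypothetical season weights: current season counts 60%, last season 40%.
W_2007, W_2006 = 0.6, 0.4

def career_score(avg_2006, avg_2007):
    """Blend two seasons' scoring averages, favoring the current season.
    Lower is better, as with scoring average."""
    return W_2007 * avg_2007 + W_2006 * avg_2006

steady   = career_score(70.0, 70.0)  # same both years
improver = career_score(72.0, 68.0)  # worse 2006, better 2007
print(steady, improver)  # the improver comes out ahead (lower blended average)
```

Two players with the same two-year average land in different spots once the current season is upweighted, which is exactly the forward-looking bias described above.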
If I had the time and if the book were easily available here in English, I'd learn from Moneyball, but that'll have to wait until I return to the States. This is neither quantum mechanics nor rocket science, but it strikes me as hard in its own way, given all the variables that go into one shot, much less one round, tournament, or career. But that's why I'm paying myself the big bucks to write for Mostly Harmless!
[Update: There must be something in the air. Hound Dog, who actually has his own formula, just came out with his June rankings. Also, in comments he recommended checking out his archives for his own reasoning; so go here for his Rolex Rankings critique, here for his take on what stats matter most, here for his own formula, and here for his comparison of the Super Sophs' rookie season with previous great classes' ones.]
1 comment:
I was past due a rankings update and with the major this week, it seemed a good time to do it. I'd say "great minds think alike" but I already know I don't qualify in that category!
If you're comparing a small group of players like the Sophs, I agree with incorporating as much data as you can get. Just be careful to weight each element appropriately - I wouldn't give a player's sand-save stat the same weight as scoring average, for example.
If you haven't already, check out the post on my Player Rating System (in the Essay Archive section on my site). Although I've tweaked the weights and point scales a bit since I wrote that, the original theory is still the same.
Funny you mention Beth Daniel...she's a perfect example of a "ratings nightmare". I want to penalize players for missing the cut, so each player starts the season with 12 points and loses 2 with each MC. So Daniel plays one event and makes the cut (T29 to be exact). Her total score for 2007 is 12. Coupled with her 2006 score of 18, she currently ranks #38. Now if you look at her record closely, Beth has played pretty well the last two years. So the question becomes, is she NOT really the 38th best player on Tour? Is Michelle Wie NOT really #25? She's played even less than Beth and you'll find most people think Michelle's better than that.
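The missed-cut penalty described above (start each season with 12 points, lose 2 per MC) can be sketched like this. Only the MC component is shown; the other elements of the full Player Rating System are omitted, and the Daniel numbers are the ones given in the comment.

```python
# Hound Dog's missed-cut penalty, as described: each player starts the
# season with 12 points and loses 2 for every missed cut.
def season_base(missed_cuts, start=12, penalty=2):
    return start - penalty * missed_cuts

# Beth Daniel, 2007: one start, made the cut (T29) -> no penalty.
daniel_2007 = season_base(missed_cuts=0)   # 12
daniel_total = daniel_2007 + 18            # plus her 2006 score of 18 -> 30
print(daniel_total)
```

The loop-de-loop is visible right in the arithmetic: a player who barely plays but never misses a cut keeps the full 12 points, so a one-event season can outscore a busy one with a couple of MCs.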
Just an example of the kind of mental loop-de-loops you have to endure if you really care about this kind of stuff.