Friday, January 25, 2008

An Exceedingly Obvious Observation

Forgive my simplicity, but I have what amounts to a very obvious observation to make, one that's been burning a metaphorical hole in my mind for some time.

It boils down, basically, to this: A contract worth X amount of money for N years is a better deal than a contract worth X amount of money for N-1 years.

That seems quite clear. In the longer contract you're getting a "free" year tacked on to the end of the shorter one, and if the player gets hurt or otherwise loses his effectiveness, you can simply release him and pay no more than you would have under the shorter deal. But this concept is not so obvious when you consider what has lately become something of a sabermetric tenet: the rallying cry of "more money, fewer years." Contracts with a high average annual value over a short period are hailed as better deals than contracts with a lower average annual value over a longer period. One-year deals, in particular, are especially lauded, regardless of the often-high salaries attached to them.
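To make the "free year" logic concrete, here is a minimal sketch in Python. The projected-value numbers are purely illustrative assumptions; only the idea--same guaranteed total, one extra year of optional production--comes from the argument above.

```python
# Both contracts guarantee the same total dollars, but the longer one adds a
# year of production the team can decline by benching or releasing the player.

TOTAL_COST = 40.0  # guaranteed $M, identical under both deals (assumed figure)

def surplus(total_cost, yearly_values):
    # Guaranteed money is paid regardless; negative production can be avoided
    # by releasing or benching the player, so clamp each year's value at zero.
    return sum(max(v, 0.0) for v in yearly_values) - total_cost

three_year_values = [18.0, 15.0, 11.0]         # projected on-field value, $M/yr
four_year_values = three_year_values + [4.0]   # same player, one extra year

print(surplus(TOTAL_COST, three_year_values))  # 4.0
print(surplus(TOTAL_COST, four_year_values))   # 8.0 -- never worse, often better
```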

The theory behind this reasoning is sound. Free agents, by definition, are already six full years into their major league careers, and in the vast majority of cases they hit the market at or after their peak years. It makes sense to give these players shorter-term deals, because in the last years of long contracts they will likely be well past their primes. As big names with high salaries, these players continue to amass large numbers of at-bats in starting roles despite their decline, damaging a team's roster flexibility and hampering optimal roster construction, perhaps by blocking a superior younger player or by preventing the team from addressing what it views as a position already filled.

In practice, this is indeed how major league teams usually use former free agents late in their contracts, which lends credence to the "more money, fewer years" mantra. But it is only so because of the intractability and backwardness of franchises. If teams could see their players in the impartial light of pure rationality, without the influences of sunk-cost salary commitments and star power--influences that have absolutely nothing to do with winning baseball games--then they could optimize their roster construction despite the presence of aging players in the final years of their free-agent contracts.

Take, for example, a star center fielder who hits the market at age 30 and is offered two contracts: a three-year deal worth $40 million, or a four-year deal worth $45 million. The performance analyst might lobby for the former, knowing that our star center fielder is already showing signs of aging and will be a below-average starter before too long. But when viewed as the first deal plus a one-year, $5 million commitment for the fourth year, the second deal clearly comes out as the superior one--that is, if the team in question can recognize that at that point in the contract, the center fielder might need to be moved to an outfield corner or first base, play with a platoon partner, or serve as a strong bench bat and occasional starter. It's hard to argue that a player worth $13.3 million a year wouldn't be worth $5 million in the fourth year, but easy to argue that that fourth year might become an albatross around the team's neck if it keeps him in the exact same role for the entirety of the contract. By understanding aging curves and the gradual shift down the defensive spectrum (from center field to left field, for example), and by deploying its assets in their optimal roles--regardless of off-the-field factors, money already spent, or a player's demands--a team can squeeze the most value out of long-term contracts and keep those lengthier commitments from being so frowned upon by the performance analysis community.
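The marginal-year framing is just arithmetic; here is a quick Python sketch using only the dollar figures from the example above:

```python
# Reframe the four-year deal as the three-year deal plus a one-year commitment.

short_years, short_total = 3, 40.0  # three-year deal, $M
long_years, long_total = 4, 45.0    # four-year deal, $M

marginal_years = long_years - short_years  # 1 extra year
marginal_cost = long_total - short_total   # 5.0 -> year four costs $5M
aav_short = short_total / short_years      # ~13.33 -> roughly $13.3M per year

print(f"Years 1-3 price the player at ${aav_short:.1f}M per season;")
print(f"year 4 prices him at just ${marginal_cost:.1f}M.")
```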

Joe Sheehan discussed this issue in 2005, comparing the signings of Rafael Furcal (three years, $39 million) and Edgar Renteria (four years, $40 million), and explained his reasoning for favoring contracts that offer more money over fewer years:

The Dodgers made the Rafael Furcal signing official. As I mentioned last week, I really like the pickup, as much for the structure of the contract--more money over fewer years--as for its impact on the Dodgers.

I got a lot of negative feedback on that claim, so perhaps I should expound on it. I don’t know that I can argue that a three-year, $39-million contract makes more sense than a four-year, $40-million deal. As a number of readers mentioned, if you like the first, the second is essentially that deal and a $1 million salary for the fourth year, which should be a deal you’d make with any free agent worth signing. That’s an extreme example, where the team pays maximum price for the luxury of a shorter commitment, and as such, is probably not viable. If you use the Renteria contract as a gauge, you can’t see Furcal’s deal as a bargain.

There is definitely a sweet spot, however, where a portion of the money that would be assigned to a fourth year is spread amongst the first three, yielding a higher average annual salary but a lower total value, and limiting the commitment to three years. The attrition rates of ballplayers and their inherent unpredictability, as well as the negative impact of an eight-figure salary attached to a poor player, lead me to believe that this tradeoff point is higher than most people think it is, if not the 90% of fourth-year money implied by the Furcal contract, at least somewhere north of 50%. There’s more value in avoiding that dead year at the end of a deal than in the marginal N million dollars extra you pay in each season prior to it, at least for teams with reasonable capitalization and cash flow.

I can see where, as an industry, baseball would discourage this practice. After all, it’s annual salaries that become the comparison point in negotiations, be they with pre-arbitration players, arb cases or other free agents, and this kind of approach would tend to drive those figures up. Then again, what hurts teams more? Extra money paid out to productive players, or the back ends of ludicrous four-, five- and six-year deals that drag down a roster and a payroll?

I’m not sure exactly where this sweet spot lies. I’m actually fairly sure it’s a moving target based on a team’s cash flow and risk tolerance, as well as a player’s risk profile. If Furcal’s contract is an extreme example of the principle, and not economically sound, I can accept that, but I think the approach--fewer years, higher AAV--is sensible given the difficulty of projecting player performance beyond a short time horizon.
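For what it's worth, Sheehan's "90% of fourth-year money" figure falls straight out of the two contracts he compares. A quick sketch, using only the figures quoted above:

```python
# Furcal's per-year premium over Renteria, measured against the fourth-year
# salary that the shorter deal avoids.

furcal_years, furcal_total = 3, 39.0      # Furcal: 3 years, $39M
renteria_years, renteria_total = 4, 40.0  # Renteria: 4 years, $40M

aav_furcal = furcal_total / furcal_years        # 13.0 ($M per year)
aav_renteria = renteria_total / renteria_years  # 10.0 ($M per year)

premium_paid = (aav_furcal - aav_renteria) * furcal_years  # 9.0 over 3 years
year_four_avoided = aav_renteria                           # the $10M year 4

print(f"Premium paid to avoid year 4: {premium_paid / year_four_avoided:.0%}")  # 90%
```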

There is one more factor to consider in the debate between more expensive, shorter contracts and cheaper, longer ones. The "more money, fewer years" approach comes out ahead when a player goes down with a season- or career-ending injury toward the back end of a deal. This complicates the whole debate, especially with regard to pitchers, who are far more likely than position players to suffer devastating injuries. Clearly, there are no hard-and-fast rules here, and every contract must be analyzed in its own context and unique environment. But one should not dismiss long-term deals simply out of a belief that the more years a contract has, the worse it is.
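To see how injury risk bends the arithmetic, here is a rough expected-value sketch. The injury probability and the healthy-year value are assumptions chosen for illustration; only the $5 million marginal cost comes from the center-fielder example earlier in the post.

```python
# Expected surplus from the marginal year of a longer deal when there is some
# chance the year is wiped out by injury. Probabilities and values are assumed.

p_lost_year = 0.20   # assumed chance the final year produces nothing
healthy_value = 8.0  # assumed on-field value if healthy, $M
year_cost = 5.0      # marginal cost of the extra year (guaranteed either way)

expected_surplus = (1 - p_lost_year) * healthy_value - year_cost
print(f"Expected surplus from the extra year: ${expected_surplus:.1f}M")  # $1.4M

# The extra year only goes underwater once injury risk gets steep:
break_even_p = 1 - year_cost / healthy_value
print(f"Break-even injury probability: {break_even_p:.1%}")  # 37.5%
```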
