By Steve Novakovic, CAIA, CFA, Managing Director of Educational Programming, CAIA Association
In case you missed it, I recently published an article airing some of my grievances with benchmarking and performance analysis. As I warned in part 1, I’ve got a lot to say on the matter, so buckle up.
Factor 3: “Peer”-group analysis
LPs evaluate GPs by comparing them to their peers, so why not evaluate LPs by comparing them to peers? When done correctly, this can be a relevant exercise, but far too often this form of benchmarking is conducted sloppily at best. In evaluating private market GPs, investors create a peer group that is as similar as possible. Similar fund sizes, similar geographic scope, similar industry focus, etc. Investors don’t compare the performance of a mega-cap PE fund with a small-cap, industrial-focused PE fund.
So why is it so difficult to do the same for LPs? Should we really compare the performance of a $2 billion endowment to that of Harvard ($50 billion)? The fact that they both invest in alternatives doesn’t necessarily make them peers. But matching asset size alone is not nearly enough. Investors wouldn’t compare the performance of a $5 billion venture fund with a $5 billion real estate fund; they have completely different strategies. The same can be true for LPs. If Harvard has a different return objective than Notre Dame, does it actually make sense to compare their performance? Of course not. You’d expect their performance to be different because they’re targeting different outcomes.
I’m certainly in favor of a good and proper peer group analysis, but peer group selection is paramount for the analysis to be useful. To play into a stereotype: if a board member is comparing their institution’s performance to that of a friend who sits on another board, that’s an easy way to make a bad decision.
Factor 4: How independent was the investment team?
Does the board really delegate authority to the investment team? Is the investment team setting the asset allocation? If the board has any influence on the allocation of the portfolio (good or bad), that must be taken into consideration. Ideally, the board documents instances where recommendations were overruled, or disagreements encountered. Did the board wisely intervene, or was the original recommendation sound?
Relatedly, did the board or organization ever have to make business decisions at the expense of the portfolio? For example, during the 2008 Global Financial Crisis, many LPs had more to worry about than portfolio performance. Their organizations were experiencing the crisis too. For some, the portfolio was the liquidity provider of last resort. There were examples of boards and institutions pulling assets from investment funds, or at least increasing liquidity to prepare for an eventual withdrawal.
Enhancing liquidity at the bottom of the market is not going to help performance by any means, but that decision was not made by the investment team. These types of decisions could make peer group comparisons irrelevant, as well as market-based benchmark comparisons.
Factor 5: What about alpha?
Good news: you met (or even exceeded) your portfolio objective. Does that mean you get to keep your job? Bad news: you fell short of your objective. Are you doomed? An essential follow-up question is: how did you get there? Essentially, LPs have three levers they can pull to influence returns. The most important and impactful lever is asset allocation (the beta decision). Academic research shows that, over the long term, asset allocation plays the largest role in performance outcomes. It is very hard to overcome bad asset allocation decisions, while poor execution of the other two levers can undermine an otherwise sound asset allocation.
Lever two is market timing. For most LPs, this means tactical asset allocation decisions (i.e., intentionally being over- or underweight relative to long-term asset allocation targets). For more sophisticated LPs, this may also include portfolio overlay decisions, or even outright market bets. These decisions are not taken lightly and can have a material impact on performance. One LP we spoke with estimated their tactical market timing decisions had a positive impact of 150-200 basis points per year!
Lever three is fund/security selection (simplistically, this is the alpha). For LPs in the alts world, this is why we’re here, right? We can’t buy a growth equity index and go home. Instead, we must apply our expertise and invest with those we believe to be the most skilled GPs. Given the well-documented, wide dispersion of outcomes in most alternative investment strategies, there is plenty of opportunity for value creation (or value destruction) via fund selection.
With multiple levers contributing to performance, it’s essential that boards include performance attribution in their evaluation process. How did each of these levers contribute to returns? What if the portfolio exceeded the objective, but attribution suggests that fund selection detracted from returns? The team got the most important element right in asset allocation but failed in their day-to-day job of partnering with skilled managers.
What should be done if the portfolio objective was not met, but it was an issue with asset allocation? The team added value through fund selection, but that tends to be a smaller driver and was not enough to offset the asset allocation decision. Is this a team worth terminating?
Keep in mind, in all these scenarios, the assumption is that the board has delegated authority to the team, including asset allocation. When the board still owns asset allocation, the answers to these questions are certainly much more straightforward and further amplify the importance of performance attribution (as the team should be judged exclusively on levers two and three).
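The three-lever attribution described above can be sketched numerically. The decomposition below is a standard Brinson-style identity: total return equals the policy (asset allocation) return, plus a timing effect from deviating from policy weights, plus a selection effect from fund picks within each asset class. All weights, returns, and asset classes here are purely illustrative assumptions, not data from any actual LP.

```python
# Hypothetical three-lever attribution for an LP portfolio.
# All inputs are illustrative assumptions.
policy_weights = {"public_equity": 0.40, "private_equity": 0.30, "bonds": 0.30}
actual_weights = {"public_equity": 0.35, "private_equity": 0.35, "bonds": 0.30}
benchmark_returns = {"public_equity": 0.08, "private_equity": 0.12, "bonds": 0.03}
actual_returns = {"public_equity": 0.075, "private_equity": 0.145, "bonds": 0.03}

# Lever 1: policy (beta) return implied by the long-term targets.
policy = sum(policy_weights[a] * benchmark_returns[a] for a in policy_weights)

# Lever 2: timing effect from over-/underweighting the policy targets.
timing = sum((actual_weights[a] - policy_weights[a]) * benchmark_returns[a]
             for a in policy_weights)

# Lever 3: selection effect from fund picks vs. each asset-class benchmark.
selection = sum(actual_weights[a] * (actual_returns[a] - benchmark_returns[a])
                for a in policy_weights)

total = sum(actual_weights[a] * actual_returns[a] for a in policy_weights)

# Sanity check: the three levers sum exactly to the total return.
assert abs(total - (policy + timing + selection)) < 1e-12
```

In this made-up example the team beat the 7.7% policy return by 90 basis points, with timing contributing 20 bps and selection 70 bps; a board could just as easily find a portfolio that met its objective while one lever quietly detracted.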
Factor 6: Is there nuance in performance analysis?
We all know there are an infinite number of ways to achieve a performance outcome. Does it matter how you got there, and the “quality” and “efficiency” of those returns? Perhaps you underperformed your benchmark but did so with greatly reduced risk. Your Sharpe ratio was better, as were your Sortino and Treynor ratios. Your max drawdown was far smaller than that of market-based benchmarks. Your upside-downside capture ratio was great. Perhaps you simply didn’t take enough risk (but the risk you did take captured returns very well). Should the team be fired for underperformance, or celebrated for return efficiency? If hedge fund managers can raise loads of capital touting large Sharpe ratios accompanied by single-digit (net of fees) returns, shouldn’t that be enough?
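For readers who want to see these efficiency measures concretely, here is a minimal sketch of the Sharpe ratio, Sortino ratio, and maximum drawdown, using standard textbook definitions and invented monthly return series (the data and the risk-free rate are assumptions for illustration only).

```python
import statistics

# Illustrative monthly returns; both series are invented for this example.
portfolio = [0.010, -0.005, 0.008, 0.012, -0.002, 0.006,
             0.009, -0.004, 0.007, 0.011, 0.003, 0.005]
benchmark = [0.015, -0.020, 0.012, 0.020, -0.015, 0.010,
             0.018, -0.022, 0.014, 0.019, -0.008, 0.009]
rf = 0.002  # assumed monthly risk-free rate

def sharpe(returns, rf):
    """Mean excess return per unit of total volatility."""
    excess = [r - rf for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

def sortino(returns, rf):
    """Mean excess return per unit of downside deviation only."""
    excess = [r - rf for r in returns]
    downside = [min(e, 0.0) for e in excess]
    dd = (sum(d * d for d in downside) / len(downside)) ** 0.5
    return statistics.mean(excess) / dd

def max_drawdown(returns):
    """Largest peak-to-trough decline of the compounded return path."""
    level, peak, mdd = 1.0, 1.0, 0.0
    for r in returns:
        level *= 1 + r
        peak = max(peak, level)
        mdd = max(mdd, 1 - level / peak)
    return mdd
```

With these invented numbers, the lower-volatility portfolio posts a better Sharpe ratio and a much smaller drawdown than the benchmark, which is exactly the kind of nuance a single point-to-point return comparison hides.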
Factor 7: What to do about risk-based objectives?
Following on from the prior factor, how do boards with risk-based objectives evaluate performance? Certainly, they would measure how closely the risk of the portfolio tracked relative to the risk target. But then what? Would there be expectations for a certain Sharpe ratio? Is there still some sort of performance comparison? Or does the realized return not matter so long as realized volatility remains in line with expectations?
I don’t have an answer to these questions. Nor do I believe there is one right answer. My initial thought would be to evaluate within the context of a Sharpe ratio or Sortino ratio, or some other measure that highlights the return earned relative to each unit of risk. Even then, there needs to be some context. Is a Sharpe ratio of 0.5 good or bad? Why? Which leads us back to market-based benchmarking. Don’t forget, we’re also still dealing with the questions from above: what time frame, how long, etc.
Factor 8: Is performance the only thing that matters?
Do we live in a black-and-white world where our value is distilled down to one number? Or are there other aspects that matter? Ultimately, that is for the board to decide and should be articulated in advance. Don’t forget, many LPs are associated with mission-driven organizations. The boards and the employees may very well have a connection with that mission. Perhaps the portfolio performance does not meet expectations, but the team embodies the mission and serves as exceptional stewards for the organization. How does that team compare to one that is indifferent to the mission but meets performance expectations? For some boards and organizations, having the right people and culture may be more important than performance (up to a point).
What do we do with all of this?
I recognize there are many more questions than answers in this post. But as the saying goes, if I knew the answer I’d be sitting on a beach somewhere. This is the beauty of the world of finance. It isn’t scientific, there isn’t a one-size-fits-all answer, and proper analysis is complicated. When it comes to performance evaluation, it is imperative to analyze using multiple tools, apply judgment, think critically, and make an informed and educated decision based on the complete (if perhaps fuzzy) picture.
Ultimately, my hope is that next time you see someone say, “performance A was x% and performance B was y%, which was way better,” that you understand the analysis was simplistic at best, and misleading or disingenuous at worst.
About the Contributor
Steve Novakovic, CAIA, CFA is Managing Director of Educational Programming for CAIA Association. He joined CAIA in 2022 and has been a Charterholder since 2011. Prior to CAIA Association, Steve was a faculty member at Ithaca College, where he taught a variety of finance courses. Steve started his career at his alma mater, Cornell University, (B.S. 2004, MPS 2006) in the Office of University Investments. In his time there, he invested across a variety of asset classes for the $6 billion endowment, generating substantial insight into endowment management and fund investing across the investment landscape.
Learn more about CAIA Association and how to become part of a professional network that is shaping the future of investing, by visiting https://caia.org/